Subfunctionalization of phytochrome B1/B2 leads to differential auxin and photosynthetic responses
Abstract: Gene duplication and polyploidization are genetic mechanisms that instantly add genetic material to an organism's genome. Subsequent modification of the duplicated material leads to neofunctionalization (new genetic functions), subfunctionalization (differential retention of genetic functions), redundancy, or decay of duplicated genes to pseudogenes. Phytochromes are light receptors that play a large role in plant development. They are encoded by a small gene family that in tomato comprises five members: PHYA, PHYB1, PHYB2, PHYE, and PHYF. The most recent gene duplication within this family was in the ancestral PHYB gene. Using transcriptome profiling, co-expression network analysis, and physiological and molecular experimentation, we show that tomato SlPHYB1 and SlPHYB2 exhibit both common and non-redundant functions. Specifically, PHYB1 appears to be the major integrator of light and auxin responses, such as gravitropism and phototropism, while PHYB1 and PHYB2 regulate aspects of photosynthesis antagonistically to each other, suggesting that the genes have subfunctionalized since their duplication.

One important family of plant genes is the phytochromes. Plants use both internal and external cues as signals to guide their growth and development and to help them respond to their environment, such as to light quality and quantity, temperature, moisture, or nutrient availability. Phytochromes (phys) are light-absorbing chromoproteins that consist of a chromophore and an apoprotein, which together transmit light signals and regulate gene expression in response to light (Chen & Chory, 2011; Franklin & Quail, 2010). The phy apoproteins are encoded by a multi-gene family that generally consists of a predominantly far-red (FR) responsive phy, phyA, and one or more predominantly red light (R) responsive phys. In Arabidopsis, the R responsive phys are encoded by four genes: AtPHYB-AtPHYE. Phylogenetically, gene duplication of an ancestral phytochrome gene first separated PHYA/C from the other PHYs. Subsequently, PHYA separated from PHYC, and PHYB/D from PHYE (Li et al., 2015; Mathews & Sharrock, 1997). Eventually, after the divergence of the Brassicales, PHYB/D separated into the PHYB and PHYD genes of Arabidopsis (Mathews & Sharrock, 1997). PHYs in tomato have not undergone the same phylogenetic evolution as in Arabidopsis. For instance, SlPHYB1 and SlPHYB2 (hereafter simply called PHYB1 and PHYB2) are similar to AtPHYB and AtPHYD, but these genes arose separately, by a gene duplication event after the separation of the Solanales from the Brassicales about 110 Mya (Alba, Kelmenson, Cordonnier-Pratt, & Pratt, 2000; Pratt, Cordonnier-Pratt, Hauser, & Caboche, 1995), suggesting that any functional divergence of the duplicated genes would be unlikely to be the same in the two plant families. In contrast to Arabidopsis, mutation of phyB1 in tomato results only in temporary red light insensitivity at a young seedling stage, while adult phyB1 plants are phenotypically very similar to WT tomato (Lazarova et al., 1998).
In Arabidopsis and pea, PHYB plays a role during de-etiolation (Neff & Chory, 1998), chlorophyll production (Foo, Ross, Davies, Reid, & Weller, 2006), photo-reversible seed germination (Shinomura et al., 1996), timing of flowering (Khanna, Kikis, & Quail, 2003), the shade avoidance response (Keller et al., 2011), and the mediation of hormone responses (Borevitz et al., 2002), including lateral root initiation via auxin transport signaling (Salisbury, Hall, Grierson, & Halliday, 2007), polar auxin transport (Liu, Cohen, & Gardner, 2011), and seed germination via the regulation of abscisic acid (ABA) (Seo et al., 2006). Compared to Arabidopsis, much less is known about the functions of phys in the Solanales. In tomato, PHYB1 is involved in hypocotyl inhibition, de-etiolation, and pigment production in R (Kendrick et al., 1994; Kendrick, Kerckhoffs, Tuinen, & Koornneef, 1997; Tuinen, Kerckhoffs, Nagatani, Kendrick, & Koornneef, 1995). PHYB2 plays a role in early seedling development (Hauser, Cordonnier-Pratt, & Pratt, 1998) and, in cooperation with PHYA and PHYB1, in the control of de-etiolation (Weller, Schreuder, Smith, Koornneef, & Kendrick, 2000). Analysis of phyb1;phyb2 double mutants in tomato showed that a high level of redundancy exists between the two genes with respect to hypocotyl elongation during de-etiolation in both white light and R (Weller et al., 2000). Chlorophyll and anthocyanin production, on the other hand, were reduced only in the phyB1 mutant and not in phyB2, but the phyb1;phyb2 double mutant displayed a synergistic phenotype with less of both pigments than found in the phyb1 mutant alone, suggesting that phyB2 contributes significantly to pigment production (Weller et al., 2000). Subfunctionalization of the B-class phytochromes was also shown in maize, where ZmPHYB1 was the predominant phy regulating mesocotyl elongation in R, while ZmPHYB2 was mainly responsible for the photoperiod-dependent transition from vegetative to floral development (Sheehan, Kennedy, Costich, & Brutnell, 2007). To better understand to what degree subfunctionalization has occurred between tomato phyB1 and phyB2, we employed transcriptome profiling and co-expression network analysis. We found that tomato PHYB1 and PHYB2 exhibit both common and non-redundant functions. According to our analysis, two major areas of potential subfunctionalization are the regulation of genes involved in the response to auxin and in photosynthesis. To verify the biological relevance of our genomic analyses, we tested phyB1 and phyB2 mutants for classical auxin responses, including phototropism and gravitropism, and for the rate of photosynthetic assimilation. We report here that phyB1 and phyB2 indeed differ in their involvement in some of these phenotypes, suggesting that the recent PHYB duplication in tomato has led to subfunctionalization that differs from that in maize or Arabidopsis.

| Plant materials and growth conditions
Solanum lycopersicum seeds of cultivar Moneymaker (Gourmet Seed, Hollister, CA, United States), homozygous phyB1 mutants (allele tri1), and phyB2 mutants (allele 2-1, aka 70F) (Kerckhoffs et al., 1999; Weller et al., 2000) were used in all experiments. Both mutants used in this study were in the Moneymaker background (original source: Tomato Genome Resource Center, Davis, CA, USA).
For RNAseq experiments, seeds were surface sterilized using 10% bleach for 15 min under ambient laboratory conditions and then sown on water-saturated, sterile filter paper in light-excluding plastic boxes. Plants were grown in a dark growth chamber at 25°C. Five-day-old seedlings of similar height were harvested under green safe light (522 nm LED) and flash-frozen in liquid nitrogen. Seedling handling and harvesting at room temperature under safelight conditions was limited to a few minutes of indirect exposure. The remaining seedlings were exposed to 60 min of red light (660 nm, 10 µmol m−2 s−1, using a custom-made LED display) and then selected, harvested, and frozen as described for the dark-grown seedlings. Specimens were stored at −80°C until RNA was extracted. Seedlings were grown in four biological replicates under the same conditions.

| RNA extraction and sequencing
Tissue was flash-frozen in liquid nitrogen and pulverized with a mortar and pestle. About 5 seedlings (~100 mg) were pooled per biological replicate for each genotype and condition. Total RNA was extracted using an RNeasy Plant Mini Kit (Qiagen) according to the manufacturer's instructions. TruSeq stranded mRNA library construction was performed by the Research Technology Support Facility at Michigan State University. Paired-end 125-bp reads were obtained using an Illumina HiSeq 2500 instrument. All data were uploaded for public use to NCBI's Sequence Read Archive (http://www.ncbi.nlm.nih.gov/sra/SRP108371).

| RNAseq differential expression analysis
RNAseq reads were mapped with HISAT2 to the SL3.0 version of the tomato genome with the ITAG3.2 genome annotation from SolGenomics (www.solgenomics.org). First, phyB1 experiment reads and phyB2 experiment reads were mapped separately, and then they were mapped together. DESeq was used largely with default parameters to identify differentially expressed genes between wild type in the dark and wild type in R, and between phyB mutants in the dark and in R, except that we used an alpha value of 0.05 for the multiple comparison adjustment. Genes identified in the phyB1 experiment alone as significantly differentially expressed (DE) by DESeq between dark and R in the WT, and with abs(log2(fold change)) > 0.63, that is, changed by at least 1.5-fold between dark and R, were then examined in the phyB1 dark-versus-R comparison. If the gene had a log2(FC) that was significantly different from that in WT, we called the gene phyB1-regulated. To be characterized as significantly different, the difference between the log2(FC) in WT and in phyB1 had to be greater than the sum of the standard errors of the log2(FC) in WT and in phyB1. The process was repeated with the phyB2 experiment alone to identify phyB2-regulated genes.

| Co-expression analysis with WGCNA
From the data in which phyB1 and phyB2 experiment reads were mapped together, normalized read counts were obtained from DESeq. The variance of normalized expression was calculated across all samples (10 WT-D, 10 WT-R, 5 B1-D, 5 B1-R, 5 B2-D, 5 B2-R), and the top 8,000 most variable genes were identified. Their expression values were log transformed [log2(normalized read count + 1)] and used as input for WGCNA in R to identify co-expression modules. Beta was set to 10 for the adjacency function. Modules were obtained based on topological overlap, and eigenvectors representing the average expression of each module were correlated to condition (dark = 0, 60 min R = 1) and genotype (either phyB1 = 1, phyB2 and WT = 0, or phyB2 = 1, phyB1 and WT = 0).
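The phyB-regulated classification described above reduces to a simple filter over two differential expression tables. The following is a minimal sketch in R; the objects res_wt and res_mut, their column names (which follow DESeq2 conventions), and the helper name are illustrative assumptions, not the authors' actual scripts.

```r
# Sketch of the phyB-regulated gene filter. res_wt and res_mut are assumed
# to be DESeq2-style results tables for the dark-versus-R contrast in WT
# and in one phyB mutant, sharing gene IDs as row names and carrying
# log2FoldChange, lfcSE, and padj columns.
phyB_regulated_genes <- function(res_wt, res_mut,
                                 alpha = 0.05, lfc_min = 0.63) {
  # Step 1: significantly DE in the WT with at least ~1.5-fold change
  de_wt <- !is.na(res_wt$padj) & res_wt$padj < alpha &
           abs(res_wt$log2FoldChange) > lfc_min

  # Step 2: WT and mutant fold changes must differ by more than the sum
  # of their standard errors
  diff_lfc <- abs(res_wt$log2FoldChange - res_mut$log2FoldChange)
  se_sum   <- res_wt$lfcSE + res_mut$lfcSE

  rownames(res_wt)[which(de_wt & diff_lfc > se_sum)]
}

# Usage: phyB1_regulated <- phyB_regulated_genes(res_wt, res_phyB1)
```

The same call with the phyB2 results table would yield the phyB2-regulated set.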
| GO enrichment analysis
To determine which gene ontology (GO) categories were significantly enriched among the differentially regulated or co-expressed genes, we used the R package topGO (Alexa & Rahnenfuhrer, 2010; Alexa, Rahnenführer, & Lengauer, 2006). Only categories with p-values < 0.05 from Fisher's exact tests (weighted models) are reported. For topGO's "gene universe," GO annotations for S. lycopersicum were downloaded from the Panther Classification System (www.pantherdb.org, downloaded May 2017).

| Gravitropism
Wild type, phyB1, and phyB2 seeds were sown at 12 p.m., 5 p.m., and 1 p.m., respectively, to coordinate germination times (age-synchronized) and ensure equal developmental stages at the time of experimentation. Seeds were sterilized by stirring for 15 min in 10% bleach in the dark and sown under green light into light-excluding plastic boxes with saturated paper towels and filter paper. Seeds were grown in the dark in a growth chamber at 25°C for 5 days. Age-synchronized seedlings were transferred under green light to 1% agar plates, placed either in the dark, under R (135 µE) from the top, or in R from opposite sides (60 µE), and allowed to grow with the same gravity vector for 1 hr. Seedlings were then gravistimulated by rotating the plates 90 degrees. Photographs were taken before gravistimulation (0 hr) and after 4, 8, and 24 hr. The angle of bending was measured with ImageJ. A three-way ANOVA (genotype, light condition, time) was performed in R, followed by Tukey's post hoc test, to determine statistically significant differences between groups.

| Phototropism
For phototropism experiments, age-synchronized seedlings (Moneymaker, phyB1, and phyB2) were grown in individual plastic scintillation vials filled with soil and incubated in the dark at 25°C for 5 days. Seedlings with similar hypocotyl length were then transferred to a black box illuminated with unilateral white light through a slit in the box. The plants were positioned such that their apical hook was facing away from the light source. Every hour over a period of five hours, a set of plants was removed and scanned. The phototropic bending angle of these plants was determined by ImageJ analysis, and data were plotted using R software. Data were analyzed by a two-way ANOVA (genotype, time) using the software R. For qPCR analysis of PHOT genes, tomato seedlings were grown as for the phototropism experiments. Material was harvested and flash-frozen at the indicated times. Total RNA was extracted using an RNeasy kit (Qiagen) according to the manufacturer's instructions. Reverse transcription was performed using the iScript cDNA Synthesis kit (Bio-Rad) with the recommended incubation times and temperatures as follows: 25°C for 5 min, 46°C for 20 min, and 95°C for 1 min. qPCR was performed on a Bio-Rad Mastercycler C1000 using iTAQ Universal SYBR Green Supermix (Bio-Rad) with an incubation at 95°C for 3 min, followed by 40 cycles of 95°C for 10 s and 60°C for 30 s. The SAND (Solyc03g115810) and RPL2 (Solyc10g006580) genes were used for normalization. Primer specificity was verified using the melt curves, and data were analyzed by the 2^-ΔΔCt method (Livak & Schmittgen, 2001). Statistical analysis was performed using ANOVA (R version 3.4.1) on log10-normalized expression values. The primers are listed in Table S6. Three biological replicates were used, with five seedlings per genotype and time point per biological replicate.
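The 2^-ΔΔCt calculation itself is short. Below is a minimal sketch in base R, assuming a data frame qpcr with one row per reaction and columns gene, genotype, time, and ct; the column names and the choice of WT at 0 hr as the calibrator are illustrative assumptions, not taken from the paper.

```r
# Minimal 2^(-ΔΔCt) sketch (Livak & Schmittgen, 2001). SAND and RPL2 are
# the reference genes named above; all object and column names are
# illustrative assumptions.
rel_expression <- function(qpcr, target, ref = c("SAND", "RPL2")) {
  # Mean reference-gene Ct per genotype/time point (averaging the Cts of
  # the two reference genes approximates normalizing to their geometric
  # mean on the expression scale)
  ref_ct <- aggregate(ct ~ genotype + time,
                      data = subset(qpcr, gene %in% ref), FUN = mean)
  names(ref_ct)[3] <- "ct_ref"

  tgt <- merge(subset(qpcr, gene == target), ref_ct,
               by = c("genotype", "time"))
  tgt$dct <- tgt$ct - tgt$ct_ref                      # ΔCt

  # Calibrate against the mean ΔCt of WT at 0 hr (assumed calibrator)
  dct0 <- mean(tgt$dct[tgt$genotype == "WT" & tgt$time == 0])
  tgt$rel <- 2^-(tgt$dct - dct0)                      # 2^(-ΔΔCt)
  tgt
}

# Usage: phot1 <- rel_expression(qpcr, target = "PHOT1")
```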
| Photosynthetic analysis and chlorophyll quantification
Six-week-old Moneymaker, phyB1, and phyB2 plants grown in a growth chamber at 25°C under 16 hr of light were used for photosynthetic analysis and chlorophyll quantification. A LI-COR 6400XT portable photosynthesis system (LI-COR) with a standard leaf chamber and a LI-COR 6400 LED light source was used for photosynthetic efficiency measurements. To ensure uniformity, we chose for analysis the terminal leaflet of the fourth youngest, fully developed leaf. Single leaflets still attached to the plant were clamped flat into the standard leaf chamber. The conditions in the leaf chamber were set at a reference CO2 value of 400 µmol mol−1 and a temperature of 21°C, and CO2 uptake was measured at two different light intensities: 100 µmol photons m−2 s−1 and 1,500 µmol photons m−2 s−1. Each leaf was placed in the standard leaf chamber and exposed to 2 min of light at the given intensity to allow the plant to acclimate before CO2 assimilation was measured. Matching was done after every plant to minimize errors. After measuring CO2 assimilation, the leaf was photographed, and the leaf area was measured using ImageJ. The fresh weight of the respective leaf was also recorded, and chlorophyll was extracted in 5 ml of methanol for 72 hr in the dark at 4°C. Methanol extracts were analyzed by spectrophotometry, and chlorophyll concentrations were determined according to published procedures (Porra et al., 1989). Photosynthetic efficiency was calculated by normalizing the assimilation rate either for area or for fresh weight. Three experimental replicates were performed with ~10 plants per genotype per replicate.

| PhyB1 and PhyB2 differentially affect the transcriptome during photomorphogenesis in tomato seedlings
To determine if PHYB1 and PHYB2 have acquired different functions since the divergence from their common single-gene ancestor, we performed RNAseq analysis. We grew WT, phyB1, and phyB2 mutant seedlings for 5 days in the dark and compared them with individuals of the same genotypes and age that were additionally exposed to red light (R) for 60 min. We then identified genes that were differentially expressed in the mutants between dark and light (Table S1). Using a threshold value of 1.5-fold upregulation or downregulation, we first filtered the data from the RNAseq analysis for genes that were statistically significantly upregulated or downregulated by light treatment in the WT. Of those genes, we considered a gene to be phyB1 or phyB2 regulated if it was either (a) upregulated or downregulated by light in the WT but not differentially regulated in the mutant, (b) oppositely regulated in the mutant compared to the WT, (c) significantly less strongly regulated in the mutant compared to the WT, or (d) more strongly regulated in the mutant compared to the WT. This data filtration yielded 121 phyB1-regulated genes and 73 phyB2-regulated genes. In these gene sets, we identified functionally enriched gene ontology (GO) categories (Figure 1; Table S2). To identify traits possibly subfunctionalized between PHYB1 and PHYB2, we were particularly interested in GO categories that showed significant enrichment in one phyB-regulated gene set but not the other. GO categories significantly enriched in genes regulated by phyB1 included responses to auxin (GO: 0009733), responses to cytokinins (GO: 0009735), and protein phosphorylation (GO: 0006468).
By contrast, phyB2-regulated genes did not fall into these three GO categories, but instead into GO categories such as defense response (GO: 0006952) and processes involving aromatic amino acid metabolism and biosynthesis (GO: 0009095 and GO: 0006558) (Table S2; Figure 1). To gain additional insight into genes that were differentially affected by mutations in either PHYB1 or PHYB2, we employed transcriptional co-expression analysis of the top 8,000 most variably expressed genes across all conditions and found modules containing genes that, due to their co-expression status, were likely to have some degree of functional connectivity (Figure 2; Table S3). The yellow, blue, red, and light-cyan modules contained genes positively correlated to the phyB1 mutation (i.e., they were more highly expressed in phyB1 than in WT and phyB2) but negatively or not significantly correlated to the phyB2 mutation. The opposite was true for the brown, salmon, turquoise, and green modules, which contained genes positively correlated to the phyB2 mutation (i.e., they were more highly expressed in phyB2 than in WT and phyB1) but not or negatively correlated to the phyB1 mutation. These opposite expression patterns thus indicated diversified regulation between the two PHYB genes. Such diversified regulation was also seen, albeit not significantly, in the black, green-yellow, and cyan modules. Modules containing genes that were regulated by light ("condition") included the tan module (negative correlation) and the green-yellow, magenta, green, midnight blue, and pink modules (positive correlation) (Figure 2). The green module was the only module containing genes that were significantly correlated with light (positively) and were also oppositely correlated with phyB1 and phyB2. We looked for enriched GO functions in each co-expression module (Table S4). Among these functions were auxin-related processes, including auxin efflux (GO: 0010329), the auxin-regulated processes of gravitropism and phototropism (GO: 0009959, GO: 0009638), and auxin signaling (GO: 0009734), as well as photosynthesis-related processes (GO: 0009765, GO: 0009773, and GO: 0015979), in addition to a large number of other functional categories (Figure 2b and Table S4).

Figure 1. phyB1 and phyB2 regulate the expression of genes involved in different biological processes. We identified 121 phyB1-regulated genes and 73 phyB2-regulated genes. Gene ontology functional enrichment analysis of these gene groups identified biological processes specifically regulated by phyB1 and phyB2. For all significant GO category enrichments, the black bars represent the number of genes with that annotation in that group (Significant) and the gray bars represent the expected number of genes with that annotation if representation was random (Expected).

Figure 2. Co-expression modules show that phyB1 and phyB2 differently regulate gene networks involved in auxin- and photosynthesis-related biological processes, among others. (a) For each co-expression module (indicated by color) and the genes that did not fall into a co-expression module (gray), the average expression vector (eigenvector) across conditions and genotypes was correlated to condition (dark = 0, 60 min R exposure = 1) and genotype (phyB1 column: WT and phyB2 = 0, phyB1 = 1; phyB2 column: WT and phyB1 = 0, phyB2 = 1). R2 values from the Pearson correlations are indicated in the heatmap by color according to the scale on the right, as well as by their printed value in the grid, with p-values below in parentheses. (b) Gene ontology functional enrichment analysis identified biological processes central to each co-expression module. Displayed here are four enriched GO biological processes for the brown, green, and blue modules. The black bars represent the number of genes with that annotation in that group (Significant) and the gray bars represent the expected number of genes with that annotation if random (Expected).

To determine areas of subfunctionalization between phyB1 and phyB2 in tomato, we combined information from our differential expression, co-expression, and GO analyses to choose physiological functions for further testing and for verification that transcriptomic differences had measurable effects on phenotypes. These functions were chosen based on (a) the frequency of their appearance in our data as being differentially regulated by phyB1 and phyB2, (b) the statistical significance of our differential and co-expression analysis data, and (c) the number of genes on which the individual enrichment analyses were based. Additionally, functions were chosen for further study if they were known from the literature to be regulated, at least in part, by phyB in Arabidopsis.

| PhyB1 and PhyB2 differentially modulate auxin responses in tomato seedlings
To determine if our gene expression analysis had predictive power for the plant's phenotype, we subjected wild type (WT), phyB1, and phyB2 mutants to a variety of physiological experiments. Given that auxin-related processes had been implicated as differentially regulated by phyB1 and phyB2 in both the differential expression and co-expression analyses, we tested if the auxin-related responses phototropism and gravitropism were differentially affected between phyB1 and phyB2 mutants when compared to the WT. Phototropism, the bending of plants toward a light source, is achieved by the perception of blue light via the photoreceptors PHOT1 and PHOT2, eventually leading to an unequal distribution of auxin along the hypocotyl of a seedling exposed to unilateral light (Fankhauser & Christie, 2015). Differential auxin concentrations then result in unequal growth on the light versus the dark side of the stem or hypocotyl, leading to curvature toward the light source (Fankhauser & Christie, 2015). Indeed, when we exposed 5-day-old seedlings to unilateral white light (WL) over a period of three hours, phyB1 hypocotyls displayed a significantly faster phototropic response (Figure 3) compared to the WT and phyB2 plants, indicating a differential role of phyB1 and phyB2 in the phototropic response in tomato. This suggests that phyB1, but not phyB2, normally inhibits phototropic bending. Our RNAseq differential gene expression analysis had found PHOT1 to be differentially expressed in the WT dark versus WT red light comparison, but the gene was not phyB1 or phyB2 regulated. PHOT2 was not differentially expressed in either comparison. Since differences in the phototropic phenotype were recorded for seedlings grown under conditions different from those in our RNAseq experiment, we decided to check if gene expression differences of these receptors, pivotal to the phototropic response, might also be detectable between phyB1 and phyB2 mutants during phototropic stimulation.
Testing PHOT1 and PHOT2 expression with qPCR at 0 and 3 hr of treatment with unilateral white light, we observed a decline in PHOT1 and an increase in PHOT2 expression over the 3-hr treatment (Fig. S1), but found no significant differences in gene expression between the two phyB mutants. This suggests that regulation of the PHOT1 and PHOT2 genes does not explain the measured phenotypic differences, which are instead likely due to differential gene regulation downstream of PHOT1 and PHOT2 (Fig. S1). Since gravitropism, like phototropism, is a typical auxin-regulated response, we decided to test if gravitropism manifests itself differentially in the two phyB mutants in tomato. Five-day-old dark-grown seedlings were transferred to agar plates, either exposed to R or kept in the dark, and grown upright for 1 hr immediately after the transfer. Plates were then reoriented 90 degrees to induce a gravitropic response. We observed that in R the phyB1 mutant responded statistically significantly faster to the altered gravity vector, which was especially obvious around 8 hr post-gravistimulation, whereas in darkness the mutants responded to gravity at the same rate as WT (Figure 4). This experiment suggested that the differential auxin responsiveness between phyB1 and phyB2 also extends to differences in their gravitropic response. Interestingly, when we reduced the light levels from 135 to 60 µmol m−2 s−1, the gravitropic response differences between genotypes disappeared (Fig. S2), suggesting that the phyB1-mediated gravitropic response in tomato is also light intensity-dependent.

Figure 3. In white light, phyB1 mutants show significantly faster phototropism than wild type or phyB2 mutants. The average degree to which 5-day-old dark-grown seedlings bent toward unidirectional white light (bend angle) over 3 hr is shown. Error bars represent standard error. Combined data from three biological replicates are shown; n = 5 seedlings per genotype per time point per biological replicate. A two-way ANOVA with time and genotype was performed, followed by Tukey's post hoc test, using the software R. Shared letters represent no statistically significant difference.

Since we only observed significantly greater gravitropic curvature in the phyB1 mutants when the gravitropic experiment was done with high light intensity from the top, but not with low light intensity from the side, we wanted to exclude the remote possibility that in tomato phototropism can also be triggered by R alone, instead of requiring blue light. We therefore performed a series of control experiments in which we exposed seedlings to unilateral R and measured their directional growth response over a period of three hours, similarly to the phototropism experiments shown in Figure 3. Not surprisingly (Fankhauser & Christie, 2015), our data showed that, like Arabidopsis, tomato does not have a red light phototropic response (data not shown), confirming that the enhancement of the gravitropic response of phyB1 could not have been due to its enhanced phototropic response.

Figure 4. In R, phyB1 mutants show significantly faster gravitropism than wild type or phyB2 mutants. The average degree to which 5-day-old dark-grown seedlings bent toward the negative gravity vector (i.e., upwards) after gravistimulation over 24 hr is shown. Seedlings were gravistimulated either in the dark (left) or with 135 µmol photons m−2 s−1 of R. Error bars represent standard error. The dark and R plots each contain data from three biological replicates; n = 20 per genotype per time point per biological replicate. A three-way ANOVA with time, genotype, and light condition was performed, followed by Tukey's post hoc test, in R. Shared letters represent no statistically significant difference.

| PhyB1 and PhyB2 differentially modulate photosynthetic responses in tomato seedlings
Our transcriptional co-expression analysis had shown almost 60 photosynthesis-related genes to be enriched in the blue module, which contains genes whose expression is positively correlated with the phyB1 mutation but not significantly correlated with the phyB2 mutation (Figure 2).
We therefore decided to measure a variety of photosynthesis-related physiological parameters to test the hypothesis that the gene duplication in PHYB had led to subfunctionalization of the regulation of genes involved in photosynthesis. We measured overall photosynthetic activity and related this activity to leaf size and fresh weight. Measuring overall leaf chlorophyll concentrations, we found differences between the WT and the two mutants, but these were not statistically significant (data not shown). Photosynthetic activity was not statistically significantly different between the three genotypes when the photosynthetic rate was normalized by leaf area, regardless of light intensity (Figure 5a,c). However, when we normalized the photosynthetic rate by the fresh weight of the leaf portion used for the gas exchange analysis, we observed a statistically significant difference between the two phytochrome mutants. These differences between phyB1 and phyB2 were seen at both low and high light intensities (Figure 5b,d). Interestingly, the data suggest that phyB1 and phyB2 act antagonistically to each other and that PHYB1 and PHYB2 have subfunctionalized with respect to the role they play in regulating photosynthesis.

Figure 5. Photosynthetic activity is enhanced by phyB2 and repressed by phyB1 independent of light intensity. Photosynthetic activity was measured under varying light intensities in 6-week-old WT, phyB1, and phyB2 mutants grown at 25°C (16 hr day/8 hr night) using a LI-COR 6400XT. Three biological replicates were performed with 10 plants per genotype per replicate. Data were normalized in two different ways, either by leaf area (a and c) or by leaf area and fresh weight of the leaf tissue that was used for the photosynthetic rate measurement. Data were statistically analyzed with a one-way ANOVA followed by a Tukey post hoc test using the software R. In each panel, data points not connected by a shared letter are statistically significantly different.

| Subfunctionalization of phyB1 and phyB2 is correlated with differences in the genes' regulatory regions
Using the PlantCARE database (Lescot et al., 2002), we compared the 3-kb regulatory region immediately upstream of each gene's transcriptional start site (Table S5) and found a number of differences. Overall, PHYB1 contained 17 recognized light-regulated cis-acting elements, while PHYB2 contained only 7 such elements. The type of elements found in each gene's promoter region was also different. For example, the PHYB1 promoter region contained 7 G-Box elements, which bind PHYTOCHROME INTERACTING FACTORs (PIFs) (Pham, Kathare, & Huq, 2018), while in PHYB2, there were only 2. Several other motifs were found in only one or the other phy gene (Table S5). Overall, the differential occurrence of light regulatory sequences suggests that transcription of these duplicated genes might be differentially regulated.

| DISCUSSION
Gene duplication is a major source of genetic material with the potential for the evolution of novel functions and the development of complexity in responses to the environment (Panchy et al., 2016). Retention of duplicated genes can indicate that the retained genes are positively selected to provide genetic redundancy (Zhang, 2012), that they are required to maintain proper dosage or genetic balance (Birchler & Veitia, 2014; Freeling & Thomas, 2006), or that duplication eventually led to the acquisition of novel or refined functions (Lynch & Conery, 2000; Ohno, 1970).
PHY genes, in particular, have been estimated to be evolving at a faster rate (1.52-2.79 times) than the average plant nuclear gene, suggesting that diversification of the PHY gene family might respond either to selective pressure or to the absence of major evolutionary constraints (Alba et al., 2000). We used differential mRNA expression and co-expression analysis first to evaluate the degree to which the PHY genes PHYB1 and PHYB2 have functionally diversified since their separation from a common ancestor gene, and then to identify and verify physiological traits for which phyB has subfunctionalized since its gene duplication event. Our analysis indicated significant differences in the transcriptomes of plants mutant in either PHYB1 or PHYB2. On the other hand, after filtering, the overall number of genes that were regulated by phyB1 (121) and phyB2 (73) was relatively modest. Overall, our differential gene expression analysis showed that the group of genes regulated by phyB1 but not phyB2 was enriched in auxin response genes, and our co-expression analysis showed that the genes found in co-expression networks that differentially correlated to phyB1 and phyB2 were enriched in auxin response and photosynthesis genes.

| Regulation of auxin responses by phytochrome B
In Arabidopsis, phototropic curvature is enhanced when plants are pre-treated with R for 2 hr before a directional blue light (B) treatment (Janoudi, Konjevic, Apel, & Poff, 1992). This pre-treatment response is phyA-mediated, not phyB-mediated (Parks, Quail, & Hangarter, 1996), although it has been shown that even without R pre-treatment, Arabidopsis phyA, phyB, and phyD promote phototropism (Whippo & Hangarter, 2004). Specifically, for B intensities greater than 1.0 µmol m−2 s−1, phyB and phyD show functional redundancy with phyA, while at B fluence rates around 0.01 µmol m−2 s−1, phyA is required for a normal phototropic response (Whippo & Hangarter, 2004). Additionally, Arabidopsis phyB has been shown to inhibit phototropism in shade-free environments (a high R/FR ratio), while mediating the phototropic response in the shade via PHYTOCHROME INTERACTING FACTORs (PIFs) and members of the YUCCA gene family (Goyal et al., 2016). Furthermore, it was shown that the quadruple mutant for phyB, phyC, phyD, and phyE has a normal phototropic response (Strasser, Sánchez-Lamas, Yanovsky, Casal, & Cerdán, 2010), confirming the notion that phyA is required in Arabidopsis for a normal low-fluence phototropic response.

Our data suggest that phototropism is differently regulated between tomato and Arabidopsis. Our genetic analysis shows that phyB1, but not phyB2, negatively regulates the phototropic response in tomato (Figure 3).
This in turn suggests that in tomato, the phyB duplication led to a defined split between phyB1 and phyB2 with respect to phototropism, while in Arabidopsis phyB and phyD share redundancy, at least for their control of phototropism in response to R pre-treatment (Whippo & Hangarter, 2004). Additionally, while work in Arabidopsis has shown phyB to repress phototropism in shade-free environments (Goyal et al., 2016), we saw that phyB2 in tomato is not involved in that response. Our RNAseq analysis also supports the split in function with respect to expression differences in the PIN genes that Haga and colleagues (2014) had proposed to play a role in phy-mediated phototropism: in tomato, our network analysis placed SlPIN4 into the brown module, which is negatively correlated with the phyB1 mutation but positively correlated with the phyB2 mutation (Figure 2). Furthermore, SlPIN4 was differentially regulated in response to R only in the phyB2 mutant, but not in the phyB1 mutant (Table S1). This differential sensitivity in auxin response signaling between the two subfunctionalized genes suggests one possible avenue by which the two phy genes in tomato differentially affect phototropic curvature. Gravitropism, like phototropism, is an auxin-mediated differential growth response that results in directional elongation with respect to the gravity vector (Morita, 2010). Our data showed that phyB1, but not phyB2, represses gravitropism in R (Figure 4). This response is therefore similar to the phototropic response in that it is enhanced by the phyB1 mutation. The role of phytochrome in the gravitropic response is less well understood than it is for phototropism. In Arabidopsis, but not in tomato, R perceived by both phyA and phyB results in strongly reduced shoot gravitropism (Liscum & Hangarter, 1993; Poppe, Hangarter, Sharrock, Nagy, & Schäfer, 1996), caused by PIFs that in R convert the gravity-sensing amyloplasts in the endodermis into other, non-gravity-sensing types of plastids (Kim et al., 2011). Interestingly, root gravitropism in white-light-grown Arabidopsis is diminished in phyB but not in phyD mutants (Correll & Kiss, 2005), suggesting subfunctionalization for this trait between the two genes in Arabidopsis. Interestingly, however, in Arabidopsis roots WT phyB promotes gravitropism, whereas in tomato shoots WT phyB1 inhibits it. Since R does not inhibit shoot gravitropism in 5-day-old dark-grown tomato seedlings, gravity sensing in the hypocotyl appears to follow a different signaling route than it does in Arabidopsis, but phytochrome clearly appears to play a role in both.

| Regulation of photosynthesis by phytochrome B
Our transcriptional co-expression analysis had suggested that photosynthesis genes were differentially affected by mutations in PHYB1 versus PHYB2 of tomato (Figure 2), and our physiological experiments supported this finding (Figure 5). In Arabidopsis, phyB has previously been shown to increase photosynthetic rates, but only at light levels greater than 250 µmol m−2 s−1 (Boccalandro et al., 2009). Our data show that photosynthesis is enhanced in the phyB1 mutant and reduced in the phyB2 mutant compared to the WT response (Figure 5b,d), suggesting that in tomato phyB2, apparently antagonistically to phyB1, plays the role of increasing photosynthetic rates.
Interestingly, it appears that this instance of subfunctionalization did not simply split the two phyB homologs into one serving the function of the parental gene while the other largely lost its participation in the process, but instead led to opposite regulation of the same process. Another difference between the Arabidopsis and tomato responses is that, unlike in Arabidopsis, the effects of phyB1 and phyB2 on photosynthesis are not light intensity-dependent in tomato, at least not at the two light intensities tested here. It is of note that differences in photosynthetic rates were only discernible in our analysis when we normalized carbon assimilation rates by fresh weight and leaf area, as opposed to leaf area alone (Figure 5). Chlorophyll content in all genotypes was about the same, but fresh weight per unit leaf area was highest in phyB2 and lowest in phyB1 among the three genotypes. This indicates that phyB1 promotes leaf thickness, water conservation, or both, while phyB2 might promote transpiration (creating a net weight loss) or restrict leaf thickening. The conflict between the gene functions of phyB1 and phyB2 could allow the plant to balance its photosynthetic and water needs depending on environmental conditions. More work is needed, however, to specifically assign those roles to the two phyB homologs in tomato.

| In tomato, subfunctionalization of phyB has led to equally important sister genes
The relatively recent duplication of phyB into separate homologs in different species provides a window into how gene duplication can result in different evolutionary trajectories. The PHYB duplications in Arabidopsis and tomato both occurred after the divergence of the Solanaceae and Brassicaceae (Li et al., 2015). In Arabidopsis, comparison of the coding sequences shows 48-56% amino acid identity between PHYA, PHYB, PHYC, and PHYE, but 80% identity between PHYB and PHYD (Clack, Mathews, & Sharrock, 1994). Amino acid identities between PHYB and PHYD in Arabidopsis and between PHYB1 and PHYB2 in tomato are similarly high in the two species (Hauser, Cordonnier-Pratt, Daniel-Vedele, & Pratt, 1995). Functional redundancy between PHYB and PHYD in Arabidopsis is high, but mutation in PHYD enhances the phyB mutant response with respect to leaf morphology, rosette leaf number (Franklin et al., 2003), and shade avoidance (Devlin et al., 1999; Franklin et al., 2003). While single mutation of PHYD in Arabidopsis leads to an increase in hypocotyl length in continuous R and white light, the effect of phyD on the end-of-day (EOD) FR response was negligible until combined with a mutation in PHYB (Aukerman et al., 1997). With respect to leaf morphology and developmental traits, mutation of Arabidopsis PHYD had no or only minor phenotypic consequences, while mutation of PHYB resulted in statistically significant phenotypic change (Aukerman et al., 1997). Analysis of the phyB/D double mutant, however, showed that PHYD contributes residual function to the phenotype in a manner redundant with and subordinate to PHYB (Aukerman et al., 1997). In tomato, divergence of the 5′ cis-regulatory regions of PHYB1 and PHYB2 has resulted in variability in the number and type of light response motifs, suggesting that this variation might be part of the reason for the genes' subfunctionalization. Duplication and gene divergence in tomato, in contrast to Arabidopsis, have resulted in two genes that have taken on specialized functions for a variety of developmentally important traits. This situation is not unlike that in maize.
In maize, the two ZmPHYB homologs showed complete redundancy for their involvement in several morphological traits, such as plant height and stem diameter, while photoperiod-dependent flowering time was regulated only by ZmPHYB2 (Sheehan et al., 2007). Early work describing the phyB1 and phyB2 mutants in tomato had already noted that phyB1 and phyB2 played different roles in early seedling development, but described the genes as largely redundant in older plants (Weller et al., 2000). Our data suggest that in tomato, phyB1 inhibits the auxin responses of phototropism and gravitropism (while phyB2 does not play a role), whereas phyB2 promotes and phyB1 inhibits photosynthesis. We want to caution that, in the absence of multiple alleles of phyB1 and phyB2 in our analysis, it is formally possible that unknown, secondary background mutations in the material could contribute to some of the observations we made in this study.

| CONCLUSIONS
Although phys are evolutionarily old genes and are found in at least two copies, phyA and phyB, in all angiosperm species (Mathews, 2010), functional diversification is an ongoing process. PhyB is the phy homolog that has most recently duplicated again in some species (Mathews, 2010), including Arabidopsis (phyB/phyD), maize (phyB1/phyB2), and tomato (phyB1/phyB2). This latest round of duplication therefore lends itself well to analysis of variation in the subfunctionalization of this important gene between species, and it also provides a recent gene duplication event that plants have exploited for further specialization of their responses to light and the environment.

ACKNOWLEDGEMENTS
We acknowledge funding from the National Science Foundation (IOS-1339222 to AM, PRFB 1523917 to KDC). We thank Bob Peaslee and Amy Replogle for technical help and critical discussions.

CONFLICT OF INTEREST
The authors declare no conflict of interest associated with the work described in this manuscript.
Evaluation of Proinflammatory, NF-kappaB Dependent Cytokines: IL-1α, IL-6, IL-8, and TNF-α in Tissue Specimens and Saliva of Patients with Oral Squamous Cell Carcinoma and Oral Potentially Malignant Disorders
Background: Oral squamous cell carcinoma (OSCC) is a life-threatening disease. It can be preceded by oral potentially malignant disorders (OPMDs). It has been confirmed that chronic inflammation can promote carcinogenesis, and cytokines play a crucial role in this process. The aim of the study was to evaluate interleukin-1alpha (IL-1α), interleukin-6 (IL-6), interleukin-8 (IL-8), and tumor necrosis factor alpha (TNF-α) in tissue specimens and saliva of patients with OSCC and OPMDs. Methods: Cytokines were evaluated in 60 tissue specimens of pathological lesions (OSCCs or OPMDs) and in 7 controls (normal oral mucosa, NOM) by immunohistochemistry, and in the saliva of 45 patients with OSCC or OPMDs and 9 controls (healthy volunteers) by enzyme-linked immunosorbent assays. Results: Immunohistochemical analysis revealed significantly higher expression of IL-8 in OSCC specimens and of TNF-α in OSCCs and OPMDs with dysplasia as compared to NOM. Moreover, expression of TNF-α was significantly higher in oral leukoplakia and oral lichen planus without dysplasia, whereas expression of IL-8 was higher only in oral leukoplakia without dysplasia, in comparison with NOM. Salivary concentrations of all evaluated cytokines were significantly higher in patients with OSCC than in controls. Moreover, levels of IL-8 were significantly higher in the saliva of patients with OPMDs with dysplasia as compared to controls, and in OSCC patients as compared to patients with dysplastic lesions. There was also a significant increase in the salivary concentrations of IL-6, IL-8, and TNF-α in patients with OSCC as compared to patients with OPMDs without dysplasia. Conclusion: The study confirmed that proinflammatory, NF-kappaB dependent cytokines are involved in the pathogenesis of OPMDs and OSCC. Among all assessed cytokines, the most important biomarker of the malignant transformation process within the oral mucosa seems to be IL-8. Further studies on a larger sample size are needed to corroborate these results.

Introduction
Oral cancer is a common disease with an increasing worldwide trend [1]. Histologically, over 90% of malignancies affecting this region are diagnosed as oral squamous cell carcinoma (OSCC) [2]. OSCC is responsible for approximately 4% of all malignancies [3]. It shows invasive behavior and a high risk of metastasis. The mortality rate associated with OSCC is high and has remained unchanged over the past decades [4]. The main reason is late diagnosis. There are some clinically defined precursor lesions, such as oral erythroplakia, oral leukoplakia, oral submucous fibrosis, and oral lichen planus, that can precede cancer development within the oral mucosa. All these lesions should be called oral potentially malignant disorders (OPMDs) [5]. The term was recommended in 2005 during one of the WHO workshops [6]. The identification of OPMDs with a higher risk of malignant transformation, and of OSCCs at an early stage of development, is a matter of great importance and the best way to improve OSCC statistics. Oral carcinogenesis is a complex process in which genetic events result in successive molecular changes that lead to the disruption of cell proliferation, growth, and differentiation [7]. The kinetics of this event is a result of the interactions between tumor and host, especially the immune system [8].
The role of inflammation in carcinogenesis was suggested for the first time by Rudolf Virchow more than 150 years ago [9]. Various studies have confirmed that chronic inflammation can influence cell homeostasis and various metabolic processes, inducing changes at the genomic level, which can promote carcinogenesis [10]. Inflammation stimulates the activation of cytotoxic mediators, such as reactive oxygen species (ROS) and reactive nitrogen species (RNS), which play a major role in DNA damage. DNA damage accumulation is responsible for the initiation of carcinogenesis through the enhancement of genomic instability. Moreover, several inflammatory factors can facilitate the migration and invasion of neoplastic cells, leading to cancer progression [11]. Cytokines are critical regulators of the tumor microenvironment and of chronic pro-tumorigenic inflammation [12]. They are soluble, low molecular weight, multifunctional polypeptides that are produced mainly by cells of the innate and adaptive immune system but also by resident tissue and tumor cells [8]. They influence many aspects of cellular behavior, such as growth, differentiation, and function. Their physiological activities are dysregulated during inflammation and carcinogenesis. Studies have confirmed the crucial role of proinflammatory cytokines in the carcinogenesis process, including the development of lung cancer [13,14], hepatocarcinoma [15], colorectal cancer [16], as well as OSCC [17]. The transcription factor nuclear factor-kappaB (NF-kB) is an early response factor promoting the expression of a series of cytokines with proinflammatory, proangiogenic, and immunoregulatory activity, which play an important role in carcinogenesis. Aberrant NF-kB regulation has been observed in many cancers [18,19]. Studies have demonstrated the activation of NF-kB in OSCC and elevated expression of its downstream proinflammatory cytokines in tissues, serum, tissue-infiltrating lymphocytes (TIL), and cell lines of OSCC, including interleukin-1alpha (IL-1α), interleukin-6 (IL-6), interleukin-8 (IL-8), and tumor necrosis factor alpha (TNF-α) [20]. IL-1α modulates various growth-promoting pathways, including anti-apoptotic signaling and cellular proliferation [21]. It was also observed that IL-1α released from OSCC cells stimulates carcinoma-associated fibroblasts (CAFs) to secrete CCL7, CXCL1, and IL-8, thereby facilitating cancer invasion [22]. IL-6 is a multifunctional cytokine with growth-promoting and anti-apoptotic activity [18,23]. There is evidence that IL-6 regulates the activation of the Janus kinases (JAK) and signal transducers and activators of transcription (STATs), which then stimulate pathways involving mitogen-activated protein kinase (MAPK), which in turn supports cancer development [24]. IL-8, a member of the chemokine family, acts on two receptors, CXCR1 and CXCR2, that are located on tumor-associated macrophages, neutrophils, and cancer cells. Their presence on cancer cells strongly suggests that IL-8 is an important chemokine in the cancer cell environment. The carcinogenic potential of IL-8 results from its ability to recruit neutrophils, its angiogenic potential, its promotion of proliferation and survival, and its protection of cells from apoptosis [24]. TNF-α is a pleiotropic cytokine. It is known that the TNF-TNF receptor system plays an important role in inflammation, angiogenesis, programmed cell death, and proliferation, which are all crucial components of the malignant transformation process [25].
It was also discovered that TNF-α can directly damage cellular DNA and lead to malignant transformation through the induction of reactive oxygen species (ROS) [26]. Additionally, TNF family members contribute to immune suppression [18]. There are some immunohistochemical studies that confirmed the role of proinflammatory, NF-kB dependent cytokines in the malignant transformation process within the oral mucosa; however, the evidence is rather scarce. It was revealed that IL-6 and TNF-α can promote malignant transformation in patients with oral submucous fibrosis [27] and with oral lichen planus [28]. Moreover, it was reported that the expression of TNF-α is significantly increased in lesions exhibiting epithelial dysplasia [29]. In recent years, the role of saliva in the early detection of oral cancer has been intensively studied [30-33]. Because saliva can be collected in an easy and noninvasive way, it is a very attractive diagnostic material [34]. Proinflammatory cytokines have also been investigated in saliva as potential biomarkers of OPMDs and OSCC, and the current results are encouraging [24,35-37]. The aim of the presented study was to evaluate IL-1α, IL-6, IL-8, and TNF-α in tissue specimens and saliva of patients with oral squamous cell carcinoma and with oral potentially malignant disorders, such as oral leukoplakia and oral lichen planus, to confirm the potential of proinflammatory, NF-kappaB dependent cytokines as biomarkers of the malignant transformation process within the oral mucosa.

Study Group
Sixty patients with a diagnosis of OSCC or OPMDs, such as oral leukoplakia and oral lichen planus, were included in the study. They were diagnosed in the Chair of Periodontology and Clinical Oral Pathology and the Department of Oral Surgery, Institute of Dentistry, Jagiellonian University Medical College in Krakow between 2011 and 2015. The diagnosis was made on the basis of clinical and histopathological examination using the WHO criteria [38]. The approval of the Bioethics Committee of the Jagiellonian University (KBET/290/B/2011, KBET/122.6120.183.2015) and the informed consent of the patients were obtained before the collection of saliva and evaluation of tissue specimens. The study was performed in accordance with the Helsinki Declaration of 2008.

Histopathology and Immunohistochemistry
The formalin-fixed, paraffin-embedded blocks of 60 tissue samples collected from patients of the study group were sectioned (2 µm). Normal oral mucosa (NOM) from the margins of 7 formalin-fixed, paraffin-embedded archival blocks of fibromas was used as control. For histopathological examination, the sections were stained with hematoxylin and eosin (H&E). OSCCs were graded as well, moderately, or poorly differentiated using the standard WHO criteria [39]. The criterion for judging the malignant potential of OPMDs is mainly the presence and degree of dysplasia [39]. OPMDs were classified histologically into stages with increasing risk of developing into OSCC, namely mild, moderate, and severe epithelial dysplasia, according to the WHO criteria [6,40]. For immunohistochemistry, the sections were deparaffinized and rehydrated. After antigen retrieval, slides were incubated with the following antibodies: anti-IL-1α rabbit polyclonal IgG (30 min, room temp.) and anti-IL-6 mouse monoclonal IgG (30 min, room temp.), purchased from Santa Cruz Biotechnology Inc. (Dallas, TX, USA), and anti-IL-8 mouse monoclonal IgG (60 min, room temp.) and anti-TNF-α rabbit polyclonal IgG (60 min, room temp.),
purchased from Abcam Plc. Taking into account that the epithelium and stroma contain completely different cells, we analyzed them separately. Depending on the proportion of positively stained cells and the intensity of staining, a semiquantitative immunoreactive score from 0 to 6 was calculated separately for epithelial/cancer cells and stromal cells (Table 1). The overall score was not calculated. The scoring system was developed by the authors based on the literature [41-43].

Laboratory Tests
Whole unstimulated saliva (WUS) was collected from 45 subjects of the study group: 9 patients with OSCC, 7 with oral epithelial dysplasia (OED), 16 with oral leukoplakia without dysplasia (OL), and 13 with oral lichen planus without dysplasia (OLP). Individuals with a history of any systemic inflammatory disease, individuals suffering from inflammatory conditions in the oral cavity (e.g., dental abscess, pericoronitis, gingivitis, periodontitis), patients treated for OSCC in the past, individuals taking drugs that induce hyposalivation (e.g., anticholinergics, antihistamines, antihypertensives, and beta adrenal blockers), and individuals using secretagogues were excluded from this part of the study. None of the lesions had been treated in any manner prior to sample collection. Samples of WUS from nine volunteers without any systemic diseases and without any pathological lesion of the oral mucosa were used as controls. WUS samples were collected between 9.00 and 11.00 a.m. The subjects were instructed to refrain from eating, drinking, using chewing gum, and smoking for at least 90 min before the collection of saliva. Samples were obtained by requesting the subjects to swallow first, tilt their head forwards, and expectorate the saliva into plastic vials for 10 min [44]. Samples were stored at −80 °C and centrifuged at 6,000 rpm for 20 min to remove squamous cells and debris before the biochemical analysis. All laboratory tests were conducted in the Diagnostic Department, Chair of Clinical Biochemistry, Jagiellonian University Medical College, Krakow, Poland.

Statistical Analysis
For categorical variables, frequency and percentage were calculated. For continuous variables, the minimum (Min), maximum (Max), median (Me), and interquartile range (IQR) were calculated. For qualitative data, differences between groups were analyzed by Fisher's exact test. For quantitative data, differences between groups were analyzed by the Kruskal-Wallis test, and post-hoc analyses were performed with Dunn's test. p-values less than 0.05 were considered significant (a minimal code sketch of these tests is shown below). Analyses were performed using the Statistical Package for Social Sciences (SPSS, version 19.0) and the R Project for Statistical Computing (www.R-project.org).

Histopathology and Immunohistochemistry
On the basis of histopathological examination and clinical data, 14 tissue samples were diagnosed as OSCC, 21 as oral leukoplakia (hyper- and/or parakeratosis) without dysplasia (OL), 15 as oral lichen planus without dysplasia (OLP), and 10 as OED (5 cases as hyper- and/or parakeratosis with dysplasia and 5 cases as lichenoid dysplasia). Using the standard WHO criteria, 6 cases of OSCC were classified as well differentiated, another 6 cases as moderately differentiated, and 2 cases as poorly differentiated. Among OED cases, all but one were classified as mild and one as severe epithelial dysplasia according to the WHO criteria. Immunohistochemical staining revealed differences in the distribution of particular cytokines within the epithelium between the different types of lesions (Table 2).
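As referenced in the Statistical Analysis section, the group comparisons can be expressed in a few lines of R. This is a minimal sketch under stated assumptions: the data frame sal, its columns (group, sex, il8), and the use of the FSA package for Dunn's test are illustrative choices, not the authors' actual scripts (the authors also used SPSS).

```r
# Minimal sketch of the group comparisons described under Statistical
# Analysis, assuming a data frame `sal` with one row per subject and
# columns `group` (factor: OSCC, OED, OL, OLP, control), `sex`, and
# `il8` (salivary IL-8 concentration). All names are illustrative.
kruskal.test(il8 ~ group, data = sal)   # omnibus test across groups

# Post-hoc pairwise comparisons with Dunn's test
library(FSA)                            # provides dunnTest()
dunnTest(il8 ~ group, data = sal)

# Fisher's exact test for a categorical variable, e.g., sex by group
fisher.test(table(sal$sex, sal$group))
```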
IL-1α was present within all layers of the epithelium more often in OSCCs than in OPMDs, and it was present within all layers of the epithelium in none of the specimens assessed as normal oral mucosa. In turn, TNF-α was present within all layers of the epithelium in almost all OED and OSCC specimens and in only one third of the specimens assessed as NOM. Moreover, TNF-α was not present in any layer of the epithelium in almost one third of NOM specimens. When only OSCC, OED, and NOM cases were included in the statistical analysis, significant differences in the distribution of IL-8 within the epithelium between the compared lesions were confirmed. IL-8 was present within all layers of the epithelium in almost 65% of OSCC cases and in no layer of the epithelium in any of the specimens assessed as NOM. Analysis of the immunoreactive scores confirmed significant differences in immunoreactivity for IL-8 and TNF-α (Tables 3 and 4). When only OSCC, OED, and NOM cases were compared, immunoreactivity for IL-8 was significantly higher in epithelial/cancer cells and in the stroma of OSCCs in comparison with NOM specimens (p = 0.0073 and 0.032, respectively), whereas immunoreactivity for TNF-α was markedly higher in the epithelium and stroma of OEDs in comparison with NOM cases (p = 0.019 and 0.0038, respectively) and in the epithelium/cancer cells of OSCCs as compared to NOM specimens (p = 0.011). Moreover, immunoreactivity for TNF-α was significantly higher in the stroma of OED cases than in OSCCs (p = 0.0102). When all types of specimens were included in the statistical analysis, significant differences in immunoreactivity for IL-8 in the stroma and for TNF-α in the epithelium and stroma between oral leukoplakia without dysplasia and NOM cases (p = 0.022, p = 0.0017, and p = 0.047, respectively), as well as for TNF-α in the epithelium between oral lichen planus without dysplasia and NOM specimens (p = 0.0071), were also revealed. Below we present photographs of immunohistochemical staining for IL-1α, IL-8, and TNF-α in selected specimens of OSCCs and OPMDs (Figures 1-8). Figure 8. TNF-α: strong brown staining in the epithelium (basal and parabasal layers) and stroma of oral lichen planus without dysplasia (OLP) (20×). Laboratory Tests Subject characteristics are given in Table 5. There were no significant differences in age, sex, cigarette use, or alcohol consumption between the compared groups. Statistical analysis revealed significant differences in the levels of all measured cytokines when only patients with OSCC, patients with OED, and healthy volunteers were compared. Concentrations of IL-1α, IL-6, IL-8, and TNF-α were markedly higher in the saliva of patients with OSCC in comparison with healthy volunteers (p = 0.017, 0.0012, 0.0001, and 0.0012, respectively). Moreover, levels of IL-8 were significantly higher in the saliva of patients with OED as compared to controls (p = 0.0492) and in OSCC patients as compared to patients with OED (p = 0.0345). However, when all groups were analyzed, only the levels of IL-6, IL-8, and TNF-α were markedly higher in patients with OSCC as compared to controls (p = 0.0041, 0.0004, and 0.0041, respectively).
Concentrations of IL-6, IL-8, and TNF-α were also markedly higher in the OSCC group as compared to subjects with oral leukoplakia without dysplasia (p = 0.0012, p < 0.0001, and p = 0.0492, respectively) and oral lichen planus without dysplasia (p = 0.0084, 0.0002, and 0.0212, respectively) (Figure 9). Discussion Alterations in host immunity, inflammation, angiogenesis, and metabolism have been noted as prominent pathological features in patients with oral cancer [45]. NF-κB-dependent cytokines are molecular messengers highly involved in all of these processes [24]. Altered levels of proinflammatory, NF-κB-dependent cytokines have been reported not only in patients with OSCC but also in patients with OPMDs, such as oral leukoplakia, oral lichen planus, and OSF [46]. There are numerous studies in which the levels of proinflammatory cytokines were assessed in the body fluids of patients with OSCC or OPMDs; however, in most of them only one cytokine and one type of OPMD were considered. Moreover, in some of the previous studies the exclusion criteria were not restrictive. In turn, the evidence on the expression of proinflammatory, NF-κB-dependent cytokines in tissue samples of OSCCs and OPMDs is very limited, especially for OPMDs. Thus, the present study is unique. We decided to evaluate a panel of four proinflammatory, NF-κB-dependent cytokines (IL-1α, IL-6, IL-8, and TNF-α) not only in saliva but also in tissue specimens of OSCCs and OPMDs such as oral leukoplakia and oral lichen planus. We compared the expression of IL-1α, IL-6, IL-8, and TNF-α in epithelial and stromal cells between different types of tissue specimens using the immunoreactive score. To the best of our knowledge, this is the first study designed in this way. Moreover, to reduce the risk of interfering variables affecting the salivary concentrations of the assessed cytokines, we implemented strict exclusion criteria. Subjects with acute or chronic inflammatory conditions in the oral cavity, such as dental abscess, pericoronitis, gingivitis, or periodontitis, patients with systemic inflammatory diseases, and patients taking medications that can alter salivary flow were not included in the salivary analysis. All analyzed groups were also comparable in terms of age, gender, cigarette smoking, and alcohol drinking. It should also be underlined that this is the first such study carried out in the Polish population. The results of immunohistochemical staining confirmed the expression of IL-1α, IL-6, IL-8, and TNF-α in OSCCs. The analyzed cytokines were observed within the epithelial/cancer cells of most OSCC cases and in the stroma of all OSCC tissue specimens.
Likewise, Woods et al. confirmed intracellular production of IL-1 and IL-6 in all analyzed invasive OSCCs, whereas Chen et al. detected IL-1α, IL-6, and IL-8 within the keratin-positive malignant epithelium of all analyzed OSCCs in situ [47,48]. In turn, de Oliveira et al. revealed the presence of IL-6 and IL-8 in inflammatory cells at the invasive front of all analyzed OSCCs [8]. These results show that proinflammatory, NF-κB-dependent cytokines, which regulate the innate and adaptive immune response, are produced not only by inflammatory cells in the tumor microenvironment but also by tumor cells. The expression of these proinflammatory and proangiogenic cytokines in OSCCs indicates that they may contribute to the increased pathogenicity of OSCC by providing a growth advantage. The present study also confirmed the expression of IL-1α, IL-6, IL-8, and TNF-α in oral leukoplakia and oral lichen planus specimens. All assessed cytokines were observed in the stroma of every OPMD and in the epithelial cells of most analyzed cases. However, IL-8 was present in the smallest number of OPMD samples compared with the other assessed cytokines. Haque et al. confirmed the expression of IL-1α and IL-6 in the stroma and epithelial cells of oral submucous fibrosis specimens, whereas Sclavounou et al. reported the expression of TNF-α in epithelial cells and proinflammatory cells of most analyzed oral lichen planus specimens [49,50]. These results indicate that proinflammatory cytokines, especially IL-1α, IL-6, and TNF-α, could play an important role in the pathogenesis of OPMDs. The comparison of immunoreactivity for particular cytokines between the different types of tissue specimens analyzed in this study revealed that the expression of IL-8 and TNF-α was markedly increased in OSCCs in comparison with tissue specimens assessed as normal oral mucosa, whereas in OED specimens the expression of TNF-α was notably altered. These results, together with the fact that IL-8 was not present in the epithelial cells of specimens assessed as normal oral mucosa whereas it was present within all layers of the epithelium in most OSCC cases, indicate that IL-8 and TNF-α could play a leading role among proinflammatory, NF-κB-dependent cytokines in the malignant transformation process within the oral mucosa. Because of the small sample sizes and the large disproportion in numbers between subgroups with different grades of OED and different differentiation grades of OSCC, it was not possible to check whether the grade of OED (mild, moderate, or severe) or the differentiation grade of OSCC (well, moderate, or poor) significantly influences the expression of the cytokines. Analysis of saliva revealed markedly higher levels of IL-1α, IL-6, IL-8, and TNF-α in OSCC patients in comparison with healthy individuals when only OSCC, OED, and controls were taken into consideration, whereas only IL-6, IL-8, and TNF-α were higher when all groups were analyzed. These results are in line with other studies. Rajkumar et al. and Lee et al. reported significantly higher salivary levels of IL-6, IL-8, and TNF-α in OSCC patients in comparison with controls [51,52], whereas Rhodus et al. observed significantly higher levels of IL-1α, IL-6, IL-8, and TNF-α in the saliva of patients with OSCC as compared to healthy individuals [20,52]. SahebJamee et al. also observed higher levels of IL-1α, IL-6, IL-8, and TNF-α in the saliva of patients with OSCC in comparison with healthy subjects; however, only the differences in IL-6 concentration were statistically significant [53].
In turn, Punyani and Sathavane reported significantly higher salivary concentrations of IL-8 in OSCC patients in comparison with controls. The levels of IL-8 were also compared by TNM stage and histopathological grading. The levels were highest for stage IV disease; however, the difference between stages was not statistically significant. The mean salivary IL-8 concentration was higher in patients with moderately differentiated squamous cell carcinoma than in patients with well-differentiated squamous cell carcinoma; however, this difference was also not statistically significant [45]. Korostoff et al. analyzed the salivary levels of IL-1α, IL-6, IL-8, and TNF-α in patients with exophytic and endophytic tongue squamous cell carcinoma (TSCC) [54]. They observed an increasing trend for all assessed cytokines from controls to TSCC subjects. All cytokines were markedly elevated in the saliva of patients with endophytic TSCC. Moreover, patients with endophytic TSCC and elevated IL-8 had shorter survival after diagnosis. In the present study, the levels of all analyzed cytokines were higher in the saliva of patients with OPMDs with dysplasia than in subjects without oral mucosal lesions; however, only the differences in IL-8 concentrations were statistically significant (when only OSCC, OED, and healthy controls were taken into consideration). Rhodus et al. observed markedly higher salivary levels of IL-1α, IL-6, IL-8, and TNF-α in patients with oral lichen planus with dysplasia in comparison with the control group [20,53]. Sharma et al. reported markedly higher levels of IL-6 in patients with oral leukoplakia with dysplasia as compared to controls. Moreover, within the leukoplakia group, the IL-6 level was found to increase with the severity of dysplasia [55]. The lack of statistically significant differences in IL-1α, IL-6, and TNF-α concentrations between patients with OED and controls in the present study could be related to the fact that all but one case of OED were classified as mild, whereas in the study of Rhodus et al. all cases were classified as moderate or severe dysplasia. Similar to Rhodus et al., Kaur and Jacobs reported significantly higher levels of IL-6, IL-8, and TNF-α in the saliva of patients with OPMDs (oral leukoplakia, oral lichen planus, and oral submucous fibrosis) in comparison with the control group [46]. They also observed that the salivary levels of IL-6, IL-8, and TNF-α were markedly higher in the advanced stages of OPMDs as compared to the early stages. In turn, the study of Rajkumar et al. revealed significantly higher levels of IL-6 and TNF-α in patients with oral leukoplakia and oral submucous fibrosis in comparison with healthy individuals [52], whereas the differences in IL-8 concentrations between patients with OPMDs and controls were not significant. Unfortunately, the authors of both of these studies did not provide information about the epithelial dysplasia of the analyzed OPMD cases. In the study of Punyani and Sathavane, the salivary levels of IL-8 were higher in patients with oral submucous fibrosis and oral leukoplakia in comparison with controls, but the difference was not statistically significant. It should be underlined that there was no histological evidence of dysplasia in any of the oral submucous fibrosis cases, and mild dysplasia was reported in only five of twelve cases of oral leukoplakia [45].
The results of the present study did not reveal significant differences in the salivary concentrations of IL-1α, IL-6, IL-8, and TNF-α between patients with OPMDs without dysplasia and healthy individuals. In turn, we observed markedly higher levels of IL-8 in OSCC patients in comparison with OED cases. Rhodus et al. reported significantly higher levels of IL-1α, IL-6, IL-8, and TNF-α in OSCC patients as compared to patients with oral lichen planus with dysplasia, whereas Punyani and Sathavane reported markedly higher levels of IL-8 in the OSCC group in comparison with patients with oral leukoplakia, but mild dysplasia was described in only five of twelve cases [20,45,52]. Rajkumar et al. reported significantly higher levels of IL-6, IL-8, and TNF-α in patients with OSCC as compared to patients with oral leukoplakia and oral submucous fibrosis; however, they gave no information about epithelial dysplasia [56]. There are two large-scale studies in which the diagnostic value of IL-8 and IL-6 for OSCC was assessed. Rajkumar et al. analyzed IL-8 levels in the saliva of patients with OSCC and OPMDs (oral leukoplakia and oral submucous fibrosis). This study revealed significantly higher levels of IL-8 in patients with OSCC and OPMDs as compared to healthy individuals [57]. Moreover, they observed a significant increase in the levels of salivary IL-8 in OSCC patients in comparison with OPMDs. Most cases of OPMDs were dysplastic. Receiver operating characteristic curve analysis found salivary IL-8 to have superior sensitivity in detecting OSCC. A significant increase in IL-8 levels according to the histological grading of OSCC was also observed. Analogous results for IL-6 were reported by Dineshkumar et al. [58]. Conclusions The present study confirmed that proinflammatory, NF-κB-dependent cytokines are involved in the pathogenesis of OPMDs and OSCC. An increase in the salivary levels of IL-6, IL-8, and TNF-α could be a useful indicator of the malignant transformation process within the oral mucosa. The higher levels of proinflammatory, NF-κB-dependent cytokines, especially IL-8 and TNF-α, in the saliva of patients with OSCC or OPMDs could be caused by the increased expression of these cytokines in pathological tissues. Among all assessed cytokines, the most important biomarker of the malignant transformation process within the oral mucosa appears to be IL-8: it was present within all layers of the epithelium in most OSCCs, was not present in the epithelial cells of specimens assessed as normal oral mucosa, and was observed in the smallest number of OPMD samples. Moreover, its salivary concentration was significantly higher in patients with OSCC as compared not only to healthy subjects but also to patients with OPMDs with dysplasia. Further studies with larger sample sizes are required to confirm the utility of proinflammatory, NF-κB-dependent cytokines as screening/diagnostic markers for routine use in clinical practice.
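The receiver operating characteristic (ROC) evaluation of salivary IL-8 cited above can be made concrete with a short R sketch. Everything here is a simulated placeholder: the data, the cutoff, and the pROC package are illustrative assumptions, not the approach reported in the cited large-scale studies.

```r
# Illustrative sketch of a salivary-biomarker ROC analysis like the one cited
# above for IL-8. Data are simulated placeholders; pROC is one common
# implementation and not necessarily what the cited studies used.
library(pROC)

set.seed(3)
status <- factor(rep(c("control", "OSCC"), each = 30))
il8    <- c(rlnorm(30, meanlog = 5.5), rlnorm(30, meanlog = 6.5))  # pg/mL

roc_il8 <- roc(status, il8, levels = c("control", "OSCC"))
auc(roc_il8)  # area under the ROC curve as an index of diagnostic value

# Cutoff maximizing sensitivity + specificity (Youden-style "best" point)
coords(roc_il8, "best", ret = c("threshold", "sensitivity", "specificity"))
```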
Reliability of task‐evoked neural activation during face‐emotion paradigms: Effects of scanner and psychological processes
Reliability of task‐evoked neural activation during face‐emotion paradigms: Effects of scanner and psychological processes Abstract Assessing and improving test–retest reliability is critical to efforts to address concerns about the replicability of task‐based functional magnetic resonance imaging. The current study uses two statistical approaches to examine how scanner and task‐related factors influence the reliability of neural responses to face‐emotion viewing. Forty healthy adult participants completed two face‐emotion paradigms at up to three scanning sessions across two scanners of the same build over approximately 2 months. We examined reliability across the main task contrasts using Bayesian linear mixed‐effects models performed voxel‐wise across the brain. We also used a novel Bayesian hierarchical model across a predefined whole‐brain parcellation scheme and subcortical anatomical regions. Scanner differences accounted for minimal variance in temporal signal‐to‐noise ratio and task contrast maps. Regions activated during the task at the group level showed higher reliability relative to regions not activated significantly at the group level. Greater reliability was found for contrasts involving conditions with clearly distinct visual stimuli and associated cognitive demands (e.g., face vs. nonface discrimination) compared to conditions with more similar demands (e.g., angry vs. happy face discrimination). Voxel‐wise reliability estimates tended to be higher than those based on predefined anatomical regions. This work informs attempts to improve reliability in the context of task activation patterns and specific task contrasts. Our study provides a new method to estimate reliability across a large number of regions of interest and can inform researchers' selection of task conditions and analytic contrasts. | INTRODUCTION Concerns about replicability (Open Science Collaboration, 2015) in functional magnetic resonance imaging (fMRI) work are growing (e.g., Poldrack et al., 2017). Improving test-retest reliability is a cornerstone of addressing these concerns. A recent meta-analysis (Elliott et al., 2019) suggests that the test-retest reliability of fMRI task contrasts is often relatively poor (e.g., intra-class correlation coefficients [ICCs] < .4). The current study uses two statistical approaches to examine how scanner effects and task-related factors influence reliability. The study focuses specifically on task-evoked activation during two face-emotion-viewing paradigms. Across experimental paradigms, several factors are known to influence fMRI reliability. These include scanner- or site-related factors, participant-related factors, and time-related change. Several studies have shown that only small proportions of variance tend to be affected by scanner differences (e.g., Gountouna et al., 2010; Gradin et al., 2010; Yendiki et al., 2010). However, as such studies can often confound scanner and practice effects, we use a pseudo-random assignment to two scanners across three time points, thereby separating scanner- and time-related variance. Face-emotion paradigms are often used as affect-evoking stimuli in studies of individual differences. In prior studies, the reliability of fMRI face-emotion paradigms varied by task condition. For example, prior work typically finds moderate reliability for face vs. baseline contrasts, but poor reliability for contrasts between specific face-emotion types, for example, angry vs.
neutral (Plichta et al., 2012; Sauder, Hajcak, Angstadt, & Phan, 2013; van den Bulk et al., 2013; White et al., 2016). The current study utilizes two tasks that differ in their cognitive demands. One task involves implicit face-emotion processing, such that face-emotion monitoring is irrelevant to task performance; the other involves explicit face-emotion judgments. Many earlier reliability studies focused on a priori regions-of-interest (ROIs), whereas newer statistical methods have become available for whole-brain reliability analyses. That said, common approaches to multiple-comparisons correction for whole-brain analyses, for example cluster correction, rely on profound data reduction that may reduce reliability (Chen et al., 2019; Woo, Krishnan, & Wager, 2014). Significance tests are conducted independently per voxel; this massive multiplicity is accounted for by estimating the probability of a number of contiguous voxels all exhibiting significant effects. A complementary approach is to leverage the substantial information present in fMRI scans by using rational, Bayesian principles that mitigate data reduction by accounting for uncertainty (Chen, Taylor, Cox, & Pessoa, 2020). Therefore, this study includes a recent translation of Bayesian methods for group-level fMRI analysis, measuring reliability through two approaches. First, we examined a conventional, voxel-wise linear mixed-effects model with cluster-based correction. Second, we used a hierarchical Bayesian approach that examines ROIs across the whole brain, defined independently of the study data. For this second approach, results are reported based on an open-source, publicly available Bayesian hierarchical model developed for fMRI (Chen et al., 2019). This method enables test-retest analyses that incorporate all ROIs into one model to mitigate the issue of multiple testing over many units. The current study examines 40 healthy adult participants using two face-emotion paradigms, one requiring explicit face-emotion labeling and one involving implicit, task-irrelevant face-emotion processing. Participants completed up to three scanning sessions over approximately 2 months. We examine reliability using Bayesian linear mixed-effects models performed voxel-wise across the brain and a novel Bayesian hierarchical model in predefined ROIs. Participants were pseudo-randomized and scanned across two comparable 3T GE MRIs, as would be common in single-site or harmonized multi-site studies. We expect scanner to account for minimal variance in temporal signal-to-noise ratio (tSNR) and fMRI task contrast maps. Moreover, we expect higher reliability among regions activated during the task at the group level (i.e., regions showing significant task contrast activity at the first scan session) relative to regions not activated significantly at the group level. Finally, we expect to see greater reliability for contrasts involving conditions with clearly distinct visual stimuli and associated cognitive demands (e.g., face vs. nonface discrimination) compared to conditions with more similar demands (e.g., angry vs. happy discrimination). | Participants Forty-five participants enrolled in an institutional review board-approved protocol at the National Institute of Mental Health in Bethesda, MD. Participants provided written informed consent. All participants were >18 years old (Age: M = 31.95 years, SD = 9.39; 58% female).
Participants were excluded for any current psychiatric condition, as determined by the Structured Clinical Interview for DSM-IV Disorders (Spitzer, Williams, Gibbon, & First, 1992; see Figure S1). | Task paradigms Participants completed up to three MRI sessions across two scanners in an ABA or BAB order, pseudo-randomized across participants. During each scan, participants completed two tasks: a visual search task with emotional distractors and an explicit face-emotion labeling task. The order of task completion was counterbalanced across participants (but consistent within-participant across sessions). | Visual search task This task, which was modified and adapted for fMRI from a previously used paradigm (Haas, Amso, & Fox, 2017), required participants to find a target stimulus in a search array following an emotional face image. Each trial consisted of a grayscale face stimulus (angry, happy, or scrambled control) presented for 300 ms, then a 600 ms fixation cross, followed by a visual search array with one black target bar slanted left or right and 0, 4, or 29 distractors (slanted or vertical white bars and vertical black bars) displayed for a 2,000 ms response window (Figure 1). Participants were required to find the target bar and indicate the direction in which it was slanted (left or right) via a response-box button press. Emotional face stimuli were images of 16 actors displaying angry or happy expressions, drawn from an available stimulus set (Tottenham et al., 2009). Face stimuli were cropped to a face-shaped oval and set to grayscale. The pixels of a face stimulus were scrambled to create a control stimulus matched on visual properties but without any face properties. A fixation cross was presented between trials for a jittered inter-trial interval (ITI; min = 500 ms; the ITI distribution followed an exponential decay curve). 2.2.2 | Face-emotion labeling task This task was adapted for fMRI from a previously used behavioral paradigm (Stoddard et al., 2016). Participants were required to judge the emotion of a composite male face drawn from the Karolinska Directed Emotional Faces (Lundqvist & Litton, 1998). Stimuli were 15 face-emotion expressions equally spaced/morphed on a continuum from prototypically angry to prototypically happy. On each trial, a face morph was presented for 150 ms, followed by a 250 ms white-noise mask and then a response screen with a fixation cross for 2,000 ms (Figure 1). Participants had to indicate whether the briefly presented face displayed an angry or happy expression via a button-box press. A fixation cross was presented for a jittered ITI between trials (min 500 ms; the ITI distribution followed an exponential decay curve). Stimulus presentation and jitter orders were optimized and pseudo-randomized using AFNI's make_random_timing.py program. Participants completed a total of 540 trials across four runs, including 90 fixation trials (i.e., each morph was presented 30 times). Each run was 412 s long, with ~10 s of fixation at the beginning and end of each run. | Behavioral data Accuracy and reaction time data were examined for each task; see details by task below. | Visual search task Accuracy and mean reaction time (to identify the slant of the target bar) were calculated as a function of condition: face-emotion (angry, happy, scrambled control) and search array size (1, 5, and 30 bars). Sessions with accuracy <70% and/or >15% nonresponses were excluded (2 sessions for 1 participant).
The effects of emotion, search array size, and their interaction were of interest here. | Face-emotion labeling task As in prior work (Stoddard et al., 2016), a four-parameter logistic curve was fit to each participant's choice-response data (parameters included the upper limit, lower limit, slope, and inflection point of the logistic curve, that is, the morph/emotional intensity where judgments switch from predominantly happy to angry, adjusted for the maximum probability of either judgment). Figure 1. Schematics of in-scanner tasks. Left panel: visual search task. Right panel: face-emotion labeling task. An inflection point of 8 indicates no bias (the middle of morphs 1-15), whereas a lower inflection point indicates a hostile interpretation bias, that is, a tendency to judge ambiguous faces as angry rather than happy. We examined both the inflection point and the slope of the logistic fit to the behavioral data. Reaction time was examined as a linear slope (coding emotion intensity from angry to happy) and a quadratic slope (coding ambiguity from ambiguous to overt) across face morphs. Additional reaction time indices (i.e., reaction time difference scores) are presented in the Supporting Information. Data from participants who failed to correctly identify the emotional expression of at least 70% of the overtly angry and happy facial expressions, or who missed more than 15% of responses, were excluded (8 sessions across 6 participants). | Behavioral data Test-retest reliability of task behavior across the three scanning sessions was tested in a Bayesian framework using linear mixed-effects models in R v3.5.0 (R Core Team, 2015) with the blme package (Chung, Rabe-Hesketh, Dorie, Gelman, & Liu, 2013). Models included a random effect for participant, modeled with Gamma priors (shape = 2, rate = 0.5), and three fixed effects: one for scanner, one for visit, and one for the order of task acquisition in the scanner. Intraclass correlation coefficients were estimated as the proportion of participant-specific variance out of the total variance (Bartko, 1966; Shrout & Fleiss, 1979). This approach mirrored that used at the voxel level in the fMRI analyses described below. ICCs were calculated for the task contrasts of interest (see below). | Visual search task ICCs were calculated for two reaction time contrasts: faces vs. scrambled control difference scores, and the log-transformed slope across search array size for each emotion and the scrambled control stimuli. The Supporting Information additionally contains ICCs for array size 30 vs. 1 and happy vs. angry difference scores and log-transformed slopes across all emotions, as well as the ICC for the difference in accuracy for search array size 30 vs. 1. | Face-emotion labeling task ICCs were calculated for the inflection point and slope of the choice-response data, as well as for the linear and quadratic slopes of the reaction time data across face-emotion morphs. Refer to the Supporting Information for additional contrasts (e.g., ambiguous vs. overt and happy vs. angry faces). | Acquisition Neuroimaging data were collected on two 3T General Electric Signa 750 scanners, each using a 32-channel head coil, with identical acquisition sequences. After a sagittal localizer scan, an automated shim calibrated the magnetic field to reduce signal dropout due to susceptibility artifact.
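Before turning to the imaging pipeline, the behavioral ICC described above (participant-specific variance as a proportion of total variance, with scanner, visit, and task order as fixed effects) can be sketched in R. The paper fit these models with blme and Gamma priors on the random effects; the sketch below uses plain lme4 for brevity, and the data are simulated placeholders.

```r
# Minimal sketch of the behavioral ICC described above: the proportion of
# participant-specific variance out of total variance from a linear
# mixed-effects model. The paper used blme with Gamma priors; plain lme4 is
# used here for simplicity, and all data are simulated placeholders.
library(lme4)

set.seed(4)
behav <- expand.grid(subject = factor(1:40), visit = factor(1:3))
behav$scanner    <- factor(ifelse((as.numeric(behav$subject) +
                                   as.numeric(behav$visit)) %% 2, "A", "B"))
behav$task_order <- factor(ifelse(as.numeric(behav$subject) %% 2,
                                  "search_first", "label_first"))
behav$y <- rnorm(40)[behav$subject] +               # stable subject effects
           0.1 * as.numeric(behav$visit) +          # mild practice effect
           rnorm(nrow(behav), sd = 1)               # session noise

fit <- lmer(y ~ scanner + visit + task_order + (1 | subject), data = behav)

vc  <- as.data.frame(VarCorr(fit))
icc <- vc$vcov[vc$grp == "subject"] /
       (vc$vcov[vc$grp == "subject"] + vc$vcov[vc$grp == "Residual"])
icc  # consistency-type ICC: visit and scanner are handled as fixed effects
```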
BOLD signal was measured by T2*-weighted echo-planar imaging at a voxel resolution of 2.5 × 2.5 × 3.0 mm. | Imaging preprocessing Neuroimaging data were analyzed using Analysis of Functional NeuroImages (AFNI; http://afni.nimh.nih.gov/afni/; Cox, 1996) v18.3.03 with standard preprocessing, including despiking, slice-timing correction, distortion correction, alignment of all volumes to a base volume (MIN_OUTLIER), nonlinear registration to the MNI template, spatial smoothing to a 6.5 mm FWHM kernel (using the blur_to_fwhm flag), masking, and intensity scaling. Smoothing to a desired blur size ensures that a similar smoothness is achieved across scanners and sessions, rather than adding a set blur kernel to acquired images that may vary in initial smoothness. First-level models were created with generalized least squares time-series fits with restricted maximum likelihood estimation of the temporal autocorrelation structure (3dREMLfit). This work utilized the computational resources of the NIH HPC Biowulf cluster (http://hpc.nih.gov). This processing and the first-level general linear models (GLMs) controlled for head motion. Specifically, we censored any pair of successive TRs where the summed head displacement (the Euclidean norm of the derivative of the translation and rotation parameters) between those TRs exceeded 0.5 mm. TRs where more than 10% of voxels were time-series signal outliers were also excluded. Sessions were excluded if the average motion per TR after censoring was >0.25 mm or if >15% of TRs were censored for motion/outliers. Additionally, six head-motion parameters were included as nuisance regressors in individual-level models. Temporal signal-to-noise ratio (tSNR = average signal / standard deviation of the noise [GLM residuals]) maps were created from the first-level model output. Visual search task Regressors for the nine trial types of interest (3 emotions × 3 search array sizes) and error trials were included in first-level GLMs. These were modeled with a block hemodynamic response function (BLOCK(2.9,1)). Four first-level contrasts were created for each participant to examine: task vs. fixation, faces vs. scrambled control, search array 30 vs. 1, and a log-linear slope across search array size. Figure S2 displays additional contrasts of angry vs. happy, a log slope per emotion, and 30 vs. 1 search array. Face-emotion labeling task Fifteen regressors of interest were included to represent the 15 face-emotion morphs, modeled with a block hemodynamic response function (BLOCK(0.15,1)). Separately, two amplitude-modulated regressors weighting face morph in a linear and a quadratic fashion (AM2), as well as error trials modeled without amplitude modulation, were coded. Three first-level contrasts were created for each participant: task (15 face-emotion regressors) vs. fixation, the amplitude-modulated linear slope across morphs (coding emotion intensity from angry to happy), and the amplitude-modulated quadratic slope across morphs (coding ambiguity from ambiguous to overt). Additional subtraction contrasts, ambiguous vs. overt faces and angry vs. happy faces, are presented in the Supporting Information. | Imaging analysis Activation at Session 1 Linear mixed-effects models (3dLME; Chen et al., 2013) with participant as a random effect were computed for the first scan session to examine group-average activity for each task condition. Models included scanner and task order (indicating which behavioral task was performed first in the scanner) as fixed-effects covariates.
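The motion-censoring rule described above is simple enough to illustrate directly: compute the Euclidean norm of the frame-to-frame derivative of the six rigid-body motion parameters and censor both TRs of any pair whose displacement exceeds 0.5 mm. The R sketch below implements that rule on a simulated motion matrix; AFNI performs this internally, so the code is only a stand-alone illustration.

```r
# Sketch of the censoring rule described above: censor any pair of successive
# TRs where the Euclidean norm of the derivative of the six motion parameters
# exceeds 0.5 mm. 'motion' is a simulated TR-by-6 matrix of rigid-body
# parameters (translations and rotations); AFNI does this internally.
censor_motion <- function(motion, limit = 0.5) {
  d     <- diff(motion)               # derivative between successive TRs
  enorm <- sqrt(rowSums(d^2))         # Euclidean norm per TR transition
  bad   <- which(enorm > limit)       # transitions exceeding the limit
  sort(unique(c(bad, bad + 1)))       # censor both TRs of each flagged pair
}

set.seed(6)
motion <- matrix(rnorm(300 * 6, sd = 0.1), ncol = 6)  # placeholder data
censored_trs <- censor_motion(motion)
length(censored_trs) / nrow(motion)  # sessions with >15% censored are dropped
```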
Monte Carlo simulations were performed using AFNI's 3dClustSim to correct for multiple comparisons. All analyses were restricted to a whole-brain mask of 98,386 voxels where 90% of participants (completing either/both tasks) had usable data at Session 1. Smoothness of the residuals was estimated based on a Gaussian plus mono-exponential spatial autocorrelation function (3dFWHMx with the -acf flag) for all participants and averaged, yielding an effective smoothness of FWHM = 9.14 mm (ACF parameters: a = 0.61, b = 3.37, c = 10.88). Two-sided thresholding was examined for whole-brain tests with first-nearest-neighbor clustering (NN = 1). To obtain a whole-brain family-wise error correction of p < .05, all results were thresholded at a voxel-wise p < .001 and a cluster extent of k = 20 voxels. Voxel-wise test-retest assessment Bayesian linear mixed-effects models (3dLME; Chen et al., 2018) were used to compute voxel-wise ICCs of BOLD activation across the three MRI sessions. The Bayesian ICC approach has been demonstrated to address potential issues in traditional ICC estimates (e.g., negative ICC values, missing data, confounding effects). Linear mixed-effects models included fixed effects for task order and visit, and random effects with Gamma priors (Chen, Saad, Britton, Pine, & Cox, 2013) for participant and scanner, to estimate the proportion of participant, scanner, and residual error variance per voxel for tSNR and task vs. baseline (ICCs with absolute agreement, ICC[2,1]; Shrout & Fleiss, 1979). For task contrasts, we used ICCs with the consistency formulation (ICC[3,1], examining consistency in rank rather than absolute value, which accounts for systematic changes over time, such as practice effects), with participant as a random effect and fixed effects for scanner, task order, and visit. For display purposes, ICC maps of participant-specific variance were binned into color schemes representing "poor" (ICC < .4), "fair" (ICC = .4-.6), "good" (ICC = .6-.75), and "excellent" (ICC > .75) test-retest reliability. Conjunction maps (Figure S2) were created for display purposes to illustrate the overlap between brain regions that were robustly activated by the task (at the first scanning session; cluster-corrected) and those reliably activated across scanner and time (ICC > .4). To statistically test whether more active regions are also more reliable, we used AFNI's 3ddot function to examine whole-brain voxel-wise correlations between first-scanning-session tSNR or task activation and their associated ICC maps. Additionally, we examined associations between mean tSNR at the first scanning session and the task vs. baseline ICC maps to assess how tSNR may influence task reliability. AFNI's 3ddot provides a single correlation coefficient describing the association between two voxel-wise maps. ROI-based test-retest assessment As an alternative to voxel-wise testing with cluster-based multiple-comparisons correction, we conducted ICC analyses across 214 ROIs covering the whole brain (defined independently of our reliability estimates). These included 200 parcels from a published cortical parcellation (Schaefer et al., 2018) and 14 subcortical ROIs from the Harvard-Oxford probabilistic atlas (75% probability for defining the hippocampus; 50% probability for defining other regions). Contrast activity was extracted from all ROIs across the three scanning sessions for both tasks and used in these analyses.
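The distinction drawn above between the absolute-agreement ICC(2,1) and the consistency ICC(3,1) is easy to demonstrate on a subjects-by-sessions matrix. The R sketch below uses the psych package on simulated ROI contrast estimates; this is purely illustrative (the paper computed its ICCs with AFNI/blme-based mixed models, not with psych), and the built-in drift term mimics a practice effect, which penalizes ICC(2,1) but not ICC(3,1).

```r
# Illustrative contrast between the two ICC formulations used above, on a
# simulated 40-subjects x 3-sessions matrix of ROI contrast estimates.
# psych::ICC is used only for illustration; the paper used AFNI/blme models.
library(psych)

set.seed(2)
true_effect  <- rnorm(40)                        # stable participant effects
roi_contrast <- sapply(1:3, function(s)          # three scanning sessions
  true_effect + 0.2 * s + rnorm(40, sd = 0.8))   # '+ 0.2*s' mimics practice

res <- ICC(roi_contrast)
res$results[res$results$type %in% c("ICC2", "ICC3"), c("type", "ICC")]
# ICC2 (absolute agreement) is lowered by the session-wise shift;
# ICC3 (consistency) discounts systematic change over visits.
```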
ICCs at the ROI level were inferred through a Bayesian multilevel model that integrated all regions (Chen et al., 2019, 2020). Specifically, each effect was decomposed into components associated with the variability across subjects, visits, and regions, while the scanner and task effects were modeled as covariates, with a Bayesian multilevel formulation of the form y_ijk = a_0 + a_1·S + a_2·T + ξ_0i + ξ_1i·S + ξ_2i·T + η_j + γ_ij + ζ_0k + ζ_1k·S + ζ_2k·T + μ_ik + μ_jk + ε_ijk, where a_0, a_1, and a_2 code the intercept, scanner (S), and task (T) effects, respectively; ξ_0i, ξ_1i, and ξ_2i represent the intercept, scanner, and task effects during the ith session (visit); η_j models the effect of the jth subject; γ_ij characterizes the effect of the jth subject during the ith session; ζ_0k, ζ_1k, and ζ_2k are the intercept, scanner, and task effects at the kth ROI; the μ_ik and μ_jk terms are the effects of the ith session at the kth ROI and of the jth subject at the kth ROI, respectively; finally, ε_ijk is the residual term. With a Gaussian assumption for the cross-session, cross-participant, and cross-ROI effects, their interactions, and the residuals, the Bayesian model is numerically solved through Markov chain Monte Carlo simulations using the R package brms (Bürkner, 2017) in Stan (Carpenter, 2017). The ICC at the kth ROI was assessed through the mean, standard error, and quantile interval of the corresponding posterior density, based on the variances (σ²) of the model components, as the proportion of subject-specific variance out of the total variance at that ROI: ICC_k = σ²_subject(k) / (σ²_subject(k) + σ²_session(k) + σ²_residual(k)). 3.2 | Imaging data | Scanner effects on tSNR We first investigated scanner effects on tSNR using both voxel-wise analyses and ROIs covering the whole brain. Voxel-wise analysis Average tSNR (at the first scan) for the visual search task was M = 212.84 (SD = 28.58), and for the face-emotion labeling task, M = 223.33 (SD = 28.01). For both tasks, we found participant-specific variance in tSNR to be highly reliable across the three scanning sessions (Figure 2a). Higher ICCs for scanner-specific relative to participant-specific effects were seen only in white matter. The mean tSNR map (at the first session) was highly correlated with the voxel-wise participant-specific ICCs for both paradigms (visual search: r = .92; face-emotion labeling: r = .89); that is, voxels with higher tSNR were more reliable over scans. Refer to Tables S11 and S13 for a list of participant-specific ICCs for each of the 200 cortical parcels and 14 subcortical ROIs for each task, presented alongside the conventional linear mixed-effects approach for each parcel. | Reliability of task contrasts We next investigated the reliability of the task contrasts for each paradigm utilizing both voxel-wise and ROI analyses. Table 1 contains "at-a-glance" summaries of the reliability estimates of the main behavioral indices and fMRI contrasts for each task; detailed tables can be found in Tables S3-S6. Voxel-wise analysis of visual search task First, for the task vs. baseline contrast, scanner-associated variance was minimal, with no scanner-associated variance surpassing the threshold of ICC > .4 (Figure 2b). Reliable participant-specific variance was observed in visual, parietal, and prefrontal cortices, including the inferior frontal and middle frontal gyri. ICCs for the task vs. baseline contrast correlated positively with both mean tSNR (r = .82) and task vs. baseline activity (r = .60) at the voxel-wise level. Next, two task contrasts of interest were examined (Figure 2c). Faces vs. scrambled contrast signal in the visual cortex/fusiform gyrus was both active at the first session and reliable at ICC > .4.
Note that the right amygdala was also active at the first session but was not reliable at a threshold of ICC > .4. Average faces vs. scrambled contrast activity at the first session was weakly correlated with the associated reliability map (r = .29) at the voxel-wise level. The log-transformed slope analysis revealed contrast signal in visual, parietal, and bilateral dorsolateral prefrontal cortex (dlPFC) regions, signal that was also reliable across time. The anterior insula showed significant activation at the first session but was not reliable at ICC > .4. Average log-transformed slope activity at the first session was correlated with the associated reliability map (r = .55). Additional contrasts (angry vs. happy and log slopes per emotion) and tables detailing group-level activation at the first scanning session and clusters of ICC > .4 are presented in Figure S2. Voxel-wise analysis of face-emotion labeling task As above, scanner-associated variance was minimal for the task vs. baseline contrast, and reliable participant-specific variance was observed in visual, motor, parietal, and prefrontal cortices (Figure 2b). Figure 2. Voxel-wise analysis. (a) Scanner effects on temporal signal-to-noise ratio (tSNR) and (b) task versus baseline contrasts. For both tasks, participant-specific variance in tSNR was highly reliable over time; higher ICCs for scanner-specific relative to participant-specific effects were seen only in white matter. (c) Conjunction maps between group-level activation for the main task contrasts at the first session (at a corrected significance level of .05 based on voxel-wise p < .001) and ICC maps at a threshold of ICC > .4. Task vs. baseline ICCs positively correlated with both mean tSNR (r = .84) and task vs. baseline activity at the first session (r = .65). Next, two task contrasts of interest were examined (Figure 2c). The linear slope across face morphs (coding emotion intensity from angry to happy) reliably tracked motor response in the bilateral motor cortex. Average linear slope activity at the first session was correlated with the associated reliability map (r = .61). For the quadratic slope across morphs (coding ambiguity from ambiguous to overt), reliable signal (ICC > .4) overlapped with regions of significant activation at the first session in the bilateral dlPFC/anterior insula and the supplementary motor area/anterior cingulate cortex (ACC). Average quadratic slope activity at the first session was correlated with the associated reliability map (r = .48). Additional contrasts (i.e., difference values: ambiguous vs. overt faces, happy vs. angry faces) and tables detailing group-level activation at the first scanning session and clusters of ICC > .4 are presented in Figure S2. Figure 3. Surface renderings of unthresholded maps of ROI ICCs for both tasks using the Bayesian hierarchical model. (a) tSNR and (b) task versus baseline contrasts showed reliable participant-specific variance, while the main task contrasts (faces versus scrambled contrast signal and the log-transformed slope) exhibited patterns of largely "poor" reliability. ROI-based analysis Similar test-retest results were observed in the Bayesian multilevel analyses of ROIs covering the whole brain for both tasks. Figure 3b,c displays surface renderings of maps of ROI ICCs for each contrast. Refer to Tables S12 and S14 for the full list of ICCs for each of the 200 cortical parcels and 14 subcortical ROIs.
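As a concrete, and necessarily approximate, sketch of how the ROI-level hierarchical model behind these results can be specified, the R code below uses brms, the package named in the Methods. The data frame and its construction are placeholders, and the random-effects formula follows the verbal description of the model rather than the authors' exact code, so treat it as an assumption-laden illustration of Chen et al. (2019), not a reproduction.

```r
# Approximate sketch of the ROI-level Bayesian multilevel ICC model described
# in the Methods, fit with brms as in the paper. The data and the exact
# random-effects structure are assumptions based on the verbal description.
library(brms)

set.seed(5)
d <- expand.grid(subject = factor(1:40), session = factor(1:3),
                 roi = factor(1:214))
d$scanner    <- factor(ifelse(as.numeric(d$session) %% 2, "A", "B"))
d$task_order <- factor(ifelse(as.numeric(d$subject) %% 2,
                              "search_first", "label_first"))
d$y          <- rnorm(nrow(d))  # placeholder contrast estimates

fit <- brm(
  y ~ 1 + scanner + task_order +
    (1 + scanner + task_order | session) +  # session-level effects (xi terms)
    (1 | subject) +                         # subject effects (eta)
    (1 | subject:session) +                 # subject-by-session effects (gamma)
    (1 + scanner + task_order | roi) +      # ROI-level effects (zeta terms)
    (1 | session:roi) + (1 | subject:roi),  # session/subject-by-ROI (mu terms)
  data = d, chains = 4, cores = 4           # slow on data of this size
)

# ROI-wise ICCs then follow from the posterior variance components
# (subject-related variance over total variance), summarized by their
# posterior mean, standard error, and quantile intervals.
```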
| DISCUSSION This study examined the test-retest reliability of neural responses during two face-emotion paradigms, one requiring explicit, task-directed face-emotion labeling, and one involving implicit, task-irrelevant face-emotion processing. Three key findings emerged. First, scanner effects accounted for minimal variance in temporal signal-to-noise ratio (tSNR) and fMRI activity maps. Second, regions showing significant task-contrast activity showed higher reliability than regions that did not show strong task-related activity at the group level. Finally, across both tasks, we found greater reliability for task contrasts involving conditions with clearly distinct visual stimuli and associated cognitive demands (e.g., face vs. non-face discrimination) compared to conditions with more similar demands (e.g., angry vs. happy discrimination). Table 1. Summary of reliability estimates for main behavioral indices and fMRI task contrasts. Note: This table provides an "at-a-glance" summary of behavioral and neural reliability findings alongside group-level voxel-wise activation patterns at the first scan session for the visual search and emotion labeling tasks. The test-retest reliability of the main behavioral indices is noted, that is, the intra-class correlation coefficient (ICC) of participant-specific variance. Full behavioral reliability results are presented in Tables S1 and S2. A brief descriptive summary of regions exhibiting at least "fair" reliability (ICC > .4) in voxel-wise analyses for the main task contrasts is also presented; full results are presented in Tables S3-S6. Variability in tSNR and activation across scanners is undesirable for multi-scanner/multi-site studies. Previous work has generally reported relatively little systematic variability in fMRI signal across scanners (Noble et al., 2017), specifically in subtraction contrasts (Nielson et al., 2018). However, some studies combining data across scanners of different field strengths and/or from different vendors or models find larger scanner effects (Friedman, Glover, Krenz, Magnotta, & First, 2006). Although we found substantial white matter variance to be scanner-specific, we found little variance accounted for by scanner in gray matter. Our study employed scanners from the same vendor, as is typical for single-site or harmonized multi-site studies; nonetheless, it is likely that effects would be larger for studies with less consistent hardware. Continuing to examine possible systematic scanner differences is important, as differences may also be vendor-specific. Different software solutions can be adopted to harmonize systematic scanner differences without removing other variance of interest. Alternatively, including scanner as a covariate can help partition scanner-associated variance. Past studies have typically found reliability estimates of task-based imaging to be relatively poor across commonly used tasks. One possibility is that robust group-level activity in some regions is not stable within an individual over time (i.e., activity is robust but not reliable). In contrast, other regions may show stronger between-individual differences (leading to lower/less robust mean signal) that allow for variability that is consistent across time (Chen et al., 2021b; Hedge, Powell, & Sumner, 2018).
Hence, sub-optimal levels of reliability may, for some tasks, derive from design features that aim to maximize group-level activation (thereby implicitly minimizing individual differences; Hedge et al., 2018; Lissek, Pine, & Grillon, 2006). Although using an a priori whole-brain atlas allows us to define ROIs independent of the current data and avoid circularity, a predefined anatomical or parcellation atlas may not best capture the most reliable functional units for a given task (and estimates may vary based on the chosen atlas). Nonetheless, our two statistical approaches largely converged, though ICCs were generally higher in the voxel-wise approach. This is in part due to a "global calibration" across spatial units in the hierarchical model. Leveraging the distribution of estimates across ROIs helps to minimize outlier values and ideally yields a better estimate of true reliability. In our data, this partial pooling/shrinkage generally decreased regional ICC estimates, given the overall low reliability estimates across the brain. Furthermore, trial-wise modeling approaches using amplitude modulation or hierarchical models (Chen et al., 2021) can also be helpful in modeling cognitive processes precisely and in circumventing subtraction contrasts. Additional modeling approaches, including structural equation (Cooper, Jackson, Barch, & Braver, 2019) and computational modeling, are growing in use and hold promise for increasing reliability. For new data collection, it will be important to make design choices that improve reliability, for example by increasing the potency of stimuli or making conditions more distinct (i.e., less correlated), while still isolating the cognitive process of interest. Additionally, selecting sequences with improved tSNR will result in increased statistical power; similarly, increasing the number of trials can optimize the statistical efficiency of designs (Chen et al., 2021c). This study provides strong data on reliability in healthy adults and has several strengths. These include assessing reliability across three points in time, which allowed us to separate scanner- and time-related variance components. We also examined two different face-emotion tasks requiring different attentional demands, and we compared two statistical approaches to reliability estimation. However, the current study also has several limitations. First, the generalizability of these estimates will need to be tested in individuals with psychopathology and in pediatric samples. Second, there was variability in the time between sessions across participants, though this was constrained to 2-6 weeks. This timeframe would be similar to pre-/post-scanning for many psychiatric treatment trials, but reliability over longer time frames would still need to be examined, for example, to match developmental studies over years. Third, our sample size was larger than that of many reliability studies but is still relatively limited, especially given only moderately reliable behavioral effects. Fourth, the scanners were of the same build and the tasks were run with identical sequences across scanners to minimize scanner-related variance. This reflects an ideal scenario and may not be the case for many large multi-site projects. The current report adds to the small but growing corpus of work on the test-retest reliability of task-based fMRI activation by examining the influence of specific scanners and task-related factors on estimates of reliability.
Greater reliability was found in regions activated during the task at the group level and for contrasts involving conditions with clearly distinct cognitive demands. This work highlights the importance of assessing reliability in the context of task activation patterns and specific task contrasts. This work was supported by grant K23MH113731. The funding source was not involved in study design; the collection, analysis, or interpretation of data; the writing of the report; or the decision to submit the article for publication.
SIRT1 Activity Is Linked to Its Brain Region-Specific Phosphorylation and Is Impaired in Huntington’s Disease Mice
SIRT1 Activity Is Linked to Its Brain Region-Specific Phosphorylation and Is Impaired in Huntington's Disease Mice Huntington's disease (HD) is a neurodegenerative disorder for which there are no disease-modifying treatments. SIRT1 is a NAD+-dependent protein deacetylase that is implicated in maintaining neuronal health during development, differentiation and ageing. Previous studies suggested that the modulation of SIRT1 activity is neuroprotective in HD mouse models; however, the mechanisms controlling SIRT1 activity are unknown. We have identified a striatum-specific phosphorylation-dependent regulatory mechanism of SIRT1 induction under normal physiological conditions, which is impaired in HD. We demonstrate that SIRT1 activity is down-regulated in the brains of two complementary HD mouse models, which correlated with altered SIRT1 phosphorylation levels. This SIRT1 impairment could not be rescued by the ablation of DBC1, a negative regulator of SIRT1, but was linked to changes in the sub-cellular distribution of AMPK-α1, a positive regulator of SIRT1 function. This work provides insights into the regulation of SIRT1 activity, with the potential for the development of novel therapeutic strategies. Introduction Huntington's disease (HD) is a devastating neurodegenerative disorder caused by a CAG repeat expansion within exon 1 of the huntingtin gene (HTT), which encodes an expanded polyglutamine (polyQ) tract in the huntingtin protein (HTT) [1]. Symptoms usually appear in mid-life and comprise personality changes, problems with motor coordination, and cognitive decline; the disease lasts between 15 and 20 years, and there are no disease-modifying treatments [2]. The neuropathology of HD is characterised by neuronal cell death in the striatum, cortex and other brain regions and by the accumulation of cytoplasmic and nuclear aggregates [3]. Mouse models of HD include those that are transgenic for N-terminal fragments of HTT (e.g. R6/2) or the full-length HTT protein, or knock-in models in which the HD mutation has been introduced into mouse Htt (e.g. HdhQ150) [4]. The R6/2 mouse is transgenic for an exon 1 HTT protein [5] and is a model of the aberrant splicing that occurs in HD [6]. The HdhQ150 model has a 150 CAG repeat knocked into the mouse Htt gene [7]. In addition to the full-length protein, HdhQ150 mice express mutant exon 1 HTT through aberrant splicing [6] and many other N-terminal HTT fragments generated through proteolysis [8]. At late-stage disease (14 weeks for R6/2 and 22 months for homozygous HdhQ150 mice), these models exhibit remarkably similar phenotypes [9][10][11][12][13][14], the main difference between the two models being the age of disease onset and the rate of disease progression. SIRT1, a mammalian orthologue of the yeast Sir2 protein, is a NAD+-dependent deacetylase that plays a critical role in multiple biological processes including apoptosis [15], ageing [16], metabolism [17] and various stress responses [18]. It has been demonstrated that DBC1 (deleted in breast cancer 1) inhibits SIRT1 via a direct interaction with its catalytic domain [19]. This dynamic interaction is sensitive to the energetic state of the cell and involves the activity of AMPK (AMP-activated protein kinase), an important cellular energy sensor [20]. In circumstances of low cellular energy, AMPK stimulates compensatory processes, including the activation of SIRT1, resulting in the restoration of ATP levels [21].
However, the complexity of SIRT1 functions in the mammalian brain and the mechanisms involved in SIRT1 regulation are not fully understood. SIRT1 has been shown to participate in neuronal protection and survival in various mouse models of neurodegenerative disorders through a number of substrates, such as P53 [22] and HSF1 [23]. With relevance to HD, the activation of Sir2 was protective against mutant phenotypes in a C. elegans model [24]. Increased expression of Sirt1 attenuated neurodegeneration and improved motor function in N171-82Q and BACHD mice [25] and attenuated brain atrophy and reduced mutant HTT aggregation in R6/2 mice without prolonging lifespan [26]. More recently, SRT2104, a SIRT1 activator, was reported to have beneficial effects in an HD mouse model [27], with the potential for interrogating SIRT1 activity in the clinic [28]. In contrast, a SIRT1 inhibitor, selisistat, has been reported to alleviate HD-related phenotypes in multiple HD models [29] and has been found to be safe in clinical trials [30]. Based on these findings, the mis-regulation of SIRT1 could have important implications for the development and progression of HD. In this study, we describe a striatum-specific phosphorylation-dependent regulatory mechanism that controls SIRT1 activity under normal physiological conditions and that is impaired in HD. We show that SIRT1 activity is decreased in the brains of R6/2 and HdhQ150 mice, and that this is not caused by the sequestration of SIRT1 into HTT inclusions. We demonstrate that the presence of mutant HTT in the striatum and cerebellum of HD mice alters the phosphorylation status of SIRT1 and that these effects are related to the abnormal expression and cellular localization of AMPK-α1. Finally, we show that the ablation of DBC1, a negative regulator of SIRT1 [31], does not rescue the deficit in SIRT1 activity in HD mouse models. These results provide new insights into the mechanisms that regulate SIRT1 function and may lead to the development of new strategies by which SIRT1 can be manipulated for therapeutic benefit. Mice Hdh Q150/Q150 homozygous mice on a (CBA × C57BL/6) F1 background were generated by intercrossing Hdh Q150/Q7 heterozygous CBA/Ca and C57BL/6J congenic lines (inbred lines from Harlan Olac). R6/2 and Hdh Q150/Q150 homozygous mice were genotyped, and the CAG repeat was sized, as previously described [32]. The mean repeat size (± SD) for all mice used in the entire study was 165 ± 10 for Hdh Q150/Q150 homozygous mice and 204 ± 7 for R6/2 mice. Dbc1 heterozygous mice were obtained from Eduardo Chini at the Mayo Foundation, Mayo Clinic College of Medicine, Rochester, Minnesota, USA. PCR conditions for genotyping Dbc1 knock-out mice have been described previously [19]. SirT1 floxed homozygous (SirT1 Fl/Fl) mice were obtained from the JAX Laboratory (mouse strain: B6;129-SirT1tm1Ygu/J) [33] and were bred with β-actin/Cre heterozygous mice to generate complete Sirt1 knock-out mice. Sirt1 transgenic mice (CBA×C57BL/6J) [34] were obtained from David Holzman's laboratory at Washington University, Missouri, USA. Animals were housed under a 12 h light/12 h dark cycle, with unlimited access to water and food (Special Diet Service, Witham, UK), in a conventional unit. Cages were environmentally enriched with a cardboard tube. R6/2 mice and all mice in phenotypic assessment trials were always given mash food, consisting of powdered chow mixed with water, from 12 weeks of age until they were sacrificed.
Upon sacrifice, dissected brain regions, whole brains or peripheral tissues were snap frozen in liquid nitrogen and stored at -80°C until use.

Mouse behavioural analysis

At 4 weeks of age, mice were weaned into cages of 5-6 animals. Each cage contained at least one representative of each genotype from mixed litters. The analysis of mice of different genotypes was distributed equally throughout the assessment period on any given day, and all behavioural tests were performed with the investigator blind to genotype. Mice were weighed weekly, and rotarod performance and grip strength were assessed as previously reported [35-37]. The statistical power of these tests was calculated as previously described [37]. The data were analysed by repeated measures general linear model ANOVA using SPSS software [37] (a sketch of an equivalent open-source analysis is given at the end of this Methods section).

Protein extraction for SDS-PAGE, immunoblotting and immunoprecipitation

Frozen mouse brain tissue was homogenized in 1 volume of ice-cold NETN buffer (20 mM Tris-HCl pH 8, 100 mM NaCl, 1 mM EDTA, 0.5% NP-40, complete protease inhibitors and phosphatase inhibitors) using a polytron homogenizing probe. Samples were sonicated on ice with a Vibracell sonicator (10 x 1 s 20 kHz pulses) and spun at 13,000 x g for 10 min at 4°C. The supernatant was retained and the protein concentration was determined for each sample by the BCA assay (Thermo Scientific).

SDS-PAGE and immunoblotting

Protein lysates were diluted with 2x Laemmli buffer, denatured for 10 min at 95°C, loaded onto SDS polyacrylamide gels and subjected to western blotting as previously described [8]. Membranes were blocked in 5% non-fat dried milk in PBS-0.2% Tween 20 (PBS-T) or 4% BSA for 2 h at RT. Primary antibodies were added overnight at 4°C in 5% non-fat dried milk in PBS-T (DBC1, SIRT1, HTT, AMPK-α1) or 4% BSA (MpM2). β-actin, ATP5B, α-tubulin and histone pan-H3 antibodies were incubated for 20 min at RT in 5% non-fat dried milk in PBS-T. Blots were washed three times for 10 min in 0.2% PBS-T, incubated with the appropriate secondary antibody for 1 h at RT, washed three times for 10 min in 0.2% PBS-T and exposed to ECL according to the manufacturer's instructions (Amersham). The signal was developed using Amersham Hyperfilm and a Xenograph developer. Densitometry of western blots was performed using a Bio-Rad GS-800 densitometer. Developed films were scanned and the average pixel optical density (OD) for each band was measured using QuantityOne software. The OD of an area devoid of bands was subtracted from the values obtained for bands of interest in order to normalize the OD against background. Relative expression was determined by dividing the normalized OD of bands of interest by the OD of the appropriate loading control for each sample. For full details of primary antibodies see S1 Table.

Immunoprecipitation

Protein lysates were prepared for immunoprecipitation (IP) as described above. For IP from striatal lysates, striata were pooled from two animals. IP reactions were performed in 1 ml of NETN buffer containing 400 to 1000 μg of protein and 1 μg of antibody; normal rabbit IgG (#2729; Cell Signaling) was used as a negative control. Reactions were left on a rotating wheel at 4°C for 90 min (AMPK-α1) or 4 h (SIRT1), and 15 μl of protein G-coupled Dynabeads (10004D; Life Technologies) were added for the last 45 min. Following IP, the beads were briefly spun at 13,000 x g for 30 sec, placed on a magnetic rack, washed with 1 ml of NETN buffer (4x) and re-suspended in 15 μl of 2x Laemmli buffer.
Immunoprecipitated complexes were eluted from the beads by denaturation at 100°C for 10 min and immediately loaded for SDS-PAGE analysis.

Nuclear/cytoplasmic fractionation

All steps were performed on ice. Half-brain tissue or liver was cut into small pieces and homogenized with a Dounce homogenizer in TKM buffer (0.25 M sucrose; 50 mM Tris-HCl, pH 7.4; 25 mM KCl; 5 mM MgCl2 and 1 mM PMSF), and nuclear and cytoplasmic fractions were prepared as previously described [19]. For the nuclear and cytoplasmic preparations from brain regions, striata were pooled from four animals, whereas a single cerebellum was used. The final pellet containing the purified nuclei was resuspended in 4% PFA for immunohistochemistry or in NETN buffer for protein analysis, and the protein concentration was determined by the BCA assay (Thermo Scientific).

Immunohistochemistry

The isolation of nuclei from brain or liver was as described above. Nuclei were extracted from 4 mice per genotype from half brain or liver and, for brain regions, from two pools each containing the specific regions from five mice. Samples were fixed on the slide for 30 min with 4% paraformaldehyde prepared in PBS, permeabilized with 0.1% Triton X-100 in PBS for 15 min, washed 3x with PBS, and incubated for 1 h at RT in blocking buffer (PBS with 0.1% Triton and 1% BSA). Nuclei were incubated with the primary antibody in blocking buffer (DBC1, SIRT1, P53 and Ac-P53) overnight at 4°C, washed 3x with PBS at RT and then incubated with the secondary antibody and DAPI in PBS-0.1% Triton for 1 h at RT. Samples were mounted using VECTASHIELD mounting medium. Nuclei were visualized using a TCS SP2 Leica confocal microscope. Fluorescence intensity was quantified from 50 nuclei per sample, imaged from 10 fields of view per slide, using ImageJ. Ac-P53 levels were normalised to the P53 intensity level. Fluorescence intensity levels are presented as a fold change from WT levels as indicated in the figures; the direction of the fold change was inverted to depict the comparative deacetylase activity.

Fluor de Lys assay

SIRT1 activity was determined with a SIRT1 Fluorometric Kit (BML-AK555) according to the manufacturer's instructions. Protein extraction was performed as described above. Homogenates were incubated for 10 min at 37°C to allow degradation of any contaminating NAD+. 10 mM DTT was then added to the medium, and homogenates were incubated again for 10 min at 37°C. The homogenates (20-30 μg protein/well) were then incubated in SIRT1 assay buffer in the presence of 50, 100 or 200 μM Fluor de Lys-SIRT1 substrate (Enzo Life Sciences), 5 μM TSA and 200 μM NAD+. After 0, 20, 40 and 60 minutes of incubation at 37°C, the reaction was terminated by adding a solution containing Fluor de Lys Developer (Enzo Life Sciences) and 2 mM nicotinamide. After 1 h, fluorescence was read on a fluorometric plate reader (Spectramax Gemini XPS; Molecular Devices) with an excitation wavelength of 360 nm and an emission wavelength of 460 nm.

Taqman RT-qPCR

RNA extraction, cDNA synthesis, Taqman RT-qPCR and ΔCt analysis were performed as described previously [38]. The Taqman qPCR assays were purchased from Primer Design and ABI. For a list of primers and probes, see S2 Table.

Statistical analysis

Statistical analysis was performed with SPSS (repeated measures ANOVA general linear model) or Microsoft Excel (Student's t-test) software. p-values of <0.05 were considered significant. Graphs were constructed using Prism Ver. 5.0b (GraphPad Software).
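The longitudinal behavioural data described above (body weight, rotarod, grip strength) were analysed in SPSS with a repeated measures general linear model. As a rough open-source equivalent, a mixed model with a random intercept per animal can test whether performance trajectories differ by genotype. This is a minimal sketch only; the file name and column names (mouse_id, genotype, week, rotarod_latency) are hypothetical, not from the study.

```python
# Sketch: approximate the repeated-measures GLM with a linear mixed model.
# Assumes a long-format table: one row per mouse per week.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rotarod_long.csv")  # hypothetical long-format data

# A random intercept per mouse accounts for repeated measures; the
# genotype x week interaction asks whether trajectories diverge.
model = smf.mixedlm("rotarod_latency ~ C(genotype) * week",
                    data=df, groups=df["mouse_id"])
print(model.fit().summary())
```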
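Two quantification steps in the Methods reduce to simple arithmetic: densitometric relative expression (background-subtracted band OD divided by the loading control) and the Ac-P53/P53 fluorescence ratio expressed as an inverted fold change from WT. The sketch below restates both; all numbers are illustrative, and it assumes the loading-control OD has been background-corrected in the same way.

```python
# Densitometry: background-subtract the band OD, then normalize to the
# loading control (assumed background-corrected the same way).
def relative_expression(band_od, background_od, loading_control_od):
    return (band_od - background_od) / loading_control_od

# Immunofluorescence: Ac-P53 normalized to P53, as a fold change from WT;
# the fold change is inverted so that higher values indicate higher SIRT1
# deacetylase activity (i.e., less acetylated P53).
def deacetylase_activity_index(acp53, p53, acp53_wt, p53_wt):
    fold_change = (acp53 / p53) / (acp53_wt / p53_wt)
    return 1.0 / fold_change

# Illustrative values only:
print(relative_expression(band_od=0.82, background_od=0.05, loading_control_od=0.60))
print(deacetylase_activity_index(acp53=1.5, p53=1.0, acp53_wt=1.0, p53_wt=1.0))  # ~0.67
```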
SIRT1 function becomes compromised in the brains of HD mice

There is considerable evidence to support the beneficial effect of SIRT1 manipulation in HD mouse models. However, the impact of mutant HTT on SIRT1 function has not been fully elucidated. We therefore set out to analyse SIRT1 activity, and the mechanisms involved in its regulation, in two different mouse models of HD: R6/2 transgenic and HdhQ150 knock-in homozygous mice. SIRT1 regulates the activity of several transcription factors including P53 [39]. It deacetylates P53 on Lys382, thereby inhibiting its function [40]. A number of commercial kits use the deacetylation of this P53 lysine residue to assess SIRT1 activity. In order to have a direct measurement of SIRT1 activity, we applied the Fluor-de-Lys fluorometric activity assay (Enzo Laboratories). The specificity of the kit was evaluated on lysates from the brains of SIRT1 knock-out (Sirt1KO) [33] mice at 4 weeks of age, but unfortunately we found that this kit was not specific for SIRT1 in these brain lysates (S1 Fig). Therefore, we tested an alternative published method to assess the steady-state levels of SIRT1 activity on endogenous P53 in mouse brains, which makes use of nuclei purified from mouse tissues [19]. The genotypes of the mice used for the experiment were verified by western blot (Fig 1A). Nuclei were isolated from the brains of Sirt1KO and Sirt1Tg mice [34] at 4 weeks of age, immunostained for P53 and acetylated P53 (AcP53) at Lys382, and counterstained with DAPI (Fig 1B). P53 levels were equivalent between the Sirt1KO and Sirt1Tg lines and the corresponding wild type (WT) littermates (Fig 1C). The acetylation of P53 Lys382 was considerably increased in the Sirt1KO nuclei and decreased in those from the Sirt1Tg mice, consistent with a decrease in SIRT1 activity in the knock-out line and an increase in SIRT1 activity in the transgenic line, respectively (Fig 1C), demonstrating that this approach could be used to monitor the steady-state level of SIRT1 activity in mouse brain.

To monitor the level of SIRT1 activity in HD mouse models, we isolated cell nuclei from the brains of R6/2 mice at 4, 9 and 14 weeks of age and of HdhQ150 homozygous mice at 2 and 22 months, together with their age-matched WT littermates. Nuclei were immunostained for SIRT1, P53 and acetylated P53 (AcP53), and counterstained with DAPI (Fig 2A and S2A Fig). We did not detect any variation in the intensity level of SIRT1 and P53 staining between HD mouse samples and their corresponding WT controls at each age of analysis (Fig 2B and S2B Fig). In contrast, whilst we found that the acetylation levels of endogenous P53 were equivalent in HD as compared to WT littermate brains in presymptomatic mice (i.e. 4-week R6/2 and 2-month HdhQ150 homozygotes) (Fig 2A and 2B), the level of AcP53 was significantly higher (≥1.5-fold) in samples from early symptomatic R6/2 mice (9 weeks) and from late-stage symptomatic R6/2 (14 weeks) and HdhQ150 homozygous (22 months) mice.

SIRT1 does not co-localize with mutant HTT inclusions and is aberrantly phosphorylated in HD mice

Previous studies have shown that SIRT1 interacts with HTT in vitro [26]. To investigate whether the altered SIRT1 activity is caused by the sequestration of SIRT1 into HTT inclusions, we performed a double staining for SIRT1 and HTT (EM48) on nuclei isolated from the brains of 14-week R6/2 and 22-month HdhQ150 homozygous mice, together with their age-matched WT littermates.
Interestingly, SIRT1 did not co-localize with HTT inclusions (Fig 3A). Further supporting this finding, the levels of SIRT1 protein were not decreased in HD brains as judged by western blot (Fig 3B). The role of post-translational modifications (PTMs) in the regulation of SIRT1 activity has been the subject of several studies, and phosphorylation has been described as a major control mechanism [41]. Therefore, to understand how mutant HTT reduces SIRT1 activity, we monitored the phosphorylation status of SIRT1 in HD mice. We performed SIRT1 immunoprecipitation from the brains of R6/2 mice at 9 weeks of age and of HdhQ150 homozygous mice at 22 months, and probed the phosphorylation level of SIRT1 by western blot using the mitotic phosphoprotein monoclonal 2 (MpM2) antibody [42]. This antibody detects the phosphorylation of serine and threonine residues when they are followed by a proline (S/T-P sites) and is not specific for a SIRT1 phosphorylation site (S3 Fig). Interestingly, a higher level of phosphorylated SIRT1 was found in the brains of both R6/2 and HdhQ150 homozygotes as compared to their WT littermates (Fig 3C). As previously shown in vitro [26], we were able to co-immunoprecipitate endogenous HTT from R6/2 lysates, and mutant HTT and WT HTT from HdhQ150 homozygous and WT lysates respectively (Fig 3C). These results suggest that the impairment in SIRT1 function in the brains of HD mice is not related to its sequestration into HTT inclusions, but rather to an alteration in its phosphorylation profile.

SIRT1 phosphorylation becomes decreased in the striatum and increased in the cerebellum of HD mice

The analysis of total brain samples might mask or dilute any regional pathological changes. Therefore, we extended the analysis of SIRT1 phosphorylation to the striatum, cortex and cerebellum of R6/2 mice at 4, 9 and 14 weeks of age. We did not detect any difference in the phosphorylation status of SIRT1 at presymptomatic stages of the disease (i.e. 4-week-old R6/2) as compared to WT littermates in any brain region (Fig 4A and 4C and S4 Fig). In keeping with our functional data from total brain, the levels of phosphorylated SIRT1 were altered in the striatum and cerebellum of R6/2 mice by 9 weeks of age (Fig 4A and 4C). Surprisingly, the level of phosphorylation of SIRT1 remained unchanged in the R6/2 cortex at these later stages (S4 Fig), but notably was decreased in the striatum (Fig 4A) and increased in the cerebellum (Fig 4C) as compared to WT littermates. These data were replicated in the HdhQ150 homozygous mice: there was no difference in the SIRT1 phosphorylation level at 2 months of age (Fig 4B and 4D), whereas SIRT1 phosphorylation was decreased in the striatum and increased in the cerebellum of 22-month-old HdhQ150 homozygous mice (Fig 4B and 4D). Taken together, these results demonstrate that the presence of mutant HTT alters the phosphorylation status of SIRT1 in opposing directions in the striatum and cerebellum as the disease progresses.

Induction of SIRT1 activity is blocked in the striatum

Phosphorylation plays a central role in controlling protein activity, cellular localization and degradation [43]. To determine whether the differentially altered phosphorylation profile of SIRT1 in striatum and cerebellum corresponded to a compromised SIRT1 function in these brain regions, we immunostained for SIRT1, P53 and AcP53 in nuclei from the striatum and cerebellum of R6/2 and WT mice at 4, 9 and 14 weeks of age.
Consistent with the total brain data, we did not detect a change in the intensity level of SIRT1 and P53 staining in either the striatum or cerebellum of R6/2 and WT mice at any of the ages studied (S5A and S5B, S6A and S6B Figs). Interestingly, we observed a significant reduction in the level of AcP53 in the striatum of WT mice between 4 and 9 weeks of age, corresponding to an increase in SIRT1 activity, which was absent in the striatum of R6/2 mice (Fig 5A and 5B). In contrast, when we analysed SIRT1 activity in the cerebellum, we detected no change in the level of AcP53 in WT samples at these ages, and there was a significant increase in AcP53 in the cerebellum of R6/2 mice from 4 to 14 weeks of age (Fig 5C and 5D), corresponding to an impairment in SIRT1 activity. These data highlight that SIRT1 activity is regulated by different mechanisms in the striatum and cerebellum of WT mice between 4 and 14 weeks of age: SIRT1 activity is induced in the striatum between 4 and 9 weeks, whereas it remains constant in the cerebellum. The presence of mutant HTT can block this induction process in the striatum and cause a reduction in normal SIRT1 function in the cerebellum, resulting in an impairment of SIRT1 activity in both brain regions (Fig 5).

SIRT1 induction in the striatum correlates with age-dependent phosphorylation

The comparison of SIRT1 activity in the striatum and cerebellum revealed that SIRT1 function is controlled by different mechanisms in these two brain regions in WT mice. In the striatum, SIRT1 is activated with age, a process that does not occur in the cerebellum. To monitor changes in the phosphorylation status of SIRT1 under normal physiological conditions, we immunoprecipitated SIRT1 from striatal and cerebellar lysates of WT mice at 4, 9 and 14 weeks and immunoprobed with the MpM2 antibody. Notably, SIRT1 phosphorylation levels decreased in the striatum between 4 and 9 weeks of age (Fig 6A), a time at which the functional data revealed an increase in SIRT1 activity (Fig 5A and 5B). However, phosphorylation then dramatically increased at 14 weeks (Fig 6A), a stage at which SIRT1 activity remains constant as compared to 9 weeks (Fig 5A and 5B). The MpM2 antibody detects phosphorylation on serine and threonine residues followed by proline (S/T-P sites) and is not specific for a SIRT1 phosphorylation site; therefore, the increased phosphorylation signal at 14 weeks may correspond to the phosphorylation of different SIRT1 residues to those detected at 4 and 9 weeks of age. Conversely, SIRT1 activity remains constant during these ages in the cerebellum (Fig 5C and 5D), and this is reflected by a phosphorylation level that does not change (Fig 6B). Taken together, these data provide a link between the phosphorylation status of SIRT1 and its function, suggesting that in the striatum, changes in SIRT1 phosphorylation with age might be related to the induction of SIRT1 activity.

The sub-cellular distribution of SIRT1 is not altered in R6/2 mice

Previous studies suggested that the phosphorylation of human SIRT1 can increase its nuclear localization and enzymatic activity [44]. To assess whether the mis-regulation of SIRT1 phosphorylation could affect its nuclear localization, we prepared nuclear and cytoplasmic fractions from the striatum and cerebellum of R6/2 and WT mice at 9 and 14 weeks of age.
Notably, we did not detect any difference in the distribution of SIRT1 at these ages between R6/2 and WT mice in either brain region (Fig 7A and 7B). However, the level of SIRT1 in the nuclear fraction was more pronounced at 14 weeks as compared to 9 weeks of age in the striatum and cerebellum of both R6/2 and WT mice (Fig 7A and 7B). We went on to analyse the phosphorylation level of SIRT1 in these two cellular compartments from the cerebellum by immunoprecipitation. This was not possible from the striatum due to limiting quantities of the extracts. Interestingly, a strong phosphorylation signal was detected in the nuclear fraction that was absent from the cytoplasm, for both R6/2 and WT samples, and, as previously shown on total lysates, the level of phosphorylation was much higher in R6/2 as compared to WT mice (Fig 7C). These results demonstrate that the sub-cellular distribution of SIRT1 is not affected by the presence of mutant HTT and suggest, once again, that the phosphorylation levels might be directly linked to the regulation of SIRT1 activity.

Tissue-specific alteration of the subcellular distribution of AMPK-α1 with disease progression

Previous studies showed that DBC1 directly interacts with the catalytic domain of SIRT1, inhibiting its activity both in vitro and in vivo [19]. This dynamic interaction is sensitive to the energetic state of the cell [19]. Activation of AMP-activated protein kinase-α1 (AMPK-α1), an important energy sensor under circumstances of low cellular energy, was recently shown to induce the activation of SIRT1 through the dissociation of SIRT1 and DBC1 [20,45]. To identify the possible role of AMPK-α1 and DBC1 in the molecular phenotypes described so far, we studied the interaction between these two opposing modulators of SIRT1 using co-immunoprecipitation. We immunoprecipitated AMPK-α1 from the striatum and cerebellum of 9-week R6/2, 22-month HdhQ150 homozygous and WT littermate mice and detected the co-immunoprecipitated DBC1. Interestingly, we observed a stronger interaction between AMPK-α1 and DBC1 in the striatum of HD as compared to WT mice (Fig 8A), whereas equivalent amounts of DBC1 were co-immunoprecipitated with AMPK-α1 from cerebellar extracts of HD and WT samples (Fig 8B). These data suggest two possible scenarios: either the increased interaction of AMPK-α1 with the SIRT1-DBC1 complex might be an attempt to promote SIRT1 activation in the striatum of HD mice through dissociation from DBC1, or the inability to induce SIRT1 in R6/2 mice might be due to an inhibitory retention of AMPK-α1 via DBC1. To gain insight into the molecular events involved in this process, we examined the cellular distribution of AMPK-α1 and DBC1 in the striatum and cerebellum of R6/2 and WT mice at 9 and 14 weeks of age. Consistent with the phenotypes described so far, the distributions of AMPK-α1 and DBC1 were different in the striatum and cerebellum. Interestingly, using western blots of nuclear and cytoplasmic fractions, we were able to detect DBC1 in both cellular compartments and, although at 9 weeks of age DBC1 was slightly more abundant in the striatal cytoplasmic fraction, this balance was reversed by 14 weeks of age for both R6/2 and WT mice (Fig 8C). In contrast, cerebellar DBC1 remained constant between the two cellular compartments at 9 and 14 weeks of age for both R6/2 and WT mice (Fig 8D). We next monitored the distribution of AMPK-α1 by immunostaining nuclei isolated from the striata of R6/2 and WT mice at 4, 9 and 14 weeks of age.
At 9 weeks, AMPK-α1 was present in nuclei from the WT striatum, whereas it could not be detected in nuclei from the striatum of R6/2 mice until 14 weeks of age (Fig 8E). Conversely, cerebellar extracts showed an early nuclear accumulation of AMPK-α1 in R6/2 mice at 9 weeks of age as compared to WT mice, in which AMPK-α1 could only be detected in the nucleus at 14 weeks of age (Fig 8F). These data were confirmed by western blot (Fig 8C and 8D). The nuclear accumulation of AMPK-α1 at 9 weeks of age in the striatum of WT mice, in conjunction with an induction of SIRT1 activity, might indeed support a role for this kinase in the activation of SIRT1. This mechanism appears to be compromised in R6/2 mice, in which AMPK-α1 does not reach the nucleus until 14 weeks. Therefore, the increased interaction between AMPK-α1 and DBC1 might result in the retention of AMPK-α1 in the cytoplasm, inhibiting the activation of SIRT1, and/or in an attempt to rescue SIRT1 activity by preventing DBC1 from binding to SIRT1. Conversely, the early nuclear accumulation of AMPK-α1 in the cerebellum of R6/2 mice at 9 weeks of age as compared to WT mice, with the concomitant alteration in SIRT1 function, might be an attempt to increase impaired SIRT1 activity. Taken together, these data suggest that the inhibition of SIRT1 function in the striatum of R6/2 mice might arise through an altered functionality of AMPK-α1, and that AMPK-α1 might be involved in rescuing deficient SIRT1 function in both the striatum and cerebellum, although through different molecular mechanisms.

SIRT1, AMPK-α1 and DBC1 act as partners in the same regulatory circuit to control SIRT1 activity in the striatum

Our data suggest that AMPK-α1 may play an active role in attempting to rescue SIRT1 deficiency in both the striatum and cerebellum of R6/2 mice. To obtain further evidence for a regulatory circuit involving these three proteins, we compared the expression levels of SIRT1, DBC1 and AMPK-α1 in the striatum and cerebellum of R6/2 mice at 4, 9 and 14 weeks of age and of HdhQ150 homozygotes at 2 and 22 months with those of their WT littermates. Strikingly, there was a synchronised, statistically significant down-regulation (35-40%) of all three genes at the mRNA level from 4 to 9 weeks of age in the striatum of WT mice (Fig 9A). We detected the same significant reduction in the striatum of R6/2 mice for Dbc1 and Ampk-α1, with a weak trend for Sirt1 (Fig 9A). Notably, the presence of this regulatory circuit in the cerebellum was not supported by the same co-ordinated changes in expression levels, although the expression level of Sirt1 was significantly higher in WT mice at 9 and 14 weeks as compared to 4 weeks of age (Fig 9D). These mRNA changes did not result in concomitant alterations in the levels of the SIRT1, DBC1 and AMPK-α1 proteins (Fig 9B, 9C, 9E and 9F); the absence of protein-level changes despite the mRNA changes in the striatum may be the result of a stabilisation of these proteins between 4 and 9 weeks of age. Indeed, the levels of all three proteins were equivalent at 2 and 22 months in the HdhQ150 homozygotes and WT mice (S7A and S7B Fig). Interestingly, we observed a significant upregulation of AMPK-α1 protein in the striatum of WT mice between 4 and 9 weeks, occurring in conjunction with the increase in SIRT1 activity, neither of which took place in R6/2 mice (Fig 9B and 9C). The only change that occurred in the cerebellum was a reduction in the level of SIRT1 in both WT and R6/2 mice at 14 weeks of age (Fig 9E and 9F).
Dbc1 ablation does not improve HD-related phenotypes

Our data indicate that brain region-specific dysregulated cellular processes result in a reduction in SIRT1 activity in the brains of HD mouse models. We hypothesise that this is related to the phosphorylation status of SIRT1 and that the AMPK-α1 kinase attempts to rescue SIRT1 function. As DBC1 is a negative regulator of SIRT1, and Dbc1 knock-out mice are viable and healthy [19], we elected to use a genetic approach to ablate DBC1 in R6/2 mice and investigate whether the dissociation between SIRT1 and DBC1 could increase SIRT1 activity and improve HD phenotypes. We crossed R6/2 transgenic mice with Dbc1 heterozygous knock-out mice (Dbc1+/-) to obtain Dbc1+/-::R6/2 males, which were then crossed with Dbc1+/- females to generate WT, Dbc1+/-, Dbc1-/-, R6/2, Dbc1+/-::R6/2 and Dbc1-/-::R6/2 mice. As predicted, genetic ablation of Dbc1 resulted in a significant decrease in Dbc1 mRNA and DBC1 protein levels, and we found that this did not alter the expression of SIRT1 (S8A Fig). To confirm that the removal of DBC1 resulted in an increase in SIRT1 activity, we immunostained nuclei extracted from the brains of WT, Dbc1-/-, R6/2 and Dbc1-/-::R6/2 mice at 9 weeks of age for SIRT1, P53, AcP53 and DBC1 and counterstained with DAPI. The absence of DBC1 did not affect the level and nuclear accumulation of SIRT1 and/or P53 (Fig 10A and 10B). As expected, the ablation of DBC1 in WT mice resulted in an increase in SIRT1 activity, as indicated by a significant reduction (~65%) in the signal intensity for AcP53 in Dbc1-/- as compared to WT mice (Fig 10B). SIRT1 activity was decreased in R6/2 mice (consistent with Fig 2), and surprisingly the absence of DBC1 did not ameliorate this impairment (Fig 10). In line with these results, we did not detect improvements in the onset and progression of specific behavioural HD-related phenotypes such as body weight, grip strength and rotarod impairment (S8B, S8C and S8D Fig). Taken together, these results suggest that the negative effect of mutant HTT on SIRT1 activity might be multifactorial and/or operate outside the inhibitory circuit controlled by DBC1.

Discussion

The involvement of SIRT1 in lifespan extension and in cellular protection from aggregation-prone proteins has made it a promising therapeutic target for neurodegenerative disorders [46-48]. In the context of HD, the manipulation of SIRT1 activity has not generated results that are easy to interpret. On the one hand, overexpression of SIRT1 has been shown to reduce mutant HTT-induced toxicity in HD mouse models, improving motor function and reducing brain atrophy [25,26]. In contrast, the pharmacological inhibition of SIRT1 has been shown to have beneficial effects in Drosophila and mouse models of HD [29], and on the basis of these results, selisistat was assessed for safety and tolerability in a clinical trial aimed at the development of HD pharmacodynamic biomarkers [30]. Despite this interest, the integrity of SIRT1 function in HD has not been comprehensively investigated. In the present study, we have shown that SIRT1 activity is impaired in different brain regions from two distinct mouse models of HD and that this is linked to an altered SIRT1 phosphorylation status. Furthermore, we provide insights into the temporal, tissue-specific regulation of SIRT1 activity in different brain regions of WT mice.
To monitor SIRT1 activity in the brain, we analysed P53 acetylation by performing immunohistochemistry on nuclei isolated from both R6/2 and HdhQ150 mice as compared to their WT littermates. We did not detect an alteration in SIRT1 function at the presymptomatic stage in either model. However, SIRT1 activity was overtly compromised by 9 weeks of age in the R6/2 mice, with a comparable impairment at late-stage disease in both models. This was not caused by SIRT1 sequestration into HTT inclusions, and we did not detect any variation in either the level or sub-cellular distribution of SIRT1 between HD and WT mice. We also showed that this impairment occurred in liver and therefore extends to peripheral tissues. We would not expect the increase in P53 acetylation to be caused by the HD-related dysregulation of acetyltransferases, as it has been shown that the P53 acetyltransferases CREB-binding protein, P300 and P300/CBP-associated factor are inhibited with disease progression in HD [49], which would be expected to result in a reduction in P53 acetylation, the opposite of that observed in this study. We were unable to replicate these results using an independent measure of SIRT1 activity, as the commercial kit that we tested was not specific for SIRT1 in mouse brain lysates.

The role of post-translational modifications in the regulation of SIRT1 activity has been the subject of several studies, and phosphorylation has been described as a major control mechanism [41]. It has been shown that kinases such as JNK1 and CK2 can phosphorylate SIRT1, thereby increasing its nuclear deacetylase activity [44,50]. The phosphorylation of SIRT1 by JNK1 has also been shown to induce SIRT1 ubiquitination and proteasomal degradation [44]. In this study, although we detected impaired SIRT1 activity in both the striatum and cerebellum of HD mice, the phosphorylation level of SIRT1 changed in opposite directions in these two brain regions, indicative of tissue-specific SIRT1 regulation. Our further investigations led us to identify a striatum-specific, phosphorylation-dependent induction of SIRT1 activity with age in WT mice, which does not occur in the cerebellum. Taken together, our findings suggest that it is the induction of SIRT1 function that is compromised by mutant HTT in the striatum of HD mice (Fig 11A), whereas in the cerebellum, mutant HTT impairs an already established SIRT1 activity.

In an attempt to rescue this SIRT1 deficiency, we crossed R6/2 mice with Dbc1-/- mice, as DBC1 negatively regulates SIRT1 via direct interaction with its deacetylase domain [19]. Strikingly, despite a significant upregulation of SIRT1 activity in Dbc1-/- mice, the ablation of DBC1 from R6/2 mice had no effect on the impairment of SIRT1 activity, suggesting that mutant HTT alters key regulatory events that lie outside the inhibitory circuit controlled by DBC1. Consistent with this, the absence of DBC1 did not lead to improvements in the onset and progression of several behavioural HD-related phenotypes. In contrast to DBC1, AMPK-α1 has been reported to positively regulate the activity of SIRT1 by inducing SIRT1 activation through its dissociation from DBC1 [20,45]. In addition, there is evidence to indicate that AMPK-α1 and SIRT1 can regulate each other [21].
Interestingly, our co-immunoprecipitation experiments revealed an increased interaction between DBC1 and AMPK-α1 in the striatum of HD mice, which might point to an attempt to rescue SIRT1 function. On the other hand, the nuclear accumulation of AMPK-α1 is delayed in the striatal nuclei of R6/2 mice, and its retention in the cytoplasm through an interaction with DBC1 might impede SIRT1 activation. In contrast, the nuclear accumulation of AMPK-α1 in the cerebellum of R6/2 mice occurs earlier than in WT mice, indicating that it might be attempting to relieve the SIRT1 inhibition imposed by mutant HTT. In support of the existence of a striatum-specific regulatory circuit linking these three proteins in the induction of SIRT1 activity, we found that the downregulation of Sirt1, Dbc1 and Ampk-α1 is co-ordinated in WT mice between 4 and 9 weeks of age. The protein level of AMPK-α1 increases in WT mice during the same time frame, which correlates with the induction of SIRT1 activity; neither occurs in R6/2 mice. We propose a model whereby disease progression leads to an altered SIRT1 phosphorylation status. As a consequence, the decrease in SIRT1 activity leads to a reduction in the deacetylation of P53 and other SIRT1 substrates, modifications that may contribute to neuronal dysfunction (Fig 11B). This would be consistent with previous data showing that the ablation of P53 from an HD mouse model had beneficial consequences [51].

In conclusion, our data provide two major new findings. First, we have shown that the mechanisms controlling the tissue-specific regulation of SIRT1 activity differ between brain regions, and we have identified a novel striatum-specific, phosphorylation-dependent mechanism of SIRT1 induction in WT mice. Second, we demonstrate that SIRT1 activity is impaired in two distinct HD mouse models. Given that SIRT1 plays a central role in metabolism, longevity and neurodegeneration, loss of SIRT1 activity may contribute significantly to disease progression in HD. These results provide new insights into the mechanisms that regulate SIRT1 function and may lead to the development of new strategies by which SIRT1 can be manipulated for therapeutic benefit.

Fig 11. Proposed model for the striatum-specific regulation of SIRT1 via phosphorylation in WT mice and for the impairment in SIRT1 activity in HD brain. (A) The change in SIRT1 phosphorylation status in the striatum of WT mice between 4 and 9 weeks of age induces an increase in SIRT1 activity followed by a reduction in acetylated P53. The nuclear accumulation of AMPK-α1 in WT striatum at 9 weeks supports a role for this kinase in the activation of SIRT1 (AMPK-α1 is not the kinase involved in the change in SIRT1 phosphorylation detected here, as the MpM2 antibody only recognises Ser/Thr-Pro residues and AMPK-α1 does not phosphorylate Ser/Thr residues that are followed by proline). AMPK-α1 is present in the nucleus at 9 weeks and activates SIRT1 through a mechanism independent of DBC1. The down-regulation of Sirt1, Dbc1 and Ampk-α1 at the mRNA level between 4 and 9 weeks of age is consistent with these three proteins being partners in the same regulatory circuit. In the context of HD, the marked reduction in SIRT1 phosphorylation impedes the induction of SIRT1 activity. The greater interaction between AMPK-α1 and DBC1 may result in the cytoplasmic retention of AMPK-α1, inhibiting the activation of SIRT1, and/or promoting a futile rescue attempt by preventing DBC1 from binding to SIRT1. (B) The HD pathogenic process leads to an alteration in the phosphorylation status of SIRT1, resulting in an impairment in SIRT1 activity which modulates the function of SIRT1 targets, including P53, and may contribute to neuronal dysfunction.
Circulation microRNA expression profiles in patients with complete responses to chemoradiotherapy in nasopharyngeal carcinoma
Background: Nasopharyngeal carcinoma (NPC) is an endemic cancer in Southeast Asia with a relatively poor prognosis. Chemoradiotherapy is a primary treatment that benefits certain patients, particularly in the early stages. New predictive and prognostic biomarkers are required to guide and select the best treatment. Aims: To evaluate the circulating expression profile of microRNAs (miRNAs) associated with responses to chemoradiotherapy in nasopharyngeal carcinoma. Methods: Peripheral blood from 17 patients was collected before and after chemotherapy and radiotherapy. Differentially expressed circulating miRNAs were analyzed using microRNA Cancer Panels and compared among patients with complete responses. Differential expression analysis was performed with GenEx 7 (MultiD); statistics were computed with GraphPad Prism 9. Alterations in signaling pathways and biological functions were analyzed using IPA (Ingenuity Pathway Analysis). Results: Using a microRNA Cancer Panel covering 116 miRNAs, we identified ten circulating miRNAs that were differentially expressed in NPC patients after chemoradiotherapy. Unsupervised clustering and confirmation by qRT-PCR showed that miR-483-5p, miR-584-5p, miR-122-5p, miR-7-5p and miR-150-5p were overexpressed, while miR-421, miR-133a-3p, miR-18a-5p, miR-106b-3p and miR-339-5p were significantly downregulated after chemoradiotherapy (p < 0.0001). In addition, ROC analysis yielded AUC (area under the curve) values with 99% confidence intervals (p < 0.0001). Gene enrichment analysis of the microRNAs and their targeted proteins revealed that the main pathways involved in the chemoradiotherapy response in NPC were cell death and survival signaling pathways. Conclusion: qPCR profiling of circulating blood before and after chemoradiotherapy in nasopharyngeal carcinoma can identify pathways involved in treatment responses. miR-483-5p, miR-584-5p, miR-122-5p, miR-7-5p, miR-150-5p, miR-421, miR-133a-3p, miR-18a-5p, miR-106b-3p and miR-339-5p are differentially regulated after chemoradiotherapy in NPC.

Introduction

Nasopharyngeal cancer (NPC) is a head and neck cancer with relatively high incidence and mortality and low survival rates in Southeast Asia, including Indonesia [1]. Most cases occur in Southeast Asia and are associated with particular ethnicities, which makes this cancer unique [2]. The cause of the cancer is still unclear; although it is more commonly found in men, the relationship with gender has not been explained. Delayed diagnosis worsens the patient's condition: the anatomical location and small size of the tumour make early detection difficult and are considered a cause of the low cure rate [3-5]. One of the biggest challenges in treatment is achieving a complete response, given the high rate of cancer progression after treatment. The core of clinical management is early diagnosis using nasopharyngeal tissue biopsy, followed by radiotherapy with or without concomitant chemotherapy. In low- and middle-income countries, NPC is usually diagnosed at late stages because the disease is difficult to recognise until it manifests in the cervical lymph nodes [6-9]. In patients with advanced stages, disease recurrence is relatively frequent after an initial clinical response to chemoradiotherapy. Clinical biomarkers to predict disease progression in NPC are still lacking.
Tumor biomarkers are essential to guide treatment, estimate the risk of disease progression, and design surveillance [10]. MicroRNAs (miRNAs) are small RNAs (20-25 nt) involved in the modulation of gene expression through post-transcriptional mechanisms [11,12]. Differences in miRNA expression profiles in primary tissue samples have been used to differentiate pathophysiological risk factors [13], therapeutic response [14], and prognosis of NPC [15]. Chemoradiotherapy causes apoptosis of cancer cells, thereby affecting miRNA expression. Several miRNAs have been implicated in the chemotherapy response after 5-fluorouracil + cisplatin treatment of NPC and in 5-fluorouracil sensitivity in breast cancer [16,17]. We also previously investigated the association between miRNA expression and chemotherapy responses of NPC and breast cancer using cell lines and primary tissue samples [18,19]. Chemoradiotherapy might also affect circulating tumor cells, proteins, cell-free DNA, and RNAs. This study investigated altered miRNA expression in the plasma of NPC patients with complete responses after chemoradiotherapy.

Study subjects and ethics statement

Seventeen NPC patients who received cisplatin-based chemotherapy and radiotherapy were involved. Patients presented to health facilities with complaints of nasal obstruction, ear problems, nosebleeds, headaches, and lumps in the neck. Four patients were initially diagnosed at stage IIB, four at stage III, and nine at stage IV. Plasma samples were collected before treatment and 3-17 months after chemoradiotherapy. Other eligibility criteria for this study were Early Antigen (EA) > 1, Viral Capsid Antigen (VCA) > 2, and EBNA > 1.6. Response to treatment was evaluated 12 weeks after treatment using nasopharyngeal endoscopy and CT scan. Complete response (CR) was defined as no residual disease in the smooth nasopharyngeal mucosa, no mass, and no lymph nodes, with confirmation by biopsy. This study was performed after approval from the Jenderal Soedirman University Ethics Committee (number 898/EC/2016). All participants were older than 18 years and provided written informed consent at recruitment, covering sample acquisition and use and clinical data.

Chemoradiotherapy

A combination of radiation and chemotherapy is used to treat advanced locoregional NPC. Chemotherapy is classified as neoadjuvant, concurrent, or adjuvant, depending on whether it is given before, during, or after radiation, and is chosen individually based on the patient's characteristics. Concurrent cisplatin with irradiation is the standard chemoradiation treatment for locoregional disease. Concurrent cisplatin + radiotherapy followed by cisplatin/5-FU or carboplatin/5-FU, or chemoradiation followed by adjuvant chemotherapy, may also be used. Docetaxel/cisplatin/5-FU or docetaxel/cisplatin regimens are used for neoadjuvant chemotherapy, as are cisplatin/5-FU and cisplatin/epirubicin/paclitaxel, with concurrent weekly cisplatin or carboplatin administration. The prescribed radiation doses were 69-74 Gy to PGTVnx, 66-70 Gy to PGTVnd, 60-66 Gy to PTV1, and 50-54 Gy to PTV2, delivered in 30 or 33 fractions. Radiation was given once daily, five fractions per week, for 6-6.5 weeks under IMRT planning (a short arithmetic sketch of the per-fraction dose follows below).

Plasma sample collection and miRNA isolation

Whole blood (5 mL per patient) was collected before and after therapy using EDTA vacutainers. Plasma was separated by centrifugation (1500 rpm for 10 min) and stored at -80 °C until analysis.
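As a quick arithmetic check on the fractionation scheme described above, the per-fraction dose is simply the total prescribed dose divided by the number of fractions. The sketch below automates that division; the target names are copied from the text, but the pairing of dose ranges with 30 versus 33 fractions is an illustrative assumption.

```python
# Per-fraction dose = total prescribed dose / number of fractions.
prescriptions = {          # target: (min total Gy, max total Gy)
    "PGTVnx": (69, 74),
    "PGTVnd": (66, 70),
    "PTV1":   (60, 66),
    "PTV2":   (50, 54),
}
for target, (lo, hi) in prescriptions.items():
    # Delivered in 30 or 33 fractions, once daily, five per week.
    print(f"{target}: {lo / 33:.2f} to {hi / 30:.2f} Gy per fraction")
```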
200 μL of plasma was used for total RNA extraction using the miRCURY RNA Isolation Kit - Biofluids (Cat No. 300 112, Exiqon). cDNA synthesis was performed using 50 μL of total RNA with the Universal cDNA Synthesis Kit II, 8-64 rxns (Cat No. 203 301, Exiqon) in a Bio-Rad C1000 thermal cycler (42 °C for 60 min, 95 °C for 5 min, then hold at 4 °C). All procedures followed the manufacturer's recommended protocols.

MicroRNA panel quantification

MicroRNA profiling was performed by real-time PCR using the Cancer Focus microRNA PCR Panel with ExiLENT SYBR Green master mix, 2.5 mL (Cat No. 203 402, Exiqon), comprising LNA (Locked Nucleic Acid)-based primers for 196 target miRNAs. All protocols followed the manufacturer's recommendations.

Data analysis

Analyses were performed using GenEx 6 Pro with the Exiqon qPCR wizard (MultiD). Expression analysis used relative quantification by the 2^-ΔΔCt method [20]. Gene enrichment analysis of the differentially expressed miRNAs was performed using Ingenuity Pathway Analysis (IPA). GraphPad Prism 9 software was used for data analysis and figure preparation, reporting the mean, standard deviation (SD), and Student's t-test. ROC sensitivity and specificity analysis was constructed with a 99% confidence interval, with p < 0.05 considered statistically significant.

Patient characteristics

The study included 13 males and 4 females with a median age of 51 years. Four patients were staged I-II and 13 were staged III-IV. All patients completed chemo- and radiotherapy, as shown in Table 1. Based on EBV titers, patients were positive for EBV-EA (n = 15), EBV-EBNA (n = 15) and EBV-VCA (n = 17). Histologically, the participants were dominated by WHO type III (14 patients).

Differential expression

Clinical and pathological patient characteristics at diagnosis are summarized in Table 1. Analysis of relative expression using GenEx identified the 20 most differentially deregulated miRNAs (see Table 2 for details). Of these, 10 miRNAs were downregulated (p < 0.0001) and 10 were upregulated (p < 0.0001). The heatmap of differentially expressed microRNAs is presented in Fig. 1. miRNA expression in the circulation of the 17 NPC patients with complete response after chemoradiotherapy showed inverse expression of 20 microRNAs before versus after therapy (Fig. 2). Five miRNAs, including miR-483-5p, miR-584-5p, miR-122-5p, miR-7-5p and miR-150-5p, were upregulated. Another five miRNAs, including miR-421, miR-133a-5p, miR-18a-5p, miR-106b-3p and miR-339-5p, were downregulated. Sensitivity and specificity analyses were performed on the 10 miRNAs that consistently changed expression after chemoradiotherapy. The analysis showed that these 10 miRNAs could be suggested as candidates for assessing the response to chemoradiotherapy in NPC patients using circulating miRNAs. ROC analysis showed AUC (area under the curve) values > 0.9, with 99% confidence intervals and p < 0.0001 (Fig. 3).

Signaling pathway mechanisms

The identification of potential biological mechanisms affected by miRNA expression dysregulation after chemoradiotherapy was carried out using IPA (Ingenuity Pathway Analysis).

Fig. 1. Expression profiles with highly significant p-values (p < 0.0001) from the Cancer Focus microRNA panel, comparing circulating miRNAs before (pre-treatment) and after chemoradiotherapy in NPC.
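The relative-quantification and ROC steps described in the Data analysis subsection are standard computations. The sketch below reproduces them in Python with scikit-learn, using hypothetical Ct values and expression levels; it is illustrative only and is not the GenEx pipeline used in the study.

```python
# 2^-(ΔΔCt) relative quantification plus a ROC/AUC check.
import numpy as np
from sklearn.metrics import roc_auc_score

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-(ΔΔCt) method."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Hypothetical Ct values for one miRNA vs a reference, after vs before therapy:
print(fold_change(ct_target=26.0, ct_ref=20.0,
                  ct_target_ctrl=28.0, ct_ref_ctrl=20.0))  # -> 4.0 (upregulated)

# ROC/AUC: how well does one miRNA's expression level separate post-therapy
# from pre-therapy samples? (Illustrative numbers only.)
expression = np.array([2.1, 3.4, 1.8, 2.9, 0.4, 0.6, 0.5, 0.7])
is_post = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("AUC:", roc_auc_score(is_post, expression))  # the paper reports AUC > 0.9
```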
Differential expression of the deregulated miRNAs analysed using IPA showed several impacts on cellular mechanisms (p < 0.01), with activation z-scores ranging from -1.131 to 2.256. The significant changes caused by chemotherapy and radiotherapy affect cell death and survival mechanisms involved in necrosis, apoptosis, cell viability, and cell death of carcinoma cell lines (Table 3). Thirteen downregulated miRNAs were involved in cell viability processes, six miRNAs in cell death regulation, 23 miRNAs in the biological pathways of apoptosis, and 25 miRNAs in cell necrosis processes.

Table 3. Cellular mechanisms identified from circulating microRNA expression profiling in NPC after chemoradiotherapy.

Discussion

... potential targeted treatment [21]. The low number of patients who achieve a complete response makes it interesting to understand the underlying molecular changes. One class of molecules known to be differentially regulated in response to changes in cellular activity, such as therapy, is the microRNAs (miRNAs). miRNAs respond to stress-like effects such as hypoxia, and biological processes such as cell proliferation, apoptosis, and tumorigenesis provide a general picture of the microenvironment affecting miRNA expression [22,23]. Therefore, discovering minimally invasive candidate biomarkers is expected to provide a new approach to assessing the success of treatment [24]. miRNAs play an essential role as post-transcriptional regulators by targeting hundreds of mRNAs and are involved in many diseases, including cancer. Previous studies have reported that miRNA expression is associated with chemotherapy responses and resistance in esophageal cancer [14], oral squamous cell carcinoma [25], breast cancer [19], and lung cancer [26]. In other reports on nasopharyngeal carcinoma, miR-324-3p and miR-519d modulate radiotherapy sensitivity by inhibiting the translation of their target genes WNT2B [27] and PDRG1 [28]. In addition, miR-29c is known to confer sensitivity to both cisplatin-based chemotherapy and radiotherapy [18,29,30]. This study found 10 miRNAs whose circulating expression changed stably and consistently after a complete response to chemoradiotherapy (Figs. 1 and 2): five miRNAs with significantly increased expression, namely miR-483-5p, miR-584-5p, miR-122-5p, miR-7-5p and miR-150-5p, and five with significantly decreased expression, namely miR-421, miR-133a-3p, miR-18a-5p, miR-106b-3p and miR-339-5p. As shown previously, changes in miRNA expression in response to therapy are closely related to the type of therapeutic agent given and its influence on cellular mechanisms and the body's response [31]; for example, in colorectal cancer, 5-FU exposure affects miR-494 expression in chemosensitive cells and miR-200c in chemoresistant cells. To further investigate the regulatory mechanisms of these miRNAs in nasopharyngeal carcinoma, we examined the significantly altered miRNAs in relation to the cellular regulation of cell death and survival associated with chemotherapy and radiotherapy. We found alterations of miRNAs related to necrosis, apoptosis, cell viability, and cell death of carcinoma cell lines. Although this mechanistic analysis relates to the chemoradiotherapy response in nasopharyngeal carcinoma, the direct relationship with the individual miRNAs remains unclear. There are several limitations to this study.
We used a small sample size for miRNA profiling, owing to the low survival rate and irregular treatment schedules, and the findings have not been validated in a large cohort. Alterations in circulating miRNA expression can be detected by bioinformatics approaches and correlated with molecular changes after chemoradiotherapy; nevertheless, confirmation in biological models and validation of the differential miRNA expression are necessary. Therefore, further research is needed to explain the mechanisms more comprehensively and to validate the findings in a larger number of samples. Meanwhile, few studies have examined the function of circulating microRNAs in tumor treatment sensitivity and in the complete response to chemoradiotherapy for nasopharyngeal carcinoma.

Conclusions

Research focused on changes in circulating miRNA expression in response to a combination of radiotherapy and chemotherapy is relatively limited. In this study, we created a signature profile associated with these conditions. We found increased expression of miR-483-5p, miR-584-5p, miR-122-5p, miR-7-5p and miR-150-5p; in contrast, decreased expression was found for miR-421, miR-133a-3p, miR-18a-5p, miR-106b-3p and miR-339-5p. In the future, we will validate these results with a larger sample size to determine the sensitivity and specificity of the candidate chemoradiotherapy biomarker miRNAs.

Declaration of competing interest

The authors state that there is no conflict of interest in this study.
Assessing Consumer Willingness to Pay for Nutritional Information Using a Dietary App
A healthy society is the foundation of development in every country, and one way to achieve a healthy society is to promote healthy nutrition. An unbalanced diet is one of the leading causes of noncommunicable diseases globally. If foods were correctly selected and correctly consumed, both the problem of overeating and that of inadequate nutrition could be largely solved, while also decreasing public health costs. Interventions such as presenting necessary information and warning labels would help consumers make better food choices. Hence, providing nutritional information to consumers becomes essential. The present study investigates the importance of nutrition information labels for consumers' preferences by estimating their willingness to pay for features and information provided by a dietary software program (app). An application can easily display this information to consumers and help them make informed food choices. A discrete choice experiment investigated consumers' preferences and willingness to pay to receive nutritional information. Mixed multinomial logit and latent class analyses were applied. The results showed heterogeneity in consumer preferences for the different nutritional information provided by the application; consumers are willing to pay more for salt and fat alerts. The results of this study allow for the analysis of consumers' interest in nutritional information. Such results are essential for the industry's future investments in similar applications that could potentially help consumers make better-informed choices.

Introduction

According to the World Health Organization [1], noncommunicable diseases (NCDs) such as cardiovascular diseases, cancers, chronic respiratory illnesses and diabetes are the leading cause of death worldwide. More than 41 million people die yearly from NCDs (71% of deaths worldwide), including 15 million individuals who die prematurely, between 30 and 69 years of age. More than 85% of these premature deaths occur in low- and middle-income countries. NCDs are also considered a major health concern in developing countries like Iran: with a population of over 80 million, the mortality rate due to NCDs was almost 82% in 2016 [2]. In addition, in 2013, an alarming growth in NCD mortality over the preceding 20 years was observed in Iran [3]. It is noteworthy that the ageing population of Iran can worsen the current situation [4]. Most early deaths are related to well-known risk factors, such as an unhealthy diet, harmful use of tobacco and alcohol, and lack of physical activity [1]. Dietary risk factors are the main contributors to NCDs [5]; people of all ages are more likely to experience an unhealthy diet than any of the other three factors (i.e., harmful use of tobacco, harmful use of alcohol, and lack of physical activity). Foods rich in fat, saturated fats, trans fats, sodium and sugar are likely to increase the risk of nutrition-related diseases [6]. Reducing such harmful elements in highly populated countries such as Iran can be an effective strategy to control and manage NCDs [7]. Therefore, steps must be taken to guide appropriate food consumption behaviours [4]. If foods were accurately selected and consumed, the problems of overeating and lack of nutritional value could be solved to a large extent, which could help decrease public health costs [8]. Numerous factors affect consumers' food choices [9]; among them is the information displayed on food labels.
As an essential tool to inform consumers, food labelling is the primary means of exchanging information between producers and consumers along the food supply chain [10]. Consumers look for different types of food information related to religious constraints, the presence of allergens, environmental issues and production procedures [11]. In this vein, nutritional food labelling has gained importance for the food industry as competition increases and consumers pay greater attention to health factors [10]. Food nutritional labels influence consumers' buying decisions and change their behaviour toward a healthy and desirable food pattern [12,13]. Therefore, presenting nutritional and allergen information on the food label is considered part of a broad attempt to prevent the health costs of food-related diseases [14]. Although the information on the food package is simple, consumers seem to use it less than one might expect [8]. Because of limited attention, customers consider only a restricted number of product features, usually those that may be relevant as a quality cue; price and brand are the most relevant [15]. During their in-store experience, consumers are also influenced by various marketing stimuli, and since they shop for a wide range of products, they face multidimensional decision-making problems. Time pressure and cognitive limitations are significant constraints on using and understanding label information in real shopping scenarios, and consumers are not willing to consider health labels, especially when constrained by time [15]. Grunert et al. [16] concluded that only 27% of consumers check the nutritional information on food packages. Moreover, although the majority (83%) of Iranian consumers declared that they read food labels when shopping, only a small percentage (5%) aimed to obtain nutritional information from labels [8,17]. A possible reason is that consumers are too hurried to look at and analyse nutritional and health-related data presented on food labels in a distracting and crowded purchasing atmosphere. In this situation, evaluating the nutritional value of a shopping basket can be difficult even for conscious consumers who wish to choose healthy foodstuffs [14]. Using interpretation guides, such as software programs (applications, or apps), and keeping a tally of nutritional information on food products while shopping could help consumers make more conscious choices [12,14,18].

This study investigates consumers' preferences and willingness to pay for different information features of a dietary software program. A discrete choice experiment was applied to evaluate the hypothetical services mentioned. Three models were estimated: a mixed multinomial logit (ML) in preference space, a ML in willingness-to-pay (WTP) space, and a latent class model. The findings of this study are useful to several stakeholders. Policymakers could use the present results to develop policies and favour software applications that allow more conscious food choices and promote a healthy change in people's diets. Consumers could also benefit from having access to a tool that assists them in selecting their food.

Literature Review

According to the literature, food labelling, by providing relevant information to consumers, plays a significant role in healthy food choices [19].
Front Of Package (FOP) nutrition labels are food labels that provide nutritional information on commonly monitored nutrients such as saturated fat, sugar, and sodium (salt) in various designs and assist consumers in making healthy food choices [20,21]. Emrich et al. [22] found that consumers who consulted FOP nutrition labels reduced unhealthy components in their diets, such as overall total fat, saturated fat, and sodium intake. Similarly, Mclean et al. [23] showed that people with hypertension would lower their intake of high-sodium processed food using FOP nutrition labels. Consumers consider nutrition labels essential in evaluating a product [24]; however, the information included in the labels does not necessarily influence their purchasing decisions. Barreiro-Hurlé et al. [25] showed that using nutritional information, regardless of the type of nutritional label (facts panel or claim), leads consumers to healthier food selections. In addition, background knowledge is needed to understand the information on nutritional labels [26,27]. Low awareness and lack of nutritional knowledge are the most important reasons for not paying attention to the nutritional contents of food labels [17]. Therefore, to be effective, salient and personalised information should be accessed easily, clearly, and quickly through reliable tools that lead to informed food choices [28]. A possible efficient solution to all the mentioned issues is the use of technology; usability is of paramount relevance here [29]. In other words, smartphone applications can be used to read and process the written information on nutritional labels and display it to consumers in an easy-to-understand way. Moreover, by adapting to consumers' unique features and providing nutritional advice, these apps can assist in choosing the products that consumers require [30]. In this regard, apps such as "Tapingo" and "SmartAPPetite" were developed to provide consumers with relevant information, worldwide and in southwestern Ontario respectively. Whereas university students used the first app to order food, the second was used to motivate people to consume local food, taking their nutritional preferences into account and providing customised information [31,32]. Neither app is available in Iran [31,32]. Therefore, customised food information, specific to each individual, easily understood and relevant to their needs, can be provided via technological tools such as the apps mentioned above. Moreover, previous studies have found that participants were willing to pay for customised food information [14]. However, the willingness to pay was mainly for information that included allergy-alert and diet-alert warnings. Such information could also be sold on a subscription basis. Several economic factors explain the existence of subscriptions. First, subscriptions can reduce transaction costs: a single transaction covers repeated deliveries, rather than one exchange every time the product is supplied and used. Second, the risk of price changes is lower for the consumer, although the seller may face uncertainty about future prices; the advance payment is a premium for running that risk. Finally, subscriptions can lower market uncertainty by fixing the number of products sold. Subscriptions also allow sellers to segment buyers into groups with different demand elasticities and thus permit price discrimination, which can benefit both producers and social welfare, so long as the marginal cost of production is positive and there are no binding capacity constraints [33].
Another critical issue in using the app is the format in which information is provided. Previous studies presented and analysed two main label formats: the Guideline Daily Amount (GDA) and the Traffic Light System (TLS). The GDA shows nutritional information numerically and positively influences healthy food choices [34]. The TLS is widely used in the food industry. It typically shows coloured lights (green, amber or red) to indicate whether foods contain low, medium or high amounts of unhealthy components (e.g., salt, fat, saturated fat or sugars) [35]. According to several researchers [36,37], this system also has a significant role in the selection of healthy food. The information provision format thus plays a crucial role in shaping consumers' attitudes toward nutritional information. Consumers like nutritional labels with pleasant colours, symbols, and easy-to-understand information, whether presented graphically or numerically [18]. In a study including the TLS and the GDA, approximately 90% of participants selected agree/strongly agree when asked whether they liked the TLS [38]; in the same study, the GDA was liked by only 50% of the participants. Similarly, consumers in New Zealand preferred the TLS format most often [39]. By contrast, other studies found that the GDA was considered a more attractive and better-liked label than the TLS [40,41]. Different factors such as social level, local differences, interest in healthy eating and consumers' nutritional knowledge play a significant role in the use of the GDA and the understanding of nutritional information [27]. Young consumers with no children who use the GDA are more interested in nutritional information and more aware of food-related health issues [12]. Another factor influencing consumers' informed food choices is whether food information is displayed for every single product in the shopping basket or aggregated over all items in the basket. The few studies on this issue have found that consumers prefer to process each product's information individually rather than obtain aggregate information for the total basket [12,14]. However, given the scarcity of literature on this issue, further research is needed.

The Discrete Choice Model
A Discrete Choice Experiment (DCE) is a survey-based methodology widely used to model consumers' preferences [42]. According to this method, all goods and services are defined by a set of attributes. Respondents are therefore presented with several market simulations (choice sets) in which the offered products or services differ in at least one attribute [43]. All alternatives shown to the respondents are described by the same list of attributes but with different levels (e.g., the presence or not of an organic logo). Based on these attributes and their levels, participants are asked to choose the alternative they prefer most in each choice set, according to the characteristics and price of the presented products or services. Including the price/cost of each product enables the calculation of a respondent's willingness to pay (WTP) for a specific attribute. The DCE is based on Lancastrian consumer theory [44] and the Random Utility Model (RUM) [45]. According to Lancaster [44], individuals' choices follow the utility maximisation rule, meaning they select the alternative that gives the highest utility [46]. Moreover, the RUM framework indicates that the utility of a good or service can be broken down into separate utilities for its attributes.
Therefore, the total utility of the selected item $i$ for individual $n$ is represented as the sum of two components, a systematic component $V_{ni}$ and a non-observed component $\varepsilon_{ni}$, which is treated as random. The systematic (observed) component can be further approximated by a linear function of the product or service attributes in the vector $X_{ni}$, with the population utility weights for each attribute collected in the vector $\beta$:

$U_{ni} = V_{ni} + \varepsilon_{ni} = \beta' X_{ni} + \varepsilon_{ni}.$

Under the assumption that $\varepsilon_{ni}$ follows an independent and identically distributed extreme value distribution, a multinomial logit (MNL) model can be implemented. However, the main limitation of the MNL model is that it assumes homogeneity of preferences across consumers, which is an unrealistic assumption [47]. Flexible models such as the Mixed Multinomial Logit (ML) can capture unobserved preference heterogeneity across individuals. Assuming that the unknown parameter vector $\beta_n$ is random and follows a continuous probability distribution, the utility of individual $n$ from alternative $i$ is specified as:

$U_{ni} = \beta_n' X_{ni} + \varepsilon_{ni},$

where $\beta_n$ varies between individuals but not over alternatives (representing consumers' preference heterogeneity), $X_{ni}$ is a vector of observed variables related to alternative $i$ and decision-maker $n$, and $\varepsilon_{ni}$ is a random term distributed i.i.d. extreme value over individuals and alternatives. In the present study, the random parameters $\beta$ were assumed to be normally distributed to allow positive and negative preferences for each attribute. Only the price parameter was assumed to follow a negative log-normal distribution, in line with microeconomic theory (negative utility for the price parameter). Furthermore, because coefficients in preference space cannot be interpreted directly in monetary terms, the willingness to pay was also calculated. As the ML model accounts for the heterogeneity of preferences, the WTP was estimated directly in WTP space, which allows the attribute coefficients to be interpreted and compared with each other more easily, while providing more reasonable distributions [48]:

$U_{ni} = \lambda_n \left( \gamma_n' X_{ni} + p_{ni} \right) + \varepsilon_{ni},$

where $\lambda_n = \beta_n^{price}/\mu_n$, with $\beta_n^{price}$ an individual-specific coefficient for price and $\mu_n$ an individual-specific scale parameter, and $\gamma_n = c_n/\lambda_n$ with $c_n = \beta_n/\mu_n$, so that $\gamma_n$ collects the individual-specific WTPs for the non-price attributes and $p_{ni}$ is the price of alternative $i$. Although ML models allow for variety in preferences, the source of the heterogeneity of tastes remains unknown; moreover, their specification requires an a priori assumption on the distribution of $\beta$. These limitations can be overcome with latent class (LC) models [49,50], which provide a variety of information about participants' behaviour. In this model, the heterogeneity of preferences is accommodated by dividing consumers into a set of exclusive classes with homogeneous preferences within each class. Therefore, the utility in the LC model of individual $n$ for alternative $i$, conditional on membership of class $c$, is:

$U_{ni|c} = \beta_c' X_{ni} + \varepsilon_{ni|c},$

where $\beta_c$ is the segment-specific vector of coefficients associated with class $c$, and $\varepsilon_{ni|c}$ follows a Gumbel distribution. The class membership of individuals is a priori unknown to the analyst, as it depends on observable attributes and latent unobservable components [51]. The optimal number of classes was identified using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), as well as the significance of the estimated parameters and the interpretability of the model (in terms of the sign and size of the parameters) [49,52].
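To make the random-utility machinery above concrete, the following sketch simulates a mixed logit population and recovers WTP as the negative ratio of an attribute coefficient to the price coefficient. All numerical values and attribute names are hypothetical placeholders, not the paper's estimates; the snippet only illustrates the mechanics of the specification described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population parameters (illustrative only, not the paper's estimates).
attr_names = ["basket", "format", "salt_alert", "fat_alert", "allergy_alert"]
beta_mean = np.array([-0.4, 0.1, 0.9, 1.0, 0.6])  # means of normal coefficients
beta_sd = np.array([0.5, 0.3, 0.7, 0.8, 0.6])     # std. devs of normal coefficients

n_ind = 1000
betas = rng.normal(beta_mean, beta_sd, size=(n_ind, len(attr_names)))
# Price coefficient: negative log-normal, as assumed in the text above.
beta_price = -rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n_ind)

# Two hypothetical alternatives (attribute levels plus a monthly price,
# in thousands of tomans) and an opt-out with zero utility.
X_a, price_a = np.array([0, 1, 1, 1, 0]), 20.0
X_b, price_b = np.array([1, 0, 0, 1, 1]), 12.0

V = np.stack([betas @ X_a + beta_price * price_a,
              betas @ X_b + beta_price * price_b,
              np.zeros(n_ind)], axis=1)
P = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)  # logit choice probabilities
print("simulated choice shares:", P.mean(axis=0).round(3))

# Individual-level WTP for each attribute: -beta_attribute / beta_price.
wtp = -betas / beta_price[:, None]
for name, w in zip(attr_names, wtp.mean(axis=0)):
    print(f"mean WTP for {name}: {w:6.1f} thousand tomans/month")
```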
Based on the assumption of linearity of the utility function, the indirect utility for each alternative $i$ in each segment $c$ is:

$V_{ni|c} = \beta_c' X_{ni}.$

The WTP for each attribute was estimated for each segment, calculated as the price change associated with a unit increment in a specific attribute in that segment:

$\mathrm{WTP}_{attribute} = -\,\beta_{attribute}/\beta_{price},$

where $\beta_{attribute}$ is the coefficient of the attribute of interest in a specific segment and $\beta_{price}$ is the price coefficient in the same segment.

Product and Attribute Selection
A new digital mobile application that helps consumers make informed food choices during grocery shopping was selected for the analysis. It can provide consumers with personalised information on potentially unhealthy components (e.g., fats, salt) in food products. A warning on possible allergens is also available. This information can be provided for each product individually or for all products in the basket simultaneously. Moreover, the information can be displayed using a TLS or GDA format. The service has a price in local currency, which varies according to the subscription plan (monthly, quarterly or yearly). Hence, seven attributes were selected based on previous literature: basket [12,14,53], format [24,34,54,55], fat alert [6,19,22,56,57], salt alert [23], allergy alert [58-60], payment type [61-63], and price [12,14]. The description of the attributes and their levels is presented in Table 1.

Table 1. Description of attributes and their levels.
- Basket: the nutritional information is displayed in two ways. (a) Basket: information shown in aggregate form for all the selected products (the entire basket); (b) individual: information presented for each product in the basket.
- Format: (a) Guideline Daily Amounts (GDA): communicates nutrient content levels in absolute values, per 100 g or portion size, also expressed as a percentage of proposed daily reference quantities within one's total diet; (b) Traffic Light System (TLS): shows nutrient content by weight, with green, amber and red colours respectively depicting low, medium and high content of unhealthy components (e.g., salt). Levels: GDA = 0, TLS = 1.
- Sodium (salt) alert: a service that alerts shoppers if they buy products with high sodium content. Levels: salt alert = 1, no alert = 0.
- Fat alert: a service that alerts shoppers to avoid buying foods with high fat content. Levels: fat alert = 1, no alert = 0.
- Allergies: a service that helps buyers avoid an allergic reaction. The most common ingredients that trigger food allergies include dairy, eggs, peanuts, wheat, corn and soy; the service also covers allergens found in flavourings, colourings or other additives. The application allows users to scan products' barcodes to check for allergens. Levels: allergy alert = 1, no alert = 0.
- Payment: payment for using the diet application is based on a monthly, quarterly or yearly subscription; quarterly and yearly subscribers receive a 50% and 70% discount, respectively. Levels: monthly = 0, quarterly = 1, yearly = 2.
- Price per month: monthly cost of using the services of the diet application (without the quarterly and yearly discounts).

Data Collection and Analysis
The data were gathered through personal interviews between August and September 2019 in Sari, the capital of Mazandaran (Iran). A convenience sample in the province of Mazandaran was selected given the predominance of nutrition-related diseases: in Mazandaran, the leading cause of death is cardiovascular disease.
Moreover, Mazandaran presents the highest obesity prevalence in all of Iran and is among the top five provinces regarding the prevalence of hyperlipidaemia and hypertension [64]. The individuals included in the study were over 18 years old, currently living in Sari, and had an income of their own or could use the household income for food expenses. For the sample collection, three leading chain supermarkets were selected, and interviews were conducted during quiet hours (excluding Thursdays and Fridays) so that respondents would not get distracted. Customers and staff present at the moment of the study were randomly recruited. Out of the 207 distributed questionnaires, three incomplete questionnaires were excluded; the final sample included 204 questionnaires. The questionnaire was developed in English on the Qualtrics platform and then translated into Farsi and back-translated [65]. Close collaboration with a local researcher ensured conceptual, functional and categorical equivalence in the translation [66]. The questionnaire included socio-demographic data, health conditions, shopping behaviour, awareness of food-related diseases and the DCE.

Discrete Choice Experiment Design and Estimation
Given the seven attributes and their levels (five two-level attributes, one three-level attribute and one four-level attribute), 384 (2⁵ × 3 × 4) possible alternatives could be created. The number of combinations was reduced to obtain a more reliable and statistically efficient design: a fractional factorial design of 12 unlabelled choice sets was developed employing a D-efficient design in the Ngene software (D-error = 0.35 and A-error = 0.40). Each respondent was presented with 12 choice sets with two alternatives and a "no choice" option, in case the respondent preferred not to choose either of the application's service alternatives. All choice sets and alternatives were presented in a randomised order to avoid any bias [67]. Before the choice experiment, participants were introduced to the purpose of the research. Then, the respondents were presented with a list of products, in their local language, based on the criteria of the "optimal food basket for Iranian households" introduced by the Ministry of Health, Treatment and Medical Training of Iran [68]. The list was developed based on consumers' nutrient requirements according to age and gender, cost (including only economically affordable goods) and the production potential of the country for each product. Both raw and pre-prepared products were included. The products were presented in a graphical format (Figure 1) and in randomised order. Individuals were asked to imagine they were in a store and to choose the groceries they usually buy during their shopping trip, to simulate purchasing decision-making. The participants could choose as many products as they wanted. The weight/volume of each product was specified (Table 2). The objective of this task was to get consumers to visualise the products they usually buy and evaluate how useful the proposed application could be when valuing the nutritional properties of their usual food shopping. Then, the respondents were introduced to the diet application. The participants were told that a new diet application was being launched to aid customers in making healthier food choices. The service would allow them to keep a tally of the main nutritional components of the food they purchase, as well as the presence of substances specifically harmful to them (e.g., sugar, fat or allergenic components).
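As a quick check on the design space described above, the full factorial implied by the attribute levels in Table 1 can be enumerated directly. This is a minimal sketch: the price levels are hypothetical placeholders, since the paper does not list them explicitly, and the actual 12-set design was generated with Ngene, not with this code.

```python
from itertools import product

# Attribute levels following Table 1; price levels are hypothetical placeholders.
levels = {
    "basket":        [0, 1],           # individual vs. whole-basket information
    "format":        [0, 1],           # GDA = 0, TLS = 1
    "salt_alert":    [0, 1],
    "fat_alert":     [0, 1],
    "allergy_alert": [0, 1],
    "payment":       [0, 1, 2],        # monthly / quarterly / yearly
    "price":         [5, 10, 15, 20],  # hypothetical monthly prices
}

full_factorial = list(product(*levels.values()))
print(len(full_factorial))  # 2**5 * 3 * 4 = 384 possible profiles
```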
Table 2. List of products presented to participants and their weight/volume:
Bread (400 g package); barbecued chicken (800 g pack); rice (300 g pack); eggs; macaroni (500 g package); milk (1 L); a can of baked beans and mushrooms (400 g); vegetable oils (810 g); Olivier salad (250 g pack); chicken soup (70 g package); vegetables (250 g pack); biscuits (100 g); apples; Coca-Cola (1.5 L); cheese (450 g); plus any other foodstuffs the participant reported buying.

The application would therefore display the nutritional information in an easy-to-read format once the barcode of the selected product was scanned. All attributes of the application and their levels were presented and thoroughly described so that all participants were equally informed. Participants could also request help or clarifications from the interviewer, who was present throughout the study. An example of a choice set (Figure 2) was also shown to the participants, followed by a "cheap talk" script to reduce hypothetical bias [69]. In the DCE, participants were asked to select the alternative with the features they would like most for the health application at the given price and payment method. All alternatives were presented simultaneously, as in the example in Figure 2. Respondents had no time limit for their selection. A total of 2448 choices were collected (204 respondents × 12 choice sets). The data were analysed with the APOLLO package in R [70]. An MNL model was estimated as a departure point, followed by an ML in preference space using 500 Halton draws. Coefficient estimates from the ML in preference space were then used as priors for the ML estimation in WTP space, again using 500 Halton draws. To facilitate convergence in WTP space, a scaling factor was implemented.
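For readers unfamiliar with simulated maximum likelihood, the sketch below generates the kind of quasi-random draws referred to above. It uses SciPy's quasi-Monte Carlo module rather than APOLLO's internal routines, and the dimension count is an assumption made for illustration (one dimension per random coefficient).

```python
import numpy as np
from scipy.stats import norm, qmc

# 500 Halton draws, one dimension per random coefficient
# (here assumed to be 6: five attributes plus price).
n_draws, n_random_coefs = 500, 6
halton = qmc.Halton(d=n_random_coefs, scramble=True, seed=42)
u = halton.random(n=n_draws)  # uniform (0, 1) quasi-random points

# Transform to standard-normal deviates; inside a simulated log-likelihood
# these would be scaled and shifted by the estimated means and std. devs.
z = norm.ppf(u)
print(z.shape)  # (500, 6)
```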
Results
The characteristics of the sample are shown in Table 3. A total of 204 responses were collected. The average age of the participants was 37 years. In addition, 36.27% of respondents declared that they were the head of their household, meaning they were responsible for providing all or most of the household expenses or for deciding how to spend the household income. Most respondents reported a low income, and the average household size was about 3.5 people. The average body mass index (BMI) was 27 kg/m², indicating that the sample was, on average, overweight [71]. Regarding the results of the ML (Table 4), the alternative specific constant (ASC) was positive and statistically significant; that is, choosing alternative A or B yielded positive utility. The basket coefficient was negative and statistically significant, meaning that respondents preferred to have product information displayed individually rather than in an aggregated format for all items. The format coefficient was not statistically significant, indicating the participants' indifference to format: presenting information using the Traffic Light System (TLS) or the Guideline Daily Amount (GDA) did not make any difference to the respondents. (Notes to Table 4: the symbols *, **, *** denote significance at the 10%, 5% and 1% levels; the ASC captures the choice of either alternative A or B as opposed to the "none" option; all estimates follow a normal distribution, besides price, which is negative log-normal.) Additionally, the coefficients of the salt alert, fat alert, and allergy alert were positive and statistically significant, indicating that consumers in Sari would like to know whether their chosen products have a high fat or salt content, compared to not having such information (reference level). They also preferred to be informed about products that could cause them food allergies, compared to not having such information. Regarding payment frequency, respondents preferred a quarterly payment over a monthly payment (reference level); however, a monthly payment was preferred over a yearly one. The price coefficient was significant and negative, which means that participants preferred to pay less. Dispersion parameters (standard deviation estimates) were statistically significant, exhibiting heterogeneous preferences for all attributes. Based on the Mixed Multinomial Logit (ML) results in WTP space in Table 4, the highest willingness to pay was for the fat alert: consumers were prepared to pay approximately 23,000 tomans (approximately €4.93) to receive fat alerts for selected products. Next, consumers were willing to pay about 21,000 tomans (approximately €4.50) to receive salt alerts and 13,000 tomans (approximately €2.79) to receive allergy alerts. Most of the standard deviation estimates for the WTP model were significant, indicating considerable variety among consumers' preferences for these attributes. However, the standard deviation estimate for the format coefficient was not significant, meaning that there was no heterogeneity among respondents regarding their willingness to pay for this attribute. In general, respondents are not willing to pay to receive their nutritional information in the TLS format over the GDA, and this result is quite homogeneous across the sample. A latent class model was estimated to investigate the source of such heterogeneity in consumer behaviour. The number of classes was based on the model with the lowest BIC (3281.51) and the best interpretability.
Thus, participants were divided into two groups with different behavioural characteristics: 14% of consumers belonged to class 1 and 86% to class 2. The results of the model are reported in Table 5 and showed significant differences between the two classes. The respondents in class 1 were indifferent to receiving the information for each product or as a basket. Moreover, the format coefficient was negative and statistically significant, demonstrating these consumers' preference for the GDA rather than the TLS. The coefficients of the salt and fat alerts were positive and statistically significant, indicating consumers' preference for receiving alerts related to the prevention of health issues; the allergy alert estimate was significant only at the 10% level. In this class, the quarterly and annual payment coefficients were negative and statistically significant, indicating the respondents' preference for a monthly subscription over quarterly and annual subscriptions. The coefficient of price was negative, as expected, and statistically significant. (Notes to Table 5: the symbols *, **, *** denote significance at the 10%, 5% and 1% levels; the ASC captures the choice of either alternative A or B as opposed to the "none" option.) In contrast to class 1, in class 2 the basket attribute was significant, indicating that people in this class preferred the diet app to present nutritional information for each product separately rather than for the entire basket. The influence of the format attribute on consumer preference was not statistically significant in class 2. Moreover, the salt alert, fat alert, and allergy alert coefficients were positive and statistically significant. Unlike in class 1, the payment attributes were not statistically significant: participants were indifferent to monthly, quarterly, and annual subscriptions. The coefficient of price in this class was also negative, as expected. In summary, although the salt alert and fat alert coefficients were higher in class 1 than in class 2, the willingness to pay for the salt alert and fat alert in class 2 was larger than in class 1. Similarly, the willingness to pay for the allergy alert was higher in class 2 than in class 1. While the consumers in class 1 had clear preferences for the format (GDA) and the payment frequency (monthly), consumers in class 2 were indifferent to these attributes. On the other hand, participants in class 2 preferred to receive information for each product, while participants in class 1 did not have any specific preference in this regard. In addition, the ASC was positive and significant, indicating positive preferences for using the application.

Discussion
The present study uses a discrete choice experiment to investigate consumers' preferences and WTP for receiving nutritional information provided by a dietary application during grocery shopping. The findings from the ML model indicate heterogeneity in consumer preferences for the different attributes of the app. The aggregated results from the ML and the latent class models showed that participants were willing to pay for customised information at the point of purchase. This result is consistent with previous literature [12,14], which showed that consumers preferred obtaining dietary and allergy information when buying food. It also aligns with previous findings that about 80% of Mexican consumers liked and wanted warning labels on the front of food packages [38].
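The apparent paradox noted in the latent class results above (larger alert coefficients in class 1 but larger WTP in class 2) follows directly from the WTP ratio defined in the methods section: a class with high price sensitivity can have large utility coefficients yet small monetary valuations. The sketch below illustrates this with purely hypothetical numbers, not the estimates in Table 5.

```python
import numpy as np

# Hypothetical two-class estimates (illustrative only, not Table 5's values).
class_share = np.array([0.14, 0.86])   # class 1, class 2
beta_fat = np.array([1.8, 1.1])        # fat-alert coefficients by class
beta_price = np.array([-0.20, -0.05])  # price coefficients by class

# Within-class WTP = -beta_attribute / beta_price. Class 1 has the larger
# coefficient but, being far more price sensitive, the smaller WTP.
wtp_fat = -beta_fat / beta_price
for name, w in zip(["class 1", "class 2"], wtp_fat):
    print(f"WTP for fat alert, {name}: {w:.1f}")
# class 1: 9.0, class 2: 22.0 -> larger coefficient, smaller WTP in class 1

# Sample-level mean WTP, weighted by class shares.
print(f"share-weighted mean WTP: {(class_share * wtp_fat).sum():.1f}")
```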
Moreover, considering the positive sign and the significance of the salt and fat alert coefficients in both models, respondents are willing to pay for information that helps them make informed and healthier food choices. In this regard, it is recommended that the food industry in Iran place alerts (e.g., salt and fat alerts) on food packaging, similar to the action taken by the tobacco industry. In addition, the ML and latent class model results indicated that most participants preferred to receive the customised information from the application for each product individually, rather than for the whole basket as aggregated information. In other words, individuals, in line with previous findings [14], prefer to examine the nutritional information of each product individually. However, in the latent class model, 14% of individuals were indifferent to how this information was obtained. This result is also in line with previous research [12], which found that 89% of consumers were indifferent to whether the information was provided product by product or in an aggregated format for the whole basket. A possible explanation is the lack of devices in Iran that display integrated information on the shopping basket at the time of purchase; this feature is therefore intangible to some individuals. The results of the ML indicated that consumers were indifferent about the format of the nutritional information. However, the latent class analysis showed that the smallest of the resulting classes (14%) preferred nutritional information displayed in the Guideline Daily Amount format rather than the Traffic Light System. Although this finding is in line with previous research [40], the result is somewhat surprising, given that the TLS has been mandatory in Iran since 2016, while the GDA is optional [10]. A possible reason is that consumers are still not familiar with the TLS: one study showed that 59% of participants were not familiar with the Traffic Light System and only 27% of consumers claimed to have used it [17]. It is worth mentioning that before the implementation of the TLS, nutritional information was presented on food packaging in a format similar to the GDA. According to previous research, although the GDA format presents more detail [72] and its preference might be influenced by the level of education [73], it is also liked more by consumers than other formats [74]. Therefore, designing educational programs in health centres to introduce labels and their function in informed food choice should be a priority for the responsible organisations. The results of the ML also indicated a strong preference for monthly payments rather than yearly ones; consumers are unwilling to undertake a long-term commitment [75]. Moreover, the significant standard deviation estimates for the payment methods in the ML model show high heterogeneity among respondents: although one class of respondents prefers monthly payments (class 1), the other class is indifferent. If the app is implemented, various payment frequency alternatives should therefore be offered to address the needs of different segments. In general, the results of the present paper contribute to the literature in two ways. First, to the authors' best knowledge, there are no previous studies on the WTP of Iranian consumers for the features of an app that could help them make more informed decisions [12,14,18].
Second, given that in the geographical area in which the sample was collected there is a high prevalence of nutrition-related diseases [64], it is key to identify communication strategies and tools that effectively inform consumers and assist them in their food choices. The present study explores one possible alternative among many others; future research should also compare this option with other digital and non-digital solutions.

Conclusions
In this study, a DCE was developed and analysed to examine the effect of a diet app's characteristics (basket, format, salt alert, fat alert, allergy alert, payment, and price) on consumers' preferences and their willingness to pay in Sari, Iran. Our findings provide new insights into consumers' preferences for the nutritional information provided by the application as an effective tool for helping consumers make conscious choices during their shopping experience. In this regard, policymakers could benefit from our results by encouraging the implementation of diverse systems (digital and non-digital) that assist consumers in making informed decisions during their grocery shopping. Meanwhile, future research should also explore how to deliver this information to consumers effectively, studying the diverse tools that could be used for this purpose (e.g., apps, advertisements), how to frame the messages effectively, and consumer acceptability. Policymakers should also promote campaigns that increase consumers' knowledge regarding the presence or lack of nutrients in certain foods. This could contribute to reducing the incidence of non-communicable diseases by encouraging more conscious food choices. Such a process should also motivate the food industry to increase sales by reformulating foods into low-salt and low-fat products. The present study also has some limitations. First, the study includes a convenience sample of 204 people living in Sari, the capital of Mazandaran (Iran). Although the geographical area of the study was chosen on purpose, given the predominance of nutrition-related diseases in the area [64], the results are not generalisable to the rest of the Iranian population or to other countries. Future research would benefit from replicating the current study in other geographical areas of Iran or in other countries with similar morbidity profiles. Second, the present results are based on a hypothetical market, which means that hypothetical bias may arise [76]: respondents might express preferences in the hypothetical DCE that differ from their actual preferences under real circumstances [77], leading to overestimated coefficients [78]. Therefore, future work would benefit greatly from combining revealed and stated preference data. Institutional Review Board Statement: Ethical review and approval were waived for this study due to the lack of an Ethical Review Board at UNIVPM at the time of data collection.
Double Soft Graviton Theorems and BMS Symmetries
It is now well understood that Ward identities associated to the (extended) BMS algebra are equivalent to single soft graviton theorems. In this work, we show that if we consider nested Ward identities constructed out of two BMS charges, a class of double soft factorization theorems can be recovered. By making connections with earlier works in the literature, we argue that at the sub-leading order these double soft graviton theorems are the so-called consecutive double soft graviton theorems. We also show how these nested Ward identities can be understood as Ward identities associated to BMS symmetries in scattering states defined around (non-Fock) vacua parametrized by supertranslations or superrotations. For example, it has now become clear that the "universal" soft theorems (i.e., those soft theorems whose structure is completely determined by gauge invariance [20,21]), such as the leading soft theorems in gauge theories and gravity, as well as the sub-leading soft theorem in gravity, are manifestations of Ward identities associated to a class of asymptotic symmetries (in 4 dimensions, due to the infra-red divergences in these theories, the cleanest statement can be made for the tree-level S-matrix). In the case of gravity, these symmetries are nothing but an infinite dimensional extension of the famous Bondi-Metzner-Sachs (BMS) group. However, factorization theorems in gauge theories and in quantum gravity have a richer structure. In the case of gravity, in a recent paper by Chakrabarti et al. [22], it was shown that there exists a hierarchy of factorization theorems when an arbitrary but finite number of gravitons is taken to be soft in a scattering process. Of particular interest is the so-called double soft graviton theorem, which is a constraint on the scattering amplitude when two of the gravitons become soft. Such double soft theorems have a history in pion physics [23]. In the case of pions, which are Goldstone modes of a spontaneously broken global non-abelian symmetry, double soft pion limits have an interesting structure. As was shown in [23], if we consider a scattering amplitude in which two of the pions are taken to the soft limit simultaneously, the scattering amplitude factorizes, and the double soft theorem contains information about the structure of the (unbroken) symmetry generators. Due to the presence of an Adler zero, which ensures that the single soft pion limit vanishes, it is easy to see that there is no non-trivial factorization theorem if two pions are taken soft consecutively as opposed to simultaneously at the same rate. Double soft graviton theorems are distinct in this regard. Not only is the simultaneous soft limit non-trivial and highly intricate but, unlike the case of soft pions, even the consecutive soft limit does not vanish and gives rise to factorization constraints on the scattering amplitude, which are called the consecutive double soft theorems. In this paper we try to find an interpretation of such consecutive double soft theorems as a consequence of Ward identities associated to the generalised BMS algebra. The outline of this paper is as follows. In Section 2, we recall the equivalence between leading and subleading soft graviton theorems and Ward identities associated to asymptotic symmetries [1,12,26,27]. In Section 3, we explain the consecutive double soft limit and how it gives rise to a leading and two subleading consecutive double soft theorems.
In Section 4.1, we propose asymptotic Ward identities which, as we show in Appendix A, can be heuristically derived from Ward identities associated to Noether's charges [28]. In Appendix B, we discuss the conceptual subtleties associated to the domain of soft operators, which is an obstacle to a fully rigorous derivation of one of the subleading consecutive double soft theorems from asymptotic symmetries. In Section 5, we present a formal derivation of this subleading consecutive double soft theorem from asymptotic symmetries. We conclude with some remarks, which primarily focus on the key open question pertaining to the study of the simultaneous double soft graviton theorem from the perspective of asymptotic symmetries.

Single Soft Graviton Theorems and Asymptotic Symmetries
We begin by reviewing the derivations of the single soft graviton theorems (both leading and sub-leading) from asymptotic symmetries [1,12]. In the process, we also define the notations that we use later. According to present understanding, the asymptotic symmetry group of gravity, acting on the asymptotic phase space of gravity, is the "Generalised BMS" group: a semidirect product of supertranslations and Diff(S²), which can be thought of as local generalizations of translations and the Lorentz group respectively. While the original BMS group [24,25] is a semidirect product of supertranslations and SL(2,C), in the generalised BMS group the SL(2,C) symmetry is further extended to Diff(S²). Each of the supertranslation and Diff(S²) symmetries gives rise to conserved asymptotic charges, namely the supertranslation charge $Q_f$ and the superrotation charge $Q_V$ respectively. These charges are determined completely by the asymptotic "free data" and are parametrized by an arbitrary function $f(z,\bar z)$ and an arbitrary vector field $V^A(z,\bar z)$, respectively, both defined on the conformal sphere at null infinity. By studying the algebra, one finds that supertranslations and superrotations form a closed algebra [16]. To define a symmetry of a gravitational scattering problem at the quantum level, these charges are elevated to a symmetry of the quantum gravity S-matrix, and corresponding to each such symmetry one gets a Ward identity. In the next two sections, we discuss how the single soft graviton theorems are equivalent to Ward identities of generalised BMS charges.

Leading Single Soft Graviton Theorem and Supertranslation Symmetry
The leading single soft graviton theorem follows from the Ward identity of the supertranslation charge $Q_f$ [1], which physically corresponds to the conservation of energy in each direction on the conformal sphere at null infinity. The supertranslation charge $Q_f$ is given by the expression in [1]. Here, the sums over "in" and "out" run over all the hard particles in the "in" and "out" states respectively, with energy $E_i = |\vec k_i|$ and the unit spatial vector $\hat k_i = \vec k_i/E_i$ characterizing the direction of the $i$-th particle. Using (2.4), (2.3) and (2.2), one then obtains a factorization of the form (2.5). The structure of the terms in (2.5) encourages one to ask whether it can be related to Weinberg's soft graviton theorem [26]. The latter is stated in (2.6), where the soft graviton has energy $E_p$ and momentum $p$; its direction is parametrized by $(w,\bar w)$ and its polarization is given by $\epsilon^+(w,\bar w) = \tfrac{1}{\sqrt 2}(\bar w, 1, -i, -\bar w)$. We adopt the notation of (2.7), with which the leading soft factor in the r.h.s.
of (2.6) can be written as in (2.8). It is important to notice that the contribution to the soft factor $S^{(0)}(p;\{k_i\})$ from the $i$-th hard particle with momentum $k_i$ and energy $E_{k_i}$, namely $S^{(0)}(p;k_i)$, depends on the energy of the hard particle, whereas $\hat S^{(0)}(p;k_i)$ does not depend on $E_{k_i}$: as written in (2.8), the energy dependence has been separated out. Now, consider a hard particle of momentum $k$ parametrized by $(E, z, \bar z)$. If one chooses $f$ in (2.5) as in (2.9), then the r.h.s. of the soft theorem (2.6) and of the Ward identity (2.5) match, by virtue of (2.10). Further, the l.h.s. of the soft theorem (2.6) and of the Ward identity (2.5) match because of the identity (2.11). It is also possible to go from the soft theorem (2.6) to the Ward identity (2.5) by acting with $(2\pi)^{-1}\int d^2w\, f(w,\bar w) D^2_{w}$ on both sides of (2.6); in this case, the r.h.s. matches because of the identity (2.12). Hence, the equivalence of the soft theorem and the Ward identity is established. It should also be noted that Weinberg's soft theorem for the negative helicity graviton is not an independent soft theorem and can be obtained through a similar derivation.

Subleading Single Soft Graviton Theorem and Superrotation Symmetry
The subleading single soft graviton theorem follows from the Ward identity of the superrotation charge $Q_V$ [12], which physically corresponds to the conservation of angular momentum at each angle in a gravitational scattering process. This charge is given by (2.13), where $\alpha = \tfrac12 (D_z V^z + D_{\bar z} V^{\bar z})$ and $V^A(z,\bar z)$ is an arbitrary vector field on the conformal sphere at null infinity; as usual, the covariant derivatives are w.r.t. the 2-sphere metric. As before, the first term is the "hard part" $Q^{\rm hard}_V$ and the second is the "soft part" $Q^{\rm soft}_V$ of the superrotation charge. Proceeding in a manner similar to the case of supertranslations, the Ward identity for superrotations can be written as (2.14). Now, using the asymptotic quantization of the "free data" and crossing symmetry, one can rewrite the soft superrotation charge as in (2.15); hence $Q^{\rm soft}_V |{\rm in}\rangle = 0$. Note that, unlike the previous case, due to the absence of a CK-like condition, the action of $Q^{\rm soft}_V$ on the "out" state gives gravitons of both helicities. Also, the action of the hard superrotation charge is given by (2.16); again, the sums over "in" and "out" run over all the hard particles in the "in" and "out" states respectively, with the $i$-th particle having energy $E_i = |\vec k_i|$ and direction characterized by the vector $\hat k_i = \vec k_i/E_i$. A detailed expression of $J^{h_i}_{V_i}$ can be found in [12]. As a result, one can write the Ward identity for superrotations (2.14) as (2.17). Now, the Cachazo-Strominger (CS) subleading soft theorem reads as in (2.18) [27], where $J^{\mu\nu}_i$ is the angular momentum operator acting on the $i$-th hard particle. For further use, we adopt the notation (2.19), with which the subleading soft factor in the r.h.s. of (2.18) can be written as in (2.20). Now, in the Ward identity (2.17), one chooses the vector field $V^A$ appropriately; the l.h.s. of the soft theorem (2.18) and of the Ward identity (2.17) then match due to a corresponding identity. To go from the CS soft theorem (2.18) to the superrotation Ward identity (2.17), one acts with the operator $-(4\pi)^{-1}\int d^2w\, V^{\bar w}\,\partial^3_w$ on both sides of (2.18). Then, using the linearity of $J_V$ in the vector field $V$ and the corresponding identity, one recovers the Ward identity (2.17) with the vector field $V^{\bar w}\partial_{\bar w}$; the vector field $W$ in the above expression is determined by $V$. Here, unlike the Ward identity for the leading case (2.5), it is important to note that the Ward identity for the subleading case (2.17) contains both negative and positive helicity soft graviton amplitudes.
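Since the display equations were stripped in the extraction above, it may help to recall the standard forms of the single soft factors being discussed. The expressions below are the textbook Weinberg and Cachazo-Strominger soft factors, quoted up to overall normalisation conventions (which differ across the literature); they are offered as an assumption about the content of (2.6), (2.8), (2.18) and (2.20), not as a verbatim reproduction.

```latex
% Single soft graviton expansion and the standard soft factors:
\lim_{E_p \to 0} \mathcal{A}_{n+1}(p,\{k_i\})
   = \Big[ S^{(0)}(p;\{k_i\}) + S^{(1)}(p;\{k_i\}) + \mathcal{O}(E_p) \Big]\,
     \mathcal{A}_n(\{k_i\}),
\qquad
S^{(0)}(p;\{k_i\}) = \sum_i \frac{\epsilon_{\mu\nu}\, k_i^{\mu} k_i^{\nu}}{p\cdot k_i},
\qquad
S^{(1)}(p;\{k_i\}) = \sum_i \frac{\epsilon_{\mu\nu}\, k_i^{\mu}\, p_{\rho}\, J_i^{\rho\nu}}{p\cdot k_i}.
% S^{(0)} scales as 1/E_p and S^{(1)} as E_p^0, as used in the text.
```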
To get a clear factorization, one of the components of the vector field $V^A$ is chosen to be zero, depending upon which soft graviton helicity we want in the soft theorem.

Consecutive Double Soft Graviton Theorems
As has been analyzed in the literature, there are two kinds of double soft graviton theorems, depending upon the relative energy scale of the soft gravitons. The simultaneous soft limit is the one where the soft limit is taken on both gravitons at the same rate; it was shown in [22] that the simultaneous soft limit yields a universal factorization theorem. However, as we argue in Appendix A, from the perspective of Ward identities it is the consecutive soft limits which arise rather naturally. Consecutive double soft graviton theorems (CDST) elucidate the factorization property of scattering amplitudes when the soft limit is taken on one of the gravitons at a faster rate than the other [29]. We now review this factorization property when such soft limits are taken and show that they give rise to three CDSTs. The first one, which we refer to as the leading CDST, is the case where the leading soft limit is taken on both soft gravitons. The remaining two theorems refer to the case where the leading soft limit is taken with respect to one of the gravitons and the subleading soft limit with respect to the other. We begin with an $(n+2)$-particle scattering amplitude denoted by $A_{n+2}(q,p,\{k_m\})$, where $p$, $q$ are the momenta of the two gravitons which will be taken soft and $\{k_m\}$ is the set of momenta of the $n$ hard particles. Consider the consecutive limit where the soft limit is first taken on the graviton with momentum $q$, keeping all the other particles' momenta unchanged, and then a soft limit is taken on the graviton with momentum $p$. Using the single soft factorization, the scattering amplitude $A_{n+2}(q,p,\{k_m\})$ can be written as in (3.1), where $A_{n+1}(p,\{k_m\})$ is the $(n+1)$-particle scattering amplitude. It is important to recall the notations used here, which we explained in Section 2 ((2.8), (2.19)). As mentioned, $S^{(1)}(q;k_i)$ is the contribution to the subleading soft factor with soft momentum $q$, with $k_i$ being the $i$-th hard particle. Similarly, $\hat S^{(0)}(q;k_i)$ denotes the contribution to the leading soft factor with soft momentum $q$, with $k_i$ the $i$-th hard particle and with the energy dependences w.r.t. both the soft and hard particles separated out. $\hat S^{(0)}(q;p)$ and $S^{(1)}(q;p)$ denote similar contributions to the soft factor, where the graviton with momentum $p$ is treated as hard w.r.t. the graviton with momentum $q$. Now, the amplitude $A_{n+1}(p,\{k_m\})$ further factorizes as in (3.2). Note that, according to our notation, $S^{(1)}(p;k_i)$ is the contribution to the subleading soft factor with soft momentum $p$, and $k_i$ is the $i$-th hard particle. Again, $\hat S^{(0)}(p;k_i)$ denotes the contribution to the leading soft factor with soft momentum $p$ and $k_i$ the $i$-th hard particle, with the energy dependences w.r.t. both the soft and hard particles separated out. Substituting (3.2) in (3.1), we get the factorization of the $(n+2)$-particle amplitude containing two soft gravitons in terms of the amplitude of the $n$ hard particles (up to subleading order in the energy of the individual soft particles). This expansion contains three types of terms. The first type scales as $1/(E_p E_q)$ (and hence gives rise to a pole in both soft graviton energies), giving the leading contribution to the factorization.
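Because the display equations (3.1)-(3.2) were stripped in the extraction, the two-step factorization just described can be summarised schematically as follows; this is our reconstruction of their structure from the surrounding text, with normalisations and subleading remainders suppressed.

```latex
\mathcal{A}_{n+2}(q,p,\{k_m\})
  \;\xrightarrow{\,E_q\to 0\,}\;
  \Big[ S^{(0)}(q;\{k_m\}) + \hat S^{(0)}(q;p)
      + S^{(1)}(q;\{k_m\}) + S^{(1)}(q;p) + \dots \Big]\,
  \mathcal{A}_{n+1}(p,\{k_m\}),
\qquad
\mathcal{A}_{n+1}(p,\{k_m\})
  \;\xrightarrow{\,E_p\to 0\,}\;
  \Big[ S^{(0)}(p;\{k_m\}) + S^{(1)}(p;\{k_m\}) + \dots \Big]\,
  \mathcal{A}_n(\{k_m\}).
```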
The second and the third types of terms scale as $E_q^0/E_p$ and $E_p^0/E_q$ respectively, both contributing at the subleading order of the factorization. The leading order contribution, described above, is given in (3.4); this yields the leading CDST (3.5). As is evident, the leading double soft factor is just the product of the individual leading soft factors. One obtains this same theorem in the case of the simultaneous double soft limit as well [22,29,32,33]. In Section 4.2, we show that this soft theorem matches the result derived from the Ward identity of two supertranslation charges (4.10). Let us now consider the subleading soft limit. At this order of the factorization we have the four terms displayed in (3.6). Notice that the first two terms in (3.6) scale with the soft graviton energies as $E_p^0/E_q$, while the second two terms scale as $E_q^0/E_p$. From the first two terms of (3.6), one gets the subleading CDST (3.7). Here, the first term is the product of the single soft factors (2.8) and (2.20), appearing in the leading and subleading single soft theorems respectively. The second term on the r.h.s. of (3.7) contains a single sum over the set of hard particles, as opposed to the first term, which is the product of single soft factors and contains two sums over the set of hard particles. Such terms are usually referred to as "contact terms" in the literature. One can evaluate this contact term as in (3.8), where $\hat p = p/E_p = (1,\hat{\bf p})$ and similarly $\hat q = q/E_q = (1,\hat{\bf q})$; $\epsilon_p$ and $\epsilon_q$ refer to the polarisations of the soft gravitons with momenta $p$ and $q$ respectively. This is the well-known consecutive double soft graviton theorem [29].

A Different Consecutive Limit. We now take a different limit in eq. (3.6) and show how it leads to a distinct factorization theorem. From the last two terms in (3.6) one gets (3.9). Now, $S^{(1)}(q;k_i)$ contains the angular momentum operator of the $i$-th hard particle, and thus acts on $E_{k_j}\hat S^{(0)}(p;k_j)$ as well as on the $n$-particle amplitude $A_n(\{k_m\})$. However, $S^{(1)}(q;p)$ does not depend on the set of hard particles labelled by momenta $\{k_m\}$; hence $S^{(1)}(q;p)$ acts only on the soft factor, and one can finally write the subleading CDST as (3.10). Similar to the other subleading CDST (3.7), the first term on the r.h.s. of (3.10) is a product of single soft factors. However, the important difference is that the roles of the soft gravitons with momenta $p$ and $q$ are interchanged between the first term of (3.10) and the first term of (3.7). Here, $M_1(q;p;\{k_i\})$ and $M_2(q;p;\{k_i\})$ are contact terms, expressed in (3.11) and (3.12); again, $\hat p = p/E_p = (1,\hat{\bf p})$, and $\epsilon_p$ and $\epsilon_q$ refer to the polarisations of the soft gravitons with momenta $p$ and $q$ respectively. In [29], the authors considered similar consecutive limits for double soft graviton and gluon amplitudes; there, they imposed the gauge conditions $\epsilon_p \cdot q = 0$ and $\epsilon_q \cdot p = 0$. However, our analysis proceeded without imposing any particular gauge condition. With the specific gauge condition used in [29], a few of the terms, like $\hat S^{(0)}(q;p)$ and $S^{(1)}(q;p)$, drop out from the CDST result that we have obtained at the subleading level, and we recover their result. This serves as a consistency check for our calculation. One can also verify the consistency of both consecutive limits with the general result given in [22]; that is, both the CDSTs (3.7) and (3.10) are special cases of the double soft limit in [22]. The CDST (3.7) can be recovered by imposing the condition $E_p \gg E_q$ on the result of [22] and taking the leading limit in $E_q$ and the subleading limit in $E_p$.
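In the same schematic spirit as before, the three CDSTs described above have the following structure; this is again a hedged reconstruction of the stripped equations (3.5), (3.7) and (3.10), with the contact terms indicated symbolically.

```latex
% Leading CDST, eq. (3.5):
\mathcal{A}_{n+2}\Big|_{1/(E_p E_q)}
  = S^{(0)}(q;\{k_m\})\, S^{(0)}(p;\{k_m\})\, \mathcal{A}_n(\{k_m\}),
% Subleading CDST, eq. (3.7), from the terms scaling as E_p^0/E_q:
\mathcal{A}_{n+2}\Big|_{E_p^0/E_q}
  = \Big[ S^{(0)}(q;\{k_m\})\, S^{(1)}(p;\{k_m\}) + \text{contact term} \Big]\,
    \mathcal{A}_n(\{k_m\}),
% Subleading CDST, eq. (3.10), from the terms scaling as E_q^0/E_p:
\mathcal{A}_{n+2}\Big|_{E_q^0/E_p}
  = \Big[ S^{(1)}(q;\{k_m\})\, S^{(0)}(p;\{k_m\})
        + M_1(q;p;\{k_i\}) + M_2(q;p;\{k_i\}) \Big]\, \mathcal{A}_n(\{k_m\}).
```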
Similarly, the CDST (3.10) can be obtained by imposing the same $E_p \gg E_q$ condition, but taking the leading limit in $E_p$ and the subleading limit in $E_q$. In the subsequent sections, we will argue that these soft theorems are equivalent to Ward identities of asymptotic symmetries when the scattering states are defined with respect to supertranslated or superrotated vacua.

Introduction
Having reviewed the relationship between Ward identities associated to asymptotic symmetries and single soft graviton theorems, we now ask if there are Ward identities in the theory which are equivalent to the double soft graviton theorems at leading and sub-leading order. In particular, we look for Ward identities that lead to the consecutive double soft theorems (CDST). Let us consider the family of Ward identities whose general structure is

$\langle \text{out}|\, [Q_1, [Q_2, \mathcal S]]\, |\text{in}\rangle = 0, \qquad (4.1)$

where either both $Q_1$ and $Q_2$ are supertranslation charges, or $Q_1$ is a supertranslation charge and $Q_2$ is a superrotation charge. Following [28], we present a derivation of this proposed Ward identity in Appendix A. In the following sections, we show that such a proposal leads to the consecutive double soft theorems discussed in Section 3; depending on the choice of charges, one gets the leading as well as the subleading consecutive double soft theorems.

Ward Identity from Asymptotic Symmetries
Following the discussion in Section 4.1, we explore the factorization arising from two supertranslation charges, $Q_f$ and $Q_g$, characterized by arbitrary functions $f(z,\bar z)$ and $g(z,\bar z)$ on the conformal sphere. We start with the nested identity (4.2), $\langle \text{out}|\,[Q_f,[Q_g,\mathcal S]]\,|\text{in}\rangle = 0$. Proceeding in a manner similar to the single soft case in Section 2, we can write $Q_f$ and $Q_g$ as sums of hard and soft charges as in (4.3). Thus, the Ward identity (4.2) becomes (4.4). Using the single supertranslation Ward identity $\langle \text{out}|\,[Q_g,\mathcal S]\,|\text{in}\rangle = 0$, the first and the second terms cancel each other. One may be tempted to cancel the third and fourth terms along similar lines. However, we contend that this isn't quite correct, as the action of $Q^{\rm soft}_f$ maps the ordinary Fock vacuum to a supertranslated vacuum state parametrised by $f$. As a result, we are really looking at the following Ward identity:

$\langle \text{out}, f|\, [Q_g, \mathcal S]\, |\text{in}\rangle = 0, \qquad (4.5)$

where $|\text{out}, f\rangle$ is a finite-energy state defined with respect to the supertranslated vacuum. The "in" state is defined w.r.t. the standard Fock vacuum because of our prescription $Q^{\rm soft}_f |\text{in}\rangle = 0$. We can rewrite the above identity as (4.6)-(4.7). Using the (known) action of the charges on external states in (4.7), we finally arrive at the Ward identity (4.8). The factorization in (4.8) is just the product of two factors of the type obtained from the Ward identity for supertranslations (2.5). It is therefore natural to expect that the soft theorem we obtain from (4.8) will also be the product of two leading single soft factors. In the next section, we show that this is indeed true.

From Ward Identity to Soft Theorem
From the factorization (4.8) obtained from the Ward identity with two supertranslation charges, we now extract the soft theorem that follows from it. Motivated by the single soft case, we choose the arbitrary functions $f$ and $g$ on the conformal sphere as in (4.9), where the definitions of the functions $s(w_1,\bar w_1; w_p,\bar w_p)$ and $s(w_2,\bar w_2; w_q,\bar w_q)$ can be read from (2.9). Substituting these choices in (4.8), we finally get (4.10). This is the same as the leading double soft theorem (3.5) for the case of two positive helicity soft gravitons with momenta $p$ and $q$, localized at $(w_p,\bar w_p)$ and $(w_q,\bar w_q)$ respectively on the conformal sphere.
Although we have chosen both soft graviton helicities to be positive in the above, one can carry out a similar analysis for both helicities negative, or one positive and one negative, and a similar result holds. This establishes the equivalence of the leading CDST and the Ward identity (4.2). We have thus shown that the leading order double soft graviton theorem is equivalent to the supertranslation Ward identity when this identity is evaluated in a Hilbert space built out of a supertranslated vacuum containing a single soft graviton.

Ward Identity from Asymptotic Symmetries
As motivated in Section 4.1 and derived in Appendix A, we now analyze the Ward identity corresponding to one supertranslation charge (characterized by an arbitrary function $f$) and one superrotation charge (characterized by a vector field $V^A$), namely (4.11). We begin by writing the charges as sums of hard and soft charges, so that (4.11) expands into (4.12). Now, using the single superrotation Ward identity, the first and the second terms of (4.12) cancel each other. Again, one may be tempted to cancel the third and the fourth terms of (4.12) instead, using the same superrotation Ward identity. However, if we do not cancel them, we are led to (4.13): not cancelling the third and fourth terms in (4.12) is tantamount to considering the superrotation Ward identity in scattering states which are excitations around supertranslated vacua. As we show below, it is precisely the Ward identity $\langle \text{out}, f|\,[Q_V, \mathcal S]\,|\text{in}\rangle = 0$ that leads to a specific double soft graviton theorem. Hence the above identity (4.12) reduces to (4.14). Using the known action of the soft and hard charges, the first term on the r.h.s. of (4.14) can be written as (4.15), where $\hat x$ denotes the direction of the soft graviton parametrized by $(w_1,\bar w_1)$ on the conformal sphere, and $J^+_V$ represents the action of $Q^{\rm hard}_V$ on the soft graviton with energy $E_p$. Similarly, the second term in (4.14) can be evaluated as (4.16). Hence, the Ward identity (4.14) simplifies to (4.17). Note that the l.h.s. of (4.17) can be written as the double soft limit (4.18). It is important to note that the soft limits taken in this equation do not follow any particular ordering in the energies of the soft gravitons. However, as we show in the next section, the right-hand side of the Ward identity is equivalent to the right-hand side of one of the CDSTs.

From Ward Identity to Soft Theorem
Having derived the Ward identity (4.17), we now ask whether it can be interpreted as a soft theorem. Motivated by the single soft graviton case, we make the choices for the function $f$ and the vector field $V$ given in (4.19), where $s(w_1,\bar w_1; w_p,\bar w_p)$ and $K^+_{(w_q,\bar w_q)}$ follow the definitions in Section 2. Using this, (4.18) becomes (4.20), where the unit vectors $\hat x$ and $\hat y$ denote the coordinates $(w_p,\bar w_p)$ and $(w_q,\bar w_q)$ on the conformal sphere. Further, for the r.h.s. of (4.17), we have (4.21). In this expression, notice that in both the subleading factors $S^{(1)}(q;k_i)$ and $S^{(1)}(q;p)$, the soft graviton with momentum $q$ is localized at $\hat y$ on the conformal sphere; however, the former contains an angular momentum operator acting on the $i$-th hard particle, while the latter contains an angular momentum operator acting on the soft graviton with momentum $p$. Now, using the leading single soft theorem, the first term in (4.21) can be written as (4.22). For the second term in (4.21), we use the expansion (3.2) of the $(n+1)$-particle amplitude and obtain a factorization of the form (4.23). The second term of (4.23) is of higher order in the soft graviton energy, and so does not contribute to (4.21).
Thus, (4.21) finally becomes (4.24). Lastly, since S^(1)(q; k_i) is a linear differential operator and S^(1)(q; p) acts only on the soft coordinates, we can further simplify (4.24) as (4.25). Finally, putting this all together, we get a subleading double soft theorem (4.26), where M_1(q; p; {k_i}) and M_2(q; p; {k_i}) are the same contact terms obtained in the subleading CDST (3.10), whose expressions can be read off from (3.11) and (3.12) respectively. This is the same subleading consecutive double soft theorem (3.10) that we studied in Section 3. Note, however, that in (4.18) there is no particular ordering in the limits of the soft graviton energies obtained from the successive action of the soft charges. Hence, the l.h.s. of the double soft theorem (4.26) contains independent limits, as opposed to (3.10), where the limits have a definite ordering. Although we believe this point needs to be better understood, what we have shown here is that the Ward identity of superrotation charges in a supertranslated vacuum leads to a particular CDST. It is also important to emphasise that there is a definite time ordering in ⟨out| [Q_f, [Q_V, S]] |in⟩ = 0. This is clear from the derivation of this Ward identity, which is presented in Appendix A.

5 Relating the Standard CDST to a Ward Identity

As we saw above, the Ward identity [Q_f, [Q_V, S]] = 0 gave rise to a double soft theorem whose r.h.s. matched the consecutive soft theorem in which we take the subleading limit of the graviton that goes soft first. This is in contrast to the more standard consecutive soft limit, where we take the leading soft limit of the graviton that goes soft first and the subleading soft limit of the graviton that goes soft second. We will argue how this CDST could potentially arise out of the Ward identity

⟨out| [Q_V, [Q_f, S]] |in⟩ = 0. (5.1)

Expressing the charges in (5.1) as sums of hard and soft charges, we get (5.2). Using the Ward identity for supertranslation, namely ⟨out| [Q^soft_f, S] |in⟩ = 0, the first and the third terms cancel each other. Once again, this leads us to the following supertranslation Ward identity evaluated in states defined with respect to a "superrotated vacuum":

⟨out, V| [Q_f, S] |in⟩ = 0,

where by |out, V⟩ we mean a finite-energy scattering state defined with respect to a vacuum which contains a subleading soft graviton mode. 5 However, as we explain in Appendix B, unlike the action of Q^soft_f, the action of Q^soft_V is not well understood thus far. 6 Consequently, the proposed Ward identity remains rather formal at this point. We will still proceed and show that this proposed Ward identity, if well defined, is equivalent to the standard CDST. We can rewrite the Ward identity as (5.4) and evaluate the two terms on its r.h.s. one by one. The first term can be written as (5.5).

5 It was shown in [16] how Q^soft_V maps the vacuum to a different vacuum. 6 We are indebted to Prahar Mitra for emphasizing this point.

Then, using the action of Q^hard_f and Q^hard_V on the external states, we can write the r.h.s. of (5.5) as (5.6). To evaluate the second term in (5.4), note that for a single-particle state |k⟩ we have (5.7), where, in going from the first line to the second, we have used the fact that a_+(E_p, w_2, w̄_2) ∼ 1/E_p. 7 Using the expression (5.7), we can evaluate the second term of (5.4) as (5.9). Lastly, using the single soft graviton theorem (with energy E_p), (5.9) simplifies to (5.10). Finally, substituting (5.6) and (5.10) in (5.4), we arrive at the Ward identity (5.11), whose l.h.s.
can be expressed as (5.12). In order to proceed from the Ward identity (5.11) to a soft theorem, we make the following choices for f and V: f = s(w_1, w̄_1; w_p, w̄_p) and V = K^+_{(w_q, w̄_q)}. Substituting these in (5.11), we formally get the subleading CDST for positive-helicity gravitons as (5.14). Again, x̂ and ŷ denote the points (w_p, w̄_p) and (w_q, w̄_q) on the conformal sphere. This is the same consecutive double soft theorem (3.7) discussed in Section 3. However, as discussed in Appendix B, there are some important subtleties in the definition of the soft operators, especially the soft super-rotation charge Q^soft_V. Due to this, in the evaluation of the Ward identity ⟨out| [Q_V, [Q_f, S]] |in⟩ = 0, the steps which involve acting with the charge Q^soft_V on the "out" state before the other charge are not mathematically rigorous. We nevertheless present this calculation here, in the hope that it might hint at the structure of a more mathematically sound proof of this soft theorem, as well as a more rigorous understanding of the action of the soft superrotation charge.

Discussion and Conclusion

It is now well established in the literature that the supertranslation soft charge Q^soft_f shifts the Fock vacuum to a vacuum parametrized by a soft graviton. If we consider Ward identities associated to superrotation charges Q_V in this supertranslated vacuum, we are led to one of the two consecutive subleading double soft graviton theorems. In fact, as was argued in [16], the space of vacua of (perturbative) quantum gravity is parametrized by leading as well as subleading soft gravitons. Although we do not have a precise definition of a vacuum labelled by a subleading soft graviton, assuming such a definition exists, we can ask what the Ward identity of the supertranslation charge is in such a state. The answer appears to be related to the other consecutive double soft theorem at the subleading level. Many questions remain open. A precise formulation of these Ward identities will require a careful definition of Q^soft_V, which is lacking thus far. It is also not entirely clear why the Ward identity associated to Q_V, evaluated in states perturbed around the supertranslated vacuum, leads to a specific CDST. It would also be interesting to extend the analysis to the case where the finite-energy scattering states are massive; this will require a detailed understanding of the BMS algebra at time-like infinity. Finally, the problem of relating the subleading simultaneous double soft theorem to Ward identities associated to asymptotic symmetries remains completely open. Based on our analysis above, we expect that this will require a detailed study of the moduli space of vacua (parametrized by leading and subleading soft gravitons), which is complicated by the non-Abelian nature of the BMS symmetries.

A Derivation of the Ward Identities

Here we use a generic label Φ for the quantum fields associated to the scattering particles. Q^I_±[λ] are the asymptotic charges associated to large gauge transformations λ at future and past null infinity respectively. Before deriving the identity associated to the insertion of two charge operators, we first revisit the supertranslation Ward identity ⟨out| [Q_f, S] |in⟩ = 0. Let Φ be any massless field that interacts with gravity, and let δ_λ = δ_f be the generator of supertranslations on the fields.
We begin by noting that, through LSZ reduction, we have the following identity. 8 We can schematically represent this step as an insertion of the charge commutator, where we have used the fact that the soft charges annihilate the "in" vacuum. On the other hand, once again via LSZ, we see that the same matrix element can be written in terms of the action of the hard charges. We note that an identical derivation of the Ward identity associated to large U(1) gauge transformations was already given in [35]. We will now derive the Ward identity ⟨out| [Q_f, [Q_V, S]] |in⟩ = 0 using this method. That is, we begin with the Ward identity in which the superrotation δ_V is applied after the supertranslation δ_f. The starting point for the derivation is (45) in [28], which in the present context can be written as (A.8). With our prescription that the soft charges annihilate the "in" vacuum, the l.h.s. of (A.8) reduces to a double commutator insertion. On the other hand, using (A.4), it is easy to evaluate the r.h.s. of (A.8). This is one of the Ward identities used in the main text of the paper. The remaining identities can be derived similarly.

B Subtleties Associated to the Domain of Soft Operators

We will now comment on an assumption that was implicitly used in the previous section, and which has been used frequently in relating single soft theorems to BMS Ward identities. 9 From the expressions of the supertranslation and superrotation soft charges, we can see that these are singular limits of single-graviton annihilation operators. A similar assumption is also made for the superrotation soft charge Q^soft_V. However, this does not take into account the fact that the supertranslation soft charge shifts the vacuum. This subtlety is now well understood for supertranslations: it was shown in [38][39][40][41] that the action of the supertranslation soft charge maps a standard Fock vacuum to a supertranslated state which can be thought of as being labelled by a single soft graviton. With this in mind, the precise definition of ⟨out| Q^soft_f involves ⟨out, f|, where ⟨out, f| is the "out" state defined over the shifted vacuum parametrized by f, generated by the action of the supertranslation charge Q^soft_f on the Fock vacuum. In going from (4.17) to (4.18) we have made the same assumption for defining Q^soft_V on the shifted vacuum as has been made in the literature for defining it on the Fock vacuum. That is, whereas Q^soft_f has a rigorous definition as an operator which maps the ordinary Fock vacuum to a supertranslated state [39,40], no corresponding definition is available for Q^soft_V as yet. Consequently, operator insertions like ⟨out| Q^soft_V Q^soft_f S |in⟩ are not mathematically well-defined, and we do not know how to make sense of them.
Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses
Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses We study differentially private (DP) stochastic optimization (SO) with loss functions whose worst-case Lipschitz parameter over all data points may be extremely large. To date, the vast majority of work on DP SO assumes that the loss is uniformly Lipschitz continuous over data (i.e. stochastic gradients are uniformly bounded over all data points). While this assumption is convenient, it often leads to pessimistic excess risk bounds. In many practical problems, the worst-case Lipschitz parameter of the loss over all data points may be extremely large due to outliers. In such cases, the error bounds for DP SO, which scale with the worst-case Lipschitz parameter of the loss, are vacuous. To address these limitations, this work provides near-optimal excess risk bounds that do not depend on the uniform Lipschitz parameter of the loss. Building on a recent line of work [WXDX20, KLZ22], we assume that stochastic gradients have bounded $k$-th order moments for some $k \geq 2$. Compared with works on uniformly Lipschitz DP SO, our excess risk scales with the $k$-th moment bound instead of the uniform Lipschitz parameter of the loss, allowing for significantly faster rates in the presence of outliers and/or heavy-tailed data. For convex and strongly convex loss functions, we provide the first asymptotically optimal excess risk bounds (up to a logarithmic factor). In contrast to [WXDX20, KLZ22], our bounds do not require the loss function to be differentiable/smooth. We also devise an accelerated algorithm for smooth losses that runs in linear time and has excess risk that is tight in certain practical parameter regimes. Additionally, our work is the first to address non-convex non-uniformly Lipschitz loss functions satisfying the Proximal-PL inequality; this covers some practical machine learning models. Our Proximal-PL algorithm has near-optimal excess risk. Introduction As the use of machine learning (ML) models in industry and society has grown dramatically in recent years, so too have concerns about the privacy of personal data that is used in training such models. It is well-documented that ML models may leak training data, e.g., via model inversion attacks and membershipinference attacks [FJR15,SSSS17,FK18,NSH19,CTW`21]. Differential privacy (DP) [DMNS06] ensures that data cannot be leaked, and a plethora of work has been devoted to differentially private machine learning and optimization [CM08, DJW13, BST14, Ull15, WYX17, BFTT19, FKT20, LR21b, CJMP21, AFKT21]. Of particular importance is the fundamental problem of DP stochastic (convex) optimization (S(C)O): given n i.i.d. samples X " px 1 , . . . , x n q P X n from an unknown distribution D, we aim to privately solve min wPW F pwq :" E x"D rf pw, xqs ( , where f : WˆX Ñ R is the loss function and W Ă R d is the parameter domain. Since finding the exact solution to (1) is not generally possible, we measure the quality of the obtained solution via excess risk (a.k.a. excess population loss): The excess risk of a (randomized) algorithm A for solving (1) is defined as EF pApXqq´min wPW F pwq, where the expectation is taken over both the random draw of the data X and the algorithm A. 
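For readability (the displayed equations in this section were garbled during extraction), the problem (1) and the excess-risk criterion can be restated as

\[
\min_{w \in \mathcal{W}} \Big\{\, F(w) := \mathbb{E}_{x \sim \mathcal{D}}\big[f(w, x)\big] \,\Big\},
\qquad
\mathrm{ExcessRisk}(\mathcal{A}) := \mathbb{E}\, F\big(\mathcal{A}(X)\big) \;-\; \min_{w \in \mathcal{W}} F(w),
\]

where the expectation in the excess risk is over both the draw of X ∼ D^n and the randomness of A.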
A large body of literature is devoted to characterizing the optimal achievable differentially private excess risk of (1) when the function f p¨, xq is uniformly L f -Lipschitz for all x P X -see e.g., [BFTT19,FKT20,AFKT21,BGN21,LR21b]. In these works, the gradient of f is assumed to be uniformly bounded with sup wPW,xPX }∇ w f pw, xq} ď L f , and excess risk bounds scale with L f . While this assumption is convenient for bounding the sensitivity [DMNS06] of the steps of the algorithm, it is often unrealistic in practice or leads to pessimistic excess risk bounds. In many practical applications, data contains outliers, is unbounded or heavy-tailed (see e.g. [CTB98, Mar08, WC11] and references therein for such applications). Consequently, L f may be prohibitively large. For example, even the linear regression loss f pw, xq " 1 2 pxw, x p1q y´x p2q q 2 with compact W and data from X " X p1qˆX p2q , leads to L f ě diameterpX p1q q 2 , which could be huge. Similar observations can be made for other useful ML models such as deep neural nets [LY21], and the situation becomes even grimmer in the presence of heavy-tailed data. In these cases, existing excess risk bounds, which scale with L f , becomes vacuous. While L f can be very large in practice (due to outliers), the k-th moment of the stochastic gradients is often reasonably small for some k ě 2 (see, e.g., Example 1). This is because the k-th moment r r k :" sup wPW }∇ w f pw, xq} k 2 ‰ 1{k depends on the average behavior of the stochastic gradients, while L f depends on the worst-case behavior over all data points. Motivated by this observation and building on the prior results [WXDX20,KLZ22], this work characterizes the optimal differentially private excess risk bounds for the class of problems with a given parameter r r k . Specifically, for the class of problems with parameter r r k , we answer the following questions (up to a logarithmic factor): • Question I: What are the minimax optimal rates for (strongly) convex DP SO? • Question II: What utility guarantees are achievable for non-convex DP SO? Prior works have made progress in addressing the first question above: 1 The work of [WXDX20] provided the first excess risk upper bounds for smooth DP (strongly) convex SO. [KLZ22] gave improved, yet suboptimal, upper bounds for smooth (strongly) convex f p¨, xq, and lower bounds for (strongly) convex SO. In this work, we provide optimal algorithms for convex and strongly convex losses, resolving Question I up to logarithmic factors. Our bounds hold even for non-differentiable/non-smooth F . Regarding Question II, we give the first algorithm for DP SO with non-convex loss functions satisfying the Proximal-Polyak-Łojasiewicz condition [Pol63,KNS16]. We provide a summary of our results for the case k " 2 in Figure 1, and a thorough discussion of related work in Appendix A. Preliminaries Let }¨} be the 2 norm. Let W be a convex, compact set of 2 diameter D. Function g : W Ñ R is µ-strongly convex if gpαw`p1´αqw 1 q ď αgpwq`p1´αqgpw 1 q´α p1´αqµ 2 }w´w 1 } 2 for all α P r0, 1s and all w, w 1 P W. If µ " 0, we say g is convex. For convex f p¨, xq, denote any subgradient of f pw, xq w.r.t. w by ∇f pw, xq P B w f pw, xq: i.e. f pw 1 , xq ě f pw, xq`x∇f pw, xq, w 1´w y for all w 1 P W. Function g is β-smooth if it is differentiable and its derivative ∇g is β-Lipschitz. For β-smooth, µ-strongly convex g, denote its condition number by κ " β{µ. 
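Since the inline formula above was garbled in extraction, the µ-strong convexity condition reads, in standard notation,

\[
g\big(\alpha w + (1-\alpha) w'\big) \;\le\; \alpha\, g(w) + (1-\alpha)\, g(w') \;-\; \frac{\alpha (1-\alpha)\,\mu}{2}\, \| w - w' \|^{2}
\quad \text{for all } \alpha \in [0,1],\; w, w' \in \mathcal{W}.
\]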
For functions a and b of input parameters, write a À b if there is an absolute constant A such that a ď Ab for all feasible values of input parameters. Write a " r Opbq if a À b for a logarithmic function of input parameters. We assume that the stochastic gradient distributions have bounded k-th moment for some k ě 2: Assumption 1. There exists k ě 2 and r r pkq ą 0 such that E " sup wPW }∇f pw, xq} k 2 ‰ ď r r pkq for all ∇f pw, x i q P B w f pw, x i q. Denote r r k :" pr r pkq q 1{k . Clearly, r r k ď L f " sup t∇f pw,xqPBwf pw,xqu sup w,x }∇f pw, xq}, but this inequality is often very loose: Example 1. For linear regression on a unit ball W with 1-dimensional data x p1q , x p2q P r´10 6 , 10 6 s having truncated Normal distributions and Varpx p1q q " Varpx p2q q ď 1, we have L f ě 10 12 . On the other hand, r r k is much smaller than L f for small to moderate k: e.g., r r 2 ď 5, r r 4 ď 8, and r r 8 ď 14. Differential Privacy: Differential privacy [DMNS06] ensures that no adversary-even one with enormous resources-can infer much more about any person who contributes training data than if that person's data were absent. If two data sets X and X 1 differ in a single entry (i.e. d hamming pX, X 1 q " 1), then we say that X and X 1 are adjacent. Definition 1 (Differential Privacy). Let ě 0, δ P r0, 1q. A randomized algorithm A : X n Ñ W is p , δqdifferentially private (DP) if for all pairs of adjacent data sets X, X 1 P X n and all measurable subsets S Ď W, we have PpApXq P Sq ď e PpApX 1 q P Sq`δ. In this work, we focus on zero-concentrated differential privacy [BS16]: Definition 2 (Zero-Concentrated Differential Privacy (zCDP)). A randomized algorithm A : X n Ñ W satisfies ρ-zero-concentrated differential privacy (ρ-zCDP) if for all pairs of adjacent data sets X, X 1 P X n and all α P p1, 8q, we have D α pApXq||ApX 1 qq ď ρα, where D α pApXq||ApX 1 qq is the α-Rényi divergence 2 between the distributions of ApXq and ApX 1 q. Contributions and Related Work We discuss our contributions in the context of related work. See Figure 1 for a summary of our results when k " 2, and Appendix A for a more thorough discussion of related work. Optimal Rates for Non-Smooth (Strongly) Convex Losses (Section 3): We establish asymptotically optimal (up to logarithms) excess risk bounds for DP SCO under Assumption 1, without requiring differentiability of f p¨, xq: Theorem 4 (Informal, see Theorem 6, Theorem 12, Theorem 13, Theorem 14). Let f p¨, xq be convex. Grant Assumption 1. Then, there is a polynomial-time If f p¨, xq is µ-strongly convex, then EF pApXqq´F˚" r Further, these bounds are minimax optimal up to a factor of r Opr r 2 2k {r r 2 k q. The works [WXDX20, KLZ22] make a slightly different assumption than Assumption 1: they instead assume that the k-th order central moment of each coordinate ∇ j f pw, xq is bounded by γ 1{k for all j P rds, w P W. We also provide asymptotically optimal excess risk bounds for the class of problems satisfying the coordinate-wise moment assumption of [WXDX20, KLZ22] and having subexponential stochastic subgradients: see Appendix E.4. The previous state-of-the-art convex upper bound was suboptimal: Theorem 5.4]. 3 Their result also required f p¨, xq to be β f -smooth for all x P X , which can be restrictive with outlier data: e.g. this implies that f p¨, xq is uniformly L f -Lipschitz with L f ď 2β f D if ∇f pw˚pxq, xq " 0 for some w˚pxq P W. 
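To make the gap between L_f and r̃_k concrete, the following self-contained numerical sketch (ours, not from the paper; the outlier distribution is illustrative rather than the truncated-normal data of Example 1, and we evaluate at a single fixed w as a proxy for the supremum over W) estimates the worst-case per-sample gradient norm and the empirical k-th moments for the linear regression loss f(w, x) = ½(⟨w, x^(1)⟩ − x^(2))² over the unit ball:

import numpy as np

rng = np.random.default_rng(0)
n, d = 200_000, 5

# Mostly well-behaved data with rare, large (but bounded) outliers.
x1 = rng.normal(0.0, 1.0, size=(n, d))
x2 = rng.normal(0.0, 1.0, size=n)
mask = rng.random(n) < 1e-3
x1[mask] *= 1_000.0
x2[mask] *= 1_000.0

w = rng.normal(size=d)
w /= np.linalg.norm(w)                      # a fixed point in the unit ball W

# Per-sample gradients of f(w, x) = 0.5 * (<w, x1> - x2)^2 with respect to w.
resid = x1 @ w - x2
grads = resid[:, None] * x1
norms = np.linalg.norm(grads, axis=1)

for k in (2, 4):
    r_k = np.mean(norms ** k) ** (1.0 / k)  # empirical k-th moment bound at this w
    print(f"k = {k}: empirical r_k ~ {r_k:.3g}")
print(f"max per-sample norm (proxy for L_f at this w): {norms.max():.3g}")

On such data the maximum per-sample gradient norm is orders of magnitude larger than the low-order moment estimates, which is exactly the regime where moment-based bounds beat Lipschitz-based ones.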
Our optimal µ-strongly convex bound also improves over the best previous upper bound of [KLZ22,Theorem 5.6], which again required uniform β f -smoothness of f p¨, xq. In fact, [KLZ22, Theorem 5.6] was incorrect as stated in the ICML 2022 version of their paper, as we explain in Appendix C. 4 However, after communicating with the authors of [KLZ22], they updated their result and proof in the arXiv version of their 2 For distributions P and Q with probability density/mass functions p and q, DαpP ||Qq :" 1 α´1 ln`ş ppxq α qpxq 1´α dx˘[Rén61, Eq. 3.3]. 3 We write the bound in [KLZ22,Theorem 5.4] in terms of Assumption 1, replacing their γ 1{k d by r r ? d. 4 In short, the mistake is that Jensen's inequality is used in the wrong direction to claim that the T -th iterate of their algorithm w T satisfies Er}w T´w˚} 2 s ď pE}w T´w˚} q 2 , which is false. Figure 1: Excess risk for k " 2, r r " ? d; we omit logarithms. κ " β{µ is the condition number of F ; κ f " β f {µ is the worst-case condition number of f p¨, xq. paper. The corrected version of [KLZ22, Theorem 5.6]-which we derive in Appendix C for completeness-is suboptimal by a factor of r Ωppβ f {µq 3 q. In practical applications, the worst-case condition number β f {µ can be very large, especially in the presence of outliers or heavy-tailed data. Our near-optimal excess risk bound removes this dependence on β f {µ and holds even for non-differentiable f . Our Algorithm 3 combines the iterative localization technique of [FKT20, AFKT21] with a noisy clipped subgradient method. With clipped (hence biased ) stochastic subgradients and non-Lipschitz/non-smooth f p¨, xq, the excess risk analysis of our algorithm is harder than in the uniformly Lipschitz setting. Instead of the uniform convergence analysis used in [WXDX20, KLZ22], we derive new results about the stability [KR99, BE02] and generalization error of learning with loss functions that are not uniformly Lipschitz or differentiable; prior results (e.g. [SSSSS09, LY20]) were limited to β f -smooth and/or L f -Lipschitz f p¨, xq. Specifically, we show the following for non-Lipschitz/non-smooth f p¨, xq: a) On-average model stability [LY20] implies generalization (Proposition 9); and b) regularized empirical risk minimization is on-average model stable, hence it generalizes (Proposition 10). We combine these results with an empirical error bound for biased, noisy subgradient method to bound the excess risk of our algorithm (Theorem 6). We obtain our strongly convex bound (Theorem 12) by a reduction to the convex case, ala [HK14, FKT20]. We also refine (to describe the dependence on r r k , D, µ), extend (to k " 1), and tighten (for µ " 0) the lower bounds of [KLZ22]: see Theorems 13 and 14. Linear-Time Algorithms for Smooth (Strongly) Convex Losses (Section 4): For convex, β-smooth F , we provide a novel accelerated DP algorithm (Algorithm 4), building on the work of [GL12]. 5 Our algorithm is linear time and attains excess risk that improves over the previous state-of-the-art (not linear time) algorithm [KLZ22, Theorem 5.4] in practical parameter regimes (e.g. d Á n 1{6 ). The excess risk of our algorithm is tight in certain cases: e.g., d Á p nq 2{3 or "sufficiently smooth" F (see Remark 16). To prove our bound, we give the first analysis of accelerated SGD with biased stochastic gradients. For µ-strongly convex, β-smooth losses, acceleration results in excessive bias accumulation, so we propose a simple noisy clipped SGD. 
Our algorithm builds on [KLZ22], but uses a lower-bias clipping mechanism from [BD14] and a new, tighter analysis. We attain excess risk that is near-optimal up to a r Oppβ{µq pk´1q{k q factor: see Theorem 17. Our bound strictly improves over the best previous bound of [KLZ22]. First Algorithm for Non-Convex (Proximal-PL) Losses (Section 5): We consider losses satisfying the Proximal Polyak-Łojasiewicz (PPL) inequality [Pol63, KNS16] (Definition 18), an extension of the classical PL inequality to the proximal setting. This covers important models like (some) neural nets, linear/logistic regression, and LASSO [KNS16, LY21]. We propose a DP proximal clipped SGD to attain near-optimal excess risk that almost matches the strongly convex rate: see Theorem 19. We also provide (in Appendix I) the first shuffle differentially private (SDP) [BEM`17,CSU`19] 5 In contrast to [WXDX20, KLZ22], we do not require f p¨, xq to be β f -smooth for all x. algorithms for heavy-tailed SO. Our SDP algorithms achieve the same risk bounds as their zCDP counterparts without requiring a trusted curator. Private Heavy-Tailed Mean Estimation Building Blocks In each iteration of our SO algorithms, we need a way to privately estimate the mean ∇F pw t q " E x"D r∇f pw t , xqs. If f p¨, xq is Lipschitz, then one can simply draw a random sample x t from X and add noise to the stochastic gradient ∇f pw t , x t q to obtain a DP estimator of ∇F pw t q: the 2 -sensitivity of the stochastic gradients is bounded by sup x,x 1 PX }∇f pw t , xq´∇f pw t , x 1 q} ď 2L f , so the Gaussian mechanism guarantees DP (by Proposition 22). However, in the setting that we consider, L f (and hence the sensitivity) may be huge, leading the privacy noise variance to also be huge. Thus, we clip the stochastic gradients (to force the sensitivity to be bounded) before adding noise. Specifically, we invoke Algorithm 1 on a batch of s stochastic gradients at each iteration of our algorithms. In Algorithm 1, Π C pzq :" argmin yPB2p0,Cq }y´z} 2 denotes the projection onto the centered 2 ball of radius C in R d . Lemma 5 bounds the bias and variance of Algorithm 1. Optimal Rates for Non-Smooth (Strongly) Convex Losses In this section, we establish the optimal rates (up to logarithms) for the class of DP SCO problems satisfying Assumption 1. We present our result for convex losses in Section 3.1, and our result for strongly convex losses in Section 3.2. In Section 3.3, we provide lower bounds, which show that our upper bounds are tight (up to logarithms). 5: Draw new batch B i of n i " |B i | samples from X without replacement. 7: Use Algorithm 2 with initialization w i´1 to minimize p F i over W i :" tw P W : }w´w i´1 } ď D i u, for T i iterations with clip threshold C i and noise σ 2 i " 4C 2 i Ti n 2 i 2 . Let w i be the output of Algorithm 2. 8: end for 9: Output: w l . The main ideas of Algorithm 3 are: 1. Clipping only the non-regularized component of the subgradient to control sensitivity and bias: Notice that when we call Algorithm 2 in phase i of Algorithm 3, we only clip the subgradients of f pw t , x j q, not the regularized loss f pw t , x j q`λ 2 }w t´wi´1 } 2 . Compared to clipping the full gradient of the regularized loss, our selective clipping approach significantly reduces the bias of our subgradient estimator. This is essential for obtaining our near-optimal excess risk. 
Further, this reduction in bias comes at no cost to the variance of our subgradient estimator: the 2 -sensitivity of our estimator is unaffected by the regularization term. 2. Solve regularized ERM subproblem with a stable DP algorithm: We run a multi-pass zCDP solver on a regularized empirical loss: Multiple passes let us reduce the noise variance in phase i by a factor of T i (via strong composition for zCDP) and get a more accurate solution to the ERM subproblem. Regularization makes the empirical loss strongly convex, which improves on-average model stability and hence generalization of the obtained solution (see Proposition 9 and 29). 3. Localization [FKT20, ADF`21] (i.e. iteratively "zooming in" on a solution): In early phases (small i), when we are far away from the optimum w˚, we use more samples (larger n i ) and large learning rate η i to make progress quickly. As i increases, w i is closer to w˚, so fewer samples and slower learning rate suffice. Since step size η i shrinks (geometrically) faster than n i , the effective variance of the privacy noise η 2 i σ 2 i decreases as i increases. This prevents w i`1 from moving too far away from w i (and hence from w˚). We further enforce this localization behavior by increasing the regularization parameter λ i and shrinking D i over time. We choose D i as small as possible subject to the constraint that argmin wPW p F i pwq P W i . This constraint ensures that Algorithm 2 can find w i with small excess risk. Next, we provide privacy and excess risk guarantees for Algorithm 3: Moreover, this excess risk is attained in r Opn p`1 q subgradient evaluations. The excess risk bound in Theorem 6 is optimal up to a logarithmic factor. A key feature of this bound is that it does not depend on L f . Further, the hypothesis on L f in Theorem 6 is easy to satisfy: if L f ă 8 (even if it is enormous), then we can choose p large enough to ensure that the condition on L f is satisfied. The only cost of choosing larger p is computational. Even if L f " 8, then the excess risk bound is still asymptotically optimal as n Ñ 8. By contrast, prior works [WXDX20, KLZ22] required uniform β f -smoothness of f p¨, xq, which implies the more severe restriction L f ď β f D for loss functions that have a vanishing gradient at some point. Further, the excess risk bounds in [WXDX20, KLZ22] depend on β f (and hence L f for loss functions with vanishing gradient). The proof of Theorem 6 consists of three main steps: i) We bound the empirical error of the noisy clipped subgradient subroutine (Lemma 7). ii) We prove that if an algorithm is on-average model stable (Definition 8), then it generalizes (Proposition 9). iii) We bound the on-average model stability of regularized ERM with non-smooth/non-Lipschitz f p¨, xq (Proposition 29), leading to an excess population loss bound for Algorithm 2 run on the regularized empirical objective (c.f. line 7 of Algorithm 3). By using these results with the proof technique of [FKT20], we can obtain Theorem 6. First, we bound the empirical error of the step in line 7 of Algorithm 3, by extending the analysis of noisy subgradient method to biased subgradient oracles: Lemma 7. Fix X P X n and let p F λ pwq " 1 n ř n i"1 f pw, x i q`λ 2 }w´w 0 } 2 for w 0 P W, where W is a closed convex domain with diameter D. Assume f p¨, xq is convex and p r n pXq pkq ě sup wPW 1 n ř n i"1 }∇f pw, x i q} k ( for all ∇f pw, x i q P B w f pw, x i q. Denote p r n pXq " " p r n pXq pkq ‰ 1{k andŵ " argmin wPW p F λ pwq. Let η ď 2 λ . 
Then, the output of Algorithm 2 satisfies where σ 2 " 4C 2 T n 2 2 . Proofs for this subsection are deferred to Appendix E.2. Our next goal is to bound the generalization error of regularized ERM with convex loss functions that are not differentiable or uniformly Lipschitz. We will use a stability argument to obtain such a bound. Recall the notion of on-average model stability [LY20]: Definition 8. Let X " px 1 ,¨¨¨, x n q and X 1 " px 1 1 ,¨¨¨, x 1 n q be drawn independently from D. For i P rns, let X i :" px 1 ,¨¨¨, x i´1 , x 1 i , x i`1 ,¨¨¨, x n q. We say randomized algorithm A has on-average model stability α (i.e. A is α-on-average model stable) if E " 1 n ř n i"1 }ApXq´ApX i q} 2 ‰ ď α 2 . The expectation is over the randomness of A and the draws of X and X 1 . On-average model stability is weaker than the notion of uniform stability [BE02], which has been used in DP Lipschitz SCO (e.g. by [BFTT19]); this is necessary for obtaining our learnability guarantees without uniform Lipschitz continuity. The main result in [LY20] showed that on-average model stable algorithms generalize well if f p¨, xq is β f -smooth for all x, which leads to a restriction on L f . We show that neither smoothness nor Lipschitz continuity of f is needed to ensure generalizability: Proposition 9. Let f p¨, xq be convex for all x. Suppose A : X n Ñ W is α-on-average model stable. Let p F X pwq :" 1 n ř n i"1 f pw, x i q be an empirical loss. Then for any ζ ą 0, ErF pApXqq´p F X pApXqqs ď r r p2q 2ζ`ζ 2 α 2 . Using Proposition 9, we can bound the generalization error and excess (population) risk of regularized ERM: Proposition 10. Let f p¨, xq be convex, w i´1 , y P W, andŵ i :" argmin wPW p F i pwq, where p F i pwq :" 1 ni ř jPBi f pw, x j q`λ i 2 }w´w i´1 } 2 (c.f. line 6 of Algorithm 3). Then, where the expectation is over both the random draws of X from D and B i from X. With the pieces developed above, we can now sketch the proof of Theorem 6: Sketch of the Proof of Theorem 6. Privacy: Since the batches tB i u l i"1 are disjoint, it suffices to show that w i (produced by T i iterations of Algorithm 2 in line 7 of Algorithm 3) is 2 2 -zCDP @i P rls. The 2 sensitivity of the clipped subgradient update is ∆ " sup w,X"X 1 } 1 ni ř ni j"1 Π Ci p∇f pw, x j qq´Π Ci p∇f pw, x 1 j qq} ď 2C i {n i . (Note that the regularization term does not contribute to sensitivity.) Thus, the privacy guarantees of the Gaussian mechanism (Proposition 22) and the composition theorem for zCDP (Lemma 23) imply that Algorithm 3 is 2 2 -zCDP. Excess risk: Our choice of D i ensures thatŵ i P W i . Combining Lemma 7 with Lemma 5 and proper choices of η and T i , we get: (2) Now, following the strategy used in the proof of [FKT20, Theorem 4.4], we write EF pw l q´F pw˚q " ErF pw l q´F pŵ l qs`ř l i"1 ErF pŵiq´F pŵi´1qs, whereŵ 0 :" w˚. Using (2) and r r k -Lipschitz continuity of F (which is implied by Assumption 1), we can bound ErF pw l q´F pŵ l qs for the right η and C l . To bound the sum (second term), we use Proposition 10 to obtain for the right choice of C i . Then properly choosing η completes the excess risk proof. Computational complexity: The choice T i " r Opn p i q implies that the number of subgradient evaluations is bounded by ř l i"1 n i T i " r Opn p`1 q. Remark 11 (Reduced Computational Complexity for Approximate or Shuffle DP). If one desires p , δq-DP or p , δq-SDP instead of zCDP, then the gradient complexity of Algorithm 3 can be improved to r Opn p`1 2 q: see Appendix E.2. 
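To make the structure of Algorithms 2 and 3 concrete, here is a minimal runnable sketch (ours; the phase schedule, clip levels, and step sizes are simplified stand-ins for the constants in Theorem 6, and subgrad_f is a user-supplied per-sample subgradient oracle, an assumption rather than the paper's code). It illustrates the two key mechanisms: selective clipping, where only the f-part of the subgradient is clipped while the regularizer enters exactly, and localization, where later phases use fewer samples, smaller step sizes, stronger regularization, and smaller search radii.

import numpy as np

def project_ball(v, center, radius):
    # Euclidean projection onto the ball B(center, radius).
    diff = v - center
    nrm = np.linalg.norm(diff)
    return v if nrm <= radius else center + diff * (radius / nrm)

def noisy_clipped_subgrad(subgrad_f, X, w0, lam, eta, T, C, eps, W_center, W_radius, rng):
    # Algorithm 2 sketch: noisy clipped subgradient method on the regularized
    # empirical loss F_hat(w) = (1/n) sum_i f(w, x_i) + (lam/2) ||w - w0||^2.
    # Only the f-part is clipped, so the replace-one L2 sensitivity is 2C/n.
    n = len(X)
    sigma = (2.0 * C / n) * np.sqrt(T) / eps   # sigma^2 = 4 C^2 T / (n^2 eps^2)
    w = w0.copy()
    for _ in range(T):
        g = np.zeros_like(w)
        for x in X:
            gi = subgrad_f(w, x)
            nrm = np.linalg.norm(gi)
            if nrm > C:
                gi = gi * (C / nrm)                 # Pi_C: clip to radius C
            g += gi / n
        g += lam * (w - w0)                         # regularizer: not clipped
        g += rng.normal(0.0, sigma, size=w.shape)   # Gaussian mechanism
        w = project_ball(w - eta * g, W_center, W_radius)
    return w

def localized_solver(subgrad_f, X, w_init, D, eps, r_tilde, rng, p=2):
    # Algorithm 3 sketch: phases with halving batches, growing regularization,
    # and geometrically shrinking radii ("localization").
    n = len(X)
    l = max(1, int(np.log2(n)) // 2)     # number of phases (simplified)
    w, start = w_init.copy(), 0
    eta = D / (r_tilde * np.sqrt(n))     # illustrative base step size
    for i in range(1, l + 1):
        n_i = max(1, n // 2 ** i)
        batch = X[start:start + n_i]; start += n_i
        eta_i = eta * 4.0 ** (-i)        # shrinks faster than n_i
        lam_i = 1.0 / (eta_i * n_i)      # strong convexity for stability
        T_i = max(1, n_i ** p)           # illustrative; can be very large
        C_i = r_tilde * np.sqrt(n_i * eps / np.sqrt(T_i))  # illustrative clip level
        D_i = D * 2.0 ** (-i)
        w = noisy_clipped_subgrad(subgrad_f, batch, w, lam_i, eta_i, T_i, C_i,
                                  eps, w, D_i, rng)
    return w

For instance, subgrad_f = lambda w, x: (w @ x[0] - x[1]) * x[0] recovers the linear regression loss from Example 1. Note how the regularization term contributes nothing to the per-step sensitivity, which is why clipping only the non-regularized component reduces bias at no cost in privacy noise.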
The Strongly Convex Case Following [FKT20], we use a folklore reduction to the convex case (detailed in Appendix E.3) in order to obtain the following upper bound via Theorem 6: Theorem 12. Grant Assumption 1. Let ď ? d and let f p¨, xq be µ-strongly convex and L f -Lipschitz for all x P X , with L f À n p{2 r r 2k˜1 ? Lower Bounds The work of [KLZ22] proved lower bounds that are tight (by our upper bounds in Section 3) in most parameter regimes for D " µ " 1, r r k " ? d, and k " Op1q. 7 Our (relatively modest) contribution in this subsection is: refining these lower bounds to display the correct dependence on r r k , D, µ; tightening the convex lower bound [KLZ22, Theorem 6.4] in the regime d ą n; and extending [KLZ22, Theorems 6.1 and 6.4] to k " 1. Our lower bound constructions satisfy the condition on L f in the statements of Theorems 6 and 12. Our first lower bound holds even for affine functions: Theorem 13 (Smooth Convex, Informal). Let ρ ď d. For any ρ-zCDP algorithm A, there exist closed convex sets W, X Ă R d such that }w´w 1 } ď 2D for all w, w 1 P W, a β f -smooth, L f -Lipschitz, linear, convex (in w) loss f : WˆX Ñ R, and distributions D and D 1 on X such that Assumption 1 holds and if X " D n , then Remark 37 (in Appendix F) discusses parameter regimes in which Theorem 13 is strictly tighter than [KLZ22, Theorem 6.4], as well as differences in our proof vs. theirs. Next, we provide lower bounds for smooth, strongly convex loss functions: Theorem 14 (Smooth Strongly Convex, Informal). Let ρ ď d. For any ρ-zCDP algorithm A, there exist compact convex sets W, X Ă R d , a L f -Lipschitz, µ-smooth, µ-strongly convex (in w) loss f : WˆX Ñ R, and distributions D and D 1 on X such that: Assumption 1 holds, and if X " D n , then EF pApXqq´F˚" Ω¨r Thus, our upper bounds are indeed tight (up to logarithms). Having resolved Question I, next we will develop more computationally efficient, linear-time algorithms for smooth F p¨q. Noisy Clipped Accelerated SGD for Smooth Convex Losses Algorithm 4 is a one-pass accelerated algorithm, which builds on (non-private) AC-SA of [GL12]; its privacy and excess risk guarantees are given in Theorem 15. 5: Draw new batch B t (without replacement) of n{T samples from X. Besides being linear-time, another advantage of Theorem 15 is that it holds even for problems with L f " 8 and small n. The key ingredient used to prove (3) is a novel convergence guarantee for AC-SA with biased, noisy stochastic gradients: see Proposition 40 in Appendix G.1. Combining Proposition 40 with Lemma 5 and a careful choice of stepsizes, clip threshold, and T yields Theorem 15. Remark 16 (Optimal rate for "sufficiently smooth" functions). Note that the upper bound (3) scales with the smoothness parameter β. Thus, for sufficiently small β, the optimal rates are attained. For example, if k " 2, the upper bound in (3) matches the lower bound in Theorem 13 when β À r ; e.g. if β and D are constants and d ě p nq 1{5 . In particular, for affine functions (which were not addressed in [WXDX20, KLZ22] since these works assume ∇F pw˚q " 0), β " 0 and Algorithm 4 is optimal. Having discussed the dependence on β, let us focus on understanding how the bound in Theorem 15 scales with n, d and . Thus, let us fix β " D " 1 and r r k " ? d for simplicity. 
If k " 2, then the bound in (3) simplifies to O´b d n`m ax whereas the lower bound in Theorem 13 (part 2) is Ω´b d Noisy Clipped SGD for Strongly Convex Losses Our algorithm for strongly convex losses (Algorithm 6 in Appendix G.2) is a simple one-pass noisy clipped SGD. Compared to the algorithm of [KLZ22], our approach differs in the choice of MeanOracle, step size, and iterate averaging weights, and in our analysis. The bound (4) is optimal up to a r Oppβ{µq pk´1q{k q factor and improves over the best previous bound in [KLZ22, Theorem 5.6] by removing the dependence on β f (which can be much larger than β in the presence of outliers). The proof of Theorem 17 (in Appendix G.2) relies on a novel convergence guarantee for projected SGD with biased noisy stochastic gradients: Proposition 42. Compared to results in [ADF`21] for convex ERM and [AS20] for PL SO, Proposition 42 is tighter, which is needed to obtain near-optimal excess risk: we leverage smoothness and strong convexity. Our new analysis also avoids the issue in the proofs of [WXDX20, KLZ22]. Recall that the proximal operator of a convex function g is defined as prox ηg pzq :" argmin yPR d`ηgpyq`1 2 }y´z} 2f or η ą 0. We propose Noisy Clipped Proximal SGD (Algorithm 8 in Appendix H) for PPL losses. The algorithm runs as follows. For t P rT s: first draw a new batch B t (without replacement) of n{T samples from Finally, return the last iterate, w T . Thus, the algorithm is linear time. Furthermore: Then, there are parameters such that Algorithm 8 is 2 2 -zCDP, and: Moreover, Algorithm 8 uses at most n gradient evaluations. The bound in Theorem 19 nearly matches the smooth strongly convex (hence PPL) lower bound in Theorem 14 up to r Oppβ{µq p2k´2q{2 q, and is attained without convexity. To prove Theorem 19, we derive a convergence guarantee for proximal SGD with generic biased, noisy stochastic gradients in terms of the bias and variance of the oracle (see Proposition 45). We then apply this guarantee for MeanOracle1 (Algorithm 1) with carefully chosen stepsizes, clip threshold, and T , using Lemma 5. Proposition 45 generalizes [AS20, Theorem 6]-which covered the unconstrained classical PL problem-to the proximal setting. However, the proof of Proposition 45 is very different from the proof of [AS20, Theorem 6], since prox makes it hard to bound excess risk without convexity when the stochastic gradients are biased/noisy. Instead, our proof builds on the proof of [LGR22, Theorem 3.1], using techniques from the analysis of objective perturbation [CMS11, KST12]. See Appendix H for details. Concluding Remarks and Open Questions This paper was motivated by practical problems in which data contains outliers and potentially heavy tails, causing the worst-case Lipschitz parameter of the loss over all data points to be prohibitively large. In such cases, existing bounds for DP SO that scale with the worst-case Lipschitz parameter become vacuous. Thus, we operated under the more relaxed assumption of stochastic gradient distributions having bounded k-th moments. The k-th moment bound can be much smaller than the worst-case Lipschitz parameter in practice. For (strongly) convex loss functions, we established the asymptotically optimal rates (up to logarithms), even with non-differentiable losses. We also provided linear-time algorithms for smooth losses that are optimal in certain practical parameter regimes, but suboptimal in general. An interesting open question is: does there exist a linear-time algorithm with optimal excess risk? 
We also initiated the study of non-convex DP SO without uniform Lipschitz continuity, showing that the optimal strongly convex rates can nearly be attained without convexity, via the proximal-PL condition. We leave the treatment of general non-convex losses for future work. [And15] Alex Andoni. COMS E6998-9: algorithmic techniques for massive data, 2015. [ASZ21] Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. DP ERM and DP GLMs without Uniform Lipschitz continuity: The work of [ADF`21] provides bounds for constrained DP ERM with arbitrary convex loss functions using a Noisy Clipped SGD algorithm that is similar to our Algorithm 6, except that their algorithm is multi-pass and ours is one pass. In a concurrent work, [DKX`22] considered DP ERM in the unconstrained setting with convex and non-convex loss functions. Their algorithm, noisy clipped SGD, is also similar to Algorithm 6 and the algorithm of [ADF`21]. The results in [DKX`22] are not directly comparable to [ADF`21] since [DKX`22] consider the unconstrained setting while [ADF`21] consider the constrained setting, but the rates in [ADF`21] are faster. [DKX`22] also analyzes the convergence of noisy clipped SGD with smooth non-convex loss functions. The works of [SSTT21, ABG`22] consider generalized linear models (GLMs), a particular subclass of convex loss functions and provide empirical and population risk bounds for the unconstrained DP optimization problem. The unconstrained setting is not comparable to the constrained setting that we consider here: in the unconstrained case, a dimension-independent upper bound is achievable, whereas our lower bounds (which apply to GLMs) imply that a dependence on the dimension d is necessary in the constrained case. Other works on gradient clipping: The gradient clipping technique (and adaptive variants of it) has been studied empirically in works such as [ACG`16, CWH20, ATMR21], to name a few. The work of [CWH20] shows that gradient clipping can prevent SGD from converging, and describes the clipping bias with a disparity measure between the gradient distribution and a geometrically symmetric distribution. Optimization with biased gradient oracles: The works [AS20, ADF`21] analyze SGD with biased gradient oracles. Our work provides a tighter bound for smooth, strongly convex functions and analyzes accelerated SGD and proximal SGD with biased gradient oracles. DP SO with Uniformly Lipschitz loss functions: In the absence of outlier data, there are a multitude of works studying Lipschitz DP SO, mostly in the convex/strongly convex case. We do not attempt to provide a comprehensive list of these here, but will name the most notable ones, which provide optimal or state-of-the-art utility guarantees. The first suboptimal bounds for DP SCO were provided in [BST14]. The work of [BFTT19] established the optimal rate for non-strongly convex DP SCO, by bounding the uniform stability of Noisy DP SGD (without clipping). The strongly convex case was addressed by [FKT20], who also provided optimal rates in linear times for sufficiently smooth, convex losses. Since then, other works have provided faster and simpler (optimal) algorithms for the non-smooth DP SCO problem [BFGT20, AFKT21, KLL21, BGM21] and considered DP SCO with different geometries [AFKT21, BGN21]. State-of-the-art rates for DP SO with the proximal PL condition are due to [LGR22]. 
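Before turning to the appendices on moment conditions and proofs, it may help to record the elementary estimate behind the clipping-bias bounds used throughout (cf. Lemma 5 and Lemma 26). This is our paraphrase of the standard Markov/Hölder argument, and the constants may differ slightly from those in the lemmas. For a random vector g with E‖g‖^k ≤ r^(k) and clip radius C,

\[
\big\| \mathbb{E}\,\Pi_C(g) - \mathbb{E}\, g \big\|
\;\le\; \mathbb{E}\big[ \|\Pi_C(g) - g\| \big]
\;\le\; \mathbb{E}\big[ \|g\|\, \mathbf{1}\{\|g\| > C\} \big]
\;\le\; \mathbb{E}\Big[ \|g\| \Big(\tfrac{\|g\|}{C}\Big)^{k-1} \Big]
\;=\; \frac{\mathbb{E}\,\|g\|^{k}}{C^{k-1}}
\;\le\; \frac{r^{(k)}}{C^{k-1}},
\]

so larger clip radii shrink the bias at rate C^{−(k−1)} while the privacy noise variance grows as C²; this is the tradeoff balanced by the choices of C_i in Theorem 6 and its proof.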
B Other Bounded Moment Conditions Besides Assumption 1

In this section, we give the alternate bounded moment assumption made in [WXDX20, KLZ22] and a third bounded moment condition, and discuss the relationships between these assumptions. The notation presented here will be necessary in order to state the sharper versions of our linear-time excess risk bounds and the asymptotically optimal excess risk bounds under the coordinate-wise assumption of [WXDX20, KLZ22] (which our Algorithm 3 also attains). First, we introduce a relaxation of Assumption 1:

Assumption 2. There exist k ≥ 2 and r^(k) > 0 such that sup_{w∈W} E[‖∇f(w,x)‖₂^k] ≤ r^(k) for all subgradients ∇f(w, x_i) ∈ ∂_w f(w, x_i). Denote r_k := (r^(k))^{1/k}.

Assumption 1 implies Assumption 2 with r ≤ r̃. Next, we precisely state the coordinate-wise moment bound assumption that is used in [WXDX20, KLZ22] for differentiable f:

Assumption 3. (Used by [WXDX20, KLZ22] 10, but not in this work.) There exist k ≥ 2 and γ > 0 such that sup_{w∈W} E|⟨∇f(w,x) − ∇F(w), e_j⟩|^k ≤ γ for all j ∈ [d], where e_j denotes the j-th standard basis vector in R^d. Also, L := sup_{w∈W} ‖∇F(w)‖ ≤ √d γ^{1/k}.

Lemma 20, which states that Assumption 3 implies Assumption 2 with r^(k) ≤ 2^{k+1} d^{k/2} γ, allows us to compare our results in Section 4, obtained under Assumption 2, to the results in [WXDX20, KLZ22], which require Assumption 3.

Proof. We use the following inequality, which can easily be verified inductively using the Cauchy-Schwarz and Young inequalities: for any vectors u, v ∈ R^d, we have ‖u + v‖^k ≤ 2^{k−1}(‖u‖^k + ‖v‖^k). Applying it with u = ∇F(w) and v = ∇f(w,x) − ∇F(w), and using convexity of the function φ(y) = y^{k/2} for y ≥ 0, k ≥ 2, together with Jensen's inequality, we can bound E‖∇f(w,x)‖^k in terms of L^k and the coordinate-wise moments. Now using linearity of expectation and Assumption 3 gives us r^(k) ≤ 2^k (L^k + d^{k/2} γ) ≤ 2^{k+1} d^{k/2} γ, since L^k ≤ d^{k/2} γ by hypothesis.

Remark 21. Since Assumption 2 is implied by Assumption 3, the upper bounds that we obtain under Assumption 2 also hold (up to constants) if we grant Assumption 3 instead, with r ↔ √d γ^{1/k}. Also, in Appendix E.4, we will use Lemma 20 to show that our optimal excess risk bounds under Assumption 1 imply asymptotically optimal excess risk bounds under Assumption 3.

C Correcting the Errors in the Strongly Convex Upper Bounds Claimed in [KLZ22, WXDX20]

While [KLZ22, Theorem 5.6] claims an upper bound for smooth strongly convex losses that is tight up to a factor of Õ(κ_f²), where κ_f = β_f/μ is the uniform condition number of f(·, x) over all x ∈ X, we identify an issue with their proof that invalidates their result. A similar issue appears in the proof of [WXDX20, Theorems 5 and 7], which [KLZ22] built upon. We then show how to salvage a correct upper bound within the framework of [KLZ22], albeit at the cost of an additional factor of κ_f. 11

The proof of [KLZ22, Theorem 5.6] relies on [KLZ22, Theorem 3.2]. The proof of [KLZ22, Theorem 3.2], in turn, bounds E‖w_T − w*‖ ≤ (λ+L)(M+1)G/(λL) in the notation of [KLZ22], where L is the smoothness parameter, λ is the strong convexity parameter (so L ≥ λ), and M is the diameter of W. Then, it is incorrectly deduced that E[‖w_T − w*‖²] ≤ ((λ+L)(M+1)G/(λL))² (final line of the proof). Notice that E[‖w_T − w*‖²] can be much larger than (E‖w_T − w*‖)² in general: for example, if ‖w_T − w*‖ has the Pareto distribution with shape parameter α ∈ (1, 2] and scale parameter 1, then (E‖w_T − w*‖)² = (α/(α−1))² ≪ E[‖w_T − w*‖²] = ∞. To try to correct this issue, one could use Young's inequality to instead bound E[‖w_T − w*‖²] recursively, but the resulting geometric series diverges to +∞ as T → ∞, since 2(1 − 2λL/(λ+L)²) ≥ 1 ⟺ (λ−L)² ≥ 0.
Evidently, there is no "easy fix" for the issue in the proofs of [KLZ22, Theorem 3.2] and [WXDX20, Theorem 5] (at least without imposing severe restrictions on λ and L, and hence dramatically shrinking the function class). The corrected bound (stated in our notation) was derived in collaboration with the authors of [KLZ22], who have also updated the arXiv version of their paper accordingly. By waiting until the very end of the proof of [KLZ22, Theorem 3.2] to take expectations, we can derive a per-step recursion (7) for all t, where we use their L = β_f and λ = μ notation but our notation F and ∇̃F for the population loss and its biased noisy gradient estimate (instead of their L_D notation). By iterating (7), squaring both sides, and using Cauchy-Schwarz, we obtain a bound on ‖w_T − w*‖². Using L-smoothness of F and the assumption made in [KLZ22] that ∇F(w*) = 0, and then taking expectations, yields (8), where G² ≥ E[‖∇̃F(w_{T−t}) − ∇F(w_{T−t})‖²] for all t. It is necessary and sufficient to choose T = Ω̃(L/λ) to make the first term in (8) smaller than the second (up to logarithms). With this choice of T, we get a bound whose leading factor scales polynomially in κ_f = L/λ. Next, we apply the bound on G² for the MeanOracle that is used in [KLZ22]; this bound is stated in the version of [KLZ22, Lemma B.5] that appears in the updated (November 1, 2022) arXiv version of their paper. The bound (for general γ) is G² = Õ(γ^{2/k} D ...).

D More Differential Privacy Preliminaries

We collect some basic facts about DP algorithms that will be useful in the proofs of our results. Our algorithms use the Gaussian mechanism to achieve zCDP:

Proposition 22 ([BS16, Proposition 1.6]). Let q : X^n → R be a query with ℓ₂-sensitivity Δ := sup_{X∼X'} ‖q(X) − q(X')‖. Then the Gaussian mechanism, defined by M : X^n → R, M(X) := q(X) + u for u ∼ N(0, σ²), is ρ-zCDP if σ² ≥ Δ²/(2ρ).

The (adaptive) composition of zCDP algorithms is zCDP, with privacy parameters adding:

Lemma 23 ([BS16, Lemma 2.3]). Suppose A : X^n → Y satisfies ρ-zCDP and A' : X^n × Y → Z satisfies ρ'-zCDP (as a function of its first argument). Define the composition of A and A', A'' : X^n → Z, by A''(X) := A'(X, A(X)). Then A'' satisfies (ρ + ρ')-zCDP. In particular, the composition of T ρ-zCDP mechanisms is a Tρ-zCDP mechanism.

The definitions of DP and zCDP given above do not dictate how the algorithm A operates. In particular, they allow A to send sensitive data to a third-party curator/analyst, who can then add noise to the data. However, in certain practical applications (e.g. federated learning [KMA`19]), there is no third party that can be trusted to handle sensitive user data. On the other hand, it is often more realistic to have a secure shuffler (a.k.a. mixnet): in each iteration of the algorithm, the shuffler receives encrypted noisy reports (e.g. noisy stochastic gradients) from each user and applies a uniformly random permutation to the n reports, thereby anonymizing them (and amplifying privacy) [BEM`17, CSU`19, EFM`20, FMT20]. An algorithm is shuffle private if all of these "shuffled" reports are DP:

Definition 24 (Shuffle Differential Privacy [BEM`17, CSU`19]). A randomized algorithm is (ε, δ)-shuffle DP (SDP) if the collection of reports output by the shuffler satisfies Definition 1.

E Details and Proofs for Section 3: Optimal Rates for (Strongly) Convex Losses

In order to precisely state (sharper forms of) Theorems 6 and 12, we will need to introduce some notation.
E.1 Notation For a batch of data X P X m , we define the k-th empirical moment of f pw,¨q by where the supremum is also over all subgradients ∇f pw, x i q P B w f pw, x i q in case f is not differentiable. For X " D m , we denote the k-th expected empirical moment by r e pkq m :" Erp r m pXq pkq s and let r r k,m :" pr e pkq m q 1{k . Note that r r k,1 " r r k . Our excess risk upper bounds will depend on a weighted average of the expected empirical moments for different batch sizes m P t1, 2, 4, 8,¨¨¨, nu, with more weight being given to r r m for large m (which are smaller, by Lemma 25 below): for n " 2 l , define r R k,n :" where n i " 2´in. 쨨¨ě r pkq . In particular, r R k,n ď r r k . Proof. Let l P N, n " 2 l and consider Taking expectations over the random draw of X " D n yields r e pkq n ď r e pkq n{2 . Thus, r R k,n ď r r k by the definition of r R n . E.2 Localized Noisy Clipped Subgradient Method (Section 3.1) We begin by proving the technical ingredients that will be used in the proof of Theorem 6. First, we will prove a variant of Lemma 5 that bounds the bias and variance of the subgradient estimator in Algorithm 2. Lemma 26. Let p F λ pwq " 1 n ř n i"1 f pw, x i q`λ 2 }w´w 0 } 2 be a regularized empirical loss on a closed convex domain W with 2 -diameter D. Let r ∇F λ pw t q " ∇ p F λ pw t q`b t`Nt " 1 n ř n i"1 Π C p∇f pw, x i qq`λpw´w 0 q`N t be the biased, noisy subgradients of the regularized empirical loss in Algorithm 2, with N t " N p0, σ 2 I d q and b t " 1 n ř n i"1 Π C p∇f pw t , x i qq´1 n ř n i"1 ∇f pw t , x i q. Assume p r n pXq pkq ě sup wPW 1 n ř n i"1 }∇f pw, x i q} k ( for all ∇f pw, x i q P B w f pw, x i q. Then, for any T ě 1, we have: Proof. Fix any t. We have by Lemma 5 applied with D as the empirical distribution on X, and x i in Lemma 5 corresponding to ∇f pw t , x i q in (10). Taking supremum over t of both sides of (10) and recalling the definition of p r n pXq pkq proves the bias bound. The noise variance bound is immediate from the distribution of N t . Using Lemma 26, we can obtain the following convergence guarantee for Algorithm 2: Lemma 27 (Re-statement of Lemma 7). Fix X P X n and let p F λ pwq " 1 n ř n i"1 f pw, x i q`λ 2 }w´w 0 } 2 for w 0 P W, where W is a closed convex domain with diameter D. Assume f p¨, xq is convex and p r n pXq pkq ě sup wPW 1 n ř n i"1 }∇f pw, x i q} k ( for all ∇f pw, x i q P B w f pw, x i q. Denote p r n pXq " " p r n pXq pkq ‰ 1{k andŵ " argmin wPW p F λ pwq. Let η ď 2 λ . Then, the output of Algorithm 2 satisfies Proof. We use the notation of Lemma 26 and write r ∇F λ pw t q " ∇ p F λ pw t q`b t`Nt " 1 n ř n i"1 Π C p∇f pw, x i qqλ pw´w 0 q`N t as the biased, noisy subgradients of the regularized empirical loss in Algorithm 2, with N t " N p0, σ 2 I d q and b t " 1 n ř n i"1 Π C p∇f pw t , x i qq´1 n ř n i"1 ∇f pw t , x i q. Denote y t`1 " w t´r ∇F λ pw t q, so that w t`1 " Π W py t`1 q. For now, condition on the randomness of the algorithm (noise). By strong convexity, we have where we used non-expansiveness of projection and the definition of r ∇F λ pw t q in the last line. Now, re-arranging this inequality and taking expectation, we get by optimality ofŵ and the assumption that the noise N t is independent of w t´ŵ and zero mean. Also, 2´2p r n pXq 2`2 λ 2 D 2`B2`Σ2¯, whereB :" sup tPrT s }b t } ď p rnpXq pkq pk´1qC k´1 andΣ 2 :" sup tPrT s E}N t } 2 " dσ 2 . by Lemma 26. We also used Young's and Jensen's inequalities and the fact that EN t " 0. 
Further, |Exb t , w t´ŵ y| ďB 2 λ`λ 4 E}w t´ŵ } 2 , by Young's inequality. Combining these pieces yields Iterating (11) gives us since η ď 2 λ . Plugging in the bounds onB andΣ from Lemma 26 completes the proof. Proposition 28 (Precise statement of Proposition 9). Let f p¨, xq be convex for all x and grant Assumption 2 for k " 2. Suppose A : X n Ñ W is α-on-average model stable. Then for any ζ ą 0, we have Proof. Let X, X 1 , X i be constructed as in Definition 8. We may write ErF pApXqq´p F X pApXqqs " Er 1 n ř n i"1 f pApX i q, x i q´f pApXq, Now, since ApX i q is independent of x i , we have: Combining the above inequalities and recalling Definition 8 yields the result. To prove our excess risk bound for regularized ERM (i.e. Proposition 10), we require the following bound on the generalization error of ERM with strongly convex loss: Proposition 29. Let f p¨, xq be λ-strongly convex, and grant Assumption 2. Let ApXq :" argmin wPW p F X pwq be the ERM algorithm. Then, ErF pApXqq´p F X pApXqqs ď 2r p2q λn . Proof. We first bound the stability of ERM and then use Proposition 9 to get a bound on the generalization error. The beginning of the proof is similar to the proof of [LY20, Proposition D.6]: Let X, X 1 , X i be constructed as in Definition 8. By strong convexity of p F X i and optimality of ApX i q, we have Now, for any w P W, by symmetry and independence of ApXq and X 1 . Re-arranging the above equality and using symmetry yields Combining (12) with (13) shows that ERM is α-on-average model stable for The rest of the proof is where we depart from the analysis of [LY20] (which required smoothness of f p¨, xq): Bounding the right-hand side of (14) by Proposition 9 yields or any ζ ą 0. Choosing ζ " λn 2 , we obtain and α 2 ď 4r p2q λ 2 n 2 . Applying Proposition 9 again yields (for any ζ 1 ą 0) ErF pApXqq´p F X pApXqqs ď r p2q 2ζ 1`ζ 1 2ˆ4 r p2q λ 2 n 2ď 2r p2q λn , by the choice ζ 1 " λn 2 . Proposition 30 (Precise statement of Proposition 10). Let f p¨, xq be convex, w i´1 , y P W, andŵ i :" argmin wPW p F i pwq, where p F i pwq :" 1 ni ř jPBi f pw, x j q`λ i 2 }w´w i´1 } 2 (c.f. line 6 of Algorithm 3). Then, where the expectation is over both the random draws of X from D and B i from X. Proof. Denote the regularized population loss by G i pwq :" Er p F i pwqs " F pwq`λ i 2 }w´w i´1 } 2 . By Proposition 29, we have Thus, since Er p F i pŵ i qs " Ermin wPW p F i pwqs ď min wPW Er p F i pwqs " min wPW G i pwq ď λi 2 }y´w i´1 } 2`F pyq. Subtracting F pyq from both sides of (15) completes the proof. We are ready to state and prove the precise form of Theorem 6, using the notation of Appendix E.1: Moreover, this excess risk is attained in r Opn p`1 q subgradient evaluations. If p P r1, 2q, then the same excess risk bound holds up to logarithmic factors. Proof. We choose σ 2 i " 4C 2 i Ti n 2 i 2 for C i and T i to be determined exactly later. Note that for λ i and η i defined in Algorithm 3, we have η i ď 2 λi for all i P rls. Privacy: Since the batches tB i u l i"1 are disjoint, it suffices (by parallel composition [McS09]) to show that w i (produced by T i iterations of Algorithm 2 in line 7 of Algorithm 3) is 2 2 -zCDP for all i P rls. With clip threshold C i and batch size n i , the 2 sensitivity of the clipped subgradient update is bounded by ∆ " sup w,X"X 1 1 ni } ř ni j"1 Π Ci p∇f pw, x j qq´Π Ci p∇f pw, x 1 j qq} " 1 ni sup w,x,x 1 }Π Ci p∇f pw, xqq´Π Ci p∇f pw, x 1 qq} ď 2Ci ni . (Note that the terms arising from regularization cancel out.) 
By Proposition 22 and this sensitivity bound, conditional on the previous updates $w_{1:t}$, the $(t+1)$-st update in line 5 of Algorithm 2 satisfies $\frac{\epsilon^2}{2T_i}$-zCDP. Hence, Lemma 23 implies that $w_i$ (in line 7 of Algorithm 3) is $\frac{\epsilon^2}{2}$-zCDP.

Excess risk: First, our choice of $D_i$ ensures that $\hat w_i \in \mathcal{W}_i$, since $\|\hat w_i - w_{i-1}\| \le \frac{L_f}{\lambda_i}$ by definition of $\hat w_i$ and $L_f$-Lipschitz continuity of $f(\cdot,x_j)$ for all $j$. Then by Lemma 7, we obtain a bound on the empirical optimization error of phase $i$, conditional on $w_{i-1}$ and the draws of $X \sim \mathcal{D}^n$ and $\mathcal{B}_i \sim X^{n_i}$. Taking expectation over the random sampling, with $T_i \lesssim n_i^p\ln(n)$ and $\eta$ to be determined later (polynomial in $n$), we get (16). Note that under Assumption 1, $F$ is $L$-Lipschitz, where $L = \sup_{w\in\mathcal{W}}\|\nabla F(w)\| \le r$ by Jensen's inequality. Now, following the strategy used in the proof of [FKT20, Theorem 4.4], we write
$$\mathbb{E}F(w_l) - F(w^*) = \mathbb{E}[F(w_l) - F(\hat w_l)] + \sum_{i=1}^{l}\mathbb{E}[F(\hat w_i) - F(\hat w_{i-1})],$$
where $\hat w_0 := w^*$. Using (16), the first term can be bounded as follows:
$$\mathbb{E}[F(w_l) - F(\hat w_l)] \lesssim \widetilde R_{2k,n}\,D\Big(\frac{1}{\sqrt n} + \Big(\frac{\sqrt d}{\epsilon n}\Big)^{\frac{k-1}{k}}\Big)$$
if we choose the algorithmic parameters appropriately. Next, Proposition 10 implies a bound on $\mathbb{E}[F(\hat w_i) - F(\hat w_{i-1})]$ for all $i \in [l]$, consisting of two terms involving $C_i$; choosing $C_i$ proportional to an appropriate $1/k$-th power approximately equalizes the two terms, and we obtain a bound whose last line holds verbatim if $p \ge 2$ and holds up to an additional factor of $\ln(n)$ otherwise. Assume $p \ge 2$. Now, choosing $\eta$ appropriately, the remaining term is controlled by the upper bound that we assumed on $L_f$. Combining the above pieces completes the excess risk proof.

Subgradient complexity: Our choice of $T_i = \widetilde\Theta\big(\frac{1}{\lambda_i\eta_i}\big) \lesssim n_i^p\ln(n)$ implies that Algorithm 3 uses $\sum_{i=1}^{l}n_iT_i \lesssim \ln(n)\,n^{p+1}$ subgradient evaluations.

Remark 32 (Details of Remark 11). If one desires $(\epsilon,\delta)$-DP or $(\epsilon,\delta)$-SDP instead of zCDP, then the gradient complexity of Algorithm 3 can be improved to $O(n^{p+\frac12}\ln(n))$ by using the Clipped Noisy Stochastic Subgradient Method instead of Algorithm 2 as the subroutine in line 7 of Algorithm 3. Choosing batch sizes $m_i \approx \sqrt{n_i} < n_i$ in this subroutine (and increasing $\sigma_i^2$ by a factor of $O(\log(1/\delta))$) ensures $(\epsilon,\delta)$-DP by [ACG+16, Theorem 1] via privacy amplification by subsampling. The same excess risk bounds hold for any minibatch size $m_i \in [n_i]$, as the proof of Theorem 6 shows.

E.3 The Strongly Convex Case (Section 3.2)

Our algorithm is an instantiation of the meta-algorithm described in [FKT20]: Initialize $w_0 \in \mathcal{W}$. For $j \in [M] := [\log_2(\log_2(n))]$, let $N_j = 2^{j-2}n/\log_2(n)$, let $\mathcal{C}_j$ be the corresponding batch of sample indices, and let $w_j$ be the output of Algorithm 3 run with input data $X_j = (x_s)_{s\in\mathcal{C}_j}$, initialized at $w_{j-1}$. Output $w_M$. Assume without loss of generality that $N_j = 2^p$ for some $p \in \mathbb{N}$. Then, with the notation of Appendix E.1, we have the following guarantees:

Theorem 33 (Precise Statement of Theorem 12). Grant Assumption 1. Let $\epsilon \le \sqrt d$ and $f(\cdot,x)$ be $\mu$-strongly convex and $L_f$-Lipschitz for all $x \in \mathcal{X}$, with $L_f \lesssim n^{p/2}\,\widetilde R_{2k,n}\big(\frac{1}{\sqrt n} + \big(\frac{\sqrt d}{\epsilon n}\big)^{\frac{k-1}{k}}\big)$. Then the algorithm above is $\frac{\epsilon^2}{2}$-zCDP and attains the excess risk derived in the proof below.

Proof. Excess risk: Note that $N_j$ samples are used in phase $j$ of the algorithm. For $j \ge 0$, let $D_j^2 = \mathbb{E}\|w_j - w^*\|^2$ and $\Delta_j = \mathbb{E}F(w_j) - F^*$. By strong convexity, we have $D_j^2 \le \frac{2\Delta_j}{\mu}$. Also, for an absolute constant $a \ge 1$, Theorem 6 yields (18), a bound on $\Delta_j$ in terms of $D_{j-1}$ and the phase-$j$ error; denote by $E_j$ the corresponding error term. Then, since $N_j = 2N_{j-1}$, we have (19), where the second inequality holds because of the corresponding monotonicity relation for any $m = 2^q$. Now, (19) implies that (18) can be re-arranged into a contraction for $\Delta_j$. Further, if $M \ge \log\log(\Delta_0/E_0)$, then the recursion yields the claimed bound for an absolute constant $A > 0$, since $\Delta_0 \le \frac{2L^2}{\mu}$ and $E_0 \ge \frac{2L^2}{\mu n}$ imply $\Delta_0/E_0 \le n$ and $\frac{1}{\log(\Delta_0/E_0)} = \frac{1}{\log(n) - 2\log(a)} \le \frac{A}{\log(n)}$ for some $A > 0$.

E.4 Asymptotic Upper Bounds Under Assumptions 2 and 3

We first recall the notion of subexponential distribution:

Definition 34 (Subexponential Distribution).
A random variable $Y$ is subexponential if there is an absolute constant $s > 0$ such that $\mathbb{P}(|Y| \ge t) \le 2\exp(-t/s)$ for all $t \ge 0$. For subexponential $Y$, we define $\|Y\|_{\psi_1} := \inf\{s > 0 : \mathbb{P}(|Y| \ge t) \le 2\exp(-t/s)\ \forall t \ge 0\}$.

Essentially all (heavy-tailed) distributions that arise in practice are subexponential [Mck19]. Now, we establish asymptotically optimal upper bounds for a broad subclass of the problem class considered in [WXDX20, KLZ22]: namely, subexponential stochastic subgradient distributions satisfying Assumption 2 or Assumption 3. In Theorem 35 below (which uses the notation of Appendix E.1), we give upper bounds under Assumption 2:

Theorem 35. Let $f(\cdot,x)$ be convex. Assume $\tilde r_{2k} < \infty$ and that $Y_i = \|\nabla f(w,x_i)\|^{2k}$ is subexponential with $E_n \ge \max_{i\in[n]}(\|Y_i\|_{\psi_1})$ for all $w \in \mathcal{W}$ and $\nabla f(w,x_i) \in \partial_w f(w,x_i)$. Assume that for sufficiently large $n$, we have $\sup_{w,x}\|\nabla f(w,x)\|^{2k} \le n^q\,r^{(2k)}$ for some $q \ge 1$, with $\max\big(\frac{E_n}{r^{(2k)}}, \beta\big)$ growing at most polynomially in $n$, where $\|\nabla f(w,x) - \nabla f(w',x)\| \le \beta\|w - w'\|$ for all $w,w' \in \mathcal{W}$, $x \in \mathcal{X}$, and subgradients $\nabla f(w,x) \in \partial_w f(w,x)$. Then, $\lim_{n\to\infty}\widetilde R_{2k,n} \le 4r_{2k}$. Further, there exists $N \in \mathbb{N}$ such that for all $n \ge N$, the output of Algorithm 3 satisfies
$$\mathbb{E}F(w_l) - F^* = O\Big(r_{2k}\,D\Big(\frac{1}{\sqrt n} + \Big(\frac{\sqrt{d\ln(n)}}{\epsilon n}\Big)^{\frac{k-1}{k}}\Big)\Big).$$
If $f(\cdot,x)$ is $\mu$-strongly convex, then the output of algorithm $\mathcal{A}$ (in Section 3.2) satisfies the analogous strongly convex bound with $\frac{r_{2k}^2}{\mu}$ in place of $r_{2k}D$ and the rates squared accordingly.

While a bound on $\sup_{w,x}\|\nabla f(w,x)\|$ is needed in Theorem 35, it can grow as fast as any polynomial in $n$ and only needs to hold for sufficiently large $n$. As $n \to \infty$, this assumption is easily satisfied. Likewise, Theorem 35 depends only logarithmically on the Lipschitz parameter of the subgradients $\beta$, so the result still holds up to constant factors if, say, $\beta \le n^p(r/D)$ as $n \to \infty$ for some $p \ge 1$. Crucially, our excess risk bounds do not depend on $L_f$ or $\beta$.

Asymptotically optimal upper bounds for Assumption 3 are an immediate consequence of Lemma 20 combined with Theorem 35. Namely, under Assumption 3, the upper bounds in Theorem 35 hold with $r$ replaced by $\sqrt d\,\gamma^{1/k}$ (by Lemma 20). These upper bounds, and the ones in Theorem 35, are tight up to logarithms for their respective problem classes, by the lower bounds in Appendix F.

Proof of Theorem 35. Step One: There exists $N \in \mathbb{N}$ such that $\tilde r^2_{2k,n} \le 16r^2_{2k}$ for all $n \ge N$. We will first use a covering argument to show that $\hat r_n(X)^{(2k)}$ is upper bounded by $2^{2k+1}r^{(2k)}$ with high probability. For any $\alpha > 0$, we may choose an $\alpha$-net with $N_\alpha \le \big(\frac{3D}{2\alpha}\big)^d$ balls centered around points in $\mathcal{W}_\alpha = \{w_1,w_2,\dots,w_{N_\alpha}\} \subset \mathcal{W}$ such that for any $w \in \mathcal{W}$ there exists $i \in [N_\alpha]$ with $\|w - w_i\| \le \alpha$ (see e.g. [KT59] for the existence of such $\mathcal{W}_\alpha$). For $w \in \mathcal{W}$, let $\tilde w$ denote the element of $\mathcal{W}_\alpha$ that is closest to $w$, so that $\|w - \tilde w\| \le \alpha$. Now, for any $X \in \mathcal{X}^n$, we have
$$\hat r_n(X)^{(2k)} \le 2^{2k-1}\Big(\max_{w\in\mathcal{W}_\alpha}\frac1n\sum_{i=1}^n\|\nabla f(w,x_i)\|^{2k} + \beta^{2k}\alpha^{2k}\Big),$$
where we used Cauchy-Schwartz and Young's inequality for the first inequality, and the assumption of $\beta$-Lipschitz subgradients plus the definition of $\mathcal{W}_\alpha$ for the second inequality. Further, by a union bound over $\mathcal{W}_\alpha$ and Bernstein's inequality (see e.g. [Ver18, Corollary 2.8.3]), the event $\max_{w\in\mathcal{W}_\alpha}\frac1n\sum_i\|\nabla f(w,x_i)\|^{2k} > 2r^{(2k)}$ has small probability. Choosing $\alpha = 2^{1/(2k)}r_{2k}/\beta$ ensures that $\mathbb{P}(2^{2k}\beta^{2k}\alpha^{2k} > 2^{2k+1}r^{(2k)}) = 0$ and hence (by union bound)
$$\mathbb{P}\big(\hat r_n(X)^{(2k)} \ge 2^{2k+1}r^{(2k)}\big) \le \frac{1}{n^q}$$
by the assumption on $n$. Next, we use this concentration inequality to derive a bound on $\tilde e_n^{(2k)}$:
$$\tilde e_n^{(2k)} = \mathbb{E}\big[\hat r_n(X)^{(2k)}\big] \le \mathbb{E}\big[\hat r_n(X)^{(2k)}\ \big|\ \hat r_n(X)^{(2k)} \ge 2^{2k+1}r^{(2k)}\big]\cdot\frac{1}{n^q} + 2^{2k+1}r^{(2k)} \le \frac{\sup_{w,x}\|\nabla f(w,x)\|^{2k}}{n^q} + 2^{2k+1}r^{(2k)} \le (1 + 2^{2k+1})\,r^{(2k)},$$
for sufficiently large $n$. Thus, $\tilde r^2_{2k,n} \le 16r^2_{2k}$ for all sufficiently large $n$. This establishes Step One.

Step Two: $\lim_{n\to\infty}\widetilde R_{2k,n} \le 4r_{2k}$.
For all n " 2 l , l, i P N, define h n piq " 2´ir r 2 2k,2´in 1 tiPrlog 2 pnqsu . Note that 0 ď h n piq ď gpiq :" 2´ir r 2 2k for all n, i, and that ř 8 i"1 gpiq " r r 2 2k ă 8 (i.e. g is integrable with respect to the counting measure). Furthermore, the limit lim nÑ8 h n piq " 2´i lim nÑ8 r r 2 2k,2´in exists since Lemma 25 implies that the sequence tr r 2 2k,2´in u 8 n"1 is monotonic and bounded for every i P N. Thus, by Lebesgue's dominated convergence theorem, we have lim nÑ8 r R 2 2k,n " lim where the last inequality follows from Step One. Therefore, lim nÑ8 r R 2k,n ď 4r 2k . By Theorem 6 and Theorem 12, this also implies the last two claims in Theorem 35. F Lower Bounds (Section 3.3) In this section, we prove the lower bounds stated in Section 3.3, and also provide tight lower bounds under Assumptions 2 and 3. Theorem 36 (Precise Statement of Theorem 13). Let k ě 2, D, γ, r pkq , r r pkq ą 0, β f ě 0, d ě 40, n ą 7202, and ρ ď d. Then, for any ρ-zCDP algorithm A, there exist W, X Ă R d such that }w´w 1 } ď 2D for all w, w 1 P W, a β f -smooth, linear, convex (in w) loss f : WˆX Ñ R, and distributions D and D 1 on X such that: 1. Assumption 1 holds and if X 1 " D 1 n , then EF pApX 1 qq´F˚" Ω¨r r k D¨1 ? n`m in 2. Assumption 2 holds and if X 1 " D 1 n , then EF pApX 1 qq´F˚" Ω¨r k D¨1 ? n`m in 3. Assumption 3 holds and if X " D n , then Proof. We will prove part 3 first. 3. We begin by proving the result for γ " D " 1. In this case, it is proved in [KLZ22] that EF pApXqq´F˚" Ω¨?d min -‚ for f pw, xq "´xw, xy with W " B d 2 p0, 1q and X " t˘1u d , and a distribution satisfying Assumption 3 with γ " 1. Then f p¨, xq is linear, convex, and β-smooth for all β ě 0. We prove the first (non-private) term in the lower bound. By the Gilbert-Varshamov bound (see e.g. [ASZ21, Lemma 6]) and the assumption d ě 40, there exists a set V Ď t˘1u d with |V| ě 2 d{20 , d Ham pν, ν 1 q ě d 8 for all ν, ν 1 P V, ν ‰ ν 1 . For ν P V, define the product distribution Q ν " pQ ν1 ,¨¨¨Q ν d q, where for all j P rds, for δ νj P p0, 1q to be chosen later. Then EQ νj :" µ νj " δ νj and for any w P W, x " Q ν , we have E|x∇f pw, xq´∇F pwq, e j y| k " E|x´x`Ex, e j y| k (23) for δ νj P p0, 1q. Now, let p :" a d{n and δ νj :" pνj ? Thus, E r F pAp r Xqq´r F˚" γ 1{k DrEF pApXqq´F˚s, so applying the lower bound for the case D " γ " 1 (i.e. for the unscaled F ) yields the desired lower bound via r F . 1. We will use nearly the same unscaled hard instances used to prove the private and non-private terms of the lower bound in part 3, but the scaling will differ. Starting with the non-private term, we scale the distribution , Dq. Let f p r w, r xq :"´x r w, r xy, which satisfies all the hypotheses of the theorem. Also, Now r w˚" Dw˚as before and letting r F p¨q :" E r x" r Qν f p¨, r xq, we have for any r w " Dw. Thus, applying the unscaled non-private lower bound established above yields a lower bound of Ω´r rD ? n¯o n the non-private excess risk of our scaled instance. Next, we turn to the scaled private lower bound. The unscaled hard distribution Q 1 ν given by -‚ , for any k and any ρ-zCDP algorithm A : X n Ñ W if X " D n . 14 So, it remains to a) prove the first term (d{n) in the lower bound, and then b) show that the scaled instance satisfies the exact hypotheses in the theorem and has excess loss that scales by a factor of γ 2{k {µ. We start with task a). 
Observe that for $f$ defined above and any distribution $\mathcal{D}$ such that $\mathbb{E}\mathcal{D} \in \mathcal{W}$, we have
$$\mathbb{E}F(\mathcal{A}(X)) - F^* = \tfrac12\,\mathbb{E}\|\mathcal{A}(X) - \mathbb{E}\mathcal{D}\|^2 \quad (38)$$
(see [KLZ22, Lemma 6.2]), and $\mathbb{E}|\langle\nabla f(w,x) - \nabla F(w), e_j\rangle|^k = \mathbb{E}|\langle x - \mathbb{E}x, e_j\rangle|^k$. Thus, it suffices to prove that $\mathbb{E}\|\mathcal{A}(X) - \mathbb{E}\mathcal{D}\|^2 \gtrsim \frac dn$ for some $\mathcal{D}$ such that $\mathbb{E}|\langle x - \mathbb{E}x, e_j\rangle|^k \le 1$. This is a known result for products of Bernoulli distributions; nevertheless, we provide a detailed proof below. First consider the case $d = 1$. Then the proof follows along the lines of [Duc21, Example 7.7]. Define the following pair of distributions on $\{\pm1\}$: $P_\nu(x = 1) = \frac{1+\delta\nu}{2}$ and $P_\nu(x = -1) = \frac{1-\delta\nu}{2}$ for $\nu \in \{0,1\}$, with $\delta \in (0,1)$ to be chosen later. Notice that if $X$ is a random variable with distribution $P_\nu$ ($\nu \in \{0,1\}$), then $\mathbb{E}|X - \mu|^k \le \mathbb{E}|X|^k \le 1$. Also, $\mathbb{E}P_\nu = \delta\nu$ for $\nu \in \{0,1\}$ and $|\mathbb{E}P_1 - \mathbb{E}P_0| = \delta$ (i.e. the two distributions are $\delta$-separated with respect to the metric $\rho(a,b) = |a - b|$). Then by LeCam's method (see e.g. [Duc21, Eq. 7.33] and take $\Phi(\cdot) = (\cdot)^2$),
$$\max_{\nu\in\{0,1\}}\mathbb{E}\,|\mathcal{A}(X) - \mathbb{E}P_\nu|^2 \gtrsim \delta^2\big(1 - \|P_0^n - P_1^n\|_{TV}\big).$$
Now, by Pinsker's inequality and the chain rule for KL-divergence, we have $\|P_0^n - P_1^n\|_{TV}^2 \le \frac n2\,\mathrm{KL}(P_0\|P_1) \le n\delta^2$. Choosing $\delta = \frac{1}{\sqrt{2n}} < \frac{1}{\sqrt2}$ implies $\|P_0^n - P_1^n\|_{TV}^2 \le n\delta^2 = \frac12$. Hence there exists a distribution $\bar{\mathcal{D}} \in \{P_0,P_1\}$ on $\mathbb{R}$ such that $\mathbb{E}|\mathcal{A}(X) - \mathbb{E}\bar{\mathcal{D}}|^2 \gtrsim \frac1n$. For general $d \ge 1$, we take the product distribution $\mathcal{D} := \bar{\mathcal{D}}^d$ on $\mathcal{X} = \{\pm1\}^d$ and choose $\mathcal{W} = \mathcal{B}_2^d(0,\sqrt d)$ to ensure $\mathbb{E}\mathcal{D} \in \mathcal{W}$ (so that (38) holds). Clearly, $\mathbb{E}|\langle\mathcal{D} - \mathbb{E}\mathcal{D}, e_j\rangle|^k \le 1$ for all $j \in [d]$. Further, the mean squared error of any algorithm for estimating the mean of $\mathcal{D}$ is $\gtrsim \frac dn$ by applying the $d = 1$ result to each coordinate.

Next, we move to task b). For this, we re-scale each of our hard distributions (non-private given above, and private given in the proof of [KLZ22, Lemma 6.3] and below in our proof of part 2 of the theorem; see (43)): $\mathcal{D} \to \frac{\gamma^{1/k}}{\mu}\mathcal{D} = \widetilde{\mathcal{D}}$, $\mathcal{X} \to \frac{\gamma^{1/k}}{\mu}\mathcal{X} = \widetilde{\mathcal{X}}$, $\mathcal{W} \to \frac{\gamma^{1/k}}{\mu}\mathcal{W} = \widetilde{\mathcal{W}}$, and $f : \mathcal{W}\times\mathcal{X} \to \mathbb{R}$ to $\mu f = \tilde f : \widetilde{\mathcal{W}}\times\widetilde{\mathcal{X}} \to \mathbb{R}$. Then $\tilde f(\cdot,\tilde x)$ is $\mu$-strongly convex and $\mu$-smooth for all $\tilde x \in \widetilde{\mathcal{X}}$ and
$$\mathbb{E}|\langle\nabla\tilde f(\tilde w,\tilde x) - \nabla\widetilde F(\tilde w), e_j\rangle|^k = \mu^k\,\mathbb{E}|\langle\tilde x - \mathbb{E}\tilde x, e_j\rangle|^k = \mu^k\,\mathbb{E}\Big|\frac{\gamma^{1/k}}{\mu}\langle x - \mathbb{E}x, e_j\rangle\Big|^k = \gamma\,\mathbb{E}|\langle x - \mathbb{E}x, e_j\rangle|^k \le \gamma$$
for any $j \in [d]$, $x \sim \mathcal{D}$, $\tilde x \sim \widetilde{\mathcal{D}}$, $\tilde w \in \widetilde{\mathcal{W}}$. Thus, the scaled hard instance is in the required class of functions/distributions. Further, denote $\widetilde F(w) = \mathbb{E}\tilde f(w,x)$ and $\tilde w^* := \operatorname{argmin}_{\tilde w\in\widetilde{\mathcal{W}}}\widetilde F(\tilde w) = \mathbb{E}\widetilde{\mathcal{D}} = \frac{\gamma^{1/k}}{\mu}\mathbb{E}\mathcal{D}$. Then, for any $w \in \mathcal{W}$ and $\tilde w := \frac{\gamma^{1/k}}{\mu}w$, the scaled excess risk equals $\frac{\gamma^{2/k}}{\mu}$ times the unscaled one. In particular, for $w := \mathcal{A}(X)$ and $\tilde w := \frac{\gamma^{1/k}}{\mu}\mathcal{A}(X)$, this holds for any algorithm $\mathcal{A} : \mathcal{X}^n \to \mathcal{W}$. Writing $\widetilde{\mathcal{A}}(X) := \frac{\gamma^{1/k}}{\mu}\mathcal{A}(X)$ and $\widetilde X := \frac{\gamma^{1/k}}{\mu}X$ for $X \in \mathcal{X}^n$, we conclude that the scaled lower bound holds for any $\widetilde{\mathcal{A}} : \widetilde{\mathcal{X}}^n \to \widetilde{\mathcal{W}}$. Therefore, an application of the unscaled lower bound
$$\mathbb{E}_{\mathcal{A},\,X\sim\mathcal{D}^n}[F(\mathcal{A}(X)) - F^*] = \Omega\Big(\frac dn + d\,\min\Big(1,\Big(\frac{\sqrt d}{n\sqrt\rho}\Big)^{\frac{2k-2}{k}}\Big)\Big),$$
which follows by combining part 3a) above with [KLZ22, Lemma 6.3], completes the proof of part 3.

1. We begin by proving the first (non-private) term in the lower bound: For our unscaled hard instance, we will take the same distribution $\mathcal{D} = P_\nu^d$ (for some $\nu \in \{0,1\}$) on $\mathcal{X} = \{\pm1\}^d$ and quadratic $f$ described above in part 1a, with $\mathcal{W} := \mathcal{B}_2^d(0,\sqrt d)$. The choice of $\mathcal{W}$ ensures $\mathbb{E}\mathcal{D} \in \mathcal{W}$ so that (38) holds. Further, after scaling by $\frac{\tilde r_k}{3\mu\sqrt d}$, $\tilde f(\cdot,\tilde x)$ is $\mu$-strongly convex and $\mu$-smooth. Moreover, if $\widetilde{\mathcal{A}} : \widetilde{\mathcal{X}}^n \to \widetilde{\mathcal{W}}$ is any algorithm and $\widetilde X \sim \widetilde{\mathcal{D}}^n$, then the non-private term follows by (39) and (38). Next, we prove the second (private) term in the lower bound. Let $f$ be as defined above. For our unscaled hard distribution, we follow [BD14, KLZ22] and define a family of distributions $\{Q_\nu\}_{\nu\in\mathcal{V}}$ on $\mathbb{R}^d$, where $\mathcal{V} \subset \{\pm1\}^d$ will be defined later. For any given $\nu \in \mathcal{V}$, we define the distribution $Q_\nu$ coordinate-wise as in (43), where $p := \min\big(1,\frac{\sqrt d}{n\sqrt\rho}\big)$.
Now, we select a set $\mathcal{V} \subset \{\pm1\}^d$ such that $|\mathcal{V}| \ge 2^{d/20}$ and $d_{\mathrm{Ham}}(\nu,\nu') \ge \frac d8$ for all $\nu,\nu' \in \mathcal{V}$, $\nu \ne \nu'$: such $\mathcal{V}$ exists by the standard Gilbert-Varshamov bound (see e.g. [ASZ21, Lemma 6]). For any $\nu \in \mathcal{V}$, if $x \sim Q_\nu$ and $w \in \mathcal{W} := \mathcal{B}_2^d(0,R)$ for the appropriate radius $R$, the required moment bounds hold. Note also that our choice of $\mathcal{W}$ and $p \le 1$ ensures that $\mathbb{E}[Q_\nu] \in \mathcal{W}$. Moreover, as in the proof of [KLZ22, Lemma 6.3], the zCDP Fano's inequality (see [KLZ22, Theorem 1.4]) implies an estimation lower bound for any $\rho$-zCDP algorithm $\mathcal{A}$. Thus, the private term of the lower bound holds for some $\nu \in \mathcal{V}$, by (38). Now we scale our hard instance: then $\tilde f(\cdot,\tilde x)$ is $\mu$-strongly convex and $\mu$-smooth, and the moment conditions are preserved. Moreover, if $\widetilde{\mathcal{A}} : \widetilde{\mathcal{X}}^n \to \widetilde{\mathcal{W}}$ is any $\rho$-zCDP algorithm (obtained by scaling $\mathcal{A}$ by $\frac{\tilde r_k}{2\mu\sqrt d}$) and $\widetilde X \sim \widetilde{\mathcal{D}}^n$, then the scaled private lower bound follows by (44).

2. We use an identical construction to that used above in part 1, except that the scaling factor $\tilde r_k$ gets replaced by $r_k$. It is easy to see that $\mathbb{E}\big[\sup_{w\in\mathcal{W}}\|\nabla f(w,x)\|^k\big] \approx \sup_{w\in\mathcal{W}}\mathbb{E}\big[\|\nabla f(w,x)\|^k\big]$ for our construction, and the lower bound in part 2 follows just as it did in part 1. This completes the proof.

Remark 39. Note that the lower bound proofs construct bounded (hence subexponential) distributions and uniformly $L_f$-Lipschitz, $\beta_f$-smooth losses that easily satisfy the conditions in our upper bound theorems.

G.1 Noisy Clipped Accelerated SGD for Smooth Convex Losses (Section 4.1)

We first present Algorithm 5, which is a generalized version of Algorithm 4 that allows for any MeanOracle; this will be useful for our analysis. At each iteration $t$, Algorithm 5 draws a new batch $\mathcal{B}_t$ (without replacement) of $n/T$ samples from $X$ and queries the MeanOracle on the resulting gradients. Proposition 40 provides excess risk guarantees for Algorithm 5 in terms of the bias and variance of the MeanOracle.

Proposition 40. Consider Algorithm 5 run with a MeanOracle satisfying $\widetilde\nabla F_t(w_t^{md}) = \nabla F(w_t^{md}) + b_t + N_t$, where $\|b_t\| \le B$ (with probability 1), $\mathbb{E}N_t = 0$, $\mathbb{E}\|N_t\|^2 \le \Sigma^2$ for all $t \in [T-1]$, and $\{N_t\}_{t=1}^T$ are independent.

Proof. We begin by extending [GL12, Proposition 4] to biased/noisy stochastic gradients. Fix any $w_{t-1}, w_{t-1}^{ag} \in \mathcal{W}$. By [GL12, Lemma 3], we have
$$F(w_t^{ag}) \le (1 - \alpha_t)F(w_{t-1}^{ag}) + \alpha_t\big[F(z) + \langle\nabla F(z), w_t - z\rangle\big] + \frac\beta2\|w_t^{ag} - z\|^2,$$
for any $z \in \mathcal{W}$. Denote by $\Upsilon_t(w)$ the error term collecting the contributions of $N_t + b_t$, and let $d_t := w_t^{ag} - w_t^{md}$, as given by the expression for $d_t$. Now we apply [GL12, Lemma 2] to the resulting linearization, take expectations, and use the independence and zero mean of $N_t$ together with $\|b_t\| \le B$ and $\mathbb{E}\|N_t\|^2 \le \Sigma^2$ to conclude. Let $N_t := \widetilde\nabla F_t(w_t) - \mathbb{E}\widetilde\nabla F_t(w_t)$. Then we have
$$B := \sup_{t\in[T]}\|b_t\| \le \frac{r^{(k)}}{(k-1)C^{k-1}}, \qquad \Sigma^2 := \sup_{t\in[T]}\mathbb{E}\|N_t\|^2 \le d\sigma^2 + \frac{r^2T}{n} \lesssim \frac{dC^2T^2}{\epsilon^2n^2} + \frac{r^2T}{n},$$
by Lemma 5. Plugging these estimates for $B$ and $\Sigma^2$ into Proposition 40 and setting $C = r\big(\frac{\epsilon n}{\sqrt{dT}}\big)^{1/k}$, we get the excess risk bound for $\mathbb{E}F(w_T^{ag}) - F^*$. Now, our choice of $T$ implies that $\frac{\beta D^2}{T^2} \lesssim \frac{rD}{\sqrt n}$.

G.2 Noisy Clipped SGD for Strongly Convex Losses (Section 4.2)

We begin by presenting the pseudocode for our noisy clipped SGD in Algorithm 6.

Algorithm 6 Noisy Clipped SGD for Heavy-Tailed DP SCO
1: Input: Data $X \in \mathcal{X}^n$, $T \le n$, stepsizes $\{\eta_t\}_{t=0}^T$, averaging weights $\{\zeta_t\}_{t=0}^T$, $w_0 \in \mathcal{W}$.
2: for $t \in \{0,1,\dots,T\}$ do
3:   Draw new batch $\mathcal{B}_t$ (without replacement) of $n/T$ samples from $X$.
4:   $\widetilde\nabla F_t(w_t) := \text{MeanOracle1}(\{\nabla f(w_t,x)\}_{x\in\mathcal{B}_t};\ \frac nT;\ \frac{\epsilon^2}{2})$
5:   $w_{t+1} := \Pi_{\mathcal{W}}\big[w_t - \eta_t\widetilde\nabla F_t(w_t)\big]$

Algorithm 7 is a generalized version of Algorithm 6 that allows for any MeanOracle and will be useful in our analysis. In Proposition 42, we provide the convergence guarantees for Algorithm 7 in terms of the bias and variance of the MeanOracle.

Algorithm 7 Generic Noisy Clipped SGD Framework for Heavy-Tailed SCO
1: Input: Data $X \in \mathcal{X}^n$, $T \le n$, MeanOracle, stepsizes $\{\eta_t\}_{t=0}^T$, averaging weights $\{\zeta_t\}_{t=0}^T$.
2: Initialize $w_0 \in \mathcal{W}$.
3: for $t \in \{0,1,\dots,T\}$ do
4:   Draw new batch $\mathcal{B}_t$ (without replacement) of $n/T$ samples from $X$.
5:   $\widetilde\nabla F_t(w_t) := \text{MeanOracle}(\{\nabla f(w_t,x)\}_{x\in\mathcal{B}_t})$
6:   $w_{t+1} := \Pi_{\mathcal{W}}\big[w_t - \eta_t\widetilde\nabla F_t(w_t)\big]$
7: end for
8: Output: $\hat w_T := \frac{\sum_{t=0}^T\zeta_t w_t}{\sum_{t=0}^T\zeta_t}$.

Proposition 42. Let $F : \mathcal{W} \to \mathbb{R}$ be $\mu$-strongly convex and $\beta$-smooth with condition number $\kappa := \frac\beta\mu$. Let $w_{t+1} := \Pi_{\mathcal{W}}[w_t - \eta_t\widetilde\nabla F_t(w_t)]$, where $\widetilde\nabla F_t(w_t) = \nabla F(w_t) + b_t + N_t$, such that the bias and noise (which can depend on $w_t$ and the samples drawn) satisfy $\|b_t\| \le B$ (with probability 1), $\mathbb{E}N_t = 0$, $\mathbb{E}\|N_t\|^2 \le \Sigma^2$ for all $t \in [T-1]$, and such that $\{N_t\}_{t=1}^T$ are independent. Then, there exist stepsizes $\{\eta_t\}_{t=1}^T$ and weights $\{\zeta_t\}_{t=0}^T$ such that the average iterate $\hat w_T := \frac{\sum_t\zeta_tw_t}{\sum_t\zeta_t}$ satisfies
$$\mathbb{E}F(\hat w_T) - F^* \lesssim \beta D^2\exp\Big(-\frac{T}{4\kappa}\Big) + \frac{\Sigma^2}{\mu T} + \frac{B^2}{\mu}.$$

Proof. Define $g(w_t) = -\frac{1}{\eta_t}(w_{t+1} - w_t)$, so that $w_{t+1} = w_t - \eta_t g(w_t)$. Now, conditional on all randomness, we use smoothness and strong convexity to write:
$$F(w_{t+1}) - F(w^*) = F(w_{t+1}) - F(w_t) + F(w_t) - F(w^*) \le \langle\nabla F(w_t), w_{t+1} - w_t\rangle + \frac\beta2\|w_{t+1} - w_t\|^2 + \langle\nabla F(w_t), w_t - w^*\rangle - \frac\mu2\|w_t - w^*\|^2,$$
where we used the fact that $\langle\Pi_{\mathcal{W}}(y) - x, \Pi_{\mathcal{W}}(y) - y\rangle \le 0$ for all $x \in \mathcal{W}$, $y \in \mathbb{R}^d$ (c.f. [Bub15, Lemma 3.1]) to obtain the last inequality. Thus, $2\eta_t\,\mathbb{E}\langle g(w_t), w_t - w^*\rangle \le -2\eta_t\,\mathbb{E}[F(w_{t+1}) - F^*]$ plus lower-order terms. Combining the above inequality with (58), we get a recursion for $\mathbb{E}\|w_{t+1} - w^*\|^2$. Next, consider
$$|\mathbb{E}\langle b_t + N_t, w_{t+1} - w^*\rangle| \le |\mathbb{E}\langle b_t + N_t, w_{t+1} - w_t\rangle| + |\mathbb{E}\langle b_t + N_t, w_t - w^*\rangle| = |\mathbb{E}\langle b_t + N_t, w_{t+1} - w_t\rangle| + |\mathbb{E}\langle b_t, w_t - w^*\rangle| \le |\mathbb{E}\langle b_t + N_t, w_{t+1} - w_t\rangle| + \frac{B^2}{\mu} + \frac\mu4\,\mathbb{E}\|w_t - w^*\|^2,$$
by independence of $N_t$ (which has zero mean) and $w_t - w^*$, and Young's inequality. Next, note that $v := w_t - \eta_t(\nabla F(w_t) + b_t)$ is independent of $N_t$, so $\mathbb{E}\langle N_t, \Pi_{\mathcal{W}}(v)\rangle = 0$. Thus, $|\mathbb{E}\langle N_t, w_{t+1} - w_t\rangle|$ is controlled by Cauchy-Schwartz and non-expansiveness of projection. Further, $|\mathbb{E}\langle b_t, w_{t+1} - w_t\rangle| = |\mathbb{E}\langle b_t, -\eta_tg(w_t)\rangle| \le \frac{\eta_tB^2}{2} + \frac{\eta_t}{2}\mathbb{E}\|g(w_t)\|^2$, by Young's inequality. Combining these pieces and choosing the stepsizes and weights appropriately yields the claim. Therefore, with
$$B := \sup_{t\in[T]}\|b_t\| \le \frac{r^{(k)}}{(k-1)C^{k-1}}, \qquad \Sigma^2 := \sup_{t\in[T]}\mathbb{E}\|N_t\|^2 \le d\sigma^2 + \frac{r^2T}{n} \lesssim \frac{dC^2T^2}{\epsilon^2n^2} + \frac{r^2T}{n},$$
by Lemma 5, plugging these bias and variance estimates into Proposition 42 gives
$$\mathbb{E}F(\hat w_T) - F^* \lesssim \beta D^2\exp\Big(-\frac{T}{4\kappa}\Big) + \frac{1}{\mu T}\Big(\frac{dC^2T^2}{\epsilon^2n^2} + \frac{r^2T}{n}\Big) + \frac{(r^{(k)})^2}{C^{2k-2}\mu}.$$

H Details and Proofs for Section 5: Non-Convex Proximal PL Losses

Proposition 45. Under the conditions below, the iterates of Algorithm 8 with biased, noisy gradients (bias bound $B$, variance bound $\Sigma^2$) satisfy $\mathbb{E}[F(w_{r+1}) - F^*] \le \big(1 - \frac{\mu}{2\beta}\big)\,\mathbb{E}[F(w_r) - F^*] + \frac{2(\Sigma^2 + B^2)}{\beta}$.

Proof. Our proof extends the ideas in [LGR22] to generic biased and noisy gradients without using Lipschitz continuity of $f$. By $\beta$-smoothness, for any $r \in [T-1]$, we have
$$\mathbb{E}F(w_{r+1}) = \mathbb{E}[F^0(w_{r+1}) + f^1(w_r) + f^1(w_{r+1}) - f^1(w_r)] \le \mathbb{E}F(w_r) + \mathbb{E}\Big[\langle\nabla F^0(w_r), w_{r+1} - w_r\rangle + \beta\|w_{r+1} - w_r\|^2 + f^1(w_{r+1}) - f^1(w_r) + \langle b_r + N_r, w_{r+1} - w_r\rangle\Big] + \dots,$$
where we used Young's inequality to bound $\mathbb{E}\langle b_r + N_r, w_{r+1} - w_r\rangle$. Next, we will bound the quantity $\mathbb{E}\big[\langle\nabla F^0(w_r), w_{r+1} - w_r\rangle + \beta\|w_{r+1} - w_r\|^2 + f^1(w_{r+1}) - f^1(w_r) + \langle b_r + N_r, w_{r+1} - w_r\rangle\big]$. Denote
$$H_r^{priv}(y) := \langle\nabla F^0(w_r), y - w_r\rangle + \beta\|y - w_r\|^2 + f^1(y) - f^1(w_r) + \langle b_r + N_r, y - w_r\rangle$$
and $H_r(y) := \langle\nabla F^0(w_r), y - w_r\rangle + \beta\|y - w_r\|^2 + f^1(y) - f^1(w_r)$. Note that $H_r$ and $H_r^{priv}$ are $2\beta$-strongly convex. Denote the minimizers of these two functions by $y^*$ and $y^{priv}$ respectively. Now, conditional on $w_r$ and $N_r + b_r$, we claim that
$$H_r(y^{priv}) - H_r(y^*) \le \frac{\|N_r + b_r\|^2}{2\beta}. \quad (63)$$
To prove (63), we will need the following lemma:

Lemma 46 [LR21a, Lemma B.2]. Let $H(y), h(y)$ be convex functions on some convex closed set $\mathcal{Y} \subseteq \mathbb{R}^d$ and suppose that $H$ is $2\beta$-strongly convex. Assume further that $h$ is $L_h$-Lipschitz. Define $y_1 = \operatorname{argmin}_{y\in\mathcal{Y}}H(y)$ and $y_2 = \operatorname{argmin}_{y\in\mathcal{Y}}[H(y) + h(y)]$. Then $\|y_1 - y_2\| \le \frac{L_h}{2\beta}$.

On the other hand, $H_r^{priv}(y^{priv}) = H_r(y^{priv}) + \langle N_r + b_r, y^{priv}\rangle \le H_r^{priv}(y^*) = H_r(y^*) + \langle N_r + b_r, y^*\rangle$.
Combining these two inequalities yields
$$H_r(y^{priv}) - H_r(y^*) \le \langle N_r + b_r, y^* - y^{priv}\rangle \le \|N_r + b_r\|\,\|y^* - y^{priv}\| \le \frac{\|N_r + b_r\|^2}{2\beta},$$
as claimed. Also, note that $w_{r+1} = y^{priv}$. Hence $\mathbb{E}\big[\langle\nabla F^0(w_r), w_{r+1} - w_r\rangle + \beta\|w_{r+1} - w_r\|^2 + f^1(w_{r+1}) - f^1(w_r) + \langle b_r + N_r, w_{r+1} - w_r\rangle\big] \le -\frac{\mu}{2\beta}\,\mathbb{E}[F(w_r) - F^*]$ plus the error terms, where we used the assumptions that $F$ is $\mu$-PPL and $F^0$ is $\beta$-smooth in the last inequality. Plugging the above bounds back into (61), we obtain $\mathbb{E}F(w_{r+1}) \le \mathbb{E}F(w_r) - \frac{\mu}{2\beta}\,\mathbb{E}[F(w_r) - F^*] + \frac{2(\Sigma^2 + B^2)}{\beta}$, whence
$$\mathbb{E}[F(w_{r+1}) - F^*] \le \mathbb{E}[F(w_r) - F^*]\Big(1 - \frac{\mu}{2\beta}\Big) + \frac{2(\Sigma^2 + B^2)}{\beta}.$$

Theorem 47 (Precise statement of Theorem 19). Grant Assumption 2. Let $\epsilon > 0$ and assume $F(w) = F^0(w) + f^1(w)$ is $\mu$-PPL for $\beta$-smooth $F^0$, with $\kappa = \frac\beta\mu \le n/\ln(n)$. Then, there are parameters such that Algorithm 8 is $\frac{\epsilon^2}{2}$-zCDP and attains the excess risk bound derived below.

Proof. We choose $\sigma^2 = \frac{4C^2T}{\epsilon^2n^2}$.

Privacy: By parallel composition (since each sample is used only once) and the post-processing property of DP (since the iterates are deterministic functions of the output of MeanOracle1), it suffices to show that $\widetilde\nabla F_t(w_t)$ is $\frac{\epsilon^2}{2}$-zCDP for all $t \ge 0$. By our choice of $\sigma^2$ and Proposition 22, $\widetilde\nabla F_t(w_t)$ is $\frac{\epsilon^2}{2}$-zCDP, since its sensitivity is bounded by
$$\sup_{X\sim X',\,w}\frac Tn\Big\|\sum_{x\in\mathcal{B}_t}\Pi_C[\nabla f^0(w,x)] - \sum_{x'\in\mathcal{B}'_t}\Pi_C[\nabla f^0(w,x')]\Big\| \le \frac Tn\sup_{x,x',w}\big\|\Pi_C[\nabla f^0(w,x)] - \Pi_C[\nabla f^0(w,x')]\big\| \le \frac{2CT}{n}.$$

Excess risk: For any iteration $t \in [T]$, denote the bias of MeanOracle1 (Algorithm 1) by $b_t := \mathbb{E}\widetilde\nabla F_t(w_t) - \nabla F(w_t)$, where $\widetilde\nabla F_t(w_t) = \tilde\nu$ in the notation of Algorithm 1. Also let $\hat\nabla F_t(w_t) := \bar\nu$ (in the notation of Lemma 5) and denote the noise by $N_t = \widetilde\nabla F_t(w_t) - \nabla F(w_t) - b_t = \widetilde\nabla F_t(w_t) - \mathbb{E}\widetilde\nabla F_t(w_t)$. Then we have
$$B := \sup_{t\in[T]}\|b_t\| \le \frac{r^{(k)}}{(k-1)C^{k-1}}, \qquad \Sigma^2 := \sup_{t\in[T]}\mathbb{E}\|N_t\|^2 \le d\sigma^2 + \frac{r^2T}{n} \le \frac{4dC^2T^2}{\epsilon^2n^2} + \frac{r^2T}{n},$$
by Lemma 5. Plugging these bounds on $B^2$ and $\Sigma^2$ into Proposition 45, and choosing $T = 2\big\lceil\kappa\ln\big(\frac{\Delta\mu}{B^2+\Sigma^2}\big)\big\rceil \lesssim \kappa\ln(n)$ where $\Delta \ge F(w_0) - F^*$, we have:
$$\mathbb{E}F(w_T) - F^* \le \frac{5(B^2 + \Sigma^2)}{\mu} \le \frac5\mu\Big(\frac{2r^2T}{n} + \frac{2(r^{(k)})^2}{(k-1)^2C^{2k-2}} + \frac{2dC^2T^2}{\epsilon^2n^2}\Big),$$
for any $C > 0$. Choosing $C = r\big(\frac{\epsilon^2n^2}{dT^2}\big)^{1/2k}$ makes the last two terms in the above display equal, and we get
$$\mathbb{E}F(w_T) - F^* \lesssim \frac{r^2}{\mu}\Bigg(\Big(\frac{\sqrt d\,\kappa\ln(n)}{\epsilon n}\Big)^{\frac{2k-2}{k}} + \frac{\kappa\ln(n)}{n}\Bigg),$$
as desired.

I Shuffle Differentially Private Algorithms

In the next two subsections, we present two SDP algorithms for DP heavy-tailed mean estimation. The first is an SDP version of Algorithm 1 and the second is an SDP version of the coordinate-wise protocol of [KSU20, KLZ22]. Both of our algorithms offer the same utility guarantees as their zCDP counterparts (up to logarithms). In particular, this implies that the upper bounds obtained in the main body of this paper can also be attained via SDP protocols that do not require individuals to trust any third party curator with their sensitive data (assuming the existence of a secure shuffler).

I.1 $\ell_2$ Clip Shuffle Private Mean Estimator

For heavy-tailed SO problems satisfying Assumption 2, we propose using the SDP mean estimation protocol described in Algorithm 10. Algorithm 10 relies on the shuffle private vector summation protocol of [CJMP21], which is given in Algorithm 11. The useful properties of Algorithm 11 are contained in Lemma 48.

Algorithm 11 $P_{vec}$, a shuffle private protocol for vector summation
1: Input: database of $d$-dimensional vectors $X = (x_1,\dots,x_s)$ with maximum norm bounded by $C > 0$; privacy parameters $(\epsilon,\delta)$.
   Output labeled messages $\{(j, m_j)\}_{j\in[d]}$
8: end procedure
9: procedure: Analyzer $\mathcal{A}_{vec}(y)$
10: for $j \in [d]$ do
11:   Run analyzer on coordinate $j$'s messages: $z_j \leftarrow \mathcal{A}_{1D}(y_j)$
12:   Re-center: $o_j \leftarrow z_j - L$
13: end for
14: Output the vector of estimates $o = (o_1,\dots,o_d)$
15: end procedure

Lemma 48 [CJMP21, Theorem 3.2]. Let $\epsilon \le 15$, $\delta \in (0,1/2)$, $d,s \in \mathbb{N}$ and $C > 0$. There are choices of parameters $b, g, p$ for $P_{1D}$ such that for an input data set $X = (x_1,\dots,x_s)$ of vectors with maximum norm $\|x_i\| \le C$, the following holds: 1) Algorithm 11 is $(\epsilon,\delta)$-SDP; 2) the output is an unbiased estimate of $\sum_{i=1}^s x_i$ with $\mathbb{E}\|\mathcal{A}_{vec}(y) - \sum_{i=1}^s x_i\|^2 = O\big(\frac{dC^2\ln^2(d/\delta)}{\epsilon^2}\big)$.

By the post-processing property of DP, we immediately obtain:

Lemma 49 (Privacy, Bias, and Variance of Algorithm 10). Let $\{x_i\}_{i=1}^s \sim \mathcal{D}^s$ have mean $\mathbb{E}x_i = \nu$ and $\mathbb{E}\|x_i\|^k \le r^{(k)}$ for some $k \ge 2$. Denote the noiseless average of clipped samples in Algorithm 10 by $\hat\nu := \frac1s\sum_{i=1}^s\Pi_C(x_i)$. Then, there exist algorithmic parameters such that Algorithm 10 is $(\epsilon,\delta)$-SDP and such that the following bias and variance bounds hold:
$$\|\mathbb{E}\tilde\nu - \nu\| = \|\mathbb{E}\hat\nu - \nu\| \le \mathbb{E}\|\hat\nu - \nu\| \le \frac{r^{(k)}}{(k-1)C^{k-1}}, \qquad \mathbb{E}\|\tilde\nu - \mathbb{E}\tilde\nu\|^2 = \mathbb{E}\|\tilde\nu - \mathbb{E}\hat\nu\|^2 = O\Big(\frac{dC^2\ln^2(d/\delta)}{\epsilon^2s^2} + \frac{r^2}{s}\Big).$$

Proof. Privacy: The privacy claim is immediate from Lemma 48 and the post-processing property of DP [DR14, Proposition 2.1]. Bias: The bias bound follows as in Lemma 5, since $P_{vec}$ is an unbiased estimator (by Lemma 48).

Remark 50. Comparing Lemma 49 to Lemma 5, we see that the bias and variance of the two MeanOracles are the same up to logarithmic factors. Therefore, replacing Algorithm 1 by Algorithm 10 in our stochastic optimization algorithms yields SDP algorithms with excess risk that matches the bounds provided in this paper (via Algorithm 1) up to logarithmic factors.

I.2 Coordinate-wise Shuffle Private Mean Estimation Oracle

For SO problems satisfying Assumption 3, we propose Algorithm 13 as a shuffle private mean estimation oracle. Algorithm 13 is a shuffle private variation of Algorithm 12, which was employed by [KSU20, KLZ22].

Algorithm 12 Coordinate-wise Private MeanOracle2$(\{x_i\}_{i=1}^s;\ s;\ \tau;\ \frac{\epsilon^2}{2};\ m)$ [KSU20, KLZ22]
1: Input: $X = \{x_i\}_{i=1}^s$, $x_i = (x_{i,1},\dots,x_{i,d}) \in \mathbb{R}^d$, $\epsilon > 0$, $\tau > 0$, $m \in [s]$ such that $m$ divides $s$.
2: for $j \in [d]$ do
3:   Partition $j$-th coordinates of data into $m$ disjoint groups of size $s/m$.
4:   for $i \in [m]$ do
5:     Let $Z_j^i$ be the $j$-th coordinates of the data in the $i$-th group.
6:     Compute average of $Z_j^i$: $\hat\nu_j^i := \frac ms\sum_{z\in Z_j^i}z$.

The bias/variance and privacy properties of Algorithm 12 are summarized in Lemma 51. Algorithm 13 proceeds analogously:

4: Partition $j$-th coordinates of data into $m$ disjoint groups of size $s/m$.
5: for $i \in [m]$ do
6:   Clip $j$-th coordinate of data in $i$-th group: $Z_j^i := \big\{\Pi_{[-\tau,\tau]}(x_{(i-1)\frac sm+1,\,j}),\dots,\Pi_{[-\tau,\tau]}(x_{i\frac sm,\,j})\big\}$.
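The core mechanics of these mean oracles (clip, average, add calibrated noise) are easy to illustrate in the central model. The sketch below is a minimal central-model (zCDP) analogue of the clip-average-noise oracle of Algorithm 1, not the shuffle protocol $P_{vec}$; the function name `private_clipped_mean` is ours. The clipping bias scales like $r^{(k)}/((k-1)C^{k-1})$ and the added noise has variance $d\sigma^2$, mirroring the bias/variance trade-off in Lemmas 5 and 49.

```python
import numpy as np

def clip_l2(v, C):
    """Project v onto the l2 ball of radius C."""
    nrm = np.linalg.norm(v)
    return v if nrm <= C else v * (C / nrm)

def private_clipped_mean(xs, C, rho, rng):
    """rho-zCDP estimate of E[x]: clip each vector to norm C, average,
    and add Gaussian noise calibrated to the l2 sensitivity 2C/s."""
    s, d = xs.shape
    avg = np.mean([clip_l2(x, C) for x in xs], axis=0)
    sens = 2.0 * C / s
    sigma = sens / np.sqrt(2.0 * rho)   # Gaussian mechanism for rho-zCDP
    return avg + rng.normal(0.0, sigma, size=d)

rng = np.random.default_rng(1)
xs = rng.standard_t(df=3, size=(4096, 20)) + 1.0  # heavy-tailed data, true mean ~ 1
est = private_clipped_mean(xs, C=5.0, rho=0.5, rng=rng)
print(np.linalg.norm(est - 1.0))  # error combines clipping bias and noise variance
```

A larger clip threshold $C$ shrinks the bias but inflates the noise, which is exactly the trade-off the choices of $C$ in the proofs above are balancing.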
Sex differences in the human peripheral blood transcriptome
Abstract

Background: Genomes of men and women differ in only a limited number of genes located on the sex chromosomes, whereas the transcriptome is far more sex-specific. Identification of sex-biased gene expression will contribute to understanding the molecular basis of sex-differences in complex traits and common diseases.

Results: Sex differences in the human peripheral blood transcriptome were characterized using microarrays in 5,241 subjects, accounting for menopause status and hormonal contraceptive use. Sex-specific expression was observed for 582 autosomal genes, of which 57.7% was upregulated in women (female-biased genes). Female-biased genes were enriched for several immune system GO categories, genes linked to rheumatoid arthritis (16%) and genes regulated by estrogen (18%). Male-biased genes were enriched for genes linked to renal cancer (9%). Sex-differences in gene expression were smaller in postmenopausal women, larger in women using hormonal contraceptives and not caused by sex-specific eQTLs, confirming the role of estrogen in regulating sex-biased genes.

Conclusions: This study indicates that sex-bias in gene expression is extensive and may underlie sex-differences in the prevalence of common diseases.

Background

Sexual dimorphism extends into marked cellular, metabolic, physiological and anatomical differences and leads to sex differences in disease prevalence, expression and severity of, for example, cardiovascular [1], and autoimmune [2] diseases, personality [3] and psychiatric disorders [4]. Sex inequalities are an increasingly recognized challenge in both basic research and clinical medicine [5], and understanding the molecular mechanisms behind sex differences may lead to new insights into sex-specific pathophysiology and treatment opportunities [6]. Sex differences at the DNA sequence level are restricted to the sex chromosomes. On the X-chromosome, most genes are equally expressed across sex due to X-inactivation in women [7]. The few unshared genes located on the Y chromosome are exclusively expressed in the testes, or are housekeeping genes with X-chromosome homologues that escape X-inactivation [8]. However, genome regulation seems highly sex-specific at secondary epigenetic levels such as DNA methylation [9], DNase hypersensitivity [10], chromatin structure [11] and gene expression [12,13]. Thus, a characterization of sex differences in genome regulation by gene expression will contribute to the understanding of the molecular basis of sexual dimorphism. Animal studies have shown that sex-biased gene expression is highly tissue dependent [14,15] and the evolution rates of sex-biased genes are higher than average [12,16]. Two recent studies in mice reported sex differences in gene expression networks of correlated transcripts [17,18]. Surprisingly few studies aimed at identifying and investigating sex-biased genes in humans, and only in small sample sizes (N < 250 [19][20][21][22]). Nonetheless, consistent evidence was obtained for sex-specific gene expression. Sex-differences in gene expression will depend on the hormonal status of the group considered. For instance, during menopause, much of the female-specific hormone production ceases, with downstream effects on gene expression in adipose tissue [23], monocytes [24], and bone [25]. In women using hormonal contraceptives, which contain the hormones estrogen and progesterone, additional differences in gene expression may be evident as well.
For many genes, expression levels are influenced by DNA polymorphisms (eQTLs). Although the sexes do not differ at the autosomal DNA sequence level, sex differences in gene expression may be caused by sex-specific eQTLs [26] (i.e. some SNPs may influence gene expression in one sex, but not in the other). Here we used microarrays to identify genome-wide sex-biased gene expression in the human peripheral blood transcriptome in a large sample (N = 5,241 subjects) from the Netherlands. The sample size was sufficiently large to account for menopause status and hormonal contraceptive use. The identified sex-biased genes were characterized in terms of enrichment for functional gene ontology (GO) and disease categories, distribution across the autosomes and sex chromosomes, tissue specificity, evolution rates, participation in major gene expression networks and the extent to which sex differences in gene expression were caused by sex-specific eQTLs.

Sample description

The sample consisted of 5,241 individuals from the Netherlands Study of Depression and Anxiety (NESDA) and Netherlands Twin Register (NTR) cohorts (Table 1; [27]). Of the women, 22% were postmenopausal and 31% used hormonal contraceptives. For all participants, genome-wide gene expression in peripheral blood was assessed using microarrays with 47,122 probe sets targeting 19,250 genes. For each probe set, mixed models including demographic and several technical covariates were used to test for sex effects (see Methods).

Sex effects on gene expression

Sex effects on gene expression were determined by comparing men (N = 1,814) and premenopausal women who did not use hormonal contraceptives (N = 1,594). When considering 45,418 autosomal transcripts targeting 18,495 genes, 993 transcripts from 582 genes (3.1% of all autosomal genes measured) were significantly influenced by sex (p < 1.2e-6, Bonferroni corrected at p < 0.05, FDR < 6e-5). The percentage of sex-biased genes increased when only genes with a mean expression above a certain threshold were considered. For example, a mean expression threshold of 5 (log2(intensity)) resulted in 5.5% sex-biased genes, and using a threshold of 9 resulted in 13.7% sex-biased genes (Figure 1A). However, there were several transcripts with low mean expression level but with a high fold change between the sexes (Figure 1B). In order to provide a comprehensive overview, we included all transcripts in the following analyses.

Female-biased versus male-biased genes

From the sex-biased transcripts on the autosomes, 572 (57.7%) were upregulated in females (female-biased genes, Figure 1C), and 421 in males (male-biased genes, Figure 1B). For each sex-biased transcript the log_e fold change was computed (Figure 1D). For female-biased transcripts the fold change was computed as the mean expression in females/mean expression in males; for male-biased genes we used -(mean expression in males/mean expression in females). Most absolute log_e fold changes (99%) were smaller than 0.08; for 22 transcripts the absolute log_e fold change was larger than 0.08 (6 female-biased, targeting the genes ADM, CREB5, CNTNAP3, C9orf84, SORCS2 and GPR109A, and 14 male-biased (KANK2, CTSG, MPO, BPI, GPER, DEFA4, EPB49, C19orf62, ERG, LCN2, CEACAM8, LTF, FECH and LTBP1); see Additional file 1 for sex-biased genes and corresponding fold changes and p-values). On the X chromosome, 1,643 transcripts from 739 genes were measured.
Out of these, 127 transcripts from 51 genes were sex-biased; 103 (from 38 genes) were female-biased, and 24 (from 13 genes) male-biased. Seventeen of the corresponding log_e fold changes were larger than 0.08 (targeting the genes EIF1AX, PRKX, KDM5C, ZFX, KDM6A, XIST, VSIG4, TSIX and SCARNA9L). Only the log_e fold changes of the genes XIST and TSIX were larger than 0.5. Of the 63 transcripts targeting 26 genes on the Y chromosome, 48 transcripts from 16 genes had expression levels in men that were higher than the noise measured in women; 12 transcripts had a log_e fold change larger than 0.5, targeting the genes EIF1AY, DDX3Y, KDM5D, CYorf15B, CYorf15A and UTY.

Genomic location of sex-biased genes

For each chromosome we tested whether the genes on that chromosome enriched the sex-, male- or female-biased genes. At the autosomes, the percentage of sex-biased genes differed only slightly between chromosomes, ranging from 1.6% on chromosome 20 to 4.2% on chromosome 14 (Figure 1E); none of the autosomes enriched the sex-biased genes (p > 0.05, Fisher's exact test). The distribution of the male- and female-biased genes over the autosomes was more variable, ranging from 0.4% at chromosome 20 to 2.3% at chromosome 18 (female-biased genes), and from 0.7% at chromosome 4 to 2.4% at chromosome 22 (male-biased genes); however, none of the autosomes enriched female- or male-biased genes. As expected, female-biased genes were enriched for genes at the X chromosome (5.1%), and male-biased genes for genes at the Y chromosome (61%).

Figure 1. Characterization of female- and male-biased genes. For each of the 47,122 transcripts the sex effect was determined using a mixed model, resulting in 3.1% sex-biased genes. A) Transcripts were selected based on a threshold for mean expression; the percentage of sex-biased genes increases with the threshold that is used: in genes that are highly expressed there are more (up to 13%) sex-biased genes than in genes that have low expression. Nonetheless, large male/female fold changes were also observed in genes with low (B) and moderate (C) expression. D) For each transcript fold changes were computed; on the autosomes 57.7% of the sex-biased genes were female-biased, and absolute log_e fold changes ranged from 0 to 0.2. E) For each chromosome, the number of male- and female-biased genes was computed; only the Y and X chromosomes were enriched for male- and female-biased genes, respectively.

eQTL analysis of sex-biased gene expression

eQTL analysis was performed using two sample subsets of 1,523 men and 1,373 premenopausal women who did not take hormonal contraceptives, for which genome-wide SNP and gene expression data were available (see Methods). For each of the 993 autosomal sex-biased transcripts, eQTLs were computed for men and women separately. At a FDR of 0.01 there were 7,978 cis eQTLs (p < 6e-05) and 514 trans eQTLs (p < 2e-09) for men, and 6,731 cis eQTLs (p < 5.2e-05) and 197 trans eQTLs (p < 1.8e-09) for women. For the pooled eQTLs (9,659 cis, 545 trans eQTLs) genotype-sex interactions were assessed using a mixed model that included data from men and women. At a FDR of 0.05 no significant genotype-sex interactions were observed.

Sex-biased genes highly enrich modules of correlated transcripts

Weighted Gene Co-Expression Network Analysis (WGCNA) [28] was used to identify modules of correlated transcripts, for men and women separately.
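The study used the R package WGCNA for this step; as an illustration only, a minimal Python analogue of the workflow is sketched below (function name, soft-threshold power and module count are ours, and real WGCNA clusters a topological-overlap measure rather than the raw soft-thresholded adjacency):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coexpression_modules(expr, power=6, n_modules=9):
    """expr: samples x transcripts matrix. Returns one module label per transcript.
    Soft-thresholded correlation network followed by hierarchical clustering,
    a simplified stand-in for the WGCNA procedure."""
    adj = np.abs(np.corrcoef(expr.T)) ** power   # soft-threshold adjacency
    np.fill_diagonal(adj, 1.0)
    dissim = 1.0 - adj                           # dissimilarity, zero diagonal
    Z = linkage(squareform(dissim, checks=False), method="average")
    return fcluster(Z, t=n_modules, criterion="maxclust")

rng = np.random.default_rng(0)
expr_males = rng.normal(size=(200, 300))    # toy data: male samples x transcripts
expr_females = rng.normal(size=(220, 300))  # toy data: female samples x transcripts
mods_m = coexpression_modules(expr_males)
mods_f = coexpression_modules(expr_females)
# Module overlap between the sexes can then be tabulated, e.g. with a
# cross-tabulation of mods_m against mods_f per transcript.
```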
Both analyses resulted in 9 modules, with >70% transcript overlap between the male and the corresponding female module for 8 of the 9 modules (Additional file 7). One module had only ~40% overlap. Thus, gene expression correlation structure is similar between men and women, and here we focus on properties of the intersection of the overlapping modules. Interestingly, 7 of these intersected modules were highly enriched with female-biased or male-biased genes. There were three modules with more than 30% male-biased genes, and two modules with more than 30% female-biased genes. The modules were highly enriched for several GO terms (Additional file 7). We calculated the pairwise transcript correlations within each intersected module for men and women separately. For two modules containing male-biased genes the correlations were significantly stronger in males than in females (76% of the correlations were stronger in module #6, and 92% in module #9; Figure 2A and B, respectively). Thus, these modules contained around 30% male-biased genes, but the majority (>75%) of the interactions in each module were also stronger in males compared to females.

Evolution rates of sex-biased genes

To test whether sex-biased genes have evolved faster than non sex-biased genes, we tested for enrichment in two sets of genes that were previously identified as rapidly evolving: 244 genes from the Human PAML Browser [29] and 40 genes from a study comparing human and chimpanzee genomes [30]. Sex-biased, male-biased and female-biased genes were not enriched for either of the two gene sets (Fisher's exact test, p > 0.05). Next, we tested whether dN, dS and dN/dS (the evolution rates) as provided by [31] were different in sex-biased, male-biased and female-biased genes compared to non sex-biased genes, but found no significant differences (all p > 0.05, Wilcoxon rank test).

Tissue specificity of sex-biased genes

We downloaded analysis results of two human studies that identified sex-biased genes in muscle [22] and in liver [21]. In muscle, 63 sex-biased genes were identified on the autosomes, which were enriched with the sex-biased genes we identified (8 genes identified in both tissues, p < 0.01 (Fisher's exact test), Additional file 8). On the X chromosome 5 genes were identified as sex-biased in muscle, of which 4 were also identified in blood (p < 0.001 (Fisher's exact test), Additional file 8). In liver, 862 sex-biased genes were identified on the autosomes, which were enriched with the sex-biased genes we identified (36 genes identified in both tissues, p < 0.05 (Fisher's exact test), Additional file 8). On the X chromosome 50 genes were identified as sex-biased in liver, of which 18 were also identified in blood (p < 1e-9 (Fisher's exact test), Additional file 8).

Sex-biased genes in postmenopausal and hormonal contraceptive using women

To examine whether sex differences in gene expression depend on hormonal status, sex effects were computed by comparing men (N = 1,814) with postmenopausal women (N = 740) and women using hormonal contraceptives (HC women, N = 1,093). On the autosomes, there were 697 transcripts differentially expressed between postmenopausal women and men. From these 697 transcripts (369 female-biased and 328 male-biased), 236 overlapped with the 993 sex-biased transcripts identified in non-hormonal contraceptive using premenopausal (NHC) women. When comparing the HC women with men, a much larger number of 2,125 differentially expressed transcripts were identified (1,157 female-biased, 968 male-biased).
From these transcripts, 755 overlapped with the 993 sex-biased transcripts identified in NHC women. For the 993 transcripts identified in NHC women, log_e fold changes were computed for the difference between each of the three groups of women (NHC, HC, postmenopausal) compared to men. When comparing these fold changes between postmenopausal and NHC women, it became clear that most of the fold changes have the same sign (85% in total, 99% of the negative fold changes) but that the fold changes in NHC women are larger than those in postmenopausal women for 80% of the transcripts (Figure 3A). Also, the fold changes of NHC women and HC women often have the same sign (96%), and the fold changes of HC women were often larger than those observed in NHC women (66% of all fold changes, 88% of the negative fold changes; Figure 3B). This shows that many gene expression differences between women and men become smaller when women reach menopause, and are larger when women use hormonal contraceptives, which reinforces the role of estrogen in regulating sex-biased genes.

Age specific sex effects on gene expression

Age has a strong influence on gene expression [32]. To examine whether sex effects on gene expression are age-range specific, we separately analyzed the data for three age groups (men versus premenopausal women who did not use hormonal contraceptives, age ranges 17-30 (N = 1,047), 31-40 (N = 1,191) and 41-88 (N = 1,170)). In these 3 age groups we identified 49, 103 and 34 autosomal sex-biased genes respectively (p < 1.2e-6, Additional file 9), which overlapped for >98% with the sex-biased genes identified in the total sample (with same direction of effect). The three sets of sex-biased genes identified in these age groups overlapped to a lesser extent with each other (>38%, Additional file 9). However, the fold changes between men and women of the sex-biased genes identified in the total sample were highly concordant between age ranges (Additional file 10), suggesting that the identified sex effects occur at all ages, but that some effects may be stronger at a certain age or may not have been identified due to reduced power in the smaller groups of selected ages.

Figure 2. Between-transcript correlations are higher in males than in females for 2 modules. WGCNA (Weighted Gene Co-Expression Network Analysis) resulted in 9 modules with correlated transcripts, two of which were highly enriched for female-biased genes, and 3 for male-biased genes. Of the latter three, two modules contained genes for which the pairwise correlations were stronger in males compared to females. A) Module #6 contained 45 genes; 76% of the correlations computed in males (y axis) were larger than those computed in females (x axis). B) Module #9 contained 35 genes; 92% of the correlations computed in males (y axis) were larger than those computed in females (x axis).

Discussion

At the autosomal DNA sequence level the sexes do not differ, as established by a well-powered meta-analysis [33], suggesting an important role for higher molecular levels, such as the transcriptome, in the manifestation of sexual dimorphisms. Indeed, animal studies have shown that the transcriptome is highly differential between sexes [14,15,34,35]. In humans, gene expression differences have been reported in liver [21], lymphoblastoid cell lines [19,20], and muscle [22], but only in studies with relatively small sample sizes (N < 250).
Here we analyzed the sex differences in the peripheral blood transcriptome by assessing 47,122 probe sets targeting 19,250 genes genome-wide in a well-characterized large Dutch cohort (N = 5,241), taking into account the impact of hormonal contraceptive use and menopausal status in women.

Number of female- and male-biased genes

On the autosomes, we identified 582 genes (3.1% of all genes measured) that were differentially expressed between men and premenopausal women not using hormonal contraceptives. Of these genes, 57.7% were female-biased. The autosomes had rather similar proportions of sex-biased genes, indicating that sex-biased genes can be found equally frequently across the entire genome, as opposed to what was found in liver [21], where several chromosomes enrich sex-biased genes. It is important to note that the filter criteria used for selecting probe sets highly influence the number of sex-biased genes; the percentage of sex-biased genes increased with the threshold for mean expression level from 3.1% up to 13.7%. Importantly, we have shown that hormonal contraceptives and menopause status, which were not taken into account in previous studies in humans, highly influence the number and effect sizes of sex differences in gene expression. Although it has been indicated that the percentage of sex-biased genes in non-human vertebrates is highly tissue dependent (e.g. ranging from 13.6% in the brain to 72% in the liver [14,15]), our described range of 3.1-13.7% for sex-biased genes is comparable to that found in human liver (3.7%, [21]). Peripheral blood consists of a mixture of blood cell types (the main types are lymphocytes, neutrophils and monocytes), hence the sex differences we identified must either be present in all cell subtypes or, when present in only one cell type, strong enough to be observed in the accumulative measurement. By stratifying the sample into three age groups we showed that the size of the sex effects may be age dependent for some genes, but the directions of the effects are highly concordant between age groups. We found a significant but small overlap of sex-biased autosomal genes identified in peripheral blood with those previously identified in muscle or liver, further confirming substantial tissue specificity of sex-biased genes. Across tissues, circulating exosomes contain RNA and could contribute to the overlapping expression profiles between muscle, liver and blood [36]. Sex-biased X chromosome genes showed much larger overlap between tissues, indicating that escape from X-inactivation is highly similar between tissues. Previous studies have reported that sex-biased genes may evolve more rapidly than average in vertebrates [12], human brain [37] and liver [21]. However, sex-biased genes in the peripheral blood transcriptome identified in our study did not show enrichment of fast-evolving genes. In women, most genes on one X chromosome are not expressed due to X chromosome inactivation [38]. Some genes escape X-inactivation and are expressed from both X chromosomes [7]. We showed that in peripheral blood the X chromosome is enriched for female-biased genes; 5.1% of the genes measured on the X chromosome are female-biased. This percentage, however, is only slightly higher than the average percentage identified at the autosomes (3.1%), which shows a major role of autosomal genes in sex-specific gene expression.

Figure 3. Women were divided into three groups: postmenopausal, hormonal contraceptive using (HC), and non hormonal contraceptive using (NHC) women. For the 993 sex-biased transcripts identified in the comparison between males and NHC women, fold changes were computed for the difference between the three groups of women and the men. Positive fold changes are from female-biased genes; negative fold changes correspond to male-biased genes. A) Fold changes are larger in NHC women than in postmenopausal women for 80% of transcripts. B) Fold changes are larger in HC women than in NHC women for 66% of transcripts; among the negative fold changes (male-biased genes), 88% are larger in HC women.

The role of estradiol in gene expression sex differences

Estrogen is the primary female sex hormone, and estrogenic activity is present at about two-fold increased concentration in women as compared to men. Estradiol, the predominant estrogen in terms of absolute serum levels, activates estrogen receptors that bind to DNA sequences to activate or suppress gene expression, and many efforts have been made to find its target genes (up to 5,000) in the MCF-7 cancer cell line [39-41] because of its role in breast cancer [42]. Here we show that in peripheral blood 18% of the identified sex-biased genes are known to be regulated by estradiol, and several additional findings suggest that the sex difference in estrogen levels underlies multiple sex differences in gene expression. First, from the 20 genes with high male/female fold changes, 7 are involved in common diseases and influenced by estrogen: GPER (G protein-coupled estrogen receptor-1, related to cancer [43]), ADM (coding for the peptide adrenomedullin, the main vasodilatory peptide involved in cardiovascular disease [44-46]), LTF (lactoferrin, essential for the innate immune system and involved in cancer [47,48]), LCN2 (lipocalin-2, innate immune system and cancer related [49]), MPO (myeloperoxidase, a biomarker for cardiovascular disease risk [50]), ERG (Ets Related Gene, proposed as a mediator of the estrogen effect on prostate cancer [51]), and LTBP1 (latent-transforming growth factor beta-binding protein 1, linked to coronary heart disease [52]). This suggests that these genes mediate the effect of estrogen and thereby may contribute to the sex differences in the related diseases. Second, we showed that the sex differences in gene expression depend largely on the hormonal status of the subgroup of women considered. In postmenopausal women, in whom estradiol levels are similar to those in men, we identified fewer sex-biased genes with smaller effect sizes as compared to premenopausal women. In hormonal contraceptive using women, with increased estradiol levels, we identified more sex-biased genes and larger effect sizes as compared to women not using hormonal contraceptives. Interestingly, the change in effect size was present for more than 65% of the female-biased genes, and for more than 85% of the male-biased genes. This gives an indication of the number of sex-biased genes affected by estradiol, which is much higher than currently known from literature (IPA, 15% of sex-biased genes are known to be regulated by estradiol). In liver, sex differences in gene expression are mainly caused by sex-specific growth hormone secretion [21,53]. Growth hormones are regulated by estrogen [54,55]; hence the effect of estrogen on sex-specific gene expression in peripheral blood may also be mediated by growth hormone secretion.
Immune system processes predominant in female-biased genes

Immune system function is known to differ between the sexes; women produce more vigorous immune reactions and are more prone to autoimmune diseases [56]. Here we identified a large number of genes that potentially contribute to the immune system sex differences; 31.6% of female-biased genes are in the GO category immune system process. Of the 95 female-biased genes linked to the immune system, 45 are regulated by estradiol, which confirms the role of estrogen in sex-specific immune system functioning [57]. Most interestingly, Ingenuity Pathway Analysis revealed that female-biased genes are highly enriched for genes involved in the toll-like receptor (TLR4 and TLR3 pathways, known as LPS and poly I:C response patterns) driven innate immune defense, suggesting some intrinsic sex differences in innate immune activity. Increased female expression of immunoglobulin is reflective of concomitant more active humoral immune activity. These functions are compatible with the activated leukocyte, cytokine production and type 1 interferon activity observed in the GO enrichment analysis and might explain why women are more resistant to certain infections, and suffer a higher incidence of autoimmune diseases compared to men [2]. For example, rheumatoid arthritis occurs almost twice as often in women as in men [58]. Female-biased genes were enriched for genes linked to rheumatoid arthritis, including the gene IL6R, which is a well-known target in rheumatoid arthritis treatment [59]. The identified female-biased genes provide a framework for future research to unravel the mechanism of sex-biased immune regulation and autoimmune diseases.

Annotation of male-biased genes

Surprisingly, male-biased genes were not enriched for GO categories, and thus serve a wide variety of biological functions. In IPA, however, male-biased genes were most significantly enriched for genes linked to renal cancer, including the well-established renal cancer gene CSF1R [60]. It is notable that a recent meta-analysis on sex differences in renal cell cancer presentation and survival showed a ratio of 1.65 of renal cell carcinoma for males compared to females [61]. The cellular component GO categories indicate the part of a cell at which a gene product is located. Topographical categorization revealed that male-biased gene products occur more often intracellularly, in particular in the cytoplasm, whereas female-biased gene products occur more often integral to the membrane.

Sex-specific eQTLs do not underlie sex-biased gene expression

A previous study (in a smaller sample than the current one) showed that a substantial number of eQTLs are sex-specific, but not eQTLs from genes with sex-biased expression [26]. Here we confirm this finding by showing that for the sex-biased genes there were no significant eQTL-sex interactions. This shows the importance of other factors, such as estradiol and other hormones, in causing gene expression sex differences.

Sex-biased genes in modules of correlated transcripts

WGCNA analyses resulted in highly similar modules of correlated transcripts for men and women, similar to findings in mice [18]. The 9 modules were highly enriched for male- or female-biased genes, indicating that sex-biased genes play an important role in the major gene expression networks.
Modules #2 and #3 each contained more than 30% female-biased genes and were enriched for the GO category immune system response, which shows that immune system genes operate in correlated groups that are partially sex-biased. Module #9 contained 31.4% male-biased genes, was enriched for the GO category immune response (37%), and 92% of its pairwise correlations were stronger in men than in women. This module contained the interleukin receptor IL2B gene, and IPA analysis showed that 11 of the 35 genes in this module are known to be regulated by the cytokine IL2, and 16 of them are related to cancer (Additional file 11), including the female-biased genes PRF1 and GZMH, essential for natural killer (NK)-cell cytotoxicity [62,63]. Module #6 contained 37.8% male-biased genes, was enriched for the GO term coagulation (50%), and 76% of the pairwise correlations in this module are higher in men than in women. IPA analysis shows that in this module 16 genes are regulated by TGFB1 (Additional file 11), and 17 genes are related to heart or vascular disease, including the male-biased genes PTGS1 (coding for COX-1, which is inhibited by aspirin [64], which has a protective effect on cardiac events [65]), and ITGA2B, ITGB3, F13A and GP1BA, which are candidate stroke risk genes [66]. This suggests that modules #9 and #6 may play a role in the sex differences in cancer and cardiovascular disease, respectively.

Conclusions

We showed that sex-biased genes occur in large numbers throughout the human peripheral blood transcriptome, suggesting an important role of sex-specific gene expression in sexual dimorphisms. Estrogen appears to be a key regulator of sex-biased genes, as also shown by the effect of menopause and hormonal contraceptives on gene expression sex differences. Sex-biased genes are highly enriched with genes linked to common diseases and may contribute to sex-differences in these diseases. Understanding the molecular mechanisms behind sex inequalities can lead to new insights into sex-specific pathophysiology and treatment opportunities.

Subjects

The two parent projects that supplied data for this study are large-scale longitudinal studies: the Netherlands Study of Depression and Anxiety (NESDA) [67] and the Netherlands Twin Registry [68]. NESDA and NTR studies were approved by the Central Ethics Committee on Research Involving Human Subjects of the VU University Medical Center, Amsterdam (IRB number IRB-2991 under Federalwide Assurance 3703; IRB/institute codes, NESDA 03-183; NTR 03-180), and all subjects provided written informed consent. The sample consisted of 5,391 subjects (before QC): 3,327 participants from NTR (2 MZ triplets, 708 MZ twin pairs, 658 DZ twin pairs, 338 siblings of these twins and 251 unrelated individuals) and 2,064 unrelated participants from NESDA. The age of the participants ranged from 17 to 88 years (mean 38, SD 13) and 65% of the sample was female. As part of the NESDA and NTR biobank protocols, data on menopause status and medication use, including hormonal contraceptives, were collected in all participants.

Blood sampling, RNA and DNA extraction

The NTR and NESDA blood sampling and RNA extraction procedures have been described in detail previously [69,70]. In short: for NTR, venous blood samples were drawn between 0700-1100 after an overnight fast, usually in the subjects' homes. Within 20 minutes of sampling, heparinized whole blood was transferred into PAXgene Blood RNA tubes (Qiagen) and stored at -20°C.
The PAXgene tubes were shipped to the Rutgers University Cell and DNA Repository (RUCDR), USA. The average time between blood sampling and RNA extraction was 211 weeks (included in the mixed model for gene expression). Upon registration of the samples, RNA was extracted on a Qiagen Universal liquid-handling system using PAXgene extraction kits, as per the manufacturer's protocol. From the NESDA subjects, serial venous whole blood samples were obtained (8-10 AM, after overnight fasting) in one 7-mL heparin-coated tube (Greiner Bio-One, Monroe, North Carolina). Between 10 and 60 min after the blood draw, 2.5 mL of blood was transferred into a PAXgene tube (Qiagen, Valencia, California). This tube was kept at room temperature for a minimum of 2 hours and then stored at −20°C. The average time between blood sampling and RNA extraction was 113 weeks (included in the mixed model for gene expression). Total RNA was extracted at the VU University Medical Center (Amsterdam) according to the manufacturer's protocol (Qiagen), as described previously [70]. For both NESDA and NTR samples, high-molecular-weight genomic DNA was isolated from frozen blood in EDTA tubes using Puregene DNA isolation kits (Qiagen).

Gene expression measurements

Gene expression assays were conducted at the Rutgers University Cell and DNA Repository (RUCDR, http://www.rucdr.org). RNA quality and quantity were assessed on a Caliper AMS90 with HT DNA5K/RNA LabChips. RNA samples that showed abnormal ribosomal subunits in the electropherograms were removed. NTR and NESDA samples were randomly assigned to plates, with seven plates containing subjects from both studies to better inform array QC and study comparability. For cDNA synthesis, 50 ng of RNA was reverse-transcribed and amplified in a plate format on a Biomek FX liquid-handling robot (Beckman Coulter) using Ovation Pico WTA reagents per the manufacturer's protocol (NuGEN). Products purified from single-primer isothermal amplification (SPIA) were then fragmented and labeled with biotin using the Encore Biotin Module (NuGEN). Prior to hybridization, the labeled cDNA was analyzed by electrophoresis to verify the appropriate size distribution (Caliper AMS90 with an HT DNA 5K/RNA LabChip). Samples were hybridized to Affymetrix U219 array plates (GeneTitan) to enable high-throughput gene expression profiling of 96 samples at a time. The U219 array contains 530,467 probes for 49,293 transcripts. All probes are 25 bases in length and designed to be "perfect match" complements to a designated transcript. Array hybridization, washing, staining, and scanning were carried out in an Affymetrix GeneTitan System per the manufacturer's protocol.

Genome-wide SNP measurements and QC

Genotyping was conducted using the Affymetrix Genome-Wide Human SNP Array 6.0 containing 931,946 SNPs, per the manufacturer's protocol. The resulting data were required to pass standard Affymetrix QC metrics (contrast QC > 0.4) before further analysis. SNP QC included removal of SNPs for non-unique mapping of probe sequences to NCBI Build 37/UCSC hg19, low minor allele frequency (< 0.005), substantial deviation from HapMap3 CEU founder allele frequencies, deviation from Hardy-Weinberg equilibrium (p_HWE < 1×10⁻⁸), and high missingness (> 0.05). After genotyping QC, 666K autosomal SNPs were available. Subjects were eliminated from the analysis for high missingness (> 0.05), outlying genome-wide homozygosity or ancestry, discrepant genetic and phenotypic sex, or twin relatedness not consistent with monozygosity or dizygosity.
Gene expression QC

Gene expression data were required to pass standard Affymetrix QC metrics (Affymetrix Expression Console) before further analysis. Probes were removed when their location was uncertain or when they intersected a polymorphism (probes were dropped if the oligonucleotide sequence did not map uniquely to hg19 or if the probe contained a polymorphic SNP based on HapMap3 and 1000 Genomes Project data). Expression values were obtained using RMA normalization implemented in Affymetrix Power Tools (APT, v 1.12.0). First, 70 samples with array results inconsistent with the phenotypic database were removed (inconsistent sex based on chr X and chr Y probe sets). Second, we used the pairwise correlation matrix of expression profiles across all arrays for additional QC. These quantities were expressed in terms of median absolute deviations to provide a sense of scale. We used

D_i = (r̄ − r_i) / MAD,

with r_i the average of the correlations for sample i, r̄ the average of all correlations, and MAD the median absolute deviation of the r_i. Larger values of D corresponded to poorer quality; 80 samples with D > 5 were removed, decreasing the final number of subjects to 5,241.

Mixed models for gene expression

Linear mixed models allow correction for the presence of twin families in a sample [71]. For each of the 47,122 probe sets, a mixed model was fitted with gene expression as the dependent variable. Model covariates were selected based on the significance of the variable in the fitted mixed models. Several covariates that were not significant were not included in the final model (alcohol use, education level, time between RNA amplification and RNA fragmentation, time between RNA fragmentation and RNA hybridization). Inclusion of depression status and psychotropic medication use as covariates in the mixed model did not affect the principal findings. Fixed-effect covariates included in the final model were sex, age, body mass index (BMI, weight/height² in kg/m²), smoking status (yes/no current smoking), D (see above), hemoglobin (mmol/L), group (NTR or NESDA), time of blood sampling, month of blood sampling, time between blood sampling and RNA extraction, and time between RNA extraction and RNA amplification. Random effects were plate, well, family ID, and zygosity (one factor for each monozygotic twin pair and a different factor for each other individual [71]). Additional file 12 lists, for each variable, the number of probe sets for which the variable was significant. Mixed models and the resulting p-values were computed using the R function lmer from the package lme4.

eQTL analysis

eQTL analysis was first performed in a screening step using MatrixEQTL [72]. Prior to eQTL analysis, the data for each gene expression probe set were transformed to a normal distribution using an inverse quantile normal transformation. Genotypes were coded as 0, 1, or 2, and for each SNP-transcript pair a linear regression model was fitted including the covariates sex, age, body mass index, smoking status, D (see above), hemoglobin (mmol/L), group (NTR or NESDA), time of blood sampling, month of blood sampling, time between blood sampling and RNA extraction, time between RNA extraction and RNA amplification, plate, and well, plus three principal components (PCs) from the genotype data [73] and 5 PCs from the transformed expression data. Cis-eQTLs are transcript-associated SNPs within 1 Mb of the transcript site; trans-eQTLs are the complementary set of SNPs. In the screening step, men and women were screened using MatrixEQTL as if all individuals were unrelated. Benjamini-Hochberg q-value estimation was performed separately for cis- and trans-eQTLs. For each of the 993 autosomal sex-biased transcripts, eQTLs were selected for men and women separately and then pooled. For these eQTLs, genotype-sex interactions were assessed using the full mixed model that included both men and women, with genotype, sex, and their interaction as independent variables, together with the other covariates used in the mixed model for gene expression (see above).
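The rank-based transformation applied before the eQTL screen is compact enough to sketch. The Python code below is an illustration, not the authors' pipeline; the 0.5/n rank offset is one common convention, and the skewed toy data are simulated, since the paper does not state which variant of the transformation was used.

```python
import numpy as np
from scipy.stats import norm, rankdata

def inverse_quantile_normal(x):
    """Rank-based inverse normal transformation for one expression
    probe set: ranks are mapped to standard-normal quantiles so that
    the downstream SNP regression is not dominated by outliers."""
    ranks = rankdata(x)                     # average ranks for ties
    return norm.ppf((ranks - 0.5) / len(x))

expr = np.random.default_rng(4).exponential(size=1000)  # skewed toy data
z = inverse_quantile_normal(expr)
print(round(z.mean(), 3), round(z.std(), 3))  # ~0 and ~1 after transform
```

After this transformation every probe set has the same marginal distribution, so effect sizes are comparable across transcripts in the screening step.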
GO category enrichment

To test whether Gene Ontology [74] categories were enriched for sex-biased genes, we used hypergeometric tests implemented in the BiNGO software [75]. The reference gene set consisted of all genes measured by the U219 microarrays.

WGCNA

The correlation structure of gene expression was examined using unsigned co-expression networks constructed with the WGCNA package in R [28]. Of all 47,122 probes, the single probe with the highest mean expression per gene was selected for inclusion in the network analysis using the collapseRows function in WGCNA, resulting in the inclusion of 19,249 genes in the network. Choosing the probe with the highest mean expression per gene has been shown to yield robust analyses across data sets [76]. The network construction for each entire data set was performed in a single block of maximum size 20,000 genes using the blockwiseModules function in WGCNA [28]. Using this block size ensured the theoretical advantage that the genes did not have to be pre-clustered by WGCNA. The network adjacency matrix is the gene pairwise correlation matrix raised to the power of 6, chosen based on the scale-free topology criterion [77]. Rather than using the adjacency weights between genes directly, the topological overlap measure (TOM) is computed from the adjacency matrix. For each pair of genes, TOM sums the adjacency weights of all paths between the genes of length at most two (i.e., the genes are directly connected or have one gene between them), scaled by the minimum connectivity of either gene. The topological overlap dissimilarity, defined as 1 − TOM, is used in the average-linkage hierarchical clustering algorithm. The resulting clustering tree is used to define modules from its branches using the hybrid dynamic tree-cutting algorithm [28]. The minimum module size was set to 30 and the cut-off for merging modules was set to 0.25. Each module is then characterized by its eigengene, the first principal component of the module expression data, which accounts for the greatest variation of the expression levels in the module. Genes were removed from modules if the correlations between their expression values and the module eigengene were too low (less than 0.3). Modules were merged if the correlation between their eigengenes was high.
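The topological overlap computation described above can be sketched outside of the WGCNA package. The Python code below is a minimal illustration, assuming an unsigned network with soft-threshold power 6 and replacing the hybrid dynamic tree cut with a plain fixed-height cut; it is not the authors' pipeline, which used the R functions named above, and the expression data are simulated.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def tom_dissimilarity(expr, beta=6):
    """1 - TOM for an unsigned WGCNA-style network.

    expr: (n_samples, n_genes) matrix.  Adjacency is |correlation|
    raised to the soft-threshold power beta; TOM then counts, for
    each gene pair, the adjacency-weighted paths of length at most
    two, scaled by the smaller of the two connectivities."""
    a = np.abs(np.corrcoef(expr.T)) ** beta
    np.fill_diagonal(a, 0.0)                # no self-connections
    k = a.sum(axis=1)                       # connectivity of each gene
    shared = a @ a                          # paths through one neighbor
    tom = (shared + a) / (np.minimum.outer(k, k) + 1.0 - a)
    np.fill_diagonal(tom, 1.0)
    return 1.0 - tom

expr = np.random.default_rng(1).normal(size=(100, 200))
diss = tom_dissimilarity(expr)
tree = linkage(squareform(diss, checks=False), method="average")
modules = fcluster(tree, t=0.25, criterion="distance")  # crude stand-in
print(len(np.unique(modules)), "clusters")              # for dynamic cut
```

Here the fixed cut height is only a stand-in for the dynamic tree-cutting step, which adapts the cut to the shape of each branch and enforces the minimum module size.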
Occupational radiation exposure to nursing staff during cardiovascular fluoroscopic procedures: A review of the literature
Abstract

Fluoroscopy is a method used to provide real-time x-ray imaging of the body during medical procedures to assist with medical diagnosis and treatment. Recent technological advances have seen an increase in the number of fluoroscopic examinations being performed. Nurses are an integral part of the team conducting fluoroscopic investigations and are often located close to the patient, resulting in occupational exposure to radiation. The purpose of this review was to examine recent literature investigating the occupational exposure received by nursing staff during cardiovascular fluoroscopic procedures. Articles published between 2011 and 2017 were searched and comprehensively reviewed on the referenced medical search engines. Twenty-four relevant studies were identified, among which seventeen investigated nursing dose relative to operator dose and seven researched the effectiveness of interventions in reducing occupational exposure to nursing staff. While doctors remain at the highest risk of exposure during procedures, evidence suggests that nursing staff may be at risk of exceeding recommended dose limits in some circumstances. There is also evidence of inconsistent use of personal protection, such as lead glasses and skull caps, by nursing staff to minimize radiation exposure. Conclusions: The review has highlighted a lack of published literature focussing on dose to nurses. There is a need for future research in this area to inform nursing staff of factors which may contribute to high occupational doses and of methods for minimizing the risk of exposure, particularly regarding the importance of utilizing radiation protective equipment.

| INTRODUCTION

Fluoroscopy is a method used to provide real-time imaging of the body during medical procedures. It utilizes x-rays which pass through the patient to visualize internal structures. Historically, x-ray fluoroscopy was primarily used for diagnosis, but recent advances in both imaging and procedural equipment have led to considerable growth in the range of fluoroscopically guided procedures, particularly in the fields of interventional cardiology (IC) and vascular intervention. [1][2][3] Interventional cardiovascular (CV) cases are often less costly than surgery and allow medical intervention to be conducted in a minimally invasive way, reducing the risk to the patient. 4 Although very useful for imaging, ionizing radiation may cause several detrimental effects in those exposed, including cellular damage, malignancies, and cataracts. [5][6][7][8] The greatest risk of occupational exposure occurs where the primary x-ray beam strikes the patient's skin: a portion of the x-ray photons is absorbed, and a portion scatters within the patient's body. 9 Scattered radiation levels near the patient can be relatively high, even under routine working conditions, and staff are subsequently exposed while conducting CV procedures. 1,10 There has been justifiable concern over the dose received by the physicians operating in this environment, but data detailing exposure to supporting staff during fluoroscopic procedures are scarce. 1,11,12 The fundamental premise is to keep exposure to ionizing radiation as low as reasonably achievable (ALARA), 6,13 and organizations such as the International Commission on Radiological Protection (ICRP) recommend dose limits for those who are occupationally exposed. 14
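Because scatter from the beam entrance area is the dominant source of staff dose, a rough point-source approximation conveys why proximity to the patient matters so much. The sketch below assumes pure inverse-square falloff and an illustrative reference dose rate; real scatter fields are anisotropic and depend on tube geometry and technique, so the numbers are not measured values.

```python
def scatter_dose_rate(dose_ref_usv_h, d_ref_m, d_m):
    """Approximate staff dose rate at distance d_m, treating scatter
    from the beam entrance point as a point source (inverse-square
    law).  Both reference values are illustrative assumptions, not
    measurements from any reviewed study."""
    return dose_ref_usv_h * (d_ref_m / d_m) ** 2

# e.g. an assumed 1000 uSv/h measured at 0.5 m from the patient
for d in (0.5, 1.0, 2.0):
    print(f"{d:.1f} m: {scatter_dose_rate(1000, 0.5, d):7.1f} uSv/h")
```

Stepping back from 0.5 m to 2 m cuts the idealized rate sixteen-fold, which is the physical basis for the distance-related observations discussed below.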
Staff radiation monitoring is performed as locally legislated to ensure that departments comply with regulatory occupational dose limits, but problems with effective monitoring have been highlighted, partly due to the attitude and radiation safety culture of staff. 15 Poor adherence to the ICRP recommendation to conduct measurements using two dosimeters, one worn above and the other underneath the lead apron, as well as irregular use of personal dosimeters, has been emphasized, 16 and it has been reported that appropriate dosimetry is essential to provide reasonable estimations of dose to the lens of the eye. [17][18][19] There has been increasing concern over recent epidemiological evidence suggesting that radiation-induced cataracts can occur at much lower doses than previously assumed. [20][21][22] Staff involved in fluoroscopic CV procedures have demonstrated an elevated incidence of radiation-associated lens changes. 16,21,[23][24][25][26] In response, in 2011 the ICRP recommended reducing the occupational dose limit for the eye from 150 mSv (millisievert) to 20 mSv per year. 27 This has resulted in numerous studies investigating the lens dose received by fluoroscopic operators, but there is very little research evaluating the risk of occupational eye exposure for nursing and allied health staff. 1,11,19 Nurses are an integral part of the team conducting CV procedures, and many cases require staff to stand adjacent to the patient, resulting in inadvertent exposure to radiation. To minimize the risk of exposure, it is vital that the occupational dose to individuals is monitored and quantified. To date, the occupational exposure of nurses within the CV setting is widely unexplored.

1.A | Review objective

The purpose of this review is to provide a current account of research specifically examining occupational dose to nursing staff during x-ray guided CV procedures. It compares the results of publications within procedural contexts, critically reviews the findings, and assesses areas in which further research would be beneficial.

| MATERIALS AND METHODS

A search for relevant literature published between 2011 and 2017 was undertaken between November 2016 and June 2017 to retrieve articles related to occupational radiation dose to nursing staff present during fluoroscopically guided CV procedures. A combination of keywords correlated to occupational radiation dose to nurses was used, i.e.: "nurse occupational dose", "nursing fluoroscopy", "staff fluoroscopy dose", and "occupational fluoroscopy dose". Search terms were purposefully general to ensure that articles which did not explicitly articulate 'cardiovascular' terminology were included in the initial screening for suitability for inclusion in the review. Due to the relatively small number of identified studies, reference lists of located manuscripts were also used to detect additional articles. Due to the rapid advancements in both imaging and procedural equipment in the last decade, searches were limited to those published after 2010 to ensure relevance to current operating practices. A total of thirty potentially relevant articles were identified, and of these, six articles were excluded from the review as the investigated radiation doses to nurses were not directly related to imaging of the CV system, as illustrated in Fig. 1.
2.A | Radiation dose monitoring

It has been demonstrated that the dose to nursing staff during fluoroscopic procedures can be similar to or higher than that received by the physician, [28][29][30] with evidence of an increasing trend toward higher dose levels for nurses working in this environment. 28 It is therefore important to quantify the radiation exposure of individuals working within fluoroscopic departments. [31][32][33] Typically, the devices used to evaluate individual cumulative radiation exposure are personal dosimeters, usually badges worn by occupationally exposed staff during procedures. The ICRP recommends the proper use of personal monitoring badges in interventional fluoroscopic laboratories to monitor and audit occupational radiation dose. 14 There was a variety of styles, anatomical positioning, and calibration of dosimeters utilized in the reviewed literature (Table 2). Active dosimetry systems, such as DoseAware (Philips Medical Systems, Amsterdam, The Netherlands), provide real-time visualization of the radiation dose rate. It consists of a personal dosimeter worn by staff [Fig. 2(a)] … (SD, 0.01). 39 None of these reductions were reported as statistically significant, with one cited explanation being the possibility that the nurses had a restricted view of the readout monitor during cases, but it is acknowledged that real-time dose feedback can be effective in dose reduction. [35][36][37][38][39]

2.B | The effect of equipment and staff location

Radiation scatter is the primary mechanism of operator and staff exposure, and understanding the factors that can affect its magnitude and distribution is essential. 40 As x-ray scatter from the patient is the primary source of radiation dose to in-room personnel, 41 staff location within the fluoroscopy room influences the level of occupational exposure. 1,19,42 In x-ray guided CV procedures, the area of greatest scatter alters as the geometry of the x-ray tube changes (Fig. 3). 43 Nursing staff may undertake several roles within fluoroscopic suites, and the in-room location of the nurse may vary during procedures. In many of the reviewed articles, the role of the nurse was not well defined, and it was unclear whether staff were performing the scrub or scout role, 12,32,35,[44][45][46] and consequently the reported data may represent an average of the dose of both duties during the same procedure. 31 The authors also identified that personal behavior within the fluoroscopic suite alters dose considerably. Depending on their responsibilities during the procedure, nurses may have greater opportunity to deliberately increase their distance from the patient, resulting in a decrease in dose. 1,25,29,39 Some authors investigated dose in relation to proximity to the x-ray tube. 25,34,38,[47][48][49] Explanatory diagrammatic representation of the position of staff was provided in several articles, 25,38,[47][48][49] which allows comparison by dosimetric location rather than assigned role. Specific articulation of staff distances from the x-ray tube or …

2.C | Lead shielding

Lead shielding refers to the use of lead, or lead-equivalent products, to shield staff from radiation. Variations in the accessibility and utilization of lead shielding devices by staff in fluoroscopic suites have been well documented, 50,51 and this is reflected in the reported use of personal protection in the reviewed studies (Table 3). Thyroid shields were either not worn 12,44 or inconsistently worn by staff at some centers. 52
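The protective value of the lead garments and shields discussed in this section comes down to exponential attenuation. The following sketch applies the Beer-Lambert law with an assumed, purely illustrative attenuation coefficient for lead at a typical scatter energy; broad-spectrum clinical scatter and certified garment specifications behave differently, so this is a first-order illustration, not equipment guidance.

```python
import math

def transmitted_fraction(mu_per_cm, thickness_mm):
    """Fraction of a narrow monoenergetic beam transmitted through a
    shield of the given thickness (Beer-Lambert attenuation).  Broad-
    spectrum fluoroscopic scatter attenuates less sharply, so treat
    this as an order-of-magnitude illustration only."""
    return math.exp(-mu_per_cm * thickness_mm / 10.0)

MU_PB = 25.0  # assumed linear attenuation coefficient for lead, 1/cm
for t in (0.25, 0.35, 0.5):  # common lead-equivalent thicknesses, mm
    print(f"{t} mm Pb: {transmitted_fraction(MU_PB, t):.3f} transmitted")
```

Even under these simplified assumptions, sub-millimeter lead equivalence removes most of the incident scatter, which is why consistent use of aprons, thyroid shields, and glasses matters as much as their nominal thickness.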
Only one reviewed article specifically articulated the use of a lead skull cap during fluoroscopic procedures, and it was utilized by the operator only. 11 Lead glasses also had varying degrees of use, with several studies reporting that while doctors routinely used lead eye protection, nursing staff did not. 11,19,44,47,53 Consideration should also be given to the location of lead protection. This may include items such as ceiling-mounted lead glass, table-mounted, or stand-alone lead shields (Fig. 4). This equipment provides a barrier between the scattered radiation from the patient and the staff member, but correct positioning is vital for effective dose minimization. 54 The importance of careful positioning of the movable ceiling-mounted lead shield has been previously reported, 55 especially when using biplane equipment, 56 and this was echoed in the reviewed literature 1,11,19,25,31,32,34,35,46,48,52,53 (Table 2). 1,11,47,48,52 Several studies positioned dosimeters external to protective lenses, 19,44,[46][47][48] … acquisitions 31 and increasing staff distance during acquisitions, especially when using large tube angles. 31,38 Adequate staff training and education were also seen as essential, and this was successfully supplemented by using real-time feedback monitors. 34,38 Physicians should also let other in-room staff know of an impending DSA acquisition so that staff know not to approach the patient and to stay behind shielding if possible. 38,63 Research indicates a considerable number of parameters which can cause significant variation in resultant dose levels during fluoroscopic cases, even within the same type of procedure. 1 The Optimization of RAdiation protection for MEDical staff (ORAMED) study also revealed a large variability of practices between cases and workplaces. 56 Given the variation in procedure type, operator, tube geometry, and staff position, correlation of dose conditions between differing procedures proved difficult. This was exacerbated by the different reporting values used by the authors.

2.E | Imaging parameters

The ICRP notes that radiation training may be lacking, which may result in a radiation safety issue for staff as well as patients, 69 and recommends that departments implement an effective optimization program through training and by raising awareness of radiological protection in individuals. 70 The effectiveness of radiation education in reducing dose to staff has been highlighted, 65,66,71 as has the need for radiation training of occupationally exposed nursing staff. 72 Several authors noted that nursing staff are at risk of exceeding recommended dose levels if radiation protection tools are not properly used. Given the variables that exist for nursing staff during fluoroscopic procedures, dose minimization is not as simple as increasing distance from the source of the scattered radiation. Given the invisible nature of radiation, staff should be provided with appropriate information and training to highlight the factors which influence dose, allowing them to become conscious contributors to personal dose minimization.

CONFLICT OF INTEREST

The authors declare no conflict of interest.
$N\bar{N}$ production in $e^{+}e^{-}$ annihilation near the threshold revisited
Production of pp̄ and nn̄ pairs in e+e− annihilation near the threshold of the process is discussed, taking into account the new experimental data that have appeared recently. Since a significant part of these new data was obtained at energies noticeably exceeding the threshold, we also take into account the form factor describing the amplitude of NN̄ pair production at small distances. The effective optical potential, which describes the sharp dependence of the NN̄ production cross sections near the threshold, consists of the central potential for S and D waves and the tensor potential. These potentials differ for the states with isospin I = 0 and I = 1 of the NN̄ pair. The optical potential describes well the NN̄ scattering phases, the cross sections of pp̄ and nn̄ production in e+e− annihilation near the threshold, the electromagnetic form factors G_E and G_M for protons and neutrons, as well as the cross sections of the processes e+e− → 6π and e+e− → K+K−π+π−.

I. INTRODUCTION

A strong energy dependence of the cross sections of baryon-antibaryon and meson-antimeson pair production has been observed in many processes near the thresholds of the corresponding reactions. Some of these processes are e+e− → pp̄ [1][2][3][4][5][6][7][8], e+e− → nn̄ [9][10][11], e+e− → Λ(c)Λ̄(c) [12][13][14][15], e+e− → BB̄ [16], and e+e− → φΛΛ̄ [17]. This anomalous behavior can naturally be explained by the small relative velocities of the produced particles, which allow them to interact strongly with each other for a sufficiently long time. As a result, the wave function of the produced pair changes significantly (the so-called final-state interaction). The idea of the final-state interaction as a source of the anomalous energy dependence of the cross sections near the thresholds has been expressed in many papers [18][19][20][21][22][23][24][25][26][27][28], although the technical approaches used in these papers differ. It has turned out that in almost all cases the anomalous behavior of the cross sections is successfully described by the final-state interaction. Unfortunately, information on the potentials responsible for the final-state interaction is very limited. However, instead of trying to derive these potentials from first principles, one can use effective potentials described by a small number of parameters. These parameters are found by comparing the predictions with a large amount of experimental data. Such an approach has justified itself in all known cases. One of the most complicated processes to investigate is NN̄ pair production in e+e− annihilation near the threshold. To describe the process, it is necessary to take into account the central part of the potential for S and D waves and the tensor part of the potential. In addition, these potentials are different in the isoscalar and isovector channels. Another circumstance that must be taken into account is the large number of NN̄ annihilation channels to mesons. As a result, instead of the usual real potentials, one has to use so-called optical potentials containing imaginary parts. Note that in a narrow region near the thresholds of pp̄ and nn̄ production, the Coulomb interaction of p and p̄ should also be taken into account, as well as the proton-neutron mass difference.
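As a toy illustration of how a short-range effective potential produces strong energy dependence just above threshold, the sketch below computes the S-wave phase shift of a single-channel, purely real square well via the textbook logarithmic-derivative matching condition. It strips out everything that makes the actual model nontrivial (the imaginary annihilation part, tensor mixing, the pp̄ ↔ nn̄ coupling, and the Coulomb correction), and the well depth and range are arbitrary illustrative numbers.

```python
import numpy as np

HBARC = 197.327   # MeV fm
M_N = 938.9       # average nucleon mass, MeV
MU = M_N / 2.0    # reduced mass of the nucleon-antinucleon pair

def swave_phase(E_mev, U0_mev, a_fm):
    """S-wave phase shift (radians, mod pi) of an attractive square
    well of depth U0 and range a: a single-channel, real-potential
    caricature of the piecewise-constant optical wells used in the
    model.  Matching u'(a)/u(a) inside and outside the well gives
    tan(k a + delta) = (k / K) tan(K a)."""
    k = np.sqrt(2.0 * MU * E_mev) / HBARC             # outside, 1/fm
    K = np.sqrt(2.0 * MU * (E_mev + U0_mev)) / HBARC  # inside the well
    return np.arctan(k / K * np.tan(K * a_fm)) - k * a_fm

# illustrative depth 100 MeV, range 1 fm: the phase varies rapidly
# within a few MeV of threshold, mimicking a near-threshold structure
for E in (1.0, 5.0, 10.0, 20.0):
    print(f"E = {E:5.1f} MeV: delta = "
          f"{np.degrees(swave_phase(E, 100.0, 1.0)) % 180.0:6.1f} deg")
```

The rapid variation of the phase (and hence of the wave function at small distances) over a few MeV is the single-channel analogue of the near-threshold enhancement that the full coupled-channel optical potential describes.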
The details of the approach that allows one to solve this problem are given in our paper [24]. However, in that paper the parameters of the potentials and the corresponding predictions for various characteristics of the processes were based on the old experimental data on the production of pp̄ and nn̄ pairs. Moreover, a significant part of the uncertainty in the parameters of the model was related to the poor accuracy of the experimental data on the cross section of nn̄ pair production. Recently, new data have appeared on nn̄ pair production in e+e− annihilation near the threshold [10,11]. These data differ significantly from the previous ones and have a fairly high accuracy compared to the previous experiments. Therefore, it became necessary to perform a new analysis of the numerous experimental data within our model. The approach of Ref. [24] was based on the assumption that the amplitude of hadronic-system production at small distances depends weakly on the energy of the system near the threshold of the process. Therefore, in Ref. [24] this amplitude was considered energy independent, and the strong energy dependence of the cross section appeared via the energy dependence of the wave function due to the final-state interaction. In order to use the new data obtained at energies significantly above the threshold (but still in the non-relativistic approximation), in the present paper we introduce a phenomenological dipole form factor which describes the amplitude of hadronic-system production at small distances. The aim of the present work is the analysis of NN̄ real and virtual pair production in e+e− annihilation with the new experimental data taken into account. We show that our model, which contains a relatively small number of parameters, successfully describes the energy dependence of the NN̄ scattering phases (see Ref. [29] and references therein), the energy dependence of the cross sections of pp̄ and nn̄ pair production near the threshold [1][2][3][4][5][6][7][8][9][10][11], the electromagnetic form factors G_E and G_M for protons and neutrons in the time-like region [1][2][3][4][5]8], as well as the anomalous behavior of the cross sections of the processes e+e− → 6π [6,[30][31][32] and e+e− → K+K−π+π− [6,33,34].

II. DESCRIPTION OF THE MODEL

The wave function of the NN̄ system produced in e+e− annihilation through one virtual photon contains four components, namely, the pp̄ pair in S and D waves and the nn̄ pair in S and D waves. It is necessary to take into account the pp̄ and nn̄ pairs together in the wave function due to the charge-exchange processes pp̄ ↔ nn̄. The contributions of the S and D waves must be taken into account together due to the tensor potential, which, for the total angular momentum J = 1 and the total spin s = 1, leads to mixing of the states with orbital angular momenta L = 0 and L = 2. In the absence of the effects violating isotopic invariance (the Coulomb pp̄ interaction and the proton-neutron mass difference), the potential in the states with a certain isospin I = 0, 1 has the form of Eq. (1), where s is the spin operator of the NN̄ pair (s = 1), n = r/r, and r = r_N − r_N̄. The potentials V_S^I(r), V_D^I(r), and V_T^I(r) correspond to the interaction in the states with L = 0 and L = 2, as well as to the tensor interaction. Taking into account the effects violating isotopic invariance, we have to solve not two independent systems for each isospin but one system of equations, Eq. (2), for the four-component wave function Ψ (see Ref.
[24] for more details), where Ψ^T denotes the transposition of Ψ, (−p_r²) is the radial part of the Laplace operator, u_p(r), w_p(r) and u_n(r), w_n(r) are the radial wave functions of the pp̄ or nn̄ pair with L = 0 and L = 2, respectively, m_p and m_n are the proton and neutron masses, E is the energy of the system counted from the pp̄ threshold, and ħ = c = 1. In Eq. (2), V is the 4 × 4 matrix which accounts for the pp̄ interaction and the nn̄ interaction as well as the transitions pp̄ ↔ nn̄. This matrix can be written in block form, with matrix elements in which α is the fine-structure constant and I is the 2 × 2 unit matrix. Equation (2) has four linearly independent solutions Ψ_iR (i = 1–4) regular at r → 0, with asymptotics at r → ∞ given in [24]. The proton and neutron electromagnetic form factors are expressed in terms of the components of these wave functions. Here F_D(q) is the phenomenological dipole form factor that takes into account the energy dependence of the amplitude of hadronic-system production at small distances, u_iR^p(0) and u_iR^n(0) are the energy-dependent components of the wave function at r = 0, and g_p and g_n are energy-independent fitting parameters. The cross sections of pp̄ and nn̄ pair production, which we refer to as the elastic cross sections, are expressed in terms of these form factors. In the absence of the final-state interaction, we have u_1R^p(0) = u_3R^n(0) = 1, and the remaining u_iR^p(0) and u_iR^n(0) vanish. The functions u_3R^p(0) and u_1R^n(0) differ from zero due to the charge-exchange process, while the nonzero values of u_2R^p(0), u_2R^n(0), u_4R^p(0), and u_4R^n(0) are a consequence of the tensor forces. Note that |G_E^p/G_M^p| and |G_E^n/G_M^n| differ from unity solely due to the tensor forces. For E = 0 these ratios are equal to unity, since at the threshold the contribution of the D wave vanishes. In addition to the strong energy dependence of the cross sections σ_el^p and σ_el^n near the threshold, a strong energy dependence is also revealed in the cross sections of meson production in e+e− annihilation near the NN̄ pair production threshold [6,[30][31][32][33][34]. Such behavior is related to the production of a virtual NN̄ pair below and above the threshold, with the subsequent annihilation of this pair into mesons. Since the probability of virtual NN̄ pair production depends strongly on energy, the probability of meson production through the intermediate NN̄ state also depends strongly on energy, while the probability of meson production through other mechanisms has a weak energy dependence near the NN̄ threshold. To find the cross section σ_in^I of meson production through the NN̄ intermediate state (the inelastic cross section) with a certain isospin I, one can use the optical theorem. Due to this theorem, the cross sections σ_tot^I = σ_el^I + σ_in^I are expressed via the imaginary part of the Green's function D(r, r′|E) of the Schrödinger equation. The cross sections σ_el^I have an analogous representation. The Green's function satisfies the corresponding inhomogeneous equation and is expressed in terms of the regular and irregular solutions of the Schrödinger equation (2) (see Ref. [24] for details). Here τ_1,2 are the isospin Pauli matrices for the nucleon and antinucleon, respectively. Therefore, the potentials V_S,D,T^I in Eq. (1) take the corresponding isospin-combined form.
In our model, we use the simplest parametrization of the potentials U^I(r) in terms of the Heaviside function θ(x), where U_i^I, W_i^I, and a_i^I are free real parameters fixed by fitting the experimental data, and U_i^π(r) are the terms of the pion-exchange potential (see, e.g., [35]). To fit the parameters of our model, we use the following experimental data: the NN̄ scattering phases obtained by the Nijmegen group (see Ref. [29] and references therein), the cross sections of pp̄ and nn̄ production near the threshold [2-6, 10, 11], the moduli of the electromagnetic form factors |G_E^p| and |G_M^p| [4], as well as the ratios |G_E^p/G_M^p| [2-5, 8] and |G_E^n/G_M^n| [11]. The resulting values of the parameters are given in Table I. For these parameters we obtain χ²/N_df = 98/85, where N_df is the number of degrees of freedom. Fig. 1 shows a comparison of our predictions for the partial cross sections of pp̄ scattering with the results of the partial wave analysis [29]. Fig. 2 shows the energy dependence of the pp̄ and nn̄ pair production cross sections. Fig. 3 shows |G_E^p| and |G_M^p|, as well as the ratios |G_E^p/G_M^p| and |G_E^n/G_M^n|. Good agreement of the predictions with the available experimental data is seen everywhere. As mentioned above, the optical theorem allows one to predict the contributions σ_in^I to the cross sections of meson production in e+e− annihilation associated with NN̄ pairs in an intermediate state. In Fig. 4 the cross sections σ_tot^I, σ_el^I, and σ_in^I are shown. It can be seen that in the channel with I = 1 there is a large dip in the cross section σ_in^1 at the threshold of real NN̄ pair production, while in the channel with I = 0 this dip is practically invisible. The experimental data are taken from BABAR [2], CMD-3 [3], SND [11], and BESIII [4,5,8]. A dip was found in the cross sections of the processes e+e− → 3(π+π−) [6,30,31], e+e− → 2(π+π−)π0 [30,32], and e+e− → K+K−π+π− [6,33,34]. Since in our approach we cannot predict the cross sections in each channel, for comparison of our predictions with the experimental data we use the following procedure. We assume that the strong energy dependence of the meson-production cross sections in each channel near the NN̄ threshold is related to the strong energy dependence of the amplitude of virtual NN̄ pair production in an intermediate state. We also suppose that the amplitudes of virtual NN̄ pair transitions to specific meson states depend weakly on energy near the threshold of NN̄ production. Evidently, other contributions to the meson-production cross sections, which are not related to NN̄ in an intermediate state, also have a weak energy dependence. Therefore, we approximate the cross section σ_mesons^I of meson production in a state with a certain isospin by the function (13), where a, b, c, and d are fitting parameters which depend on the specific final states. The 6π final state has isospin I = 1 due to G-parity conservation. A comparison of our predictions for the 6π production cross section with the experimental data is shown in Fig. 5. For these processes the fit shows that we can set b = 0, and the remaining parameters are a = 0.14, c = 3.3·10⁻³ nb/MeV, d = 0.84 nb for 3(π+π−) production and a = 0.4, c = 2·10⁻³ nb/MeV, d = 3.8 nb for the 2(π+π−)π0 case. It can be seen that there is good agreement between our predictions and the experimental data.
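The quoted units of b (nb/MeV²), c (nb/MeV), and d (nb) suggest that the fitting function combines the model inelastic cross section, scaled by the dimensionless parameter a, with a smooth polynomial background. The sketch below fits such a form with SciPy to synthetic data; the explicit expression is our inference from the units and need not coincide with the exact published Eq. (13), and the stand-in σ_in curve (with a dip at threshold) is invented purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# toy inelastic cross section with a dip at the NNbar threshold (E = 0),
# standing in for the model curve sigma_in^1(E); units: nb, E in MeV
sigma_in_toy = lambda E: 30.0 * (E / 40.0) ** 2 / (1.0 + (E / 40.0) ** 2)

def sigma_mesons(E, a, b, c, d):
    """Assumed shape of the meson-production cross section near the
    NNbar threshold: the inelastic cross section through the virtual
    NNbar intermediate state scaled by a, plus a smooth polynomial
    background b*E^2 + c*E + d (an inference from the quoted units)."""
    return a * sigma_in_toy(E) + b * E**2 + c * E + d

E = np.linspace(-80.0, 80.0, 60)           # MeV, relative to threshold
data = sigma_mesons(E, 0.14, 0.0, 3.3e-3, 0.84)
data += np.random.default_rng(2).normal(0.0, 0.05, E.size)  # toy noise

popt, pcov = curve_fit(sigma_mesons, E, data, p0=(0.1, 0.0, 1e-3, 1.0))
print("fitted a, b, c, d:", np.round(popt, 4))
```

Because the background varies slowly, essentially all of the sharp structure near E = 0 is absorbed by the a·σ_in term, which is the logic behind attributing the observed dips to the virtual NN̄ intermediate state.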
Consider now the process e+e− → K+K−π+π−. Unlike the 6π state, the K+K−π+π− state may be in both isospin states, I = 1 and I = 0. Since our calculations show that the cross section σ_in^0 has no sharp energy dependence near the NN̄ threshold, the contribution of the state with I = 0 can be absorbed into the parameters b, c, and d. Thus, we can compare the cross section of the process e+e− → K+K−π+π− with formula (13) for I = 1. The fitting parameters for this process are a = 0.11, b = −6.1·10⁻⁵ nb/MeV², c = 1.7·10⁻³ nb/MeV, d = 4.2 nb. A comparison of our predictions with the experimental data is also shown in Fig. 5. Again, there is good agreement between our predictions and the experimental results.

Figure 5. The energy dependence of the cross sections for the processes e+e− → 3(π+π−), e+e− → 2(π+π−)π0, and e+e− → K+K−π+π−. The experimental data are taken from Refs. [6,30,31], [30,32], and [6,33,34], respectively.

IV. CONCLUSION

Using the new experimental data on the production of pp̄ and nn̄ pairs in e+e− annihilation, a simple model is suggested that successfully describes the cross sections of several processes with the production of real or virtual NN̄ pairs. These processes are e+e− → pp̄, e+e− → nn̄, e+e− → 6π, and e+e− → K+K−π+π− near the NN̄ production threshold. Moreover, the model describes well the energy dependence of the partial cross sections for nucleon-antinucleon scattering in the states with L = 0, 2, s = 1, and J = 1, as well as the electromagnetic form factors of the proton and neutron in the time-like region. Since the new experimental data were obtained at energies noticeably exceeding the NN̄ production threshold, an effective dipole form factor was introduced, which accounts for the energy dependence of the amplitude of real or virtual NN̄ pair production at small distances. Since the new data on nn̄ production have noticeably better accuracy than the previous ones, our predictions have become more accurate. The analysis of meson production in different channels shows that the strong energy dependence of the meson-production cross sections near the NN̄ threshold is related solely to the strong energy dependence of the amplitude of virtual NN̄ pair production in an intermediate state.
Successful Improvement of Metabolic Disorders, Including Osteopenia, by a Dopamine Agonist in a Male Patient with Macro-Prolactinoma
Patient: Male, 43
Final Diagnosis: Prolactinoma
Symptoms: —
Medication: —
Clinical Procedure: Treatment with a dopamine agonist
Specialty: Endocrinology and Metabolism
Objective: Unknown etiology

Background: Bone metabolic disorders in patients with prolactinoma have not been fully characterized. The case presented herein illustrates potential causal associations between prolactinoma and osteopenia, with reversal of the disorder by treatment with a dopamine agonist.

Case Report: A 43-year-old male with a macro-prolactinoma [PRL 7770 ng/mL] was referred to our hospital. He was overweight [body mass index (BMI) 29.4 kg/m²] and had impaired glucose tolerance, hypertriglyceridemia, and osteopenia. The patient was administered cabergoline, a dopamine D2 receptor agonist, and the dose was gradually increased up to 9 mg/week over the period of 1 year. One year later, the patient's serum PRL levels had decreased to within the normal range (19.1 ng/mL), and his pituitary tumor mass had decreased to 1/4 of its initial size. His weight, dyslipidemia, and impaired glucose tolerance improved within 1 year. A marked increase in bone mineral density (BMD) at the second to fourth lumbar spine (from 0.801 g/cm² to 0.870 g/cm², +8.6%) and at the femoral neck (from 0.785 g/cm² to 0.864 g/cm², +10.1%) was observed despite the presence of unresolved hypogonadism.

Conclusions: Treatment with dopamine agonists represents a beneficial strategy for patients with prolactinoma accompanied by bone loss, in addition to its established efficacy in shrinking pituitary tumors, normalizing PRL levels, and improving metabolic disorders.

Background

Recent studies have shown that prolactinoma is associated with metabolic disorders, such as obesity, dyslipidemia, and glucose intolerance [1][2][3][4], and that these metabolic disorders are improved by treatment with dopamine agonists [1,4]. Patients with prolactinoma, particularly male patients, have a high prevalence of osteoporosis or osteopenia compared to subjects without prolactinoma [5]. However, bone metabolism in these patients has not been fully characterized. We report a patient diagnosed with macro-prolactinoma accompanied by osteopenia in addition to the known metabolic disorders, including dyslipidemia, hyperglycemia, and being overweight, all of which were successfully improved by treatment with a dopamine agonist.

Case Report

A 43-year-old male patient with a large pituitary tumor was referred to our hospital. He had suffered from headaches for 6 months. Magnetic resonance imaging (MRI) of his head was performed, which revealed an enlarged pituitary gland with a … His osteopenia was diagnosed based on decreased BMD of the second to fourth lumbar spine (L2-4) and the femoral neck (FN) (T-score: -2.1 SD and -0.6 SD, respectively). An oral glucose tolerance test (OGTT) confirmed impaired glucose tolerance, with a 120-min glucose level of 213 mg/dL; however, HbA1c was normal (5.2%). According to these clinical findings, he was diagnosed with macro-prolactinoma, hypogonadotropic hypogonadism, osteopenia, dyslipidemia, and impaired glucose tolerance. The dopamine agonist cabergoline was administered at an initial dose of 0.5 mg/week, which was then increased by 0.75 mg/week every month, up to 9 mg/week over the period of a year.
It has been reported that cavernous sinus invasion and male sex are associated with dopamine agonist resistance [6], suggesting that our patient, with a markedly high PRL level and a large tumor, could be resistant to treatment with dopamine agonists and that his PRL level might not be normalized by the usual doses of cabergoline. Ono et al. have shown that high-dose cabergoline treatment of prolactinoma (highest dose, 12 mg/week) is effective in poor responders [7]. Therefore, we treated our patient with these high doses of cabergoline. No adverse effects of cabergoline, such as nausea or appetite loss, were observed. One year later, his serum PRL levels had decreased to within the normal range (19.1 ng/mL) (Figure 2), and the size of the pituitary tumor had decreased to one-quarter of its initial size (Figure 1C, 1D). His serum TG levels gradually improved after administration of a low dose of cabergoline (Figure 2). His free testosterone level (6.2 ng/mL) and peak LH and FSH concentrations after LHRH loading (10.2 mIU/mL at 30 min and 7.6 mIU/mL at 90 min) after 1 year of treatment with cabergoline indicated that his hypogonadotropic hypogonadism had not sufficiently improved. However, his BMD values were markedly increased at L2-4 (from 0.801 g/cm² to 0.870 g/cm², +8.6%) and at the femoral neck (from 0.785 g/cm² to 0.864 g/cm², +10.1%) (Figure 3A), accompanied by an increase in bone turnover markers (bone alkaline phosphatase (BAP): 53.6 to 74.2 µg/L; urinary NTX: 38.5 to 56.3 nMBCE/mMCr) (Figure 3B). His body weight decreased from 84.0 kg to 74.1 kg over the year (Figure 2). His glucose level at 120 min after the 75-g OGTT improved (129 mg/dL), and his hyperinsulinemia was also ameliorated (Figure 4). Three years later, the BMD at L2-4 and the femoral neck had changed only slightly (0.881 g/cm² and 0.855 g/cm², respectively), while the bone turnover markers had decreased to near-normal levels (BAP: 23.5 µg/L; urinary NTX: 42.3 nMBCE/mMCr).

Discussion

As previous studies have demonstrated, our patient had metabolic disorders, i.e., dyslipidemia, glucose intolerance, and being overweight [1][2][3][4], which were markedly improved by cabergoline treatment [4]. A notable finding in this case was the marked increase in BMD after treatment with a dopamine agonist. To the best of our knowledge, this is the first case report of reversed osteopenia in a patient with prolactinoma treated with cabergoline. Hypogonadotropic hypogonadism is a well-known complication in patients with hyperprolactinemia and causes increased bone turnover compared to subjects without prolactinoma [8]. Men with prolactinoma have a high prevalence of osteopenia and osteoporosis, as determined by bone mineral density [5], and this bone loss in patients with prolactinoma is primarily caused by hypogonadism [5]. However, our case showed that 1 year of treatment with a dopamine agonist markedly increased BMD without a corresponding reversal of hypogonadism. This observation demonstrates that hypogonadism was not the dominant cause of decreased BMD in our patient. Several in vitro studies have shown that osteoblasts express the PRL receptor [9] and that PRL administration suppresses mRNA levels of Runt-related transcription factor 2 (Runx2), a key transcription factor for osteoblastic differentiation [10], alkaline phosphatase activity [11], and osteocalcin mRNA expression [11].
These findings suggest that PRL may directly impair bone formation via inhibition of the proliferation and differentiation of osteoblasts. Furthermore, PRL promotes mRNA expression of osteoclast-differentiating factors, including receptor activator of nuclear factor-kappa B ligand (RANKL), and inhibits osteoclastogenesis-inhibitory factors, such as osteoprotegerin [11], which acts as a decoy receptor for RANKL. These reports imply that PRL indirectly promotes bone resorption through osteoblast-derived factors. Mice that are homozygous for a deletion of the dopamine transporter exhibit decreased bone mass and deteriorated bone strength compared to wild-type animals [12]. Mutant mice deficient in dopamine β-hydroxylase, which converts dopamine to norepinephrine, have increased BMD [13], which may indicate that dopamine enhances osteogenesis. In our case, treatment with a dopamine agonist improved bone mineral density concomitantly with increased BAP and urinary NTX levels, suggesting that the treatment primarily accelerates bone formation by decreasing PRL levels. In a manner tentatively similar to that observed in mice [11], cabergoline might have contributed to this increase in bone density.

Conclusions

The generally overlooked bone metabolic disorders, as well as the classical characteristics of prolactinoma, were remarkably reversed by treatment with a dopamine agonist. In vitro studies have demonstrated that prolactin and dopamine affect bone metabolism. These observations suggest that dopamine agonists might also favorably affect bone loss in patients with prolactinoma.
Ephedrine alkaloids-free Ephedra Herb extract: a safer alternative to ephedra with comparable analgesic, anticancer, and anti-influenza activities
It is generally accepted that the primary pharmacological activities and adverse effects of Ephedra Herb are caused by ephedrine alkaloids. Interestingly, our research shows that Ephedra Herb also has ephedrine alkaloid-independent pharmacological actions, such as c-Met inhibitory activity. This study describes the preparation of an ephedrine alkaloids-free Ephedra Herb extract (EFE) by ion-exchange column chromatography, as well as in vitro and in vivo evaluation of its pharmacological actions and toxicity. We confirmed that EFE suppressed hepatocyte growth factor (HGF)-induced cancer cell motility by preventing both HGF-induced phosphorylation of c-Met and its tyrosine kinase activity. We also investigated the analgesic effect of EFE. Although the analgesic effect of Ephedra Herb has traditionally been attributed to pseudoephedrine, oral administration of EFE reduced formalin-induced pain in a dose-dependent manner in mice. Furthermore, we confirmed the anti-influenza virus activity of EFE by showing inhibition of MDCK cell infection in a concentration-dependent manner. All assessments of toxicity, even after repeated oral administration, suggest that EFE would be a safer alternative to Ephedra Herb. The findings described here suggest that EFE has c-Met inhibitory action, an analgesic effect, and anti-influenza activity, and that it is safer than Ephedra Herb extract itself. Therefore, EFE could be a useful pharmacological agent.

Introduction

Ephedra Herb is a crude drug containing ephedrine alkaloids and is used in Japan as a component of many Kampo formulae, including maoto, kakkonto, shoseiryuto, and eppikajutsubuto. Ephedra Herb is defined in the sixteenth edition of the Japanese Pharmacopoeia (JP) as the terrestrial stem of Ephedra sinica Stapf, Ephedra intermedia Schrenk et C.A. Meyer, or Ephedra equisetina Bunge (Ephedraceae), with an ephedrine alkaloid (ephedrine and pseudoephedrine) content greater than 0.7% [1]. Ephedra Herb has anti-inflammatory [2], analgesic, anti-influenza [3], and anti-metastatic effects [4]. However, because ephedrine alkaloids stimulate both sympathetic and parasympathetic nerves, Ephedra Herb has some adverse effects, including palpitations, hypertension, insomnia, and dysuria. The Food and Drug Administration (FDA) of the United States banned the sale of dietary supplements containing ephedra plants in 2004 because of health risks [5]. Since Professor Nagayoshi Nagai reported ephedrines to be the active constituents of Ephedra Herb [6], most of the pharmacological actions of Ephedra Herb have been attributed to ephedrine alkaloids, although the plant contains other constituents, such as phenolics and tannins [7]. Therefore, the adverse effects caused by ephedrine alkaloids have been thought to be an unavoidable consequence of the pharmacological effects of Ephedra Herb. Our previous research found that maoto, an Ephedra Herb-containing formulation, suppressed cancer metastasis by inhibiting cancer cell motility [8,9] and prevented hepatocyte growth factor (HGF)-induced cancer cell motility by inhibiting phosphorylation of the c-Met receptor. Our studies confirmed that the c-Met inhibitory activity of maoto derives from Ephedra Herb, which impairs HGF-induced cancer cell motility by suppressing the HGF-c-Met signaling pathway through inhibition of c-Met tyrosine kinase activity [4].
HGF-c-Met signaling regulates several cellular processes, including cell proliferation, invasion, scattering, survival, and angiogenesis. Dysregulation of HGF-c-Met signaling promotes tumor formation, growth, progression, metastasis, and therapeutic resistance [10,11]. Therefore, Ephedra Herb may have applications in cancer therapy as a novel c-Met inhibitor. Moreover, we have discovered that Ephedra Herb contains herbacetin 7-O-neohesperidoside and herbacetin 7-O-glucoside [12]. Herbacetin, the aglycone of these herbacetin glycosides, inhibits HGF-induced cell migration and phosphorylation of c-Met [13]. These findings suggest that herbacetin glycosides are bioactive constituents of Ephedra Herb that may be responsible for its pharmacological actions not mediated by ephedrine alkaloids. However, the c-Met inhibitory activity of Ephedra Herb extract cannot be explained by herbacetin glycosides alone, because the herbacetin glycoside content of Ephedra Herb extract is less than 0.1% [14]. Moreover, we confirmed that ephedrine had no effect on HGF-c-Met signaling [15]. Therefore, we predicted that the c-Met inhibitory activity may be produced by the non-alkaloidal fraction of Ephedra Herb extract, which contains herbacetin glycosides and other bioactive molecules that produce synergistic effects. The non-alkaloidal fraction of Ephedra Herb would be useful for cancer patients, because the adverse effects caused by ephedrine alkaloids are avoided. We utilized ion-exchange column chromatography to eliminate ephedrine alkaloids from Ephedra Herb extract, resulting in an ephedrine alkaloids-free Ephedra Herb extract (EFE) [14]. In this study, we report the pharmacological and toxicological properties of EFE.

Preparation of EFE and Ephedra Herb extract

Preparation of EFE and Ephedra Herb extract was carried out as described by Oshima et al. [14]. Ephedra Herb (200 g, E. sinica, Japanese Pharmacopoeia grade) was added to water (2000 ml), extracted at 95°C for 1 h, and filtered, after which the residue was washed with water (200 ml). The extract was centrifuged at 1800g for 10 min, after which half of the supernatant was concentrated under reduced pressure to obtain Ephedra Herb extract (14.1 g), while the other half was passed directly through DIAION™ SK-1B ion-exchange resin (100 ml), which had been treated with 1 M HCl (30 ml) and water (100 ml) prior to use, and then washed with water (100 ml). The unadsorbed fraction (1100 ml) was adjusted to pH 5 using 5% NaHCO₃ aq. (60 ml), and the solution was then evaporated under reduced pressure to obtain EFE (11.8 g).

LC-PDA analysis of Ephedra Herb extract and EFE

One milliliter of methanol was added to 50-mg samples of Ephedra Herb extract and EFE, which were exposed to ultrasonic waves for 30 min and centrifuged. The supernatants were filtered through 0.45-µm membrane filters, after which 20 µl of each sample was subjected to LC-PDA analysis.

Trans-well migration assay

MDA-MB-231 cells (5 × 10⁴ cells/100 µl) were suspended in 100 µl DMEM containing EFE (10, 20, or 40 µg/ml), Ephedra Herb extract (40 µg/ml), or SU11274 (5 µM). The cells were poured into the upper well of a trans-well permeable support system (Corning Inc., Acton, MA, USA). DMEM (600 µl) containing 50 ng/ml HGF was added to the lower well of the trans-well system, which was incubated for 20 h at 37°C. Finally, the number of cells that had migrated from the upper layer to the lower well was counted.
Detection of phosphorylated c-Met (p-Met), c-Met, and GAPDH

MDA-MB-231 cells (2 × 10⁶ cells/4 ml) were incubated in 4 ml of 10% FCS-DMEM for 48 h, washed three times with DMEM, and incubated in 4 ml DMEM for 24 h. After the cells were washed three times with DMEM, they were incubated for 15 min at 37°C in 4 ml DMEM or DMEM containing 50 ng/ml of HGF, with or without 0.5, 1, or 10 µg/ml EFE, 10 µg/ml Ephedra Herb extract, or 5 µM SU11274. After the cells were washed three times with cold PBS without Ca²⁺ and Mg²⁺ (PBS(−)), they were treated with 1 ml Complete Lysis-M with phosphatase inhibitor (Roche Diagnostics Co., Indianapolis, IN, USA) for 5 min on an ice bath. The lysates were collected and centrifuged, after which the supernatants were incubated with 5× sodium dodecyl sulfate (SDS) loading buffer for 5 min at 95°C. The lysates were separated by SDS-polyacrylamide gel electrophoresis (PAGE) and electrotransferred to a polyvinylidene difluoride (PVDF) membrane. The membrane was blocked at room temperature for 1 h with 5% non-fat dry milk in Tris-buffered saline (10 mM Tris-HCl, pH 7.5, 100 mM NaCl) containing 0.1% Tween 20 (TBS-T). After the membrane was washed with TBS-T, it was incubated with anti-p-Met (Tyr1234/1235) mAb (CST #3077), anti-Met mAb (CST #8198), or anti-GAPDH Ab (SC-25778) overnight at 4°C and washed with TBS-T. Horseradish peroxidase-labeled anti-rabbit IgG Ab (CST #7074) was applied for 1 h at room temperature, after which the membranes were washed with TBS-T. The Abs were detected with an enhanced chemiluminescence (ECL) reaction (GE Healthcare Japan, Tokyo, Japan) and imaged using an ImageQuant LAS 4000 mini system (GE Healthcare Japan).

Measurement of c-Met tyrosine kinase activity and determination of IC50 values

Met kinase activity was measured using the ADP-Glo kinase assay kit (Promega, Madison, WI, USA) according to the manufacturer's instructions. Briefly, 10 µl of a reaction mixture containing 2 µg/ml of the recombinant Met kinase domain, 0.2 µg/ml poly(E4Y1), and 10 µM ATP was incubated with Ephedra Herb extract or EFE at room temperature for 60 min. The kinase reactions were terminated by the addition of 10 µl ADP-Glo reagent, after which the resulting mixture was incubated for 40 min at room temperature. Next, 20 µl of Kinase Detection Reagent was added, after which the mixture was incubated for 30 min at room temperature. Luminescence was measured with an EnSpire multi-plate reader (PerkinElmer, Foster City, CA, USA). The experiments were repeated three times. Each IC50 was calculated using a four-parameter logistic model (Prism 5.0, GraphPad Software, San Diego, CA, USA).

Formalin test

ICR male mice (5 weeks of age, 8 mice per group) were orally administered water, 350 mg/kg EFE, or 700 mg/kg Ephedra Herb extract for 3 days. On the third day, paw-licking was induced by intraplantar injection of 20 µl of 2.5% formalin 6 h after extract/water administration. After the injection, the mice were individually placed into a glass cage, in which the amount of time the animal spent licking the injected paw was measured as an indicator of pain. Paw-licking was recorded for 30 min in two phases: the first phase (0-5 min) and the second phase (15-30 min). The protocol for animal experiments was approved by the Ethics Review Committee for Animal Experimentation of the National Institute of Health Sciences.
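The four-parameter logistic model named above for the IC50 estimates is straightforward to fit outside of Prism. The Python sketch below uses SciPy on hypothetical dose-response readings; the parameter names and data points are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic curve: response falls from `top` to
    `bottom` around the midpoint `ic50` with steepness `hill`."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# hypothetical kinase-activity readings (% of control) at rising doses
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])  # ug/ml
activity = np.array([99, 97, 88, 62, 35, 12, 4], float)

popt, _ = curve_fit(four_pl, conc, activity,
                    p0=(0.0, 100.0, 0.5, 1.0), maxfev=5000)
print(f"IC50 ~ {popt[2]:.3f} ug/ml")
```

Fitting all four parameters rather than interpolating between two points makes the IC50 estimate robust to noise at the extremes of the dose range, which is why the logistic model is the standard choice here.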
Evaluation of anti-influenza activity

Madin-Darby canine kidney (MDCK) cells (3 × 10⁴ cells/100 µl) were incubated in 100 µl of 10% FCS-minimal essential medium (MEM) in a 96-well plate for 24 h and washed with MEM. Next, the cells were incubated for 72 h at 37°C in 100 µl of MEM or MEM containing a twofold serial dilution of 10 µM oseltamivir, 50 µg/ml EFE, or 50 µg/ml Ephedra Herb extract, with or without 100 TCID₅₀ of influenza virus A/WSN/33 (H1N1). Living cells were then stained with crystal violet, after which the absorbance (560 nm) of each cell sample was quantified using a microplate reader. These experiments were performed externally by AVSS Corporation. Each IC₅₀ was calculated using a four-parameter logistic model (Prism 5.0, GraphPad Software).

Repeated oral dose toxicity assessment

Specific pathogen-free ICR mice (Crl:CD1) (5 weeks old) were obtained from Charles River Laboratories (Boston, MA, USA). The mice were kept in a laboratory animal facility with temperature and relative humidity maintained at 20-26°C and 30-70%, respectively, a 12-h light-dark cycle, and 8-10 air changes per hour. The mice were housed in polycarbonate cages and offered CE-2 pellet feed (Nippon Formula Feed Mfg. Co., Ltd., Ehime, Japan) and groundwater that was disinfected with 0.5% chlorine and filtered through a 5-µm filter. The mice were acclimated for 7 days before the start of the study. The mice were divided into three groups: water, EFE, or Ephedra Herb extract. Each group included five male mice and five female mice. The dosages of EFE and Ephedra Herb extract were set to 50-fold the human maximum dose of Ephedra Herb extract, which is equivalent to 6 g of the cut crude drug. The doses of EFE and Ephedra Herb extract were 632 mg/kg/day and 755 mg/kg/day, respectively. The mice were orally administered water, EFE, or Ephedra Herb extract once per day for 14 days. Clinical signs and mortality were assessed several times per day. Body weight, food consumption, and water consumption were measured twice per week throughout the experiment. After 14 days, all mice were anesthetized by isoflurane inhalation, after which blood samples were collected from the abdominal aorta. After the collection of the blood samples, the organs were harvested from each mouse and weighed. Colon weight was measured after washing out the colon contents with saline solution.

Statistical analysis

All data are expressed as mean ± standard deviation (SD). Data were analyzed by ANOVA. Significant differences between the control and treatment groups were determined by Student's t test, Dunnett's test, and Tukey's test using GraphPad Prism 5J software (MDF Co., Ltd., Tokyo, Japan). p < 0.05 was considered statistically significant.

Inhibitory effect of EFE on HGF-induced motility of MDA-MB-231 cells

We confirmed the inhibitory effect of EFE on HGF-induced motility of MDA-MB-231 cells using a trans-well permeable support system. HGF (50 ng/ml) significantly induced MDA-MB-231 cell motility; however, this effect was inhibited by 5 µM SU11274, a c-Met-specific inhibitor (Fig. 2a). We previously reported that 40 µg/ml Ephedra Herb extract suppressed the HGF-induced migration of MDA-MB-231 cells [4], so the same concentration was used in this study. The results show that both EFE and Ephedra Herb extract significantly suppressed the HGF-induced motility at a concentration of 40 µg/ml (Fig. 2a).
We also examined the effects of Ephedra Herb extract and EFE on the viability of MDA-MB-231 cells and found that the extracts had no effect on cell viability (Fig. 2b). This indicates that the inhibitory activities of these extracts on HGF-induced motility are independent of cytotoxicity. Subsequently, we investigated the effects of various concentrations of EFE on HGF-induced migration of MDA-MB-231 cells. We found that EFE significantly inhibited the HGF-induced motility of MDA-MB-231 cells in a concentration-dependent manner (Fig. 2c), thus confirming that EFE possesses inhibitory activity against HGF-induced cancer cell motility.

Inhibitory effect of EFE on HGF-induced c-Met phosphorylation and tyrosine kinase activity

HGF binding activates c-Met, which initiates receptor dimerization and auto-phosphorylation of tyrosine residues, propagating downstream signals. Accordingly, we confirmed the inhibitory effect of EFE on HGF-induced phosphorylation of c-Met in MDA-MB-231 cells. Tyrosine phosphorylation of c-Met was induced by HGF (50 ng/ml) and inhibited by 5 µM SU11274 (Fig. 3a). An Ephedra Herb extract concentration of 10 µg/ml was used in this study because we demonstrated in a previous study that 10 µg/ml Ephedra Herb extract suppressed HGF-induced phosphorylation of c-Met [4]. No phosphorylation of c-Met was observed following the addition of 50 ng/ml HGF with 10 µg/ml EFE (Fig. 3a). Moreover, we investigated the effects of various concentrations of EFE on the HGF-induced phosphorylation of c-Met. EFE prevented the HGF-induced phosphorylation of c-Met in a concentration-dependent manner (Fig. 3b). We also investigated the inhibitory activity of EFE on the tyrosine kinase activity of c-Met. Ephedra Herb extract and EFE produced concentration-dependent inhibition of the tyrosine kinase activity of c-Met (Fig. 3c). The IC₅₀ values of Ephedra Herb extract and EFE were 0.887 and 0.530 µg/ml, respectively. These results suggest that EFE suppresses HGF-induced motility by inhibiting the phosphorylation of c-Met through the prevention of its tyrosine kinase activity. In addition, the c-Met inhibitory activity of Ephedra Herb was confirmed to be independent of ephedrine alkaloids.

Effect of EFE on formalin-induced pain in mice

The analgesic effect of Ephedra Herb has traditionally been believed to be mediated by pseudoephedrine [2,16], but we recently found that herbacetin, a component of Ephedra Herb, suppressed formalin-induced pain [17]. Therefore, we examined the analgesic effect of EFE using the formalin test. Ephedra Herb extract and EFE showed no effects during the first phase of the formalin test. Ephedra Herb extract and EFE reduced paw-licking time in a dose-dependent manner during the second phase of the formalin test. The paw-licking time during the second phase was significantly decreased by oral administration of 700 mg/kg Ephedra Herb extract, 350 mg/kg EFE, and 700 mg/kg EFE (Fig. 4). These results reveal that EFE possesses analgesic action.

Effect of EFE on influenza virus infection in MDCK cells

Ephedra Herb has been reported to possess anti-influenza activity [3]. Therefore, we examined the effect of EFE on the survival rate of MDCK cells infected with influenza virus A/WSN/33 (H1N1). Oseltamivir, an anti-influenza drug, suppressed influenza virus infection in MDCK cells in a concentration-dependent manner without causing cytotoxicity (see Supplemental Fig. 1). The IC₅₀ of oseltamivir was 3.49 µM.
Neither Ephedra Herb extract nor EFE affected MDCK cell viability (Fig. 5a), whereas Ephedra Herb extract and EFE prevented cell death caused by influenza virus infection in a concentration-dependent manner (Fig. 5b). The IC₅₀ values of Ephedra Herb extract and EFE were 8.6 µg/ml and 8.3 µg/ml, respectively. These results indicate that EFE retains the anti-influenza activity of Ephedra Herb, and that this activity is not mediated by ephedrine alkaloids.

Safety assessment of EFE

We evaluated the safety of EFE in comparison with that of Ephedra Herb extract and water through a repeated-dose toxicity study. Extract-related death and abnormal clinical signs were not observed in mice treated with EFE or Ephedra Herb extract during the testing period. After 2 weeks, gross abnormalities were not observed in any of the treated mice, and there were no significant differences in mouse weights between the three groups (Table 1). However, the colon weights of the male mice subjected to oral administration of EFE or Ephedra Herb extract were significantly lower than those of male mice treated with water, although the reduction was moderate. Differences in the colon weights of the groups of female mice were not significant. There was no significant difference in the weight of any other tissue among the groups (Table 1). Serum biochemistry and hematological data are shown in Tables 2 and 3, respectively. The groups showed no significant differences in serum biochemistry parameters (Table 2). The PLT of male mice taking Ephedra Herb extract was significantly higher than that of the water group, but there was no significant difference between the PLT of the male EFE-treated group and that of the male water-treated group (Table 3). The WBC of male mice treated with Ephedra Herb extract was significantly lower than that of the male water group, but there was no significant difference between the WBC of the male EFE-treated group and that of the male water-treated group (Table 3). The groups of female mice showed no significant differences in any hematological parameter. These results indicate that EFE may be less toxic than Ephedra Herb extract and could represent a safer alternative.

Fig. 3 (caption): Effects of EFE, Ephedra Herb extract, and SU11274 on HGF-induced phosphorylation of c-Met, and effects of EFE and Ephedra Herb extract on the tyrosine-kinase activity of c-Met. a MDA-MB-231 cells were incubated in DMEM, DMEM containing 50 ng/ml HGF, or DMEM containing 50 ng/ml HGF with 10 µg/ml EFE, 10 µg/ml Ephedra Herb extract, or 5 µM SU11274 for 15 min at 37°C. Tyrosine phosphorylation of c-Met was determined by immunoprecipitation and Western blot analysis. b MDA-MB-231 cells were incubated in DMEM containing 50 ng/ml of HGF with 0, 0.5, 1, 5, or 10 µg/ml of EFE for 15 min at 37°C. The level of tyrosine phosphorylation of c-Met in the cells was determined by immunoprecipitation and Western blot analysis. c The kinase activity of c-Met was measured using the ProfilerPro kit. A recombinant c-Met kinase domain was pre-incubated with and without a twofold serial dilution of 8 µg/ml EFE or Ephedra Herb extract at 28°C for 15 min. The fluorescence-labeled peptide substrate, 1.5 µM 5-carboxyfluorescein-EAIYAAPFAKKK-NH₂, and 79.5 µM ATP were added, followed by incubation at 28°C for 90 min. The kinase reactions were terminated by the addition of 3 mM EDTA. Phosphorylated peptides were separated from substrate peptides and quantified using a LabChip 3000.

Fig. 4 (caption): Effects of EFE and Ephedra Herb extract on formalin-induced pain. ICR mice were treated orally with water, 350 mg/kg EFE, 700 mg/kg EFE, or Ephedra Herb extract for 3 days. On the third day of treatment, formalin tests were performed 6 h after drug or placebo administration. The amount of time that each animal spent licking the injected paw was recorded for 30 min in two phases, the first (0-5 min) and second (15-30 min) phases. Statistical significance was determined by Dunnett's test. *p < 0.05 or **p < 0.01 vs. control.

Discussion

Administration of Kampo medicines containing Ephedra Herb is contraindicated for patients with hypertension or cardiomyopathies, while administration of these medicines to elderly patients or those with extreme sensitivity to Ephedra Herb requires special attention. The primary effects and adverse effects of Ephedra Herb have traditionally been believed to be mediated by ephedrine alkaloids, because ephedrine alkaloids are structurally similar to adrenaline and stimulate both sympathetic and parasympathetic neurons. However, recent data suggest that Ephedra Herb contains active ingredients other than ephedrine alkaloids, such as herbacetin glycosides [12], and possesses ephedrine alkaloid-independent pharmacological actions [13]. We hypothesized that several pharmacological actions of the herb would remain after removing the ephedrine alkaloids from it. In the present study, we show that EFE has a c-Met inhibitory action, an analgesic effect, and anti-influenza activity, without toxicity.

We have previously reported that Ephedra Herb suppresses HGF-induced cancer cell motility by preventing c-Met phosphorylation via inhibition of its tyrosine kinase activity [4]. Our findings suggest that Ephedra Herb could be utilized as a novel type of c-Met inhibitor in c-Met-expressing cancer patients. However, Ephedra Herb is not suitable for cancer patients, because they do not have sufficient physical strength to tolerate its associated adverse effects. In this study, EFE exhibited efficacy as a c-Met inhibitor similar to that of Ephedra Herb extract, indicating that the c-Met inhibitory activity of Ephedra Herb is not derived from ephedrine alkaloids. Therefore, EFE or pseudo-Kampo medicines containing EFE instead of Ephedra Herb could be utilized as treatments for c-Met-expressing cancer patients, because ephedrine alkaloid-induced side effects should not limit their use.

Kampo medicines containing Ephedra Herb, such as eppikajutsubuto, makyoyokukanto, kakkonto, and maoto, are used to treat myalgia, arthralgia, and rheumatism. The analgesic effects of these Kampo medicines are explained by the anti-inflammatory action of pseudoephedrine, a constituent of Ephedra Herb. Ephedra Herb has been reported to inhibit acute inflammation [2], and its main anti-inflammatory action is thought to be carried out by pseudoephedrine through its inhibition of prostaglandin E2 biosynthesis [16]. However, we have recently found that herbacetin, a component of Ephedra Herb, suppressed formalin-induced pain via inhibition of NGF-TrkA signaling [17]. Formalin injection induces two distinct phases of pain. In the first phase of formalin-induced pain, neurogenic pain is caused by direct activation of type C fibers in nociceptive nerve endings, which release substance P, glutamate, and bradykinin, among other pain mediators.
Non-steroidal anti-inflammatory agents (NSAIDs) such as aspirin and diclofenac are ineffective against the first phase of the formalin test [19,20]. The second phase of formalin-induced pain occurs through dorsal horn neuronal activation at the spinal cord level and is characterized as inflammatory pain related to the release of chemical mediators such as histamine, serotonin, bradykinin, prostaglandins, and excitatory amino acids [19,21]. The pain associated with the second phase of the formalin test is suppressed by NSAIDs. Central analgesics, such as morphine, inhibit the pain associated with both the first and second phases of the formalin test. EFE reduced the second phase of formalin-induced pain in the same manner as Ephedra Herb, suggesting that EFE acts on inflammatory pain, while indicating that the analgesic effect of Ephedra Herb is independent of pseudoephedrine. EFE could represent a novel analgesic drug without the adverse effects associated with ephedrine alkaloids. Hayashi reported that maoto, which contains Ephedra Herb, relieved bone pain associated with treatment with zoledronic acid hydrate, a therapeutic agent used to treat patients with bone lesions derived from cancer bone metastasis [22]. Therefore, EFE and pseudo-Kampo medicines containing EFE instead of Ephedra Herb may treat cancer and cancer-related pain simultaneously. The maoto formula consists of four herbal substances: Apricot Kernel, Cinnamon Bark, Glycyrrhiza, and Ephedra Herb, the principal component. Maoto affects the early phase of influenza virus infection, and its anti-influenza activity is comparable with that of oseltamivir [23]. Furthermore, it has been reported that Ephedra Herb has an inhibitory effect on the acidification of intracellular compartments, such as endosomes and lysosomes, which inhibits the growth of influenza virus [3]. Our study revealed that EFE prevented influenza virus infection in a manner independent of ephedrine alkaloids. EFE and a pseudo-maoto formula, consisting of the herbal substances mentioned above with EFE instead of Ephedra Herb, have none of the adverse effects associated with ephedrine alkaloids; therefore, they may be of use as therapeutic and prophylactic measures against influenza infection, especially in the elderly. We evaluated the safety of EFE by carrying out repeated-dose toxicity studies. After 2 weeks of oral administration of Ephedra Herb extract, EFE, or water, there was no significant difference in the weight of any tissue, except for the colon, among the groups. The colon weights of the male mice treated with Ephedra Herb extract or EFE were significantly lower than that of the water group. However, the reduction in colon weight was small and not associated with morphological abnormalities. Furthermore, colon weight showed no significant difference between the groups of female mice. Thus, EFE has almost no effect on the colon. Neither serum biochemistry data nor hematological data showed any significant differences between mice taking EFE or water. On the other hand, there were significant differences in PLT and WBC between male mice taking Ephedra Herb extract and water. These results suggest that EFE may be safer than Ephedra Herb extract. Ephedra Herb has an antitussive action and removes nasal obstructions through sympathomimetic effects derived from ephedrine alkaloids, but EFE is predicted to produce neither of these effects. Therefore, EFE may be unsuitable for the treatment of patients with a common cold.
Until now, the pharmacological effects of Ephedra Herb were widely believed to be mediated by ephedrine alkaloids. Harada obtained alkaloid-free Ephedra Herb by selectively removing ephedrine alkaloids through ether extraction under ammonium hydroxide alkali conditions, followed by extraction with water, evaporation of the product to dryness, and addition of the product to a neutral extract obtained by liquid-liquid partition of the ether extract. Harada reported that alkaloid-free Ephedra Herb extract did not raise blood pressure, inhibit inflammation, or reduce the severity of carrageenan-induced edema [18]; therefore, he concluded that the pharmacological actions of Ephedra Herb were due to ephedrine alkaloids. However, as noted by the author, some components in Ephedra Herb may be altered after exposure to ammonium hydroxide [18]. For example, pyran rearrangements of procyanidins have been reported at alkaline pH [24]. Moreover, Ephedra Herb has been reported to contain proanthocyanidins [7], which might be rearranged by alkaline treatment. Therefore, it is possible that alkaloid-free Ephedra Herb may lack some of the pharmacological activities of Ephedra Herb.

[Table 2 (serum biochemistry) header: ALB (g/dl), AST (IU/l), ALT (IU/l), ALP (IU/l), LDH (IU/l), LAP (IU/l), γ-GT (IU/l), T-BIL (mg/dl); values reported as mean ± SD per group.]

In this study, we demonstrated for the first time that Ephedra Herb extract does not lose its pharmacological activity after elimination of its ephedrine alkaloids. Our current objectives include identifying the active substances present in the non-alkaloidal fraction of Ephedra Herb extract and obtaining licensing approval for therapeutic use of EFE.
Ultracold Rydberg Atoms in a Ioffe-Pritchard Trap
Ultracold Rydberg Atoms in a Ioffe-Pritchard Trap We discuss the properties of ultracold Rydberg atoms in a Ioffe-Pritchard magnetic field configuration. The derived two-body Hamiltonian unveils how the large size of Rydberg atoms affects their coupling to the inhomogeneous magnetic field. The properties of the compound electronic and center of mass quantum states are thoroughly analyzed. We find very tight confinement of the center of mass motion in two dimensions to be achievable while barely changing the electronic structure compared to the field-free case. This paves the way for generating a one-dimensional ultracold quantum Rydberg gas.

I. INTRODUCTION

Powerful experimental cooling techniques have been developed in the past decades that allow us to probe the micro- and nanokelvin regime while controlling the internal and external degrees of freedom of atomic systems. As a result, dilute ultracold gases that qualify perfectly for the study of quantum phenomena on a macroscopic scale [1,2,3] can nowadays be prepared almost routinely. Although these gases are dilute, interactions play an important role, and rich collective phenomena, reminiscent of e.g. those in traditional condensed matter physics, appear. The attractiveness of Rydberg atoms arises from their extraordinary properties [4]. The large displacement of the valence electron from the atomic core is responsible for the exaggerated response to external fields and, therewith, for their enormous polarizability. Rydberg atoms possess large dipole moments and, despite being electronically highly excited, they can possess lifetimes of the order of milliseconds or even more. Due to their susceptibility to external fields and/or their long-range interaction, ensembles of Rydberg atoms represent intriguing many-body systems with rich excitations and decay channels. Starting from laser-cooled ground state atoms, a laser typically excites a subensemble of the atoms to the desired Rydberg states. Since the ultraslow motion of the atoms can be ignored on short timescales, Rydberg-Rydberg interactions dominate the system and we encounter a so-called frozen Rydberg gas [5]. The strength of the interaction can be varied by tuning external fields and/or by selecting specific atomic states. An exciting objective is the unraveling of many-body effects in ultracold Rydberg gases (see Refs. [6,7] and references therein). At a certain stage of the evolution, ionization might take over, leading to a cold Rydberg plasma. Beyond the above there is a number of topical and promising research activities involving cold Rydberg states. One example is long-range molecular Rydberg states [8] with unusual properties if exposed to magnetic fields [9]. Another one is due to the strong dipole-dipole interaction of Rydberg atoms, which strongly inhibits excitation of their neighbors [10,11]. The resulting local excitation blockade is state dependent and can turn Rydberg atoms into possible candidates for quantum information processing schemes [12,13]. A precondition for enabling the processing of Rydberg atoms is the availability of tools to control their quantum behavior and properties. An essential ingredient in this respect is the trapping of electronically highly excited atoms. The present work provides a major contribution on this score. Let us briefly address previous works on Rydberg atoms exposed to inhomogeneous static field configurations. First evidence for trapped Rydberg gases was experimentally found by Choi et al. [14,15].
The authors use strong bias fields to trap "guiding center" drift atoms for up to 200 ms. Quantum mechanical studies of highly excited atoms in magnetic quadrupole fields demonstrated the existence of e.g. intriguing spin polarization patterns and magnetic field-induced electric dipole moments [16,17]. These investigations were based on the assumption of an infinitely heavy nucleus. A description of the coupled center of mass (c.m.) and electronic dynamics has been presented in Refs. [18,19]: trapping has been achieved for quantum states with sufficiently large total, i.e. electronic and c.m., angular momentum. Pictorially speaking, this addresses atoms that circle around the point of zero field at a sufficiently large distance. Recently it has been demonstrated that trapping in a Ioffe-Pritchard configuration is possible without imposing the condition of large c.m. angular momenta [20]. The present investigation works out this setup in detail and provides comprehensive results for Rydberg atoms exposed to the Ioffe-Pritchard field configuration. In detail we proceed as follows. Sect. II contains a derivation of our working Hamiltonian for a highly excited atom in the inhomogeneous field, including the coupling of the electronic and c.m. motion of the atom. In Sect. III we introduce an adiabatic approximation in order to solve the corresponding stationary Schrödinger equation. In Sect. IV we analyze the obtained spectra and point out the capacity of the Ioffe bias field to regulate the distance between the surfaces, and with that the quality of the adiabatic approach. Intersections through the surfaces show their deformation when the field gradient is increased. Subsequently we characterize the electronic wave functions by discussing relevant expectation values. Sect. V is dedicated to the c.m. dynamics in the uppermost adiabatic energy surface. We arrive at a confined quantized c.m. motion without the need to impose any restriction on its properties. Examining the fully quantized states we observe that the extension of the electronic cloud can exceed the extension of the c.m. wave function.

A. Two-body Approach

The large distance of the highly excited valence electron (particle 1) from the remaining closed-shell ionic core of an alkali Rydberg atom (particle 2) renders it possible to model the mutual interaction by an effective potential which is assumed to depend only on the distance of the two particles. For alkali atoms, in particular, whose core possesses zero total angular momentum and zero total spin, the only essential difference to the Coulombic case is due to the finite size of the core. In any case, the effective potential $V(r)$ only noticeably differs from the pure Coulomb potential at small distances $r$. States of high electronic angular momenta $l$, on which we focus in the present investigation, almost exclusively probe the Coulombic tail of this potential. The coupling of the charged particles to the external magnetic field is introduced via the minimal coupling, $p \to p - qA$, where $q$ is the charge of the particle and $A$ is a vector potential belonging to the magnetic field $B$. Including the coupling of the magnetic moments to the external field ($\mu_1$ and $\mu_2$ originate from the electronic and nuclear spin, respectively), our initial Hamiltonian reads (we use atomic units except when stated otherwise)

$H = \frac{(p_1 - q_1 A(r_1))^2}{2M_1} + \frac{(p_2 - q_2 A(r_2))^2}{2M_2} + V(|r_1 - r_2|) - \mu_1 \cdot B(r_1) - \mu_2 \cdot B(r_2). \quad (1)$

We do not take into account spin-orbit coupling and relativistic mass changes.
The difference in energy shift for adjacent, large angular momentum states ($l$, $l \pm 1$) due to these relativistic corrections is $\Delta W_{FS} = \alpha^2/2n^5$ [21], where $\alpha$ is the fine structure constant, and is therefore negligible for Rydberg states. At $n = 30$ one obtains $\Delta W_{FS} = 1.1 \times 10^{-12}$ atomic units. To give an idea of the scope of this approximation we anticipate a result from Sec. IV: the energy gap between two adjacent high-$l$ electronic states is approximately $E_{dist} = B/2$ a.u. Demanding $\Delta W_{FS}/E_{dist} \ll 1$ results in constraining the Ioffe field strength $B$ to be much larger than 5 mG. Before we focus on the Ioffe-Pritchard configuration, let us first examine a general field $B$ composed of a constant term $B_c$, a linear term $B_l$, and higher order terms, $B = \sum_i B_i$. The vector potential shall satisfy the Coulomb gauge. The squared terms can then be simplified taking advantage of the vanishing commutator $[A(r_1), p_1]$ to obtain $(p_1 - qA(r_1))^2 = p_1^2 - 2qA(r_1)\cdot p_1 + q^2 A(r_1)^2$. In the so-called symmetric gauge the vector potential of a constant magnetic field is given by $A_c(r_1) = \frac{1}{2} B_c \times r_1$. The analogue for a linear field is $A_l(r_1) = \frac{1}{3} B_l(r_1) \times r_1$. It can be proven that the vector potential of an arbitrary magnetic field can be expanded in a corresponding form [22], permitting a representation of the vector potential as a cross product, $A(r_1) = \sum_i A_i(r_1) = \tilde{B}(r_1) \times r_1$, where $\tilde{B}(r_1) = \sum_i g_i B_i(r_1)$ and $i \in \{c, l, \ldots\}$ denotes the order of the corresponding terms of $A$ and $B$ with respect to spatial coordinates. The $g_i$ are the coefficients $\frac{1}{2}$, $\frac{1}{3}$, etc. The particular form of this potential and the vanishing divergence of magnetic fields admit the simplification $A(r_1)\cdot p_1 = (\tilde{B}(r_1) \times r_1)\cdot p_1 = \tilde{B}(r_1)\cdot L_1, \quad (2)$ where we exemplarily defined the angular momentum of particle 1, $L_1 = r_1 \times p_1$. Since the interaction potential depends only on the distance of the two particles, it is natural to introduce relative and c.m. coordinates, $r_1 = R + (M_2/M)r$ and $r_2 = R - (M_1/M)r$, with the total mass $M = M_1 + M_2$. If no external field were present, the new coordinates would decouple the internal degrees of freedom from the external c.m. ones. Yet even a homogeneous magnetic field couples the relative and the c.m. motion [23,24]. For neutral systems in static homogeneous magnetic fields, however, a so-called 'pseudoseparation' can be performed, providing us with an effective Hamiltonian for the relative motion that depends on the c.m. motion only parametrically via the eigenvalues of the pseudomomentum [23,25,26,27] which is associated with the c.m. motion. Such a procedure is not available in the present case of a more general inhomogeneous field. In the new coordinate system the Hamiltonian (1) becomes a sum of field-coupling terms, in which the angular momenta of the particles are expressed in the new coordinates (see also Ref. [19]), and of the field-free terms, which are summarized as $H_0 = \frac{p^2}{2m} + \frac{P^2}{2M} + V(r)$. Here, $L_r = r \times p$, $L_R = R \times P$, and the reduced mass $m = M_1 M_2 / M$ have been introduced. To simplify the Hamiltonian we apply a unitary transformation $U$ that eliminates the c.m. momentum dependent coupling terms generated by the homogeneous field component. The transformation of the remaining terms generates exclusively additional terms that are quadratic with respect to the magnetic field. Exploiting now the fact that the mass of the ionic core is much larger than the mass of the valence electron, we only keep magnetic field dependent terms of the order of the inverse light mass $1/M_1$ (which becomes 1 in atomic units).
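The gauge relations introduced above lend themselves to a quick symbolic check. The following sympy sketch is our own illustration, not part of the original derivation; it verifies that the symmetric-gauge potential reproduces a constant field and satisfies the Coulomb gauge:

```python
# Symbolic check: A_c = (1/2) B_c x r yields curl(A_c) = B_c and div(A_c) = 0.
import sympy as sp

x, y, z, Bz = sp.symbols("x y z B_z", real=True)
r = sp.Matrix([x, y, z])
B_c = sp.Matrix([0, 0, Bz])          # constant Ioffe field along e_z
A_c = B_c.cross(r) / 2               # symmetric-gauge vector potential

curl = sp.Matrix([
    sp.diff(A_c[2], y) - sp.diff(A_c[1], z),
    sp.diff(A_c[0], z) - sp.diff(A_c[2], x),
    sp.diff(A_c[1], x) - sp.diff(A_c[0], y),
])
div = sp.diff(A_c[0], x) + sp.diff(A_c[1], y) + sp.diff(A_c[2], z)

print(curl)  # Matrix([[0], [0], [B_z]])  -> reproduces B_c
print(div)   # 0                          -> Coulomb gauge
```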
We arrive at the Hamiltonian (5). The diamagnetic terms, which are proportional to $A^2$ (and herewith proportional to $\tilde{B}^2$, see Eq. (2)), have been neglected. Due to the unitary transformation $U$, $R$-dependent terms that are quadratic in the Ioffe field strength $B$ do not occur, and only an electronic term $B^2(x^2 + y^2)/8$ remains, whose typical energy contribution amounts to $B^2 n^4/8 \approx 10^5 B^2$ for $n = 30$. Besides, we obtain a term quadratic in the field gradient $G$. The term quadratic in the Ioffe field is negligible in comparison with the dominant shift due to the linear Zeeman term as long as $B$ is significantly smaller than $10^4$ Gauss, which is guaranteed in our case. Moreover, the c.m. coordinate dependence of this diamagnetic term is much weaker than the c.m. coordinate dependence of the terms linear in the field gradient. The term quadratic in the field gradient can be neglected in comparison with the corresponding linear term. Up to now we did not use the explicit form of the Ioffe-Pritchard field configuration. (In anticipation of the special field configuration we leave the term containing $A_l$ in its original form.)

B. Ioffe-Pritchard Field Configuration

Two widespread magnetic field configurations that exhibit a local field minimum and serve as key ingredients for the trapping of weak-field seeking atoms are the 3D quadrupole and the Ioffe-Pritchard configuration. The Ioffe-Pritchard configuration resolves the problem of particle loss due to spin flips by means of an additional constant magnetic field. A macroscopic realization uses four parallel current-carrying Ioffe bars which generate the quadrupole field. Encompassing Helmholtz coils create the additional constant field. There are many alternative layouts; the field of a clover-leaf trap, for example, features the same expansion around the origin [28]. On a microscopic scale the Ioffe-Pritchard trap has been implemented on atom chips by a Z-shaped wire [29]. The vector potential and the magnetic field read $A = A_c + A_l + A_q$ and $B = B_c + B_l + B_q$ (Eqs. (6,7)), where $A_q = \frac{Q}{4}(x^2 + y^2 - 4z^2)(-y e_x + x e_y)$ and $B_q = Q(2xz\, e_x + 2yz\, e_y + (x^2 + y^2 - 2z^2)\, e_z)$. $B_c$ is the constant field created by the Helmholtz coils, with $B$ being the Ioffe field strength. $B_l$ originates from the Ioffe bars and depends on the field gradient $G$. $B_q$ designates the quadratic term generated by the Helmholtz coils, whose magnitude, compared to the first Helmholtz term, can be varied by changing the geometry of the trap, $Q = B \cdot \frac{3}{2}(R^2 - 4D^2)/(R^2 + D^2)^2$, where $R$ is the radius of the Helmholtz coils and $2D$ is their distance from each other. If we now insert the special Ioffe-Pritchard field configuration, Eqs. (6,7), into the transformed Hamiltonian (5), we obtain the Hamiltonian (8), where $H_A = p^2/2 - 1/r$ is the operator for a field-free atom. The well-known Zeeman term $B L_z/2$ comes from the uniform Ioffe field generated by the Helmholtz coils. The following term, involving the field gradient $G$, arises from the linear field generated by the Ioffe bars and couples the relative and c.m. dynamics. The part in the squared brackets originates from the quadratic term, again created by the coils. It is the only one that depends on the $Z$ coordinate; we will see below that its contribution is negligible under certain conditions. The last term couples the spin of particle two to the magnetic field. Since the electronic spins of closed shells combine to zero, the spin of particle two is the nuclear spin only. Even though $\mu_2 B$ scales with $1/M_2$, we will still keep this term: being the only one containing the nuclear spin, it is essential for a proper symmetry analysis.
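For concreteness, here is a small numerical sketch of the field expansion in Eq. (7). The quadratic part follows the expression quoted above; the linear Ioffe-bar part is written in the standard form $G(x\,e_x - y\,e_y)$, which is an assumption on our side since the explicit Eq. (7) is not reproduced in the text:

```python
# Sketch of the Ioffe-Pritchard field B = B_c + B_l + B_q (cf. Eq. (7)).
# B_q follows the text; B_l = G*(x, -y, 0) is the standard Ioffe-bar form
# and is assumed here.
import numpy as np

def ioffe_pritchard_field(r, B, G, Q):
    """Magnetic field vector at position r = (x, y, z)."""
    x, y, z = r
    B_c = np.array([0.0, 0.0, B])        # constant Helmholtz (Ioffe) field
    B_l = G * np.array([x, -y, 0.0])     # linear field of the Ioffe bars
    B_q = Q * np.array([2 * x * z, 2 * y * z, x**2 + y**2 - 2 * z**2])
    return B_c + B_l + B_q

# The field modulus has a nonzero minimum B at the origin, which is what
# suppresses spin-flip losses compared to a bare quadrupole trap:
print(np.linalg.norm(ioffe_pritchard_field((0.0, 0.0, 0.0), B=1e-6, G=1e-2, Q=0.0)))
```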
[Tab. I: elementary symmetry operations (operator / operation). The operators $P_j$, $\hat{S}_j$, and $\hat{\Sigma}_j$ are exemplified for $j = x$, but hold of course also for $j = y, z$.]

C. Symmetries, Scaling and the Approximation of a Single n-Manifold

Our Hamiltonian is invariant under a number of symmetry transformations $U_S$ that are composed of the elementary operations listed in Tab. I. The parity operations $P_j$, $j \in \{x, y, z\}$, are defined by their action on the spatial laboratory coordinates of the particles, which translates one-to-one to c.m. and relative coordinates. In order to exchange the $x$ and $y$ components of the electronic spin we introduce the operator $S_{xy}$, with $S_{xy} S_{xy}^* = 1$. $T$ represents the conventional time reversal operator for spinless particles which, in the spatial representation, corresponds to complex conjugation. Our unitary symmetries are given in Eqs. (9a)-(9c). The Hamiltonian is also left invariant under the antiunitary symmetry transformation (10). By consecutively applying the latter operator and the unitary operators (9a), (9b) and (9c), it is possible to create further antiunitary symmetries (11a)-(11c). Paying regard to the fact that $S_{xy}^2 = -\hat{S}_z$ and $\Sigma_{xy}^2 = -\hat{\Sigma}_z$, and that $T$ neither commutes with $\hat{S}_y$ nor with $S_{xy}$ and $\Sigma_{xy}$, one finds that the operators (9a-11c) form a symmetry group. If no Ioffe field is present ($B = 0$), eight additional symmetries can be found leaving the Hamiltonian invariant. For an effective one-particle approach (and the corresponding one-particle symmetries) this was discussed in Ref. [30]. As indicated before, the quadratic magnetic field term is small and can be tuned by changing the trap geometry. It can provide a longitudinal confinement which may be treated by perturbative methods. In the case of negligible quadratic field $B_q$, which we assume in the following, the term in the squared brackets of the Hamiltonian (8) drops out and the $Z$ coordinate is cyclic. The corresponding conjugate momentum $P_z$ is consequently conserved, and the longitudinal motion is integrated by simply employing plane waves $|k_Z\rangle = \exp\{i Z k_Z\}$. The constraints for this approximation to be valid can be obtained by comparing the above-mentioned term in squared brackets with the Zeeman term, $B L_z/2$. Estimating $x \approx n^2$, $x p_y \approx y p_x \approx n$, and using $|Q| \lesssim B/(D^2 + R^2)$, we find the conditions (12,13), where $D$ and $R$ characterize the trap geometry; Eqs. (12,13) are easily fulfilled. We are therefore left with the Hamiltonian (14), in which the electronic Hamiltonian is given by Eq. (15). For all laboratory fields one finds the magnetic field strength $B$ and the magnetic field gradient $G$ to be much smaller than 1 in atomic units. Our Hamiltonian (8) is thus dominated by $H_A$. The energies of the field-free spectrum, $E_A^n = -1/2n^2$, are $n^2$-fold degenerate. We can assume the Ioffe-Pritchard field not to couple adjacent $n$-manifolds as long as $|E_A^n - E_A^{n\pm1}|/E_{Zee} \gg 1$. The resulting constraints $B \ll n^{-4}$, $G \ll n^{-6}$ and $GR \ll n^{-4}$ yield $B \ll 2900$ G, $G \ll 6 \times 10^6$ T/m for $n = 30$, and $R \ll 2.9$ mm if we additionally assume the field gradient $G$ to be as large as 100 T/m. In our parameter regime each $n$-manifold can therefore be considered separately. We thus project the full Hamiltonian on the hydrogenic eigenfunctions $|\alpha\rangle = |n, l, m_l, m_s\rangle$, $H_A|\alpha\rangle = E_A^n|\alpha\rangle$, with fixed principal quantum number $n$, that cover an entire $n$-manifold. $l$ denotes the orbital angular momentum quantum number, $m_l$ the one of its $z$ component $L_z$, and $m_s$ stands for the quantum number of the electronic spin.
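The $n = 30$ estimates quoted above can be reproduced with standard atomic-unit conversion factors; the short sketch below is our own check of the orders of magnitude:

```python
# Reproducing the single-n-manifold estimates for n = 30 (standard a.u.
# conversions: 1 a.u. of field ~ 2.35e9 G; gradient unit = field/bohr).
n = 30
gauss_per_au = 2.35e9    # gauss per atomic unit of magnetic field
tesla_per_au = 2.35e5    # tesla per atomic unit of magnetic field
bohr_m = 5.29e-11        # bohr radius in meters

B_max_gauss = n**-4 * gauss_per_au               # from B << n^-4
G_max_T_per_m = n**-6 * tesla_per_au / bohr_m    # from G << n^-6

print(f"B << {B_max_gauss:.0f} G")       # ~2900 G
print(f"G << {G_max_T_per_m:.1e} T/m")   # ~6e6 T/m
```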
Working in a single $n$-manifold, we can reformulate the term in the Hamiltonian (14) involving the field gradient $G$ into a more compact form. We first consider the commutator $[yz, H_A]$, whose matrix elements vanish, $\langle\alpha|[yz, H_A]|\alpha'\rangle = 0$, since $|\alpha\rangle$ and $|\alpha'\rangle$ are eigenkets to the same eigenvalue $E_A^n$. Establishing the relation to the orbital angular momentum operator via $y p_z = L_x + z p_y$ results in $\langle y p_z \rangle = \frac{1}{2}\langle L_x \rangle$. The same procedure can be applied to $x p_z$, leading to $\langle x p_z \rangle = -\frac{1}{2}\langle L_y \rangle$. Furthermore $\langle\alpha|XY p_z|\alpha'\rangle = 0$ since $p_z \sim [H_A, z]$, and eventually we can write the gradient term in the compact form of Eq. (19), where we omitted the bracketing alphas, but keep in mind that the above identity holds in a single $n$-manifold only. In order to remove the separate dependencies on the field parameters $B$, $G$, and on the mass $M$ from the coupling terms, we introduce scaled c.m. coordinates, $R \to \gamma^{-1/3} R$, with $\gamma = GM$, and simultaneously we introduce the energy unit $\epsilon = \gamma^{2/3}/M$. Omitting the constant energy offset $E_A^n$, the Hamiltonian can be given the advantageous form (21). The first term is the c.m. kinetic energy. $\mu$ is the $2n^2$-dimensional matrix representation of the total magnetic moment of the electron, $\frac{1}{2}(L_r + 2S)$, and the second term in (21) describes its coupling to the effective magnetic field $G(X, Y)$. The latter results from the original field $B_c + B_l$ in Eq. (7), taking into account the corresponding coordinate and energy scaling factors. $S_i$ are the components of the electronic spin, $S = -\mu_1$. The nuclear spin term $-\mu_2 \cdot B(R)$ has been omitted since it is several orders of magnitude smaller than the electronic one.

III. ADIABATIC APPROACH

The large difference of the particles' masses and velocities in our two-body system makes it plausible to adiabatically separate the electronic and the c.m. motion. The corresponding time scales differ substantially even for large principal quantum numbers $n$. However, due to the enormous level density in the case of Rydberg atoms it is a priori unclear whether isolated energy surfaces might exist or whether, as one might naturally assume, non-adiabatic couplings are ubiquitous and therefore an adiabatic approach might invalidate itself. The procedure is reminiscent of the Born-Oppenheimer ansatz in molecular systems and is based on the idea that the slow change of the heavy particle's position allows the electron to adapt instantaneously to the inhomogeneous field. The electronic energy of the system can thus be considered as a function of the position of the heavy particle. The adiabatic approximation is introduced by subtracting the transversal c.m. kinetic energy, $T = (P_x^2 + P_y^2)/2$, from the total Hamiltonian (21). The remaining electronic Hamiltonian for fixed center of mass is given by Eq. (22). The electronic wave function $\varphi_\kappa$ depends parametrically on $R$, and the total atomic wave function can be written as the product ansatz (23), $\Psi = \psi_\nu(R)\,\varphi_\kappa(r; R)$, where $|\psi_\nu(R)\rangle$ is the center of mass wave function. The internal problem posed by the stationary electronic Schrödinger equation (24) is solved for the adiabatic electronic potential energy surfaces $E_\kappa(X, Y)$, which serve as a potential for the c.m. dynamics. Within this approximation, the equation of motion for the center of mass wave function is given by Eq. (25). The spatially dependent transformation $U(X, Y)$ that diagonalizes the matrix representation $H_e$ of the electronic Hamiltonian is composed of the vector representations of the electronic eigenfunctions, $U_\kappa = (U_{\kappa\alpha}) = (\langle\alpha|\varphi_\kappa(r; R)\rangle)$. Since $U$ depends on the c.m. coordinates, the transformed kinetic energy involves non-adiabatic couplings $\Delta T$ (26) that have been neglected in the adiabatic approximation of Eq. (25).
These couplings can be calculated explicitly as soon as the electronic adiabatic eigenfunctions have been computed. Non-adiabatic contributions can be neglected if the conditions (28) and (29) are fulfilled [19]. The energy denominator in (28) and (29) indicates that one can expect non-adiabatic couplings to become relevant between the adiabatic energy surfaces when they come very close in energy, i.e. in the vicinity of avoided crossings. Recalling the results of the symmetry analysis, it can be demonstrated that the energy surfaces $E_\kappa$ exhibit three mirror symmetries. Within the adiabatic approximation, $X$ and $Y$ are parameters in the electronic Schrödinger equation. Symmetry operations applied to the electronic Hamiltonian thereby merely act on the electronic subspace. If we apply the corresponding restricted symmetry operation $U_P = P_x P_y \hat{S}_z \hat{\Sigma}_z$ (9a), which was already shown to leave the full Ioffe-Pritchard Hamiltonian (8) invariant, to the electronic Hamiltonian $H_e$ (15), we find $H_e$ at $(X, Y)$ and at $(-X, -Y)$ to be unitarily equivalent. Since unitarily equivalent observables, $A$ and $U^\dagger A U$, possess the same eigenvalue spectrum, we find the energy surfaces to be inversion symmetric with respect to the origin in the $X$-$Y$ plane. The symmetry operator $U_Y = T P_y$, and the operator that is composed of $U_Y$ and $U_P$, namely $U_X = T P_x \hat{S}_z \hat{\Sigma}_z$ (see (10) and (11a)), mirror the energy surfaces at the axes. The electronic problem (24), with the core fixed at an arbitrary position, is three-dimensional. No symmetry arguments can be exploited to reduce the dimensionality of the problem. In order to solve it, we employ the variational method, which maps the stationary Schrödinger equation onto an ordinary algebraic eigenvalue problem. Since the matrix representation of the electronic Hamiltonian is sparsely occupied, an Arnoldi decomposition is used. Both this decomposition and the surfaces' mirror symmetries help to reduce the computational cost of solving the electronic Schrödinger equation.

IV. ELECTRONIC POTENTIAL ENERGY SURFACES

In this section the properties of the electronic adiabatic energy surfaces are analyzed for different regimes of Ioffe field strengths and field gradients. These two parameters can be used to shape the potential in which the center of mass dynamics takes place. To understand how this happens, we inspect the electronic Hamiltonian to unravel the influence of the individual terms in different parameter regimes. The characteristic length scale of the center of mass dynamics is of the order of one in scaled atomic units. It is therefore adequate to compare the magnitudes of the different parts of the electronic Hamiltonian (22), putting $X$ and $Y$ equal to one, in order to estimate their impact on the center of mass motion. The first part, $\mu \cdot G(X, Y)$, consists of the coupling terms $X(\frac{1}{2}L_x + S_x) - Y(\frac{1}{2}L_y + S_y)$, which are then of the order of $\langle L_i \rangle \approx n$ for high angular momentum states, and of the Zeeman term $\zeta(\frac{1}{2}L_z + S_z)$, which can be as large as $\zeta n$. The second part, $\gamma^{1/3}(x y p_z + x S_x - y S_y)$, is quadratic in the relative coordinates, which makes it particularly important for high principal quantum numbers $n$. If we consider the expectation values of the relative coordinates to be of the order of $n^2$, and $\langle y p_z \rangle \approx \langle L_x \rangle \approx n$, the overall magnitude can be estimated as $\gamma^{1/3} n^3$. In a nutshell, we have for the mentioned three terms the relative orders of magnitude $1$, $\zeta$, and $\gamma^{1/3} n^2$.
A. Regulating Capacity of the Ioffe Field

To understand the impact of the Ioffe field strength $B$ on the adiabatic energy surfaces, we isolate its effect by suppressing other influences. This can be done by choosing a relatively low field gradient $G$ and/or a small principal quantum number $n$ (see Tab. II). The factor $\gamma^{1/3} n^2$ becomes small, and the last term in Eq. (22) will hardly provide any contribution. Within this regime, on which we focus in this subsection, approximate analytical expressions for the electronic adiabatic energy surfaces can be derived. We diagonalize the approximate electronic Hamiltonian $\tilde{H}$ by applying the spatially dependent unitary transformation (35), obtaining the transformed approximate electronic Hamiltonian (36). The spatially dependent transformation $U_D$ locally rotates the magnetic moment of the electron, which includes its spin and its angular momentum, such that it is parallel to the local direction of the magnetic field. The operators $L_z$ and $S_z$ are not identical to the ones before having applied the transformation (35); they are rather related to the local quantization axis defined by the local magnetic field direction [18]. The adiabatic potential surfaces evaluate to the expression (37), which depends on the magnetic quantum numbers only through the combination $m_l + 2m_s$. The possible combinations of $m_l$ and $m_s$ yield $2n + 1$ energy surfaces. The surfaces highest and lowest in energy correspond to circular states ($|m_l| = l_{max} = n - 1$, $m_l + 2m_s = \pm n$), and they are the only non-degenerate ones. For the other surfaces ($|m_l + 2m_s| < n$), the multiplicity of $(m_l + 2m_s)$, and with that the degree of degeneracy of the corresponding surfaces, is given by $2n - |m_l + 2m_s + 1| - |m_l + 2m_s - 1|$. Starting from the highest energy surface, the levels of degeneracy thus are 1, 2, 4, 6, .... The approximate surfaces $E_\kappa$ (37) are rotationally symmetric around the $z$-axis. An expansion around this axis ($\rho = \sqrt{X^2 + Y^2} \ll \zeta$) yields a harmonic potential, while we find a linear behavior when the center of mass is far from the $z$-axis ($\rho \gg \zeta$). For reasons of illustration we demonstrate the behavior of the adiabatic surfaces with increasing Ioffe field by means of a somewhat artificial example where other, previously neglected interactions might be more important. Fig. 1 shows sections through all the surfaces for $n = 3$. This principal quantum number has been chosen in order to keep the sections simple while displaying the entire $n$-manifold. We employ ⁸⁷Rb in this expository example although the electronic ground state of its outermost electron is 5s. The sections have been calculated for the field gradient $G = 1$ T/m and for different field strengths $B$ using the total electronic Hamiltonian (22). These parameters yield $\gamma^{1/3} n^2 = 0.003$ and values for $\zeta$ ranging from 0.01 to 1.

[FIG. 1: Sections along the X-axis through the electronic adiabatic energy surfaces of an entire n = 3 manifold. The field gradient is fixed at G = 1 T/m in order to suppress the influence of the last term in $H_e$ (22). From left to right, $\zeta = BM\gamma^{-2/3}$ increases due to an increasing Ioffe field. The influence of the last term in (22) is not completely suppressed, as can be seen from the lifted degeneracies in the upper subfigures.]

The surfaces in the different graphs of Fig. 1 indeed validate the approximate expression (37): we find $2n + 1$ degenerate surfaces, and the harmonic behavior for $|X| \ll \zeta$ gives way to a linear increase for $|X| \gg \zeta$. The energetic distances and lengths in the different graphs are comparable, since the scaling factor for the center of mass coordinates, $\gamma = GM$, has not been changed.
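Since the explicit form of Eq. (37) is not reproduced in the text, the sketch below assumes the surfaces take the form $E = \frac{1}{4}(m_l + 2m_s)\sqrt{\zeta^2 + \rho^2}$ in scaled units. This assumed form is our own reconstruction, chosen only to be consistent with the harmonic expansion $E_h = (\zeta + \rho^2/2\zeta)\cdot n/4$ quoted below for the uppermost surface and with the linear large-$\rho$ behavior:

```python
# Assumed approximate adiabatic surface (consistent with the quoted
# harmonic expansion E_h = (zeta + rho^2/(2 zeta)) * n/4, m_l + 2 m_s = n):
#     E(rho) = (m_l + 2 m_s)/4 * sqrt(zeta^2 + rho^2)
import numpy as np

def surface(rho, m_l, m_s, zeta):
    return 0.25 * (m_l + 2.0 * m_s) * np.sqrt(zeta**2 + rho**2)

n, zeta = 30, 1.0
rho = np.linspace(0.0, 10.0, 6)
upper = surface(rho, m_l=n - 1, m_s=0.5, zeta=zeta)   # uppermost surface
harmonic = (zeta + rho**2 / (2 * zeta)) * n / 4       # small-rho expansion

# The harmonic expansion is accurate for rho << zeta and overshoots the
# asymptotically linear surface for rho >> zeta:
for r, e, h in zip(rho, upper, harmonic):
    print(f"rho = {r:5.2f}   E = {e:8.2f}   E_h = {h:8.2f}")
```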
We can conclude that increasing the Ioffe field strength $B$ separates the surfaces from each other. The data presented in Fig. 2 have been computed for the $n = 30$ manifold. In order to keep the last term in (22) small, the field gradient has been set to $G = 0.1$ T/m ($\to \gamma^{1/3} n^2 = 0.14$). The uppermost 21 energy surfaces are shown for different values of the magnetic field strength $B$. Similar to the $n = 3$ case, one can see the harmonic behavior around the origin. The surfaces' minimal distance becomes larger for increasing $\zeta$. Since $\zeta$ and $\gamma^{1/3} n^2$ are of the same order of magnitude in subfigure (a), the contribution of the last term in (22), which lifts the degeneracy of the curves, is visible. The energetic distance of the approximate surfaces described by Eq. (37) increases with larger distances from the $Z$-axis, $\rho$, and with larger $\zeta$. The minimum energetic gap between two adjacent surfaces is located at the origin and is given by Eq. (40). The parameter $\zeta$ (and hence the field strength $B$) is the tool to control the energetic distance between the adiabatic surfaces. Increasing $\zeta$, one can thus also minimize the non-adiabatic couplings $\Delta T$ (27) discussed in Sect. III, since they scale with the reciprocal energetic distance of the surfaces. To check the range of validity of our approximation, the minimal energetic distance between the two uppermost adiabatic surfaces in the $n = 30$ manifold has been calculated for different parameters, subtracting from each other the full 2D surfaces obtained using the electronic Hamiltonian (22). One finds the minimal distance to be located at the origin, as expected. $\Delta$ in Tab. III denotes the relative deviation, in percent, between the predicted (Eq. (40)) and the computed value. It is small for large Ioffe field strengths $B$ and low field gradients $G$. Then we have $\zeta \gg \gamma^{1/3} n^2$, the last term in the electronic Hamiltonian is negligible, and our approximation leading to (40) is justifiable.

B. High Gradients

A more complicated picture of the surface properties arises when the field gradients become larger. The last term in the electronic Hamiltonian, which accounts for finite size effects of the atom, is no longer small compared to the others in Eq. (22). This results in the modulations of the adiabatic surfaces we already spotted in the previous section, even though the term does not feature any explicit dependence on $X$ and $Y$. These modulations lift the degeneracy that was found in the limit of small gradients. Their dependence on the c.m. coordinates is introduced by the transformation $U(X, Y)$ that diagonalizes the electronic problem (cf. Sec. III). In order to isolate the effect of the term (41) on the adiabatic surfaces, we vary the scaling factor $\gamma = GM$ by changing the field gradient $G$, while keeping $\zeta = BM^{1/3}G^{-2/3}$ constant. It is, for example, reasonable to demand $\zeta = 5$ and to adjust the Ioffe field strength $B$ to meet this condition. Fig. 3 demonstrates the increasing influence of the interaction (41) when $G$ is increased. The spectra are computed for the $n = 30$ manifold of ⁸⁷Rb, $\zeta = 5$, while $G$ is varied from 4.8 to 4800 T/m. For small field gradients ((a), $B/(Gn^2) = 10$), the surfaces approach the shapes predicted in the limit addressed in the previous subsection (IV A): the adiabatic surfaces with the same value of the magnetic moment $(m_l + 2m_s)/2$ are approximately degenerate. The uppermost energy is the only non-degenerate one, and to the corresponding eigenstate the quantum numbers $m_l = n - 1$ and $m_s = 1/2$ can be assigned.
An increasing field gradient lifts the degeneracy and groups of curves can be observed ((b), $B/(Gn^2) = 5$). The energetic distance between these groups stays tunable by the bias field strength, as we elucidated above (see Eq. (40)). For even higher field gradients, the different parts of the electronic Hamiltonian are of comparable size and finite size effects substantially alter the shape of the energy surfaces ((c), (d), $B/(Gn^2) = 1$). Avoided level crossings appear and non-adiabatic transitions are likely to occur. The uppermost energy surface, however, proves to be very robust when the field gradient is varied. It is energetically well isolated from the other adiabatic surfaces. Its distance to the surface formed by the second highest eigenvalue only decreases significantly when the ratio $B/(Gn^2)$ approaches one ((c), (d)). This holds true for the entire $X$-$Y$ plane. Inspecting the full uppermost surface one furthermore finds the azimuthal symmetry, which holds for large ratios $B/(Gn^2)$ (see Sect. IV A), to be approximately conserved. Another example of the complicated structure of the adiabatic electronic energy surfaces is shown in Fig. 4. The data are calculated for a Ioffe field strength of 0.01 G and a field gradient of 20 T/m. For these parameters, the contributions of all terms in the electronic Hamiltonian are of the same order of magnitude around $X = 1$. One immediately notices the large number of avoided crossings between the surfaces. The uppermost curve, however, remains isolated from the rest of the curves. Far away from the trap center, i.e. for large $\rho = \sqrt{X^2 + Y^2}$, the coupling term in (22), $X(\frac{1}{2}L_x + S_x) - Y(\frac{1}{2}L_y + S_y)$, becomes dominant. A Zeeman-like splitting of the surfaces emerges, visible in the smaller graphs on the right.

C. Electronic Wave Functions

To characterize the electronic wave function $\varphi_\kappa(r; R)$ that corresponds to the energy eigenvalues constituting the uppermost adiabatic surface, we analyze its radial extension, angular momentum, and spin. The electronic wave function depends parametrically on the c.m. position and is, in general, distorted compared to the field-free case by the external magnetic field. This is reflected in the expectation value $r_e(R) = \langle\varphi_\kappa(r; R)|r|\varphi_\kappa(r; R)\rangle$, which is shown in Fig. 5 for different ratios $B/(Gn^2)$. The limits of the graphs with respect to $X$ and $Y$ correspond to thirty characteristic lengths of the c.m. motion. While keeping $G = 100$ T/m, $B$ is increased for the different plots from left to right. For the smallest ratio under consideration ((a), $B/(Gn^2) < 1$), a pronounced maximum of the expectation value $r_e$ can be observed at the trap center. This maximum breaks up into four maxima arranged along the diagonals when the ratio is increased ((b), $B/(Gn^2) > 1$), while the amplitude of the spatial variation of $r_e$ decreases. For an even higher value of $B$ ((c), $B/(Gn^2) \gg 1$), only a marginal deviation from the hydrogenic field-free value for the highest possible angular momentum quantum number remains (for $n = 30$ one finds $r_H(n = 30, l = 29) = 915$). In the region of local homogeneity, where the magnetic field does not vary significantly over the extension of the electronic cloud (i.e. far from the $z$-axis), the expectation value approaches the field-free value in all subfigures shown in Fig. 5. In accordance with the above-mentioned scaling property of the electronic Hamiltonian $H_e$, changing the field parameters while keeping the ratio $B/(Gn^2)$ unaltered only modifies the scale of the c.m. coordinates, whereas the shape of the bright regions and the energy range of the eigenvalues are not changed.
Let us study the angular momentum and its orientation. It is to be expected that for a dominating Ioffe field, i.e. for very large ratios $B/(Gn^2)$, the expectation value of the angular momentum, $\langle L_r \rangle = (\langle L_x \rangle, \langle L_y \rangle, \langle L_z \rangle)$, is oriented in the Ioffe field direction ($z$-axis). Since the Ioffe field in any case dominates around the origin, $\langle L_x \rangle$ and $\langle L_y \rangle$ are expected to vanish at $(X, Y) = (0, 0)$ while $\langle L_z \rangle$ becomes maximal. This behavior can be observed in Fig. 6, where the $\langle L_i \rangle$ are displayed (a,b,c) for $B = 0.1$ G and $G = 100$ T/m. These parameters yield $B/(Gn^2) = 2.1$. The alignment of $\langle L_r \rangle$ and the local field direction $G(X, Y)$ is found to be very good in the entire $X$-$Y$ plane (the maximum angle between the two is smaller than 3.6°). In subplot (d) we provide the spatial behavior of the projection of $\langle L_r \rangle$ onto this local field axis, $\Pi = \langle L_r \rangle \cdot G(R)/|G(R)|$. In the local homogeneity limit, $\Pi$ approaches the maximal value for $\langle L_z \rangle$, namely $m_{l,max} = n - 1$. In the same manner the expectation value $\langle L^2 \rangle$, which is displayed in subplot (e), converges to the maximal value, $l_{max}(l_{max} + 1) = n(n - 1)$. Far from the $z$-axis, the uppermost surface hence corresponds to the circular state $|m_{l,max}, l_{max}\rangle$. The deviation of $\Pi$ and $\langle L^2 \rangle$ from the maximal values close to the $z$-axis reflects the admixture of states with lower quantum numbers $m$ and $l$ to the state of the uppermost surface. Increasing the applied Ioffe field by a factor of 10 ($\to B/(Gn^2) = 21$) decreases the angle between $\langle L_r \rangle$ and $G(X, Y)$ by a factor of $10^2$, i.e. a quasi-perfect alignment is found. As can be seen in Fig. 7, the projection $\Pi$ now only deviates marginally from $m_{l,max}$. Consequently, also $\langle L^2 \rangle$ exhibits only minor deviations from its maximum value in the whole $X$-$Y$ plane. Similar observations can be made considering the respective expectation values for the spin. For the parameters in Fig. 7 the projection of $\langle S \rangle$ onto $G$ differs by less than $10^{-4}$ from 1/2. The expectation values of the examined electronic observables converge to the field-free values for increasing ratios $B/(Gn^2)$. Our findings indicate that the electronic structure of the atom is barely changed in the limit of large ratios $B/(Gn^2)$. The radiative lifetimes can hence be expected to differ only slightly from the field-free ones [19].

V. QUANTIZED CENTER OF MASS MOTION

The energetically uppermost adiabatic electronic energy surface is the most appropriate to achieve confinement. It does not suffer a significant deformation when the field gradient is increased, and it stays well isolated from lower surfaces for a wide range of parameters. Large energetic distances to adjacent surfaces suppress non-adiabatic couplings (Eqs. (28) and (29)). In order to obtain the quantized c.m. states we therefore solve the Schrödinger equation (25) for the c.m. motion in the uppermost surface $E_{2n^2}$ by discretizing the Hamiltonian on a grid. The wave function for the fully quantized state is hence composed of the eigenfunction $|\varphi_\kappa(r; R)\rangle$ of the electronic Hamiltonian in Eq. (24), the wave function for the center of mass motion in the $X$-$Y$ plane, $|\psi_\nu(R)\rangle$, and the plane wave in $Z$ direction: $\Psi(r, R) = e^{i k_Z Z}\,\psi_\nu(X, Y)\,\varphi_\kappa(r; R)$.
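As an illustration of the grid diagonalization of Eq. (25), here is a minimal finite-difference sketch. It uses the harmonic-limit surface $E_h$ as the potential and scaled units throughout, so the grid size, box length, and the potential itself are illustrative assumptions rather than the production setup of the paper:

```python
# Minimal finite-difference sketch of the c.m. Schroedinger equation (25)
# in the harmonic limit of the uppermost surface (scaled units assumed).
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

n_qn, zeta = 30, 5.0
N, L = 64, 8.0                          # grid points per axis, box half-width
xs = np.linspace(-L, L, N)
h = xs[1] - xs[0]

# 1D Laplacian (three-point stencil, Dirichlet boundaries)
D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2
T = -0.5 * (kron(D2, identity(N)) + kron(identity(N), D2))   # kinetic energy

X, Y = np.meshgrid(xs, xs, indexing="ij")
V = (zeta + (X**2 + Y**2) / (2 * zeta)) * n_qn / 4           # harmonic-limit surface
H = T + diags(V.ravel())

energies, states = eigsh(H, k=3, which="SA")                 # lowest c.m. states
print(energies)  # nearly 2D-oscillator-like: degenerate excited levels
```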
In Fig. 8 the probability densities of the ground state and two excited states of the c.m. motion in the uppermost surface of the n = 30 manifold of ⁸⁷Rb are displayed. These densities reflect the spatial symmetries of the system. According to the discussion in Sec. IV A, the electronic surface exhibits a harmonic behavior around the origin, and the system resembles the two-dimensional isotropic harmonic oscillator in the potential E_h(X, Y) = (ζ + ρ²/(2ζ)) · n/4 (cf. Eq. (38), m_l = n − 1). The first two probability densities (from left to right) in Fig. 8 explicitly demonstrate the analogy to the harmonic oscillator. The nodal structure of the tenth excited state is not due to a Cartesian product of 1D harmonic oscillators but to a different combination of the harmonic oscillators in the corresponding degenerate subspace.

To describe the properties of the compound quantized state, we analyze the extension of the center of mass motion, which can be measured by the expectation value ⟨ρ⟩, and the mean distance of the core and the electron, ⟨r⟩. The mean distance of the Rydberg electron from the core, ⟨r⟩, is calculated by weighting that very quantity for a fixed c.m. position, r_e(X, Y), with the probability density of the c.m. wave function: ⟨r⟩ = ∫ dX dY |ψ_ν(X, Y)|² r_e(X, Y). It is depicted in Fig. 10, along with ⟨ρ⟩, versus the degree of excitation ν of the c.m. motion. ⟨ρ⟩ and ⟨r⟩ are of comparable size due to the very tight confinement. For a Ioffe field strength of B = 0.1 G and a field gradient of G = 100 T/m, for instance, the ratio of ⟨ρ⟩ and ⟨r⟩ for the ground state (ν = 1) is as small as ⟨ρ⟩/⟨r⟩ = 0.4. The extension of the c.m. wave function is thus smaller than the extension of the electronic cloud. This strongly supports the proposition that our Rydberg atoms cannot be considered as point-like particles. The expectation value ⟨r⟩ for the electron remains nearly constant as the degree of excitation increases, and it barely differs from the corresponding field-free value (dashed line in Fig. 10). As indicated previously, we find the electron to be in the circular state with m_l = n − 1, which features the smallest mean square deviation of the nucleus-electron separation, ⟨r²⟩ − ⟨r⟩² = n²(2n + 1)/4. It is therefore possible that the c.m. and the electronic wave function do not even overlap; this is indicated in the inset of the upper right plot in Fig. 10 for ν = 1.

VI. CONCLUSION

We have studied the quantum properties of ultracold Rydberg atoms in a Ioffe-Pritchard field configuration and find trapped c.m. quantum states to be readily achievable. Our starting point is a two-body approach to the Rydberg atom. Relativistic effects and deviations of the core potential from the Coulomb potential, as well as diamagnetic interactions, have not been taken into account, which is well justified a posteriori. Applying a spatially dependent unitary transformation and additionally exploiting the large mass difference between the electron and the core, we arrived at a two-particle Hamiltonian for highly excited atoms in an inhomogeneous field in which the coupling of the relative and c.m. dynamics appears in a substantially simplified form. Thenceforward we concentrated on the special case of a Ioffe-Pritchard trap. A symmetry analysis of the resulting Hamiltonian was performed, revealing seven discrete unitary and anti-unitary symmetries. Comparing the energetic contributions of the different interactions, we find it legitimate to limit our considerations to a single n-manifold to solve the corresponding stationary Schrödinger equation. Consequently, an adiabatic approach was applied.
In the ultracold regime the Rydberg electron is much faster than the c.m. motion of the atom. This justifies an adiabatic separation of the internal (relative) and the external (c.m.) dynamics. The corresponding adiabatic electronic potential surfaces have been obtained by diagonalizing the electronic Hamiltonian matrix. In the limit of large ratios of Ioffe field strength to field gradient, B/(Gn²), an approximate analytical expression for the adiabatic surfaces has been provided. In this limit the surfaces arrange equidistantly, and all but the uppermost surface are degenerate; the inter-surface distance is then proportional to the Ioffe field strength. The structure of the electronic surfaces becomes more complex when this ratio decreases. The shape of the uppermost surface and its energetic separation from the others, however, prove very robust with respect to changes of the field parameters. We hence consider it the most appropriate to achieve confinement. Exploring the properties of the electronic wave functions, we find that the expectation values approach the field-free values when the ratio B/(Gn²) is increased. This indicates that, despite the strong localization of the c.m., the electronic structure of the atom is barely changed compared to the field-free case. Examining the compound quantized states, we have found a regime where the extension of the c.m. wave function falls below the extension of the electronic cloud, i.e. the c.m. is more strongly localized than the valence electron. In this regime Rydberg atoms in inhomogeneous magnetic fields can therefore not be considered as point-like particles. We conclude that the Ioffe-Pritchard trap provides a strong confinement for Rydberg atoms in two dimensions that permits their trapping on a microscopic scale. For such a one-dimensional guide, a relatively weak longitudinal confinement along the z-axis could additionally be provided, in a non-Helmholtz configuration, by the quadratic term. As a natural enrichment of the system one could study many atoms in that guide. Challenging issues are to stabilize such a one-dimensional Rydberg gas, and to answer the question whether it is feasible to use the strong Rydberg-Rydberg interaction to create a chain of trapped atoms [31] that could then serve as a tool for quantum information processing [13,32,33], making use of the state-dependent atom-atom interaction.

VII. ACKNOWLEDGMENT

Financial support by the Deutsche Forschungsgemeinschaft is gratefully acknowledged.
Clash of Titans: a MUSE dynamical study of the extreme cluster merger SPT-CLJ0307-6225
Clash of Titans: a MUSE dynamical study of the extreme cluster merger SPT-CLJ0307-6225 We present VLT/MUSE spectroscopy, along with archival Gemini/GMOS spectroscopy, Magellan/Megacam imaging, and Chandra X-ray emission for SPT-CLJ0307-6225, a z=0.58 galaxy cluster. A large BCG-SZ centroid separation and a highly disturbed X-ray morphology classify SPT-CLJ0307-6225 as a major merging cluster. Furthermore, the galaxy density distribution shows two main overdensities with separations of 0.144' and 0.017' to their respective BCGs. We characterize the central regions of the two colliding structures, namely 0307-6225N and 0307-6225S. We find velocity-derived masses of $M_{200,N}=$ 2.42 $\pm$ 1.40 $\times10^{14}$ M$_\odot$ and $M_{200,S}=$ 3.13 $\pm$ 1.87 $\times10^{14}$ M$_\odot$, with a line-of-sight velocity difference between the two structures of $|\Delta v| = 342$ km s$^{-1}$. The total dynamically derived mass is consistent with the SZ derived mass of 7.63 h$_{70}^{-1}$ $\pm$ 1.36 $\times10^{14}$ M$_\odot$. We model the merger using the Monte Carlo Merger Analysis Code, estimating a merging angle of 36$^{+14}_{-12}$ degrees with respect to the plane of the sky. Comparing with simulations of a merging system with a mass ratio of 1:3, we find that the best scenario is that of an ongoing merger that began 0.96$^{+0.31}_{-0.18}$ Gyr ago, which could be close to turnaround. We also characterize the galaxy population using the H$\delta$ and [OII] $\lambda 3727$ \AA \ lines. We find that most of the emission-line galaxies belong to 0307-6225S, close to the X-ray peak position, with a third of them corresponding to red-cluster-sequence galaxies, and the rest to blue galaxies with velocities consistent with recent periods of accretion. Moreover, we suggest that 0307-6225S suffered a previous merger, evidenced by the two equally bright BCGs at its center with a velocity difference of $\sim$674 km s$^{-1}$. In such extreme environments, galaxies are exposed to conditions that may quench (e.g. Poggianti et al. 2004; Pallero et al. 2020) or trigger star formation (e.g. Ferrari et al. 2003; Owers et al. 2012). For example, Kalita & Ebeling (2019) found evidence of a jellyfish galaxy in the dissociative merging galaxy cluster A1758N (z ∼ 0.3), concluding that it suffered from ram-pressure stripping due to the merging event. Pranger et al. (2014) studied the galaxy population of the post-merger system Abell 2384 (z ∼ 0.094), finding that the population of spiral galaxies at the center of the cluster does not show star-formation activity, and proposing that this could be a consequence of ram-pressure stripping of spiral galaxies falling into the cluster from the field. Ma et al. (2010) discovered a population of lenticular post-starburst galaxies in the region between the two colliding structures of the merging galaxy cluster MACS J0025.4-1222 (z ∼ 0.59), finding that the starburst episode occurred during the first passage (∼0.5-1 Gyr ago) while the morphology was already being affected, the galaxies being transformed into lenticulars by either ram-pressure events or tidal forces towards the central region. On the other hand, Yoon & Im (2020) found evidence of an increase in the star-formation activity of galaxies in merging galaxy clusters, suggesting that it could be due to an increased fraction of barred galaxies in these systems (Yoon et al. 2019). Stroe et al.
(2014) found an increase of Hα emission in star-forming galaxies in the merging cluster "Sausage" (CIZA J2242.8+5301) and, by comparing the galaxy population with that of the more evolved merging cluster "Toothbrush" (1RXS J0603.3+4213), concluded that merger shocks could enhance the star-formation activity of galaxies, causing them to exhaust their gas reservoirs faster (Stroe et al. 2015). To understand how the merger process impacts cluster galaxies, it is crucial to assemble large samples of merging clusters and determine their corresponding merger phase: pre-merger, ongoing, or post-merger. SZ-selected samples are ideal among the available cluster samples, as they are composed of the most massive clusters in the Universe and are bound to be the source of the most extreme events. The South Pole Telescope (SPT; Carlstrom et al. 2011) has completed a thermal SZ survey, finding 677 cluster candidates (Bleem et al. 2015) and providing a well-understood sample with which to study the impact of cluster mergers on the galaxy population. Rich information is available for those clusters, including the gas centroids (via SZ and/or X-rays), optical imaging, near-infrared imaging, cluster masses, and photometric redshifts. Furthermore, as the SPT cluster selection is nearly independent of redshift, a merging cluster sample will also allow evolutionary studies out to high redshifts. Using SPT-SZ selected clusters and optical imaging, Song et al. (2012) reported the brightest cluster galaxy (BCG) positions for 158 SPT cluster candidates and, by using the separation between the cluster BCG and the SZ centroid as a dynamical-state proxy, found that SPT-CLJ0307-6225 is the most disturbed galaxy cluster of the sample. Recently, Zenteno et al. (2020) employed optical data from the first three years of the Dark Energy Survey (DES; Abbott et al. 2018; Morganson et al. 2018; DES Collaboration et al. 2016) to use the BCGs of 288 SPT SZ-selected clusters (Bleem et al. 2015) to classify their dynamical state. They identified the 43 most extreme systems, all with a separation greater than 0.4 R200, once again including SPT-CLJ0307-6225. Furthermore, an X-ray morphological analysis by Nurgaliev et al. (2017) of 90 SPT-selected galaxy clusters shows SPT-CLJ0307-6225 as one of the two most extreme cases (the other being 'El Gordo'; Marriage et al. 2011; Menanteau et al. 2012), making this cluster an interesting system with which to test the impact of a massive merging event on galaxy evolution, the goal of this paper. We use VLT/MUSE and Gemini/GMOS spectroscopy, X-ray data from Chandra, and Megacam imaging to characterize the SPT-CLJ0307-6225 merger stage and its impact on the galaxy population. The paper is organized as follows: in §2 we provide details of the observations and data reduction. In §3 we show the analysis of the spectroscopic and optical data, while in §4 we report our findings for both the merging scenario and the galaxy population. In §5 we propose a scenario for the merging event and connect it to the galaxy population. In §6 we give a summary of the results. Throughout the paper we assume a flat Universe with a ΛCDM cosmology, h = 0.7 and Ω_m = 0.27 (Komatsu et al. 2011). Within this cosmology, 1 arcsec corresponds to ∼6.66 kpc.

Optical Imaging

Optical images were obtained using Magellan Clay with Megacam during a single night on November 26, 2011 (UT). Megacam has a 24′ × 24′ field of view, which at redshift 0.579 corresponds to ∼10 Mpc.
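As a quick consistency check of the adopted scale, astropy's standard cosmology utilities reproduce the quoted conversion (an illustrative snippet, not part of the authors' pipeline):

```python
# Verify the quoted angular scale under the paper's cosmology
# (flat LambdaCDM with h = 0.7, Omega_m = 0.27).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.27)
scale = cosmo.kpc_proper_per_arcmin(0.5803).to(u.kpc / u.arcsec)
print(scale)   # ~6.6 kpc/arcsec, matching the ~6.66 kpc per arcsec above
```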
Several dithered exposures were taken in the g, r, and i filters for total times of 1200 s, 1800 s, and 2400 s, respectively. The median seeing of the images was approximately 0.79 arcsec, or about 5 kpc, with better seeing in the r band, averaging 0.60 arcsec. The 10σ limiting magnitudes in g, r, and i are 24.24, 24.83, and 23.58, respectively (Chiu et al. 2016). In Fig. 1 we show the pseudo-color image, centered on the SZ position of SPT-CLJ0307-6225, with the white bar on the bottom right showing the corresponding scale. The catalogs for the photometric calibration were created following High et al. (2012) and Dietrich et al. (2019), including standard bias subtraction and bad-pixel masking, as well as flat-fielding, illumination, and fringe (for the i band only) corrections. The stellar locus regression (High et al. 2009) was constrained by cross-matching with 2MASS catalogs and gives uncertainties of 0.05 mag in absolute magnitude and 0.03 mag in color.

For the creation of the galaxy photometric catalogs, we use a combination of the Source Extractor (SExtractor; Bertin & Arnouts 1996) and Point Spread Function Extractor (PSFEx; Bertin 2011) software packages. SExtractor is run in dual mode, using the i-band image as the reference given the redshift of the cluster; we extract all detected sources with at least 6 connected pixels above the 4σ threshold, using a 5 pix Gaussian kernel. Deblending is performed with 64 sub-thresholds and a minimum contrast of 0.0005. Galaxy magnitudes are SExtractor's MAG_AUTO estimates, whereas colors are derived from aperture magnitudes.

The star-galaxy separation in our sample is performed following Crocce et al. (2019), using the SExtractor parameter SPREAD_MODEL and its corresponding error, SPREADERR_MODEL, derived from the i-band image, for objects within R200 of the SZ center (R200 = 3.84′; Song et al. 2012; Zenteno et al. 2020). Crocce et al. (2019) classified a source as a galaxy if it satisfies Eq. 1, ensuring a 97% purity galaxy catalog. With this separation we find 423 sources classified as galaxies. A visual inspection reveals that only 3 of them are not galaxies; however, most of the galaxies spectroscopically classified as cluster members are not included. To remedy this, we change the limit in Eq. 1 to > 0.004 (for reference, 0.005 gives ∼95% purity; Sevilla-Noarbe et al. 2018), which then includes most of the spectroscopic galaxies (30 missing out of 131, see §3.2.1) but also increases the contamination by other sources (e.g., stars). To improve upon this, we apply a second cut in magnitude for a source to be classified as a galaxy, rejecting sources brighter than i_auto = 18.5 mag, which is ∼0.5 mag brighter than the BCG. On the faint end the cut is set at i_auto < m* + 3 = 23.39, which is beyond the limit of our spectroscopic catalog (see §3.2.5). With this we obtain 789 galaxies, plus the 30 spectroscopic galaxies which did not make the cut. We inspect the properties of these 30 missing objects by comparing their measured SExtractor parameter class_star in the same filter with that of the other 789. class_star is derived using a neural network, giving the probability of a source being a star (class_star ≈ 1) or a galaxy (class_star ≈ 0). The 30 missing galaxies all lie at the high end of this parameter with respect to the other 789, with class_star ≥ 0.80. On the other hand, Sevilla-Noarbe et al. (2018) use a limit of < 0.002 in Eq. 1 to classify a source as a star, and these 30 sources are located in the area between the star cut and the galaxy cut we set above.
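A minimal sketch of the resulting selection, assuming SPREAD_MODEL-style columns (our reading of the garbled parameter names in Eq. 1; the 3-sigma combination below is an assumption based on DES conventions):

```python
# Hypothetical column names; thresholds follow the text (0.004 relaxed limit,
# 18.5 mag bright cut, m* + 3 = 23.39 faint cut).
import numpy as np

def galaxy_mask(spread, spread_err, i_auto, m_star=20.39):
    """Boolean mask of sources kept as galaxy candidates."""
    shape = spread + 3.0 * spread_err > 0.004   # assumed form of Eq. 1
    bright = i_auto > 18.5                      # drop likely-stellar bright sources
    faint = i_auto < m_star + 3.0               # = 23.39 mag
    return shape & bright & faint
```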
The flux-weighted galaxy surface density map (see §3.3 and the inset in Fig. 1) is generated from the population of red-sequence galaxies determined from the star-galaxy separation. Missing potential galaxies from the photometric catalog does not alter the general form of the surface density map, and thus does not alter our conclusions.

MUSE spectroscopy

The footprints of the four MUSE cubes are shown in Fig. 1, with the cubes enumerated in the top right corner of each square. We use these numbers to refer to the cubes throughout the paper. The data were taken in WFM-NOAO-N mode, with a position angle of 18 deg for three of the cubes and 72 deg for the southern one, using the dithering pattern recommended for best calibration: 4 exposures with offsets of 1″ and 90-degree rotations (MUSE User Manual ver. 1.3.0). The raw data were reduced with the MUSE pipeline (Weilbacher et al. 2014, 2016) provided by ESO.

We construct 1D spectra from the MUSE cubes using the MUSELET software (Bacon et al. 2016). MUSELET finds source objects by constructing line-weighted (spectrally), 5 × 1.25 Å wide narrow-band images and running SExtractor on them. In order to create well-fitted masks for their respective sources, the parameter DETECT_THRESH is set to 2.5; below that value, SExtractor detects noise and outputs wrong shapes in the segmentation map. We then use the source file to extract the SExtractor parameters A_WORLD, B_WORLD and THETA_WORLD to create an elliptical mask centered on each source. Finally, we use the MUSELET routines mask_ellipse and sum to create the 1D weighted spectra of the sources. To make sure the objects fit within their apertures, the SExtractor parameter PHOT_FLUXFRAC is set to 0.9, meaning that 90% of a source's flux is contained within the mask's radius.

We complemented the MUSE galaxy redshifts with Gemini/GMOS data published by Bayliss et al. (2016). The Bayliss galaxy redshift sample consists of 35 cluster-galaxy redshifts, 8 of which are not present in our MUSE data. The spectroscopic data from their sample can be found online at the VizieR Catalogue Service (Ochsenbein et al. 2000), with the details of the data reduction described in Bayliss et al. (2016) and Bayliss et al. (2017). For SPT-CLJ0307-6225 they used 2 spectroscopic masks with an exposure time of 1 hour each. The target selection consisted mostly of galaxies from the red sequence (selected as an overdensity in color-magnitude and color-color space) up to m* + 1, prioritising BCG candidates.

X-ray data

SPT-CLJ0307-6225 was observed by Chandra as part of a larger, multi-cycle effort to follow up the 100 most massive SPT-selected clusters spanning 0.3 < z < 1.8 (McDonald et al. 2013, 2017). In particular, this observation (ObsID 12191) was part of a Guaranteed Time program (PI: Garmire). A total of 24.7 ks was obtained with ACIS-I in VFAINT mode, centering the cluster ∼1.5′ from the central chip gap. The data were reprocessed using CIAO v4.10 and CALDB v4.8.0. For details of the observations and data processing, see McDonald et al. (2013).
The derived X-ray centroid is shown as a cyan plus-sign in Fig. 1.

Color-Magnitude Diagram and RCS selection

The color-magnitude diagram (CMD) for the cluster is shown in Fig. 2, where the magenta triangles are galaxies from our spectroscopic sample (see §3.2.1) and the dots represent galaxies from our photometric sample (selected as described in §2.1).

Figure 2. Color-magnitude diagram of the cluster. The y-axis shows the color index r − i estimated from aperture magnitudes, with a fixed aperture of ∼40 kpc (∼6 arcsec) at the cluster redshift, while the x-axis shows SExtractor's MAG_AUTO. Magenta triangles represent galaxies from our spectroscopic sample, whereas dots are galaxies from the photometric sample. The red cluster sequence (RCS) estimated for the cluster is shown as a red dashed line, while the green dotted lines show the 0.22 mag width established for the RCS.

For the selection of the red cluster sequence (RCS) galaxies, which consist mostly of passive galaxies likely to be at the redshift of the cluster (Gladders & Yee 2000), we examine the location of the galaxies from our spectroscopic sample in the CMD. With this information, we then select all galaxies with r − i > 0.65 and perform a 3σ-clipping cut on the color index to remove outliers. We keep all galaxies from our previous magnitude cut in §2.1 (i_auto < 23.39). Finally, we fit a linear regression to the remaining objects, shown with a red dashed line in Fig. 2. The green dotted lines denote the limits of the RCS, chosen to be ±0.22 mag from the fit, which corresponds to the average scatter of the RCS at 3σ (López-Cruz et al. 2004). This gives us a total of 187 optically selected RCS galaxy candidates, 64 of which are spectroscopically confirmed members.

Galaxy redshifts

To obtain the redshifts, we use an adapted version of MARZ (Hinton et al. 2016) for MUSE spectra (http://saimn.github.io/Marz/#/overview). MARZ is an automatic redshifting JavaScript web application that can be used interactively or via the command line; we give the 1D spectrum of each object as input and obtain the spectral type (late-type galaxy, star, quasar, etc.) and the best-fitting redshift as output. The results are examined visually for each object, calibrating them using the 4000 Å break and the calcium H and K lines. Heliocentric corrections were applied to all redshifts. There are three sources in the cube 4 region which appeared to be part of the cluster but were not well fitted by MARZ. These sources are labelled by the white arrows in the top panel of Fig. 3, and their spectra are shown in black in the bottom panel. For comparison, the red arrow points towards a galaxy with a redshift automatically estimated as close to the cluster redshift, whereas the cyan arrow points towards a galaxy with an estimated redshift higher than that of the cluster. In total we estimate spectroscopic redshifts for 116 objects within the MUSE fields, 4 of which are classified as stars. In Table A1 we show the redshifts and magnitudes for these objects; for details of the different columns please refer to Appendix A.

In addition, we supplement these data with 35 GMOS archival reduced spectra (Bayliss et al. 2016). Unfortunately, the headers of these spectroscopic data did not contain the wavelength-calibration information, so we calibrate the spectra manually and then estimate redshifts by cross-correlation with the IRAF package rvsao. For the estimations we use 4 template spectra: eltemp and sptemp, composites of elliptical and spiral galaxies, respectively, produced with the FAST spectrograph for the Tillinghast Telescope (Fabricant et al. 1998); habtemp0, produced with the spectrograph for the MMT as a composite of absorption-line galaxies (Fabricant et al. 1998); and a synthetic galaxy template, syn4, built from stellar spectral libraries using stellar light ratios (Quintana et al. 2000). The redshifts are solved in the spectrum mode of xcsao, taking the R-value (Tonry & Davis 1979) as the main reliability factor of the correlation, following Quintana et al. (2000).
They consider R > 4 as the limit for a reliable result; here we use the resulting velocity only if (a) at least 3 of the 4 redshifts estimated from the templates agree with the heliocentric velocity within ±100 km s⁻¹ of the median, and (b) at least 2 of those have R > 5. Finally, the radial heliocentric velocity of each galaxy and its error are calculated as the mean of the values from the "on-redshift" correlations.

Of the 35 GMOS spectra above, 12 galaxies have a common MUSE measurement, 10 of them belonging to the cluster (see below for details on the selection of the cluster members). We use these 12 galaxies in common to compare the results given by xcsao and MARZ, obtaining a mean difference of 60 ± 205 km s⁻¹ in the heliocentric reference frame. However, only one galaxy showed a velocity difference higher than 3σ; excluding this galaxy from the analysis gives a mean velocity difference of 4 ± 96 km s⁻¹. With respect to the redshift measurements presented in Bayliss et al. (2016), we find that the velocity difference, within ±5000 km s⁻¹ of their redshift estimate for the cluster (z_cl = 0.5801), is |Δv| ≈ 300 km s⁻¹ with a large dispersion. Regarding potential cluster members, we select only galaxies for which the redshifts reported by Bayliss et al. (2016) and the ones estimated using xcsao differ by less than 500 km s⁻¹, which at z_cl = 0.5801 corresponds to a difference of ∼0.1%. This eliminates 2 potential cluster members, one from each method. In Table A1 we show the properties of 22 objects from GMOS, excluding the 12 in common with MUSE and the potential cluster member from our measured redshifts. The other potential cluster member is ID 27 from GMOS-2, for which Bayliss et al. (2016) estimated z = 0.5811. Redshifts in Table A1 correspond to the ones measured using xcsao. Our final spectroscopic catalog is composed of 136 objects: 131 galaxies and 5 stars.

Cluster redshift estimation

The cluster redshift is estimated with the biweight average estimator of Beers et al. (1990), starting from the median redshift of all objects with measured redshifts in our sample; the estimated redshift then replaces the median in their equation to produce a new estimate, and this process is iterated 3 times. We select only spectroscopic sources with a peculiar velocity (see below) within ±5000 km s⁻¹ of the cluster's estimated redshift, in order to exclude most foreground and background objects (e.g., Bösch et al. 2013; Pranger et al. 2014). We then estimate the velocity dispersion (σ) using the biweight sample variance presented in Ruel et al. (2014),

σ²_BI = N Σ_{|u_i|<1} (1 − u_i²)⁴ (v_i − v̄)² / [D(D − 1)], with D = Σ_{|u_i|<1} (1 − u_i²)(1 − 5u_i²), (2)

where the proper velocities of the galaxies, v_i, and the biweight weights, u_i, are estimated as v_i = c(z_i − z̄_bi)/(1 + z̄_bi) and u_i = (v_i − v̄)/(9 MAD), with c the speed of light, MAD the median absolute deviation, and z_i and z̄_bi the redshifts of the galaxies and the biweight estimate of the sample redshift, respectively. The velocity dispersion is then the square root of σ²_BI, with its uncertainty estimated as 0.92 σ_BI/√(N_members − 1). To obtain a final redshift for the cluster we use a 3σ-clipping iteration (with σ = σ_BI), obtaining z_cl = 0.5803 ± 0.0006, where the error is estimated as the standard error, i.e., the standard deviation over the square root of the number of cluster members. The velocity cut for the selection of the cluster members is discussed below.
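A minimal sketch of this estimate, with astropy's biweight estimators standing in for the Beers et al. (1990) and Ruel et al. (2014) formulae (illustrative, not the authors' pipeline):

```python
import numpy as np
from astropy.stats import biweight_location, biweight_scale

C_KMS = 299792.458

def cluster_kinematics(z, n_iter=3, v_window=5000.0, clip=3.0):
    z = np.asarray(z, float)
    z_cl = np.median(z)
    for _ in range(n_iter):                   # iterate the biweight centre
        z_cl = biweight_location(z, M=z_cl)
    v = C_KMS * (z - z_cl) / (1.0 + z_cl)     # peculiar velocities (km/s)
    keep = np.abs(v) < v_window               # drop fore-/background objects
    sigma = biweight_scale(v[keep])
    members = np.abs(v) < clip * sigma        # final 3-sigma clipping
    sigma_err = 0.92 * sigma / np.sqrt(members.sum() - 1)
    return z_cl, sigma, sigma_err, members

# example with synthetic redshifts around z ~ 0.58
rng = np.random.default_rng(0)
z_gal = 0.5803 + rng.normal(0.0, 0.006, 87)
print(cluster_kinematics(z_gal)[:3])
```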
Cluster member selection

Observationally, galaxies belonging to a cluster are selected by imposing restrictions on their distance to the center of the cluster and on their velocities relative to the BCG. In this section we study the appropriate cut in the line-of-sight (LoS) projected velocity of the galaxies relative to their BCG using the IllustrisTNG TNG300 simulation. IllustrisTNG is a suite of cosmological magnetohydrodynamic simulations which aims to study the physical processes that drive galaxy formation (Nelson et al. 2017; Pillepich et al. 2017; Springel et al. 2017; Naiman et al. 2018; Marinacci et al. 2018). TNG300 is the simulation of the suite with the largest volume, with a side length of ∼205 h⁻¹ Mpc. This volume contains 2500³ dark matter (DM) particles and 2500³ baryonic particles. The relatively large size of the simulated box allows us to identify a significant number of massive structures for analysis. The mass resolution of TNG300 is 5.9 × 10⁷ M⊙ and 1.1 × 10⁷ M⊙ for the DM and baryonic matter, respectively, and the adopted softening lengths are 1 h⁻¹ kpc for the DM particles and 0.25 h⁻¹ kpc for the baryonic particles (Marinacci et al. 2018). From this simulation we select a total of 80 clusters with masses 4 × 10¹⁴ M⊙ ≤ M200 ≤ 9 × 10¹⁴ M⊙, located at redshifts 0.1 ≤ z ≤ 1. Here M200 is the mass within a sphere with a mean mass density of 200 times the critical density of the Universe. To ensure that our results are not affected by numerical-resolution effects, we only selected subhalos with at least 1000 dark matter particles per galaxy (M_DM ≥ 5.9 × 10¹⁰ M⊙) and at least 100 stellar particles (M_stellar ≥ 1.1 × 10⁹ M⊙). The final set of 80 virialized and perturbed clusters provides a sample of 9163 associated cluster galaxies. The bound substructures were identified using the SUBFIND algorithm (Springel et al. 2001).

To stack information from the 80 selected clusters we normalize the velocity distributions using the σ-M200 scaling relation from Munari et al. (2013). This scaling relation was obtained from a radiative simulation which included both (a) star formation and supernova-triggered feedback and (b) active-galactic-nucleus feedback (which they call the AGN set). The relation reads

σ_1D = A_1D [h(z) M200 / 10¹⁵ M⊙]^α, (6)

where σ_1D is the one-dimensional velocity dispersion and h(z) = H(z)/100 km s⁻¹ Mpc⁻¹. We choose the values A_1D = 1177 ± 4.2 km s⁻¹ and α = 0.364 ± 0.0021, obtained using galaxies associated with subhaloes in the AGN-set simulation (Munari et al. 2013).

To find the intrinsic LoS velocity distribution of a simulated cluster of mass M200 = 5 × 10¹⁴ M⊙ at a given redshift of z = 0.6, we proceed as follows. We first fit the projected 1D velocity distribution of the cluster galaxies relative to the BCG with a Gaussian of mean 0 and dispersion σ0. Then, using Equation 6, we compute the 1D velocity dispersion σ1 that the cluster would have if it had a mass of M200 = 5 × 10¹⁴ M⊙, and rescale the 1D velocity of each galaxy by σ1/σ0 to normalize for mass and redshift (Eq. 7). Finally, we obtain the LoS velocities by applying 200 different randomized rotations to each cluster. Figure 4 presents the histogram of the stacked LoS velocities of the galaxies in the different projections (blue histogram), the best-fit normal distribution (red dashed line), and the confidence intervals as red shaded areas.
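A direct transcription of Eq. 6 with the AGN-set coefficients quoted above (illustrative snippet):

```python
# sigma_1D = A_1D * [h(z) * M200 / 1e15 Msun]**alpha  (Eq. 6)
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.27)

def sigma_1d(m200_msun, z, A=1177.0, alpha=0.364):
    hz = cosmo.H(z).value / 100.0                  # h(z) = H(z) / (100 km/s/Mpc)
    return A * (hz * m200_msun / 1e15) ** alpha    # km/s

print(sigma_1d(7.64e14, 0.6))  # ~1.0e3 km/s; the 960 km/s quoted below comes
                               # from the stacked-simulation fit, not from Eq. 6
```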
We conclude that, for a theoretical cluster of mass M200 = 7.64 × 10¹⁴ M⊙, the LoS velocities are normally distributed with a dispersion of σ = 960 km s⁻¹. This means that 95% of the galaxies belonging to such a cluster have LoS velocities lower than 1920 km s⁻¹, and 99% of them have LoS velocities lower than 2900 km s⁻¹. In what follows we adopt a cut of 3000 km s⁻¹. Applying the ±3000 km s⁻¹ cut we obtain a total of 87 cluster redshifts, including 25 members from cube 1, 21 from cube 2, 11 from cube 3, 22 from cube 4 and 8 from the GMOS data.

Figure 5. Redshift distribution of the spectroscopic sample. Hashed red bars represent the region within ±3000 km s⁻¹ in peculiar velocity from the cluster redshift. The histogram inset on the top left shows the distribution of galaxies within this velocity range, where the black dashed and dotted lines represent the cuts at ±3000 km s⁻¹ and the velocity of the BCG, respectively.

Summary of spectroscopic catalog

In total, we obtain 87 galaxies with spectroscopic redshifts for SPT-CLJ0307-6225: 79 come from the 1D MUSE objects of §2.2 and 8 from the GMOS archival spectroscopic data (Bayliss et al. 2016). The final redshift, estimated with the biweight average estimator, is z_cl = 0.5803 ± 0.0006. The final galaxy-cluster redshift distribution is shown in Fig. 5. The inset shows the peculiar velocities of the selected galaxies, with the black dashed lines denoting the velocity cut and the black dotted line marking the velocity of the BCG. The velocity dispersion of the cluster, estimated following Eq. 2, is σ = 1093 ± 108 km s⁻¹.

Completeness of MUSE catalog

Since our aim is to look at the properties of the galaxy population, we first need to characterise a limiting magnitude with which to define that population. Fig. 2 shows that the population of spectroscopic RS galaxies stops at i_auto ≈ 22.8, with blue galaxies going as deep as i_auto ≈ 23.3. In order to find the limiting magnitude we want to use, we compare our red-sequence catalog inside the cube footprints in magnitude bins, checking the fraction of spectroscopically confirmed galaxies within each bin. This check allows us (1) to validate our method for selecting RCS members, which will become important when looking for substructures (see §3.3), and (2) to look for potential cluster members not found by MARZ.

Figure 6. Ratio of spectroscopically confirmed members to the red galaxies from our catalog in different magnitude bins. The top axis shows the number of red galaxies per magnitude bin (N_red). The dashed lines denote the limits for m*, m* + 1, m* + 2 and m* + 3, with the percentages giving the accumulated completeness for a given limit.

In Fig. 6 we show the estimated completeness in magnitude bins of 0.5 mag. The y-axis shows the ratio of spectroscopically confirmed red-sequence cluster galaxies to the total number of red-sequence galaxies (photometrically selected plus spectroscopically confirmed) in each magnitude bin, while N_red on the top x-axis shows the number of red galaxies in each bin. The dashed lines show the limits of the regions with magnitudes i_auto < m*, m* + 1, m* + 2 and m* + 3, with the completeness of each luminosity range written to the left of each dashed line. The one "missing" galaxy at i_auto < m* is at z = 0.611 (Δv = 5940 km s⁻¹), while the two missing galaxies near i_auto < m* + 1 correspond to spectroscopically confirmed background galaxies at z = 0.612 and z = 0.716 (Δv = 6130 km s⁻¹ and Δv = 25,867 km s⁻¹, respectively). The latter showed properties similar to those of the cluster galaxies: size, visual color, and spatial proximity to the BCG. Its r − i color index was also within, towards the higher end of, the rather generous width used for our RCS catalog.
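The per-bin completeness shown in Fig. 6 reduces to a histogram ratio; a sketch with synthetic magnitudes (the array contents are placeholders; the 64/187 split mirrors §3.1):

```python
import numpy as np

rng = np.random.default_rng(2)
i_all_rcs = rng.uniform(19.0, 24.0, 187)                # RCS candidates
i_spec_rcs = rng.choice(i_all_rcs, 64, replace=False)   # confirmed members

bins = np.arange(19.0, 24.01, 0.5)                      # 0.5 mag bins
n_spec, _ = np.histogram(i_spec_rcs, bins)
n_all, _ = np.histogram(i_all_rcs, bins)
completeness = n_spec / np.maximum(n_all, 1)            # avoid /0 in empty bins
```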
At i_auto ≥ m* + 2, galaxies appear to belong to the cluster but do not show strong spectral features with which we can estimate the redshift accurately. We therefore require a cut at i_auto < m* + 2 (over 80% completeness) for the analysis of the galaxy population for all galaxies in our spectroscopic sample.

Spectral classification

To understand whether the merger is playing a role in the star-formation activity of the galaxies, we make use of two measurements: the equivalent widths (EW) of the [OII] λ3727 Å and Hδ lines. [OII] λ3727 Å traces recent star-formation activity on timescales ≤10 Myr, while the Balmer line Hδ probes timescales between 50 Myr and 1 Gyr (Paulino-Afonso et al. 2019). A strong Hδ absorption line is interpreted as evidence of an explosive episode of star formation which ended 0.5-1.5 Gyr ago (Dressler & Gunn 1983). To measure the equivalent widths of [OII] λ3727 Å, EW(OII), and Hδ, EW(Hδ), the flux spectrum of each object is integrated over the ranges described by Balogh et al. (1999) using IRAF. We only use the MUSE galaxies, excluding the 8 added GMOS galaxies, given that the MUSE selection is unbiased; we do not expect this to change our main results, since these galaxies are not located along the merger axis. We use the scheme defined by Balogh et al. (1999) to classify our galaxies into different categories: passive, star-forming (SF), short starburst (SSB), post-starburst (PSB; K+A in Balogh et al. 1999) and A+em (which could be dusty star-forming galaxies). For this classification we only take into account galaxies with i_auto < m* + 2 and a signal-to-noise ratio SNR > 3 (62 galaxies), given that low-SNR galaxies can affect the measurement of lines in crowded sections, such as the region of the [OII] λ3727 Å line (Paccagnella et al. 2019). The median SNR of our MUSE galaxies is 12.0 for sources with i_auto < m*, 7.8 for sources with m* ≤ i_auto < m* + 1, 4.0 for sources with m* + 1 ≤ i_auto < m* + 2, and 2.3 for sources with i_auto ≥ m* + 2. We estimate the SNR over the entire spectral range of our data using the DER_SNR algorithm (Stoehr et al. 2007). The results of this classification are discussed further in §4.4.

Galaxy association

Depending on the stage of the merging event, it can be possible to determine the main colliding structures and which galaxies belong to each structure. Several techniques are available to estimate the level of substructure in galaxy clusters using velocities. One of the most common is to analyze the galaxy velocity distribution in one dimension, under the assumption that for a relaxed cluster it should be close to Gaussian (Menci & Fusco-Femiano 1996; Ribeiro et al. 2013). Hou et al. (2009) used Monte Carlo simulations to show that the Anderson-Darling (AD) test is among the most powerful for classifying Gaussian (G) and non-Gaussian (NG) clusters, which is why it has been widely used in astronomy with different separation criteria (e.g., Hou et al. 2009; Ribeiro et al. 2013; Nurgaliev et al. 2017; Lopes et al. 2018). Hou et al. (2009) estimate a significance value α of the statistic to separate G and NG clusters (see Eq. 17 in their paper), where α < 0.05 indicates a NG distribution.
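For reference, scipy ships the AD statistic; it reports critical values rather than a p-value, so criteria like those above are derived from the statistic itself (illustrative snippet with synthetic velocities):

```python
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(1)
v = rng.normal(0.0, 1000.0, 48)       # stand-in for member peculiar velocities
res = anderson(v, dist="norm")
print(res.statistic)                  # A^2 statistic
print(res.significance_level, res.critical_values)
```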
Nurgaliev et al. (2017) use the p-value of the statistic (p_AD) and separate the clusters using p_AD < 0.05/N_tests for NG clusters, where N_tests indicates the number of tests being conducted. Roberts et al. (2018) also use the p-value, with p_AD < 0.1 indicating a NG cluster. We divide our data into 4 subsets for the application of the AD test: cubes 2 and 3 for the middle overdensity, cubes 1 and 4 to compare the two most overdense regions, all the data cubes, and all the data cubes plus the GMOS data.

To test for 3D substructures (using the velocities and the on-sky positions), we use the Dressler-Shectman test (DS test; Dressler & Shectman 1988), which combines the on-sky coordinates with the velocity information and can be used to trace perturbed structures (e.g., Pranger et al. 2014; Olave-Rojas et al. 2018). The DS test uses the velocity information of the closest (projected) neighbors of each galaxy to estimate a Δ statistic, given by

Δ = Σ_i δ_i, with δ_i² = [(N_nn + 1)/σ_cl²] [(v̄_loc − v̄_cl)² + (σ_loc − σ_cl)²],

where the sum runs over the N_tot members of the cluster and δ_i is estimated for each galaxy. N_nn is the number of neighbors of the galaxy used to estimate the statistic, taken as N_nn = √N_tot (Pinkney et al. 1996); σ_cl and σ_loc are the velocity dispersions of the whole cluster and of the neighbors, respectively, and v̄_cl and v̄_loc are the mean peculiar velocities of the cluster and of the neighbors, respectively. A value of Δ/N_tot ≤ 1 implies that there are no substructures in the cluster. To calibrate our DS-test results, we perform 10⁴ Monte Carlo simulations by shuffling the velocities, i.e., randomly interchanging the velocities among the galaxies while maintaining their sky coordinates (so that the neighbors are always the same). The p-value of the statistic (p_Δ) is estimated by counting how many times the simulated Δ is higher than that of the original sample and dividing by the total number of simulations. Choosing p_Δ < 0.05 ensures a low probability of false identification (Hou et al. 2012) and is the threshold we accept for the distribution to be considered non-random. Both the AD and DS test results are shown in Table 2 below.
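The Δ statistic defined above is simple to implement directly; a sketch (our own implementation of the standard test, not the authors' code):

```python
import numpy as np

def ds_delta(xy, v):
    """xy: (N, 2) projected positions; v: peculiar velocities (km/s)."""
    N = len(v)
    n_nn = int(np.sqrt(N))                      # N_nn = sqrt(N_tot) neighbours
    v_cl, s_cl = v.mean(), v.std(ddof=1)
    delta = np.empty(N)
    for i in range(N):
        d2 = ((xy - xy[i]) ** 2).sum(axis=1)
        idx = np.argsort(d2)[: n_nn + 1]        # galaxy i plus its neighbours
        v_loc, s_loc = v[idx].mean(), v[idx].std(ddof=1)
        delta[i] = np.sqrt((n_nn + 1) / s_cl**2 *
                           ((v_loc - v_cl) ** 2 + (s_loc - s_cl) ** 2))
    return delta.sum()                          # compare with N_tot; calibrate
                                                # p_Delta by shuffling v
```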
When velocities are not available, or the velocity differences between the clusters are small, another common practice is to use the sky positions of the galaxies and build surface density maps to look for substructures (see, e.g., White et al. 2015; Monteiro-Oliveira et al. 2017; Yoon et al. 2019). The galaxy surface density map at the top right of Fig. 1 implies that there are at least two colliding structures. To obtain the density map we use the RCS galaxy catalog and a kernel-density-estimation Python module, applying a Gaussian kernel with a bandwidth of 50 kpc.

X-ray morphology

An image in the 0.5-4.0 keV bandpass was extracted and adaptively smoothed. This smoothed image, shown as orange contours in Fig. 1, reveals a highly asymmetric X-ray morphology, with a bright, dense core offset from the large-scale centroid by ∼1′ (∼400 kpc). Nurgaliev et al. (2017) used these same data to estimate the X-ray asymmetry of this system, finding it to be the second most asymmetric system in the full SPT-Chandra sample, with an X-ray morphology as disturbed as that of El Gordo, a well-known major merger (Williamson et al. 2011; Menanteau et al. 2012).

Cluster substructures

In Table 2 we show the results of both the AD test and the DS test applied to the different subsets. The second column corresponds to the number of spectroscopic galaxies in a given subsample. The subset giving the smallest p-values for both the AD test and the DS test is the cubes 1+4 subset, these cubes being located on top of the two density peaks and also enclosing the area next to the two brightest galaxies (see Fig. 1). We find that neither the AD test nor the DS test provides evidence of substructure, and applying a 3σ-clipping iteration to the samples does not change the results. These results, together with the X-ray morphology, show no evidence of substructure along the line of sight and rather support a merger in the plane of the sky; we therefore take a look at the spatial distribution of the galaxies.

In Fig. 7 we show the contours of the unweighted and flux-weighted density maps (top and bottom panels, respectively) of the RCS galaxies. The contour levels begin at 100 gal Mpc⁻² and increase in intervals of 50 gal Mpc⁻². Dots correspond to galaxies from our spectroscopic samples. In these figures, whether weighted or unweighted, the cores of the two main structures with their corresponding BCGs can be seen, along with a high density of galaxies in between them.

Figure 7. Unweighted (top) and flux-weighted (bottom) numerical density maps of the RCS galaxies (photometric and spectroscopic), shown as black contours; levels begin at 100 galaxies per Mpc², and the flux was estimated from the i band. Galaxies not close to the density levels, or classified as not being part of any structure by the DBSCAN algorithm, are shown as black dots, while dots in different substructures according to the algorithm are colored by substructure: 0307-6225N (red), 0307-6225S (orange) and the in-between overdensity (green).

For the definition of the substructures we take into account only spectroscopic members within (or near) the limits of our density contours. To distinguish the galaxies with the highest probability of being part of each structure we use the Density-Based Spatial Clustering of Applications with Noise algorithm (DBSCAN; Ester et al. 1996). The advantage of this algorithm is that galaxies are not necessarily assigned to a group, leaving some of them out. We use a projected-distance-based application of the algorithm, following Olave-Rojas et al. (2018), with a substructure defined as at least three neighbouring galaxies within a separation of ∼140 kpc. The resulting structures are shown with different coloured dots in Fig. 7; black dots represent galaxies that either were too far from our density contours or were discarded by the DBSCAN algorithm. We name the two most prominent structures defined by DBSCAN 0307-6225N (red dots) and 0307-6225S (orange dots), comprising 23 and 25 members, respectively. The BCGs of 0307-6225S and 0307-6225N are marked by superscripts in Table A1. Both structures show a Gaussian velocity distribution when applying the AD test, and the distance between them is ∼1.10 Mpc between their BCGs and ∼1.15 Mpc between the peaks of the density distribution. Regarding the in-between overdensity of 19 galaxies (green dots in Fig. 7), we chose to discard it as an actual structure given that (1) unlike the other two structures it does not have a massive dominant galaxy, and (2) its estimated velocity dispersion is σ = 1400 km s⁻¹, which translates into an unlikely mass of 1.7 × 10¹⁵ M⊙ (see §4.2). We come back to this overdensity in §5.1.2.
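A minimal sketch of this assignment with scikit-learn, using the ∼140 kpc / three-galaxy criterion (the positions are placeholders):

```python
import numpy as np
from sklearn.cluster import DBSCAN

xy = np.random.default_rng(0).uniform(-1500, 1500, (87, 2))  # kpc, placeholder
labels = DBSCAN(eps=140.0, min_samples=3).fit_predict(xy)
# labels == -1 marks galaxies not assigned to any substructure
```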
Cluster dynamical mass

We estimate the masses using the Munari et al. (2013) scaling relation between the mass and the velocity dispersion of the cluster (see Eq. 6). The Gaussian velocity distribution, together with the large separation between the centers of the two structures (∼1.1 Mpc between the BCGs) and the fact that the velocity difference between them is Δv = 342 km s⁻¹ (in the cluster's frame of reference), strongly suggests a plane-of-the-sky merger (see, e.g., Dawson et al. 2015; Mahler et al. 2020) and could therefore imply that the overestimation of the masses from scaling relations is minimal. We explore this further in §5.1.1. In order to minimize the possible overestimation when using scaling relations, we only use RCS spectroscopic galaxies to estimate σ, since in clusters with a high accretion rate blue galaxies tend to raise the value of the velocity dispersion (Zhang et al. 2012). In Table 3 we show the properties of the two substructures. The two structures have similar masses, with the most probable ratio M_S/M_N ≈ 1.3, albeit with large uncertainties. The galaxies selected for the dynamical mass estimation are likely to belong to the core regions of the two clusters; galaxies in these regions are expected to be virialized and should more closely follow the gravitational potential of the clusters during a collision, giving a better estimation of the masses when using the velocity dispersion.

Cluster merger orbit

To understand the merging event, we use the Monte Carlo Merger Analysis Code (MCMAC; Dawson 2013), which analyzes the dynamics of the merger and outputs its kinematic parameters. The model assumes a two-body collision of two spherically symmetric halos with NFW profiles (Navarro et al. 1996, 1997), where the total energy is conserved and the impact parameter is assumed to be zero. The parameters are estimated from the Monte Carlo analysis by randomly drawing from the probability density functions of the inputs. The inputs required for each substructure are the redshift and the mass, with their respective errors, along with the distance between the structures and the errors on their positions. We use the values shown in Table 3 as inputs, where the errors on the redshifts are estimated as the standard error, while the errors on the distance are given by the distances between the BCGs and the peak of the density distribution of each structure (0.144′ and 0.017′ for 0307-6225N and 0307-6225S, respectively). The results are obtained by sampling through 10⁵ iterations and are shown and described in Table 4, with the errors corresponding to the 1σ level. MCMAC outputs the merger-axis angle α, the estimated distances and velocities at different times, and two possible current stages of the merger: outgoing after the first pericentric passage, or incoming after reaching apoapsis. The times since pericentric passage (TSP) for the two scenarios are denoted TSP0 for the outgoing scenario and TSP1 for the incoming one; these are the estimates we discuss further when recovering the merger orbit of the system.
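For intuition about what such a two-body analysis returns, a crude timing estimate can be obtained by integrating a radial orbit. This is only in the spirit of MCMAC (which uses truncated NFW halos and Monte Carlo draws; its API is not reproduced here), and the pericentric conditions below are assumptions, so the absolute numbers differ from the values in Table 4:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 4.30091e-6                    # kpc (km/s)^2 / Msun
M = 2.42e14 + 3.13e14             # sum of the dynamical masses (Msun)

def rhs(t, y):                    # y = [separation (kpc), velocity (km/s)]
    r, v = y
    return [v, -G * M / r**2]     # point-mass approximation of the halos

def hit(t, y):                    # stop at the observed ~1.1 Mpc separation
    return y[0] - 1100.0
hit.terminal = True

# launch outward from an assumed pericentric separation and speed
sol = solve_ivp(rhs, (0.0, 1.0), [100.0, 6600.0], events=hit, rtol=1e-9)
tsp0_gyr = 0.978 * sol.t_events[0][0]   # 1 kpc/(km/s) ~ 0.978 Gyr
print(f"outgoing time since pericentre ~ {tsp0_gyr:.2f} Gyr")
```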
To further constrain the stage of the merger we compare the observational features with simulations. We use the Galaxy Cluster Merger Catalog (ZuHone et al. 2018), in particular the "A Parameter Space Exploration of Galaxy Cluster Mergers" simulation (ZuHone 2011), which consists of an adaptive-mesh-refinement, grid-based hydrodynamical simulation of a binary collision between two galaxy clusters, with a box size of 14.26 Mpc. The binary-merger initial configuration separates the two clusters by a distance of the order of the sum of their virial radii, with their gas profiles in hydrostatic equilibrium. With this simulation one can explore the properties of a collision between clusters with mass ratios of 1:1, 1:3 and 1:10, where the mass of the primary cluster is M200 = 6 × 10¹⁴ M⊙, similar to the SZ-derived mass of M200 = 7.63 h70⁻¹ × 10¹⁴ M⊙ for SPT-CLJ0307-6225 (Bleem et al. 2015), and with different impact parameters (b = 0, 500, 1000 kpc). We use both the 1:3 and the 1:1 merger mass ratios. Since we cannot constrain the impact parameter, we use all of them and study their differences; for example, the larger the impact parameter, the longer it takes the merging clusters to reach apoapsis. We also note that for our analysis we use a projection along a single simulation axis, since the evidence suggests a collision taking place in the plane of the sky.

Determining TSP0 and TSP1 from the simulations

To determine the collision time, we use the dark matter distribution of both objects, focusing on the distance between their density cusps in different snapshots. To determine the snapshots for an outgoing and an incoming scenario, i.e., those closest to what we see in our system, we look for the snapshot where the separation between the peaks is similar to the projected distance between our BCGs (∼1.10 Mpc). In Table 5 we show the results for the different impact parameters, where the second column indicates the mass ratio. The third column shows the simulation time at which the distance between the two halos is minimal (pericentric passage time); the errors are the temporal resolution of the simulation at the chosen snapshot. Following the previous nomenclature, the fourth column, TSP0_sim, corresponds to the time from the first pericentric passage (minimum approach), while the fifth column, TSP1_sim, corresponds to the time from the pericentric passage to the first turnaround, heading towards the second passage. Times are either the snapshot time or an average between two snapshots if the estimated separations are nearly equally close to the ∼1.10 Mpc distance. For b = 0 kpc, the maximum distance achieved between the two dark matter halos in the 1:3 mass-ratio simulation was 1.05 Mpc, while for the 1:1 mass ratio it was 0.99 Mpc, meaning that we cannot separate the two scenarios by comparing with the projected distance between 0307-6225N and 0307-6225S. In Fig. 8 we show, as an example, the density contours of the galaxies from the simulation with mass ratio 1:3 and b = 1000 kpc, where the contours were estimated as described in §3.3. The density contours at T = 1.9 Gyr and T = 2.7 Gyr are shown in the top (outgoing scenario) and bottom (incoming scenario) panels, respectively, where T is the time since the beginning of the simulation. Dots are from our spectroscopic sample, with the same colors as in Fig. 7, and the red contours are the unweighted RCS-galaxy numerical density map from the same figure.
It is worth noting that, although the density contours from the simulations and the galaxies from our observations do seem to be well correlated, the simulations (and therefore the density contours) were not influenced in any way by our observations. The only manipulation of the contours is a rotation and translation of the simulation coordinate system so that they match the position of the galaxies of 0307-6225S. The results shown in Table 5 suggest that the estimate of TSP1 by MCMAC is too large, giving preference to the outgoing-system scenario. We discuss this further in §5.1.3.

X-ray morphology

The hydrodynamical simulations render a gas distribution that can be directly compared to the observations. Fig. 9 shows the snapshots of the outgoing scenario, while Fig. 10 shows the snapshots of the incoming scenario; the projected X-ray emission is overplotted as blue contours on top of the projected total density for the simulation snapshots close to the derived TSPs (Table 5), with the simulation time shown at the bottom left of each panel. Note, however, that for the 1:1 mass ratio and b = 500 kpc the system reaches the ∼1.1 Mpc distance at turnaround, which means that we cannot differentiate between an outgoing and an incoming scenario; we keep the same snapshot in both Figures 9 and 10 for comparison. The scenarios with a 1:3 mass ratio most closely resemble the gas distribution of our Chandra observations (orange contours in Fig. 1). We come back to this in §5.1.3.

The impact of the merging event on the galaxy populations

In Fig. 11 we show the CMD for each subsample: all galaxies, galaxies belonging to 0307-6225N and 0307-6225S, and galaxies not belonging to either of them. Galaxies are color-coded according to their spectral classification. Most of the star-forming galaxies are located within the two main structures (9 out of 10 SF+SSB galaxies), with some of them classified as RCS galaxies (4: 2 SF and 2 SSB). Galaxies with SNR < 3 and/or i_auto > m* + 2 are plotted as black crosses. For simplicity, we use the following notation (and combinations thereof) to refer to the different galaxy populations throughout the text (restated as code below):

• SSB: short-starburst galaxies, following Balogh et al. (1999).
• EL: emission-line galaxies (galaxies with EW(OII) ≥ 5 Å), including SSB, star-forming (SF) and A+em galaxies, the latter believed to be dusty star-forming galaxies (Balogh et al. 1999).
• NEL: non-emission-line galaxies, i.e., passive and PSB; these are galaxies with EW(OII) < 5 Å.
• Red galaxies: galaxies belonging to (or redder than) the red cluster sequence of §3.1.
• Blue galaxies: galaxies with colors bluer than the red cluster sequence.

Figure 8. Same as Fig. 7, but drawn from the galaxies of the merger simulation (1:3 mass ratio and b = 1000 kpc) at t = 1.9 Gyr and t = 2.7 Gyr (top and bottom panels, respectively) since the beginning of the simulation. The coordinates of the density maps were rotated and translated so as to be comparable with the positions of the galaxies (dots) of SPT-CLJ0307-6225. For comparison, the red contours show the SPT-CLJ0307-6225 unweighted density map from Fig. 7, with the dots being the spectroscopic galaxies, following the same color scheme.

Given that most of the SF galaxies seem to be located at the cluster cores, especially the red SF galaxies, it is plausible that they were part of the merging event rather than being accreted after it.
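As a compact restatement of these labels (the EW(OII) ≥ 5 Å cut is quoted above; the EW(Hδ) boundary is an illustrative stand-in for the Balogh et al. 1999 thresholds):

```python
def population(ew_oii, ew_hdelta):
    """Classify a galaxy into the broad populations used in the text."""
    if ew_oii >= 5.0:                 # emission-line branch: SF, SSB or A+em
        return "EL"
    return "PSB" if ew_hdelta >= 5.0 else "passive"   # NEL branch (assumed cut)
```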
In Fig. 12 we show a phase-space diagram, with the x-axis being the separation from the SZ center, negative for objects to the south of it. Circles are red galaxies, while triangles are blue galaxies. Inverted triangles are blue galaxies with no emission lines (filled for PSB, unfilled for passive), while filled circles are SSB galaxies. Galaxies are color-coded dark red if they belong to 0307-6225N, dark orange if they belong to 0307-6225S, and black otherwise.

Figure 9. Simulation snapshots at times close to the TSP0 values of Table 5. The projected total density of the simulations is shown in red in the background, with the contrast starting at 1 × 10⁷ M⊙ kpc⁻². Blue contours were derived from the projected X-ray emission, with levels of 0.5, 1, 5, 10 and 15 × 10⁻⁸ photons s⁻¹ cm⁻² arcsec⁻². Simulations are divided according to their mass ratio (1:3 on top and 1:1 on the bottom) and according to the impact parameter (500 kpc in the left panels and 1000 kpc in the right panels). The box size is the same as that used in Fig. 1, and the white bar corresponds to the same length of 1 arcmin shown in Fig. 1.

Figure 10. Same as Fig. 9, but derived from the simulations at the TSP1 times.

Figure 11. CMD of the cluster for the different samples. Galaxies are color-coded according to their spectral classification described in §4.4. Top left: entire spectroscopic data sample. Top right: sample comprising galaxies not belonging to 0307-6225N or 0307-6225S, i.e., galaxies from the in-between overdensity plus galaxies not assigned to any substructure by DBSCAN. Bottom: 0307-6225S and 0307-6225N samples, shown in the left and right panels, respectively. The green dotted lines are the limits of the RCS zone. Black crosses are galaxies with SNR < 3 or i_auto ≥ m* + 2. Filled colors are galaxies classified as SSB.

In Fig. 13 we show small crops of 7 × 7 arcsec (47 × 47 kpc at the cluster redshift) of the EL galaxies plus the two NEL blue galaxies. The top and middle rows show galaxies from 0307-6225S and 0307-6225N, respectively, while the bottom row shows galaxies which do not belong to the cluster cores.

The particular case of 0307-6225S

Fig. 11 shows that 0307-6225S has (1) the bluest members of our sample and (2) two very bright galaxies of nearly the same magnitude (galaxies with ID 35 and 46 from the MUSE-1 field in Table A1, marked with superscripts 1 and 2, respectively). In Fig. 14 we provide a zoom-in from Fig. 1 to show the southern structure in more detail. Red circles mark spectroscopic members of this region with SNR > 3 and i_auto < m* + 2. The two brightest galaxies are the two elliptical galaxies in the middle, marked with red stars, with Δm = 0.0152 ± 0.0063 mag and Δv = 600 km s⁻¹. The on-sky separation between their centers (∼41 kpc) suggests that these galaxies could be interacting with each other. In Fig. 15 we show the peculiar velocity distribution, with respect to the redshift of 0307-6225S (z = 0.5810), of all galaxies (black unfilled histogram) and of RCS galaxies (red hashed lines) belonging to this structure. The blue shaded area denotes the region within 1σ for this structure, and the black dashed lines represent the peculiar velocities of the two BCG candidates, of which the southern one has a peculiar velocity closer to 0 (Δv = −8 km s⁻¹). For this reason, we choose this galaxy (ID 46) as the BCG of 0307-6225S.

Figure 12. Phase-space diagram of spectroscopic members with SNR ≥ 3 and i_auto < m* + 2. Galaxies are colored dark red, dark orange and black if classified as belonging to 0307-6225N, to 0307-6225S, or to neither of them, respectively. Crosses are galaxies classified as non-emission-line galaxies. Emission-line galaxies which belong to (or have redder colors than) the RCS are plotted as circles; triangles are galaxies with colors bluer than the RCS, whereas inverted triangles are blue post-starburst (filled) or passive (unfilled) galaxies. The symbol sizes of the EL galaxies are correlated with their EW(OII) strength. Filled circles correspond to SSB galaxies.
Merging history of 0307-6225S and 0307-6225N

Here we discuss the estimated masses, how they compare with previous estimates, and the risks of using scaling relations to study dynamically perturbed systems. We then discuss how the merging parameters derived by MCMAC could be further constrained by constraining the merging angle, especially the error bars on the estimated times for an outgoing and an incoming system. Finally, we show how the comparison with simulations favours an outgoing scenario given the estimated times and the X-ray morphology, with the latter also indicating a preferred mass ratio of 1:3.

Mass estimation of a merging cluster

Recovering the merging history of two observed galaxy clusters is not trivial. Most methods require a mass estimate of the colliding components, which is not always an easy task (see the effect of mergers on cluster mass estimates in Takizawa et al. 2010; Nelson et al. 2012, 2014). The use of lensing measurements is one of the most precise ways of obtaining a mass estimate for the components (e.g. Clowe et al. 2006; Pandge et al. 2019; Monteiro-Oliveira et al. 2020); however, this method requires deep, high-quality photometric images for the measurement of the distortions. Dietrich et al. (2019) used the same ground-based optical imaging described in this paper to measure the weak-lensing surface mass density of SPT-CLJ0307-6225. However, their result shows that for this cluster the signal was not strong enough (as shown in their Figure B4), as the peak of the surface mass density is at a distance greater than R_200 from the SZ center. The velocity dispersion (along the line of sight) of the galaxies of a cluster can also be used to infer its mass, using for example the virial theorem (e.g. Rines et al. 2013; White et al. 2015) or scaling relations (e.g. Evrard et al. 2008; Saro et al. 2013; Munari et al. 2013; Dawson et al. 2015; Monteiro-Oliveira et al. 2021). For the mass estimations of our structures we use the latter, although it is important to note that these measurements are also affected by the merging event, as colliding structures can show alterations in the velocities of their members. White et al. (2015) argue that the masses of merging systems estimated using scaling relations can be overestimated by a factor of two. Since we have a separation of ∼1.1 Mpc between the two structures and the velocity distribution of each cluster is Gaussian, we believe the overestimation is low. Also, the velocity difference of |Δv_N−S| = 342 km s^-1 suggests that the merger is taking place close to the plane of the sky, similar to what Mahler et al. (2020) find for the dissociative merging galaxy cluster SPT-CLJ0356-5337. Furthermore, the velocity difference between the BCGs and the redshift of each substructure is ≤20 km s^-1 for both 0307-6225N and 0307-6225S, which might indicate that the two merging substructures were not too dynamically perturbed by the merger.
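The velocity dispersions that enter these scaling-relation masses are usually measured with robust estimators. The sketch below is a minimal illustration, assuming a biweight scale estimator with a simple bootstrap error; both that choice and the randomly generated member velocities are assumptions of the example, not a description of the paper's exact pipeline.

```python
# Minimal sketch of a robust line-of-sight velocity dispersion estimate with
# a bootstrap uncertainty. The member velocities are hypothetical stand-ins.

import numpy as np
from astropy.stats import biweight_scale

rng = np.random.default_rng(42)
v_pec = rng.normal(0.0, 750.0, size=30)  # hypothetical member velocities, km/s

sigma = biweight_scale(v_pec)
boot = [biweight_scale(rng.choice(v_pec, size=v_pec.size, replace=True))
        for _ in range(1000)]
print(f"sigma = {sigma:.0f} +/- {np.std(boot):.0f} km/s")
```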
In order to further minimize the bias of using scaling relations, we use only RCS galaxies; blue galaxies are, however, taken into account when reporting the number of members in Table 3 and when analysing the galaxy populations below. It is worth noting that Ferragamo et al. (2020) recently suggested correction factors on both σ and the estimated mass to account for cases with a low number of galaxies. They also apply other correction factors to turn σ into an unbiased estimator by taking into account, for example, interlopers and the radius within which the sources are enclosed. However, applying these changes does not alter our results drastically, with the newly derived masses being within the errors of the previously derived ones. To check how masses derived from the velocity dispersion of merging galaxy clusters could be overestimated, we estimate the masses of the simulated clusters from the 1:3 merging simulation (from §4.3) at all times (and impact parameters b), following the equations from Munari et al. (2013), using their velocity dispersion. It is worth noting that we cannot separate RCS members to estimate the velocity dispersions, since the simulation does not give information regarding the galaxy population. Fig. 16 shows the derived masses at different times for the 1:3 mass-ratio simulation for different values of b. The black dotted lines represent the collision time, and the dashed lines with the gray shaded areas represent the TSPs and their errors from Table 5, respectively. It can be seen that before the collision, and a few Gyr after it, the masses are overestimated, especially for the smaller-mass cluster. However, near the TSP0 times the derived masses are in agreement, within the errors, with the real masses. This is also true for TSP1 with b = 500 kpc, but for the same time with b = 1000 kpc the main cluster's mass is actually underestimated. Although we cannot further constrain the masses from the simulation using only RCS members, this information does suggest that our derived masses are not strongly affected by the merging itself, given the possible times since collision.

Figure 13. On the bottom left of each image the spectral type of the galaxy is shown, with a white bar on the bottom right representing the scale size of 1 arcsecond. Galaxies on the top and middle rows belong to 0307-6225S and 0307-6225N, respectively, while galaxies on the bottom row are those that do not belong to either of the aforementioned.

Figure 14. Zoom from Fig. 1 into 0307-6225S, with the white bar on the top left showing the scale of the image. Spectroscopic members with SNR < 3 or m_auto ≥ m* + 2 are shown as cyan circles, while red and green circles/stars represent passive and emission-line cluster galaxies, respectively, where emission-line refers to SF or SSB galaxies. The two brightest galaxies are marked with stars.
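As a concrete illustration of the Munari et al. (2013) scaling relation used above, the sketch below inverts σ_1D = A_1D [h(z) M_200 / 10^15 M⊙]^α for M_200. The galaxies-as-tracers coefficients (A_1D ≈ 1177 km/s, α ≈ 0.364), the flat ΛCDM cosmology and the input dispersion are all assumptions of this example.

```python
# Minimal sketch of a velocity-dispersion mass estimate via the Munari et al.
# (2013) relation, sigma_1D = A_1D * [h(z) * M200 / 1e15 Msun]**alpha,
# inverted for M200. Coefficients and cosmology are assumed for illustration.

from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)  # assumed cosmology

def m200_from_sigma(sigma_1d_kms, z, a_1d=1177.0, alpha=0.364):
    """Invert the Munari et al. (2013) relation; returns M200 in Msun."""
    hz = cosmo.H(z).value / 100.0  # dimensionless h(z)
    return 1e15 / hz * (sigma_1d_kms / a_1d) ** (1.0 / alpha)

# Hypothetical example: a substructure with sigma = 800 km/s at z = 0.58
print(f"M200 ~ {m200_from_sigma(800.0, 0.58):.2e} Msun")
```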
Recovery of the merger orbit

With the masses estimated, the merging history can be recovered by using a two-body model (Beers et al. 1990; Cortese et al. 2004; Gonzalez et al. 2018) or by using hydrodynamical simulations constrained with the observed properties of the merging system (e.g. Mastropietro & Burkert 2008; Machado et al. 2015; Doubrawa et al. 2020; Moura et al. 2021), with the disadvantage that the latter method is computationally expensive. The method presented by Dawson (2013), MCMAC, is a good compromise between computational time and accuracy of the results, with a dynamical parameter estimation accuracy of about 10% for two dissociative mergers, the Bullet Cluster and the Musket Ball Cluster. MCMAC returns two different times since collision after the first pericentric passage, TSP0 = 0.96^{+0.31}_{-0.18} Gyr and TSP1 = 2.60^{+1.07}_{-0.53} Gyr, for an outgoing and an incoming merger, respectively. A more detailed analysis of the X-ray data could further constrain both the MCMAC output, e.g. by constraining the merging angle (Monteiro-Oliveira et al. 2017) and the TSP (Dawson 2013; Ng et al. 2015; Monteiro-Oliveira et al. 2017) from shocks (if any), and also the merging scenario from hydrodynamical simulations, e.g. by comparing the temperature maps or by running a simulation which recovers the features (both of the galaxies and of the ICM) of this particular merger. This is particularly interesting given that the simulations we use for comparison have a merger axis angle of α = 0 deg. Dawson (2013) runs MCMAC on the Bullet Cluster data and finds α = 50^{+23}_{-23} deg; however, by adding a prior using the X-ray shock information, he is able to constrain the angle to α = 24^{+14}_{-8} deg, which is closer to the plane of the sky and also decreases significantly the error bars on the estimated collision times. For instance, if we assume that the merger is nearly in the plane of the sky and constrain the merging angle α from MCMAC to be between 0° and 45°, then the resulting values are α = 25^{+6}_{-6} deg, TSP0 = 0.73^{+0.09}_{-0.09} Gyr and TSP1 = 2.10^{+0.51}_{-0.30} Gyr, which are still within the previously estimated values (within the errors) and have smaller error bars. However, the estimated TSP1 is still higher than any of the values estimated from the simulations (see Table 5).

Figure 16. Velocity-dispersion-derived masses for the 1:3 mass-ratio simulations used in this work, with different b. The x-axis is the time since the simulation started running, with the blue and orange dots corresponding to the main cluster and the secondary cluster, respectively. The blue and orange dashed lines represent the masses of 6 × 10^14 and 2 × 10^14 M⊙, respectively. Black dotted lines mark the collision times estimated following §4.3. Vertical black dashed lines mark the estimated TSP0 and TSP1 shown in Table 5, with the gray area being the errors on this estimation.

A similar system is the one studied by Dawson et al. (2012): DLSCL J0916.2+2951, a major merger at z = 0.53 with a projected distance of 1.0^{+0.11}_{-0.14} Mpc. Their dynamical analysis gives masses similar to those of our structures (when using σ–M scaling relations), with the mass ratio between their northern and southern structures being M_S/M_N = 1.11 ± 0.81. Using an analytical model, they were able to recover a merging angle α = 34^{+20}_{-14} deg and a physical separation of d_3D = 1.3^{+0.97}_{-0.18} Mpc, both values in agreement with what we found. Furthermore, their time since collision, TSP = 0.7^{+0.2}_{-0.1} Gyr, is also similar to the one found for our outgoing system; however, they do not differentiate between an outgoing and an incoming system.
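The geometry behind such angle constraints is simple: for a merging angle α between the merger axis and the plane of the sky, the observed projected separation and line-of-sight velocity difference deproject as d_3D = d_proj/cos α and v_3D = Δv_los/sin α. The Monte Carlo sketch below is a toy illustration of the uniform 0°–45° prior discussed above; the point-estimate inputs (d_proj ∼ 1.1 Mpc, Δv_los = 342 km/s) follow the text, but the sketch is a simplification of the full MCMAC treatment.

```python
# Minimal sketch: deproject the observed separation and line-of-sight
# velocity difference under a uniform prior on the merging angle alpha.
# The lower bound excludes the degenerate alpha -> 0 limit.

import numpy as np

rng = np.random.default_rng(0)
alpha = np.deg2rad(rng.uniform(0.5, 45.0, size=100_000))  # prior draws

d_proj, dv_los = 1.1, 342.0         # Mpc, km/s (from the observations)
d_3d = d_proj / np.cos(alpha)       # 3D separation
v_3d = dv_los / np.sin(alpha)       # 3D relative velocity (large for small alpha)

print(f"median d_3D = {np.median(d_3d):.2f} Mpc")
print(f"median v_3D = {np.median(v_3d):.0f} km/s")
```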
Regarding the in-between structure, the estimated velocity dispersion is very high (σ = 1400 km s^-1) and the density map shows that this region is not as dense as the other two. To check whether such a region is common in a merger of two galaxy clusters, we examine how the density map varies in the 1:3 mass-ratio simulations near the estimated TSP0. In Fig. 17 we show, on each row, the density maps of the simulations, with the corresponding time shown at the bottom left and the impact parameter of the row at the top left of the first panel of each row. Levels start at 100 galaxies Mpc^-2 and increase in steps of 50. The cluster with 6 × 10^14 M⊙ is located at the bottom. The middle column shows the density map at TSP0, and the previous and next two snapshots are also shown. At different times, the density maps for the same impact parameter are rather irregular, with the in-between region changing from snapshot to snapshot. In particular, both b = 0 kpc and b = 1000 kpc show an overdense in-between area near TSP0. However, this is not the case in other snapshots, so we cannot state with confidence that it is common for a merging cluster to show such a pronounced in-between overdense region.

Constraining the TSP with simulations

We compare the results derived by MCMAC with those estimated from a hydrodynamical simulation of two merging structures with a mass ratio of 1:3 (ZuHone 2011; ZuHone et al. 2018). We chose this ratio since the X-ray morphologies of the simulation and the system are a better match than in the 1:1 mass-ratio case, where the X-ray intensity from the simulation is similar for the two structures (see Figs 9 and 10), unlike our system, which has two distinctly different structures (see the orange contours in Fig. 1). To compare the results from MCMAC with the simulation, it is necessary to have a good estimate of (1) the time when the two structures have their first pericentric passage and (2) the TSP_sim for the outgoing and incoming scenarios. For the former, we determined the time in the simulation when the separation of the dark matter halos was at its minimum, while for the latter we used the time when the separation was similar to that of our BCGs (a minimal sketch of both timing definitions is given below). For each b, the estimated TSP0_sim is smaller than, but in agreement with, the result from MCMAC; however, the estimated TSP1_sim is never in agreement (at least at 1σ). Using dark-matter-only simulations, Wittman (2019) looked for halos with configurations similar to those of observed merging clusters (such as the Bullet and Musket Ball clusters) and compared the times since collision to those derived by MCMAC and other hydrodynamical simulations, finding that with respect to the latter the derived merging angles and TSP are consistent. However, both the outgoing and incoming TSP and the angles are lower than those derived by MCMAC, with the differences attributed to the MCMAC assumption of zero distance between the structures at the collision time. Sarazin (2002) discusses that most merging systems should have a small impact parameter, of the order of a few kpc. Dawson et al. (2012) argue that, given the displayed gas morphology, the dissociative merging galaxy cluster DLSCL J0916.2+2951 has a small impact parameter. The argument is that simulations show that the morphology of mergers with small impact parameters is elongated transverse to the merger direction (Schindler & Muller 1993; Poole et al. 2006; Machado & Lima Neto 2013). The X-ray morphology shown in this paper is similar to that of Dawson et al. (2012). It is also similar to that of Abell 3376 (Monteiro-Oliveira et al. 2017), a merging galaxy cluster which was simulated by Machado & Lima Neto (2013) with different impact parameters (b = 0, 150, 350 and 500 kpc), with their results suggesting that a model with b < 150 kpc is preferred.
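As referenced above, the two timing definitions reduce to simple operations on the halo separation history. The sketch below uses a toy separation curve as a stand-in for the simulation snapshots; the functional form, time grid and 1.1 Mpc matching separation are assumptions of the example.

```python
# Minimal sketch of the timing definitions used above: the collision time is
# the epoch of minimum dark-matter halo separation, and TSP_sim is the later
# epoch when the separation matches the observed (~1.1 Mpc) BCG separation.

import numpy as np

t = np.linspace(0.0, 4.0, 81)            # Gyr since simulation start
sep = np.abs(1.5 * (t - 1.4)) + 0.05     # toy separation history, Mpc

i_coll = np.argmin(sep)                  # first pericentric passage
after = np.arange(len(t)) > i_coll
i_tsp = np.argmin(np.abs(sep[after] - 1.1))  # match the observed separation
tsp = t[after][i_tsp] - t[i_coll]

print(f"collision at t = {t[i_coll]:.2f} Gyr, TSP_sim = {tsp:.2f} Gyr")
```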
Given the similarity between the SPT-CLJ0307-6225 X-ray morphology and that of other systems with small impact parameters, such as Abell 3376 and DLSCL J0916.2+2951, we suggest that the simulations with b = 0 kpc or b = 500 kpc are better representations of our system. This implies that the preferred scenario for this merging cluster is that of an outgoing system, or a system very close to turnaround. This can also be seen when comparing the X-ray morphology of SPT-CLJ0307-6225 with that of the 1:3 mass-ratio simulations at the estimated TSP0_sim and TSP1_sim, shown in Figs 9 and 10, respectively, where the X-ray contours at TSP0_sim are noticeably more similar than the ones at TSP1_sim for b = 500, 1000 kpc.

Galaxy population in a merging galaxy cluster

From Fig. 11 it is noticeable that EL galaxies are located preferentially towards the cluster cores. We divide the discussion of the galaxy population by studying the differences between the two clumps, analysing the red EL galaxy population, and also the population in the area in between the merging structures.

Comparison between North and South

One interesting optical feature of 0307-6225S is the pair of very bright galaxies (d_proj = 41 kpc) at the center of its distribution (Fig. 14). A similar, but rather extreme, case is that of the galaxy cluster Abell 3827 at z = 0.099, which shows evidence of a recent merger with four nearly equally bright galaxies within 10 kpc of the central region (Carrasco et al. 2010; Massey et al. 2015). Using GMOS data, Carrasco et al. (2010) found that the peculiar velocities of at least three of these galaxies are within ∼300 km s^-1 of the cluster redshift, with the remaining one having an offset of ∼1000 km s^-1. BCGs have low peculiar velocities in relaxed clusters, whereas for disturbed clusters their peculiar velocity is expected to be 20-30% of the velocity dispersion of the cluster (Yoshikawa et al. 2003; Ye et al. 2017). For 0307-6225S, one of the bright galaxies has a peculiar velocity of ∼666 km s^-1, which is ∼88% of the velocity dispersion of this subcluster. This could be evidence of a past merger between 0307-6225S and another cluster prior to the merger with 0307-6225N. The AD test gives a Gaussian distribution, and the results do not change when applying a 3σ-clipping iteration, which could indicate that the substructure is a post-merger. Raouf et al. (2019) use the magnitude difference between the first and second brightest galaxies of a group (Δm_12), along with the distance from the BCG to the luminosity center (D_offset), to separate relaxed and unrelaxed systems. They propose values of Δm_12 < 0.5 and log10(D_offset) > 1.8 to define unrelaxed clusters, whereas relaxed systems are defined by Δm_12 > 1.7 and log10(D_offset) < 1.8. In our case, we only check the magnitude difference, since we are already studying a merging cluster. For 0307-6225S the magnitude difference is Δm_12 = 0.0152 < 0.5, which supports the scenario that 0307-6225S suffered a merger prior to the one with 0307-6225N. Central galaxies take ≈1 Gyr to settle to the cluster center during the post-merger phase (White 1976; Bird 1994), meaning that this previous merger must have taken place over 1 Gyr before the observed merger between 0307-6225S and 0307-6225N. On the other hand, for 0307-6225N the value is Δm_12 ≈ 1.8 > 1.7, meaning 0307-6225N was a relaxed system prior to this merger.
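The Raouf et al. (2019) criteria quoted above reduce to a small decision rule. A minimal sketch follows; the thresholds and the Δm_12 inputs come from the text, while the D_offset values are hypothetical placeholders, since only the magnitude gap is used in our case.

```python
# Minimal sketch of the Raouf et al. (2019) relaxedness criteria, combining
# the magnitude gap Dm12 and the BCG-to-luminosity-center offset.

def dynamical_state(dm12, log10_offset):
    """Classify a cluster following the thresholds quoted in the text."""
    if dm12 < 0.5 and log10_offset > 1.8:
        return "unrelaxed"
    if dm12 > 1.7 and log10_offset < 1.8:
        return "relaxed"
    return "intermediate"

# Dm12 values follow the text; the offsets here are hypothetical.
print("0307-6225S:", dynamical_state(0.0152, 2.0))
print("0307-6225N:", dynamical_state(1.8, 1.0))
```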
Regarding the overall galaxy population, the fraction of EL galaxies in 0307-6225S (24%) is nearly twice that of 0307-6225N (∼13%), although consistent within 1σ. However, it can be seen in Fig. 12 that all the EL galaxies of 0307-6225N have small peculiar velocities (with the SF galaxies within 1σ), while for 0307-6225S we notice that most of the blue SF galaxies have velocities higher than 2σ. These galaxies, which are bluer than the blue EL galaxies of 0307-6225N, could be in the process of being accreted. Considering the scenario in which mergers between clusters accelerate the quenching of galaxies by increasing their star formation activity (Stroe et al. 2014, 2015), the fact that there are fewer star-forming galaxies towards the central region of 0307-6225S compared to 0307-6225N could be an indication that the previous merger of 0307-6225S already exhausted the star formation of the cluster, with the observed blue SF galaxy population (with larger peculiar velocities) being recently accreted, or in the process of being accreted, from the field.

Red EL galaxies

Of particular interest are the EL galaxies located in the RCS. Of the four red EL galaxies, three are located in the cores of the two main structures, with two of them classified as SSB. Most of the blue SF galaxies are best matched by a high-redshift star-forming or late-type emission galaxy template, whereas most of the red SF galaxies are best matched by an early-type absorption galaxy template. Koyama et al. (2011) studied the region in and around the z = 0.41 rich cluster CL0939+4713 (A851) using Hα imaging to identify star-forming emission-line galaxies. A851 is a dynamically young cluster with numerous groups in the outskirts. They found that the red Hα emitters are preferentially located in low-density environments, such as the groups and the outskirts, whereas they did not find red Hα emitters in the core of the cluster. Ma et al. (2010) studied the galaxy population of the merging galaxy cluster MACS J0025.4-1225 at z = 0.586. In the areas around the cluster cores (with a radius of 150 kpc) they find emission-line galaxies corresponding to two spiral galaxies (one for each subcluster), plus some spiral galaxies without spectroscopic information, accounting for 14% of the total galaxies within that radius. Their Fig. 15 shows that they also have red EL galaxies; however, they do not specify whether the two spiral galaxies within the cluster cores are part of this population. The results of both Ma et al. (2010) and Koyama et al. (2011) indicate that red EL galaxies are not likely to be found within the cores of dense regions. It can be observed from Fig. 13 that most of our red EL galaxies do not have close neighbours which could supply gas to them. It is possible, then, that these objects accreted gas from the ICM, with the merger then triggering the star formation. Given the peculiar velocities of the two SSB galaxies from our sample (which are classified as red), at least one of them was most likely part of the merging event. If, for example, merger shocks travelling through the ICM can trigger a starburst episode lasting a few hundred Myr in galaxies with gas reservoirs (Owers et al. 2012; Stroe et al. 2014, 2015), then these galaxies would make the outgoing scenario a better candidate than the incoming one.
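The statement at the start of this comparison, that the EL fractions (24% vs. ∼13%) are consistent within 1σ, can be checked with simple binomial uncertainties. In the sketch below the member counts are hypothetical round numbers chosen only to reproduce similar fractions, not the actual sample sizes.

```python
# Minimal sketch: EL fractions with binomial 1-sigma uncertainties.
# The counts are hypothetical stand-ins for the real member samples.

import numpy as np

def fraction_with_error(n_el, n_tot):
    f = n_el / n_tot
    return f, np.sqrt(f * (1.0 - f) / n_tot)

for name, n_el, n_tot in (("0307-6225S", 6, 25), ("0307-6225N", 4, 30)):
    f, df = fraction_with_error(n_el, n_tot)
    print(f"{name}: f_EL = {f:.2f} +/- {df:.2f}")
```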
Area in between the merging structures

The in-between area is mostly comprised of red passive galaxies, with the only EL galaxy belonging to the RCS. Moreover, the two blue galaxies are classified as a passive and a PSB galaxy. Ma et al. (2010) found a population of post-starburst galaxies in the major cluster merger MACS J0025.4-1225, in the region between the two merging components, where, given the timescales, their starburst episode occurred during the first passage. Similarly to our blue galaxies in this region, they found that their colors lie between those of blue EL galaxies and red passive galaxies.

SUMMARY AND CONCLUSIONS

In this paper we use deep optical imaging and new MUSE spectroscopic data, along with archival GMOS data, to study the photometric and spectral properties of the merging cluster candidate SPT-CLJ0307-6225, estimating redshifts for 69 new galaxy cluster members. We used the data to characterize (a) its merging history by means of a dynamical analysis and (b) its galaxy population by means of their spectroscopic and photometric properties. With respect to the merging history, we were able to confirm the merging state of the cluster and conclude that:

• Using the galaxy surface density map of the RCS galaxies we can see a bimodality in the galaxy distribution. However, the cluster does not show signs of substructures along the line of sight.
• We assign galaxy members to each substructure by means of the DBSCAN algorithm. We name the two main substructures 0307-6225N and 0307-6225S, referring to the northern and southern overdensities, respectively.
• For each substructure we measured the redshift, velocity dispersion and velocity-derived masses from scaling relations. We find a mass ratio of M_N/M_S ≈ 1.3 and a velocity difference of Δv_N−S = 342 km s^-1 between the northern and southern structures.
• To estimate the time since collision we used the MCMAC algorithm, which gave us the times for an outgoing and an incoming system. By means of hydrodynamical simulations we constrained the most likely time to that of an outgoing system with TSP = 0.96^{+0.31}_{-0.18} Gyr.
• The outgoing configuration is also supported by the comparison between the observed and simulated X-ray morphologies. This comparison also provides a constraint on the masses, where a merger with a mass ratio of 1:3 seems more likely than a 1:1 mass merger.

With respect to the galaxy population, we find that:

• EL galaxies are located preferentially near the cluster cores (in projected separation), where the low average peculiar velocities of red SF galaxies indicate that they were most likely accreted before the merger between 0307-6225N and 0307-6225S occurred.
• EL galaxies in 0307-6225N have smaller peculiar velocities than those of 0307-6225S, where in the latter it appears that blue SF galaxies were either recently accreted or are in the process of being accreted.
• 0307-6225S shows two possible BCGs, which are very close in projected space. The magnitude and velocity differences between them are ∼0 mag and ∼674 km s^-1, respectively, with one of them having a peculiar velocity close to 0 km s^-1 with respect to 0307-6225S, while the other is close to the estimated 1σ. However, the velocity distribution of the cluster shows no signs of being perturbed. This suggests that 0307-6225S could be the result of a previous merger which was at its last stage when the observed merger occurred.
• With respect to the in-between region, the galaxy population is comprised mostly of red galaxies, with the population of blue galaxies classified as passive or PSB, with colors close to the RCS.
In summary, our work supports a nearly face-on (in the plane of the sky) major merger scenario for SPT-CLJ0307-6225. This interaction accelerates the quenching of galaxies as a result of a rapid enhancement of their star formation activity and the subsequent gas depletion. This is in line with literature findings indicating that the dynamical state of a cluster merger has a strong impact on the galaxy population. Of particular importance is to differentiate dynamically young and old mergers. Comparisons between such systems will further increase our understanding of the connection between mergers and the quenching of star formation in galaxies. In future studies, we will extend the analysis performed on SPT-CLJ0307-6225 to a larger cluster sample, including the most disturbed cluster candidates in the SPT sample. These studies will be the basis for a comprehensive analysis of star formation in mergers over a wide dynamical range.

Table A1. Properties of the spectroscopically confirmed objects. The first and second columns are the sky coordinates of the objects. Columns (3) and (4) are the instrument (along with the corresponding field) and the object ID within the field. The heliocentric redshifts are listed in column (5). Columns (6) through (10) are the derived magnitudes and the two color indices (from aperture magnitudes). The last column corresponds to the cluster membership, where 1 means galaxies within the ±3000 km s^-1 cut from the cluster's redshift z_cl = 0.5803.
COVID-19 disaster relief projects management: an exploratory study of critical success factors
The COVID-19 pandemic has caused unprecedented socio-economic devastation. With widespread displacement of populations and migrants, considerable destruction of property, and increases in mortality, morbidity, and poverty, infectious disease outbreaks and epidemics have become global threats requiring a collective response. Project management is, however, a relatively under-explored discipline in the Third Sector, particularly in the domain of humanitarian assistance or exploratory projects. Via a systematic literature review and expert interviews, this paper explores the essence of humanitarian projects in terms of the challenges encountered and the factors that facilitate or hinder project success during crises like COVID-19. Additionally, the general application of project management in international assistance projects is analysed to determine how project management can contribute to keeping the project orientation humane during a crisis. The analysis reveals that applying project management tools and techniques is beneficial to achieving success in humanitarian assistance projects. However, capturing, codifying, and disseminating the knowledge generated in the process, and placing the end-users at the centre of the project life cycle, is a prerequisite. While the latter can seem obvious, the findings demonstrate that the inadequate inclusion of beneficiaries is one of the main reasons that prevent positive project outputs from leading to sustainable outcomes. The key finding of this paper is that the lack of human-centred approaches in project management for humanitarian assistance and development projects is the main reason such projects fail to achieve desired outcomes.

Introduction

The destructive capacity of natural and artificial disasters increases continuously, affecting millions of people globally. In 2016, 564.4 million people were reportedly affected by natural disasters, the highest figure since 2006 (Guha-Sapir et al. 2016). The World Health Organization (WHO) declared COVID-19 a pandemic on March 11, 2020, with around 3 million cases and 207,973 deaths reported (WHO, COVID-19: Situation Report). A Brookings report on the socio-economic impact of COVID-19 notes that, by causing the global economy to contract by 3.5 per cent, the pandemic brought about one of the deepest recessions of modern times. According to an ILO report, COVID-19 led to a loss of 8.8 per cent of global working hours in 2020 compared to the last quarter of 2019, roughly amounting to 255 million full-time jobs. As of June 2021, the COVID-19 outbreak had spread to 215 countries and territories across six continents, causing over 3.9 million deaths. Given the vulnerability of nations to hazards like COVID-19, International Aid (IA), also known as International Development (ID), has become increasingly important, especially for less developed countries. The United Nations (UN) has suggested that developed economies spend at least 0.7% of their gross national income on international assistance (Myers 2015). Much of this assistance ends up financing projects managed by the Third Sector, including international, national and local non-governmental organisations (NGOs), charities and other voluntary groups (Marlow 2016).
NGOs are private organisations characterised by humanitarian objectives "that pursue activities to relieve suffering, promote the interest of the poor, protect the environment, provide essential social services, or undertake community development" (World Bank 1995). These organisations are key contributors to international assistance (Morton 2013), which is broadly divided into two categories: Official Development Assistance (ODA) and Humanitarian Assistance (HA), also referred to as emergency aid. HA projects have the overall goal of providing an immediate response as quickly and effectively as possible. Nevertheless, the time scale and particular goals are less specific because of the spontaneous nature of these events and the limited available information (Lindell and Prater 2002). In this sense, HA projects fall into the category of exploratory projects, for which "neither the goals nor the means to attaining them are clearly defined" (Lenfle et al. 2019). The loose definition of deliverables, the scope, and the recovery scale make these projects challenging (Walker 2011). Additionally, a lack of project management (PM), cultural sensitivity, and stakeholder involvement contributes to high failure rates and unsatisfactory performance in these projects (Golini et al. 2015). For exploratory projects, neither the output nor the means to attain it can be established from the beginning. Given their increasingly significant impact, however, it is prudent to develop a scientific understanding of the project management challenges and success factors for exploratory projects. Therefore, this research aims to investigate, via template analysis of the relevant qualitative data, the ontology of humanitarian aid projects and the effect that project management implementation could have on their success. More specifically, we review the literature and case studies on humanitarian projects by NGOs to identify the main challenges in achieving favourable HA project outcomes and the factors that promote project success or contribute to project failure. We also explore the PM procedures, tools, and frameworks used for International Development and how these influence the cognitive aspects of humanitarian projects; and we revisit the link between PM and human-centred design in the Third Sector. We find that applying project management tools and techniques is beneficial to achieving success in humanitarian assistance projects. However, knowledge generation, storage, and sharing, together with end-user-centric project design and execution throughout the project life cycle, are major critical success factors. The findings also highlight that inadequate consideration of beneficiaries' identity, expectations, and role is one of the main reasons preventing positive project outcomes from leading to sustainable outcomes. Our findings contribute to the literature in three ways. First, we explore the extension of PM tools and techniques to the important phenomenon of humanitarian assistance projects, especially during the current COVID-19 crisis. Second, relying on PM and design-thinking literature, we explore more pragmatic design and execution choices that bring project outputs/deliverables and outcomes closer together. Third, through a literature review, case studies, and expert interviews, our study highlights some critical success and failure factors in humanitarian assistance projects. The rest of the paper is organised as follows.
Part two presents the literature review, followed by the methodology and findings, and then the conclusion.

Crisis and humanitarian aid project management

Relief projects carry an "acute sense of urgency", and their results are critical to people's livelihoods in the affected communities (Steinfort and Walker 2011). The challenge is to minimise human suffering and death (Noham and Tzur 2014), and to do so in an often hostile and uncertain environment where violence, socio-political instability, disease and other health hazards, panic, and chaos are encountered. Other obstacles include lacking or poor communication and transportation infrastructure, different cultural norms and rules, complex issues of autonomy and control, and managing productive cooperation with governments and other organisations (Steinfort and Walker 2011). According to Bysouth, project management is a relatively new discipline in the Third Sector. Despite the limited information regarding the adoption of PM methodologies by NGOs (Golini et al. 2015), several authors agree that PM expertise can be employed as a possible remedy for the poor performance of ID projects (Landoni and Corti 2011; Golini and Landoni 2014). Moreover, guidelines such as PMDPro and PM4DEV have been developed explicitly for NGO management of these projects (Table 1). However, recent empirical studies note widespread adoption of a few PM tools, viz. the Logical Framework (LogFrame) and progress reports, and almost no adoption of others, such as the Earned Value Management System and issue logs (Golini et al. 2015). The LogFrame provides the goals, measures and expected resources for each level of the means-to-end logical path, laying out the way between vision, overall and specific objectives, and desired outputs and outcomes through its detailed breakdown of the chain of causality among activities (a minimal data-structure sketch is given at the end of this subsection). Moreover, Monitoring and Evaluation (M&E) supports learning, governance and performance accountability (Steinfort and Walker 2011). It also includes the evaluation criteria (relevance, effectiveness, efficiency, impact and sustainability) to ensure appropriate monitoring and control. Research has shown that lack of expertise and planning (Alexander 2002), poor coordination, duplication of services and inefficient use of resources (Kopinak 2013), and inadequate beneficiary involvement (Brown and Winter 2010) have hindered positive outcomes. Coupled with the omission of Linking Relief, Rehabilitation and Development (LRRD), this has often produced unsustainable solutions (Kopinak 2013). These interspersed layers demonstrate that humanitarian management cannot be improvised and that planning is relevant at all stages of the Disaster Cycle (Alexander 2002; Steinfort and Walker 2011). The professionalisation of humanitarian response is thus inevitable, owing to the added layers of complexity resulting from the growing number of stakeholders and poor management skills.
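As referenced above, the LogFrame's means-to-end chain can be made concrete as a simple nested data structure. The sketch below is a minimal illustration; the row contents are hypothetical examples of the four classic levels, not a real project's LogFrame.

```python
# Minimal sketch of a Logical Framework (LogFrame) as a nested structure:
# each level carries a narrative summary, indicators, and assumptions.
# All row contents are hypothetical illustrations.

logframe = {
    "goal":       {"summary": "Reduced disease burden in the affected region",
                   "indicators": ["mortality rate"],
                   "assumptions": ["access to the region persists"]},
    "outcome":    {"summary": "Affected households use safe water",
                   "indicators": ["% households with safe water"],
                   "assumptions": ["water points are maintained"]},
    "outputs":    {"summary": "Water points installed and staff trained",
                   "indicators": ["# water points", "# staff trained"],
                   "assumptions": ["materials delivered on time"]},
    "activities": {"summary": "Drill wells, run training sessions",
                   "indicators": ["budget spent vs. plan"],
                   "assumptions": ["security situation stays stable"]},
}

for level, row in logframe.items():
    print(f"{level:10s} -> {row['summary']}")
```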
Defining project success

Project management focuses on delivering change via unique sets of concerted actions (Tayntor 2010). Unlike general management, where almost everything is routine, in project management almost everything is an exception (Meredith et al. 2014). Each project is unique and temporary, with a definite start and end (Tayntor 2010). The end of a project can be defined when the desired output is delivered, when the output can no longer be delivered, or when there is no more need for the project. These endeavours aim to create a unique product or deliver a unique service or result. It is possible to have repetitive elements, but repetition does not take away the uniqueness of a project, because the mix of elements is unique to each project. Therefore, projects can also be considered generators of value (Winter et al. 2006) and of explicit and tacit learning, as their uniqueness provides a foundation for capturing new knowledge (Zollo and Winter 2002). The definition of project success is ambiguous due to the different characteristics, perspectives, interests, and objectives of the stakeholders involved (Fig. 2). Nonetheless, the essential requirement of project success is achieving the project objectives/outputs within a defined budget, quality, and time. The project output can be defined as the product, service or result that the project was expected to generate. Furthermore, many authors suggest that project success is multidimensional and that the project outcome should also be considered when determining success (Rodrigues et al. 2014). That is particularly relevant in the case of exploratory post-crisis projects, for which neither the output nor the means to attain it can be established from the beginning (Lenfle 2014). This multidimensional outlook reflects project success and the project manager's responsibilities, including managing time, cost, quality and human resources, integration, communication, project design, procurement, and risk management (Radujkovic and Sjekavica 2017). The uniqueness of each project also requires the project manager to be creative, flexible, and highly adaptable. Special skills such as conflict resolution and negotiation are also required due to the high level of discontent present in these projects. Project management success does not guarantee that the project output will lead to a successful outcome (Steinfort and Walker 2011; Kopinak 2013). The project outcome is the change produced as a consequence of the delivery of such an output. Unfortunately, in HA projects, outputs are often delivered as planned but still fail to provide a successful outcome. Project success might initially be perceived as achieved in such cases, yet the project outcome might demonstrate the opposite (Brown and Winter 2010). This occurs when hard and soft services fail to transform the output into a functioning outcome (Steinfort and Walker 2011), perhaps because the output lacked the infrastructure to support its use or because it failed to consider the beneficiaries' needs, culture, behaviour, and the context of their lives (Brown and Winter 2010). The latter has been recognised as a consequence of the ambiguous definition of the target customer or beneficiary in HA projects, leading to their exclusion in the project design phases and considerable project failure (Golini et al. 2015). To this end, the literature suggests referring to the end-user as a "consumer" rather than a "beneficiary". Although both terms may be used interchangeably, researchers suggest that the latter can imply that recipients who do not pay for services should have unquestionable gratitude and, therefore, no right to choose or be informed, leading to poor recipient involvement in projects. Steinfort and Walker (2011) argue that project success can be linked to the degree of customer value generated by the project. The real value is the combination of outputs that leads to a specific outcome, which allows the stakeholders to perceive that the project deliverables have been achieved. The natural outcome of the project, however, is to generate customer value.
The diversity of stakeholders and their different perceptions of value (Rodrigues et al. 2014), together with a lack of, or poor, inclusion of beneficiaries in project design (Golini et al. 2015), further hinder consensus in defining HA project success.

Critical success factors

Planning is considered desirable in achieving success, especially for HA projects during crises like COVID-19 (Taylor 2010). Plans must be robust and granular yet flexible enough to adapt to different circumstances. NGOs and other organisations, such as civil protection agencies, have set up measures of natural disaster response based on their magnitude, recurrence, physical and human consequences, and the duration of their impact. Additionally, technology has become a vital tool in managing disasters (Alexander 2002). It was evident during the COVID-19 crisis how biotechnology, data storage and analytical technology, and communication technology allowed primary responders, frontline workers, and researchers to work together to arrive at standard operating procedures and share them with relevant stakeholders across the globe in a relatively short time. International recognition and acceptance of a set of common principles are essential to stimulate humanitarian aid project design, innovation, accountability and effectiveness, and the implementation of the best tools and approaches. Despite the diversity in stakeholders, antecedents and consequences, and desired outcomes (Alexander 2002), the lessons and results captured from previous projects can serve as a blueprint for planning and implementation (Lampel et al. 2009). Explicit knowledge can be expressed and formalised into frameworks or formal "know-how" procedures and instructions, which can later be integrated into the organisation's, field's or team's methods. On the other hand, tacit knowledge, the skills or experience acquired through practice, may be shared through training programmes, orientations, or on-the-job simulations and training. Each form of knowledge can serve as a tool to acquire the other; however, they cannot be converted into one another. Understanding these epistemological dimensions and their interplay provides organisations and teams with the ability to learn, innovate and develop competencies that can be used in future projects (Cook and Brown 1999). Additionally, the knowledge seeker must be wary of subjective interpretations of success factors and avoid "superstitious learning" (Zollo and Winter 2002). Preconceived notions can easily be generated, and projects often falter because the needs of the beneficiaries have not been fully contemplated. Human-centred approaches such as design thinking are considered a viable solution to integrate multidisciplinary knowledge and consumer insights, and to recognise the infrastructure needed to support the output provided. Design thinking complements the learning process both through the collection of knowledge and through its application. Not only does it tap into capacities that conventional problem-solving practices overlook, but it also brings balance between the rational/analytical side of thinking and the emotional/intuitive counterpart (Brown and Winter 2010). This approach has contributed significantly to ID project success and has been adopted by UNICEF, the World Food Programme, and the International Rescue Committee. Additionally, companies such as Frog and IDEO continue collaborating with NGOs to integrate this approach into development projects and programmes.
Programme thinking can also be explored to drive project success, as a given programme may involve coordinating multiple projects to achieve a specific outcome. In this sense, projects can focus specifically on their particular output whilst the programme ensures that the outcome is delivered. In addition, projects can start and end under the programme umbrella. The two approaches are complementary, but not all projects are part of a programme (OGC 2007). Lastly, given that the distinction between HA and ODA is less straightforward in practice (Fink and Redaelli 2011), LRRD has been identified as a model that could bridge the grey zone between both sides of the international assistance spectrum (Kopinak 2013). Programmes, rather than singled-out projects, can be used to provide a successful LRRD, as they can coordinate and oversee the implementation of a set of related projects to deliver an outcome greater than the sum of its parts (OGC 2007). The literature review suggests that project management is a relatively new discipline in the Third Sector. Its methodologies have been progressively adopted and recognised as a possible remedy for poor ID performance (Landoni and Corti 2011; Golini and Landoni 2014). The Logical Framework and Monitoring and Evaluation are widely adopted PM tools among NGOs (Golini et al. 2015; Steinfort and Walker 2011). Poor planning and coordination, inadequate beneficiary involvement and the omission of LRRD have often produced unsustainable or unsuccessful outcomes (Alexander 2002; Kopinak 2013). Project management alone, thus, is not enough to deliver a successful outcome. Outputs need to be supported by hard and soft services, and beneficiaries must be considered in the project design phases (Steinfort and Walker 2011; Alexander 2002; Kopinak 2013). Projects generate value and learning. The customer value generated from the project should be considered when determining project success (Rodrigues et al. 2014). Design thinking complements the learning process both through the collection of knowledge and through its application. Human-centred approaches increase the possibility of creating sustainable solutions and achieving success by incorporating interpersonal elements into the existing paradigm (Winter et al. 2006; Brown and Winter 2010). The distinction between HA and ODA is not always straightforward. LRRD, design thinking and programme implementation can help ID projects deliver successful and sustainable outcomes (Fink and Redaelli 2011). These arguments lead to the following proposition: project management can contribute to HA projects by providing better planning, coordination and knowledge generation. PM can improve the outcome of HA projects; however, it is not the only success factor. Infrastructure (hard and soft services) must be available to support the project outcome, and, most importantly, such an outcome should align with the broader culture and needs of the beneficiaries. Design thinking offers PM ways of including the end-users, ensuring outcomes are fit for purpose and that customer value is generated.

Methodology

Primary and secondary data were used to explore the effects that the implementation of project management tools and techniques could have on the success of humanitarian projects. First, secondary qualitative data were explored via a systematic literature review. The review provided a synthesis of extant knowledge and helped create an expert database for conducting interviews as primary research (Hasson and Keeney 2011).
Given the exploratory nature of this research, we interviewed a limited number of experts (listed in Table 2) in the fields of PM, ID and design thinking. Since the purpose was to explore in depth each expert's views on humanitarian aid and their particular field, discuss their findings, and find additional study paths, the interviews were kept unstructured. Each interview lasted approximately 30 to 45 minutes. Computer-Assisted Qualitative Data Analysis Software (CAQDAS) was used for the data analysis to aid continuity, transparency and methodological rigour. Via NVivo, the literature was coded following a template analysis, which combines deductive and inductive approaches. This meant that the literature could be coded using predetermined information (such as the challenges or success factors identified in the literature review) while codes were amended or added as more data were collected and analysed. This approach permitted exploring key themes and identifying emerging issues. Once all the codes were established, MS Excel was used to measure the data from the 33 sources selected and to display the data in graphs to facilitate comparisons. Ordinal scales from zero to five (least relevant to most relevant) were used to rank-order the codes (variables) according to the importance that each author gave to each category (Sekaran and Bougie 2016); a minimal sketch of this ranking step is given at the end of this section. Given that the authors did not focus solely on any single variable, none of the categories was rated five, and most were rated two or three. Additionally, the graphs included the number of journals that mentioned the rated categories, to give the audience a clearer view of each variable's "real" frequency. Finally, to establish reliability, the consistency of the rankings was confirmed by four volunteers unrelated to the study. These volunteers were given samples of 10 different journal articles. This exercise helped find and correct mistakes and strengthened validity. It also served as a point of discussion regarding the findings of this research. There was not enough literature regarding project management in ID projects (Diallo and Thuillier 2005; Golini and Landoni 2014), including humanitarian projects. To overcome this data scarcity, the findings on PM applications in ODA projects were considered and later adapted to humanitarian projects. This was a straightforward process, given that the main difference between these types of assistance is the spontaneity of the event and the time horizon (Golini and Landoni 2014). Similarly, the overall theory on design and innovation was studied and further shaped into its use in the International Development field, focusing on humanitarian relief. The sources selected were published within the last ten years, to gather the most recent information. This critical selection included academic and scientific journals published under the Association of Business Schools (ABS/AJG) rankings (Table 3). In addition, other research databases, such as Scopus and Web of Science, were also considered, including non-ABS/AJG-listed journals indexed in these databases.
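As referenced above, the ranking step reduces to simple aggregation over the coded sources. The sketch below is a minimal illustration in Python; the codes and ordinal ratings are hypothetical stand-ins for the real coding matrix built from the 33 sources in NVivo and Excel.

```python
# Minimal sketch of the ranking step: each source rates each code on an
# ordinal 0-5 relevance scale; we report the mean rating and how many
# sources mention the code at all. All ratings below are hypothetical.

import statistics

ratings = {  # code -> one ordinal rating (0-5) per source
    "limited resources":    [3, 2, 3, 0, 2],
    "stakeholder spectrum": [2, 3, 0, 0, 3],
    "culture":              [3, 3, 2, 2, 0],
}

for code, scores in ratings.items():
    mentioned = sum(1 for s in scores if s > 0)
    print(f"{code:20s} mean = {statistics.mean(scores):.1f}, "
          f"mentioned by {mentioned}/{len(scores)} sources")
```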
Data analysis and discussion

This section presents the results obtained from the analysis of the data described in part three. In line with the initial objectives, Sect. 1 highlights the challenges encountered in HA projects and the factors contributing to HA project failure and success. Sect. 2 reports the benefits that PM brings to this field and the importance of the cognitive process in exploratory projects of this nature. Lastly, Sect. 3 revisits the link between PM and design theory and how human-centred approaches can contribute to sustainable projects.

Challenges, failure, and success

Challenges

Figure 1 illustrates the main challenges in humanitarian aid projects. The graph further divides obstacles into four subcategories representing: (A) the characteristics of the external environment and uncontrollable factors, (B) general management and the "iron triangle" of time, cost, and quality (TCQ), (C) human-based management and challenges, and (D) others. This categorisation was derived as a common theme throughout the findings. It continues throughout the graphs of this section to link the commonalities between them and to show the importance of PM at each of these levels. HA challenges are broad [1, A1], and they are growing in scale, scope and complexity. All of these challenges are interlinked and often dependent on one another. Complexity [1, A2], for example, encompasses the diversity of timelines [1, B2], roles and stakeholders [1, C2] that must be coordinated in HA projects, adding a layer of difficulty, as some of these are not clearly defined. Limited resources [1, A6], including a lack of human skills, were the second biggest challenge, followed by the complications of assessing impact/quality [1, B4] given the poor feedback and control mechanisms recognised in this sector. Furthermore, the high number of stakeholders [1, C2] was considered more critical than the unique and unpredictable context of emergency settings [1, A2, A3]. The greater the stakeholder spectrum, the more coordination, communication, needs and requirements [1, C1] must be met; it also increases the opacity of authority lines and responsibilities [1, A2]. It was also discovered that the greater the power distance between donors and recipients, the harder it is to meet donor requirements [1, C1]. Additionally, high levels of bureaucracy [1, A4] contribute to delays [1, B2], and personal agendas [1, A5] might interfere with project outcomes if, for example, managers are more concerned about their relationships with particular politicians or their status in the public/private sector than about the community's burden (Diallo and Thuillier 2004). Together with the absence of PM methodologies, these challenges usually result in poor project planning, superficial risk management strategies, a paucity of accountability and stakeholder involvement, and unmotivated project teams, eventually costing project success (Kelecklaite and Meiliene 2015). Figure 2 presents additional omissions that not only hinder success but can also lead to project failure. Insufficient consideration of culture [2, AC] was regarded as the most relevant contributor to failure. A lack of shared perception between donors, project managers, and end-users can result in poor beneficiary inclusion and the omission of community needs during the planning and delivery stages. Exclusion of factual information, dishonesty, and lack of transparency [2, A2] came second; these include corruption and political manipulation, shaky government policies, and the lack of transparency derived from the difficulty of breaking down costs incurred in HA (Kopinak 2013). Finally, a lack of, or poor, PM [2, B1] was one of the most critical factors, mainly because the factors mentioned in categories B and C can be managed through this discipline.
Furthermore, resource allocation [2, B2] among relief projects has been denounced as disproportionate, not only in terms of goods and skills but also financially; some operations have been "forgotten", receiving little or no help from donors, while others receive more than is necessary. Next came inappropriate recruitment [2, B3] and flawed risk analysis [2, B4]. Inappropriate recruitment disrupts team functions and service delivery, reflecting negatively on the donor and hindering project management and future financing. Lack of experience also reflects poor cultural perception [2, AC], including difficulty adapting to the environment and having an unbalanced view of local values, beliefs, and infrastructure. Finally, inexperience often results in workplace stress, frustration, anger and a lack of empathy towards the host country. Capturing lessons [3, C5] and communication [3, C2] are the key factors to consider in achieving success in HA projects. As the literature review suggested, capturing lessons is critical for success, helping to achieve continuous improvement. Knowledge creation and capture [3, C5] can happen at all stages and levels of the project life cycle. Lessons gained should be transmitted to subsequent projects to prevent the repetition of mistakes (Golini et al. 2015). Additionally, managers must be aware that learning opportunities are missed when they are reluctant to admit mistakes, which can lead to losing some donor funding (Marlow 2016). Furthermore, PM [3, B1] was equally relevant, and given that the PLC is included under this category, it can be inferred that the importance of planning has also been recognised. Although communication [3, C2] was not as frequently mentioned, it is a critical success factor, as it relates to other categories such as team management, motivation and leadership [3, C1], conflict resolution [3, C4], cultural sensitivity [3, AC1] and choosing a particular language to refer to the end-users [3, AC2]. Lastly, standardisation [3, D] was suggested to improve the application of PM methodologies and to obtain more objective results from evaluation and feedback mechanisms. It was also considered significant for better understanding the factors contributing to success and failure [2, B5], as well as for improving finance and resource allocation [2, B2], the prioritisation of stakeholder needs [2, AC], ethical practices [3, A2], and the reduction of coordination problems [7, B1, C1] and time frames.

Benefits of project management in humanitarian assistance

The general belief that enthusiasm and empathy are the essential skills of aid workers leads to staff with unsuitable skills and experience (Kopinak 2013). As both the literature and the findings suggest, HA project managers deal with (A) a broad range of challenges outside their control, (B) hard services to deliver, and (C) human management at all levels. Fortunately, PM can add value, improve performance through each of its knowledge areas, and facilitate Project Capability Building (PCB). Communication [4, C1] represents the single most crucial task faced. However, it is also considered highly difficult in the HA context. The quality of the information exchanged depends highly on trust, respect and values, and on verbal and behavioural delivery and decoding. Furthermore, PM benefits projects by providing more realistic time frames [4, B3] and the technical abilities to meet them [4, AB2].
This is particularly helpful in the case of exploratory projects as a means to identify cycles[4, AB1].

Cognitive process in exploratory projects

PM offers the opportunity to learn from projects, which is progressively essential to project success (Fig. 8). While Sect. 1 identified the uniqueness and complexity of HA projects as a challenge, both exploratory learning (the knowledge acquired in exploratory projects; Brady and Davies 2004) and exploitative learning (what results from exploratory learning as it develops into new capabilities; Brady and Davies 2004) are closely linked to the degree of change in the environment. Learning from exploratory projects is the process of discovering practical lessons from experiences that could not have been foreseen (Lampel et al. 2009). HA projects provide greater learning opportunities, as patterns and behaviours can quickly become obsolete. Consequently, constant revision of organisational processes permits focus and transforms ambiguous information into knowledge, hence the relevance of identifying cycles and applying monitoring and evaluation at all stages. Similarly, the process of learning involves making sense of the culture, leadership and capabilities of the current context; it requires a level of receptivity and observation. These lessons can manifest as the creation of new solutions or as innovative processes. The latter is intrinsic to the cognitive process of exploratory projects, as innovation processes are driven mainly by experimentation. Exploratory projects bring higher opportunities for learning as they do not have definite specifications; their "openness" provides a baseline for the generation of new ideas (Lenfle 2014). In like manner, new management methods are encouraged given the levels of "unforeseeable uncertainties"; therefore, the process of learning through exploratory projects can be understood as a loop of selection and testing, an inductive process. However, learning must be captured, either through communication or through embedding the new knowledge into processes and combinations.

Discussion

It was expected that each of the categories (A, B and C) within the graphs would relate to one another across the different divisions: main challenges, factors of success, contributors to failure and PM contribution. Even though all of these categories are interrelated, the results differ from one division to another. Within challenges (Fig. 1), the category considered the most relevant was the one relating to external factors (A). In this sense, the results agree with the literature review, which suggests that the environment of HA projects is hostile and uncertain and that its complexity is the main hindrance to success. Moreover, within success factors (Fig. 2), category C, relating to human-based management and challenges, was considered vital. This category placed a strong emphasis on communication and interpersonal skills, both within success factors (Fig. 2) and PM contributions (Fig. 4). However, contrary to what was expected from the literature, the consideration of the recipients and their inclusion in the project was not mentioned as such. It could be inferred that it forms part of stakeholder management, and the lack of consideration of culture was regarded as highly relevant within contributors to failure (Fig. 3). Nevertheless, including the beneficiaries in project design phases was expected to be the primary approach to planning and implementing HA projects.
Additionally, the most relevant category in both failure factors (Fig. 3) and PM contributions (Fig. 4) was the one relating to the more technical and general management (B). Furthermore, project leaders should harness the passion for positive social impact with careful and intentional planning. This confirms the suggestion from the literature review regarding the possibility of PM being a remedy for poor project performance. It also indicates that PM is critical to achieving successful coordination, time management and resource allocation, all of which were likewise suggested in the literature review. Despite being a critical factor in the literature review, it was surprising that programme end-users were shown to receive meagre attention and were not considered essential, particularly because beneficiaries are at the centre of creating a sustainable project. For this precise reason, the literature suggested incorporating human-centred design into the planning, implementation and evaluation of HA projects, and highlighted the benefits of treating the recipients as consumers. However, there still seems to be a gap, in both the literature and practice, between these fields.

Conclusion

The frequency and destructive capacity of natural disasters are on the rise, and a high number of international assistance projects are reported to have high failure rates and unsatisfactory performance. Moreover, the livelihood and survival of people in the affected communities are highly dependent on disaster relief projects. Therefore, third sector organisations must find ways to manage humanitarian aid effectively. The professionalisation of humanitarian response has contributed to the adoption of PM tools and the development of NGO-focused PM frameworks. However, there is still a gap concerning meeting the end-users' needs and considering them in all parts of the project/disaster life cycle. As the literature identified, the latter is one of the factors of project success, both because it is linked with the degree of customer value and because including the beneficiaries can result in sustainable outcomes that manage to bridge relief, rehabilitation, and development. The categorisation of the variables into HA environment and PM knowledge areas suggested that PM can contribute to humanitarian project success and that project managers can and should learn from exploratory projects. The scope of the challenges discovered was as complex as the literature suggested; the main challenges in achieving favourable HA project outcomes included limited resources, difficulty assessing the project's impact, and the broad stakeholder spectrum. Although it was initially assumed that the emergent nature of exploratory projects hinders outcomes, it was discovered that the highly complex environment (uncertain, unstable, culturally diverse, and involving multiple stakeholders) could provide fertile ground to activate the learning process and generate explicit and tacit knowledge. In this sense, it is only logical that capturing lessons and applying PM are rated as the most critical factors for achieving project success. However, project managers must consider that patterns and behaviours in HA projects can quickly become obsolete, and that constant revision of organisational processes and communication allows the transformation of ambiguous information into knowledge. In the same way, communication was one of the most relevant success factors, and the PM contribution was considered the most important.
Findings suggested that communication is at the core of success because it is part of every process, from HR to coordinating with a diverse roster of stakeholders, permitting the correct allocation of time, resources, procurement, and so on. Communication is also vital to design thinking. It allows project managers to adapt to the environment, understand the needs of the end-users, and engage with them to create solutions that are suitable for the communities affected. People must be placed at the centre of the project life cycle, and beneficiaries must be included in all project design phases. In conclusion, project management, particularly in HA, goes beyond tools and methodologies. Managers must also possess strong human skills to adapt to demanding environments, communicate appropriately, and engage with multiple stakeholders to achieve a successful project outcome. People are the common denominator throughout this study. A lack of stakeholder consideration, and working from preconceived notions of needs and solutions, are detrimental to project success. Both donors and recipients matter, and project managers should prioritise accordingly and bridge the gap in donor-recipient relations to find innovative ways of meeting their requirements. In this sense, adopting design thinking can lead to more sustainable solutions and project success. Lastly, this report identified a gap in the literature relating to the promotion and efficacy of design thinking when implementing PM. Further research into both the practical use and perceived benefits of human-centred design needs to be undertaken, and the results contrasted with those of current standard practices. This would enable a fuller understanding of how these practices help and hinder the development of better outcomes for beneficiaries, leading to more synthesis between traditional and innovative project management approaches in the third sector.
Agricultural labor, COVID-19, and potential implications for food security and air quality in the breadbasket of India
Agricultural labor, COVID-19, and potential implications for food security and air quality in the breadbasket of India

To contain the COVID-19 pandemic, India imposed a national lockdown at the end of March 2020, a decision that resulted in a massive reverse migration as many workers across economic sectors returned to their home regions. Migrants provide the foundations of the agricultural workforce in the 'breadbasket' states of Punjab and Haryana in Northwest India. There are mounting concerns that near- and potentially longer-term reductions in labor availability may jeopardize agricultural production and, consequently, national food security. The timing of rice transplanting at the beginning of the summer monsoon season has a cascading influence on the productivity of the entire rice-wheat cropping system. To assess the potential for COVID-related reductions in the agricultural workforce to disrupt production of the dominant rice-wheat cropping pattern in these states, we use a spatial ex ante modelling framework to evaluate four scenarios representing a range of plausible labor constraints on the timing of rice transplanting. Averaged over both states, results suggest that rice productivity losses under all delay scenarios would be low compared with those for wheat, with total system productivity loss estimates ranging from 9% to 21%, equivalent to economic losses of USD $674 m to $1.48 billion. Late rice transplanting and harvesting can also aggravate winter air pollution, with concomitant health risks. Technological options such as direct-seeded rice, staggered nursery transplanting, and crop diversification away from rice can help address these challenges but require new approaches to policy and incentives for change.

Introduction

COVID-19 is a rapidly evolving pandemic, with many rural and urban areas across the globe effectively shut down for most commerce and transport. Border closures, quarantines, and value chain disruptions are restricting food access, while shortfalls of inputs and of the financial means to purchase them are jeopardizing production capabilities. Productivity is further threatened by emerging shortages of agricultural labour in some regions that may disrupt planting, harvest, and other farming operations. Beyond the current output market and harvesting disruptions that are impacting the end of the winter cropping cycle, the major forthcoming challenge is the narrow window for rice transplanting, which happens across >30 million hectares at the beginning of the monsoon rainfall season. More than 95% of the rice area in India is dependent on manual labour for crop establishment, and the lockdown has triggered a huge reverse migration from the northwestern states of Haryana and Punjab, with estimates suggesting that around 1 million labourers have returned to their home states with little prospect of returning in the near future (Chaba and Damodara, 2020). Productivity shortfalls in Northwest India could have profound national-level food security ramifications, since these two states contribute around 50% of the staple food grains that are procured and distributed by the Government of India (Chauhan et al., 2012; DPFP, 2020). Agriculture in this region is intensive and high-input, and is dominated by rice (monsoon or kharif season) and wheat (rabi or winter season) crops grown in rotation. These systems have India's highest annual grain productivity per unit of land area (Yadav et al., 2019).
Rice-wheat system productivity is driven by timely transplanting of rice and, consequently, by the timely sowing of the succeeding wheat crop in rotation. Most of the rice is transplanted during a short two-week window starting in mid-June. We hypothesize that the reverse migration of farm labour, coupled with social distancing restrictions, will significantly delay the transplanting of rice, with a consequent delay in wheat seeding in Northwest India (i.e. the states of Punjab and Haryana). Not only may this significantly reduce rice-wheat production, but rice harvesting delays may also lead to damaging shifts in rice residue burning towards periods in the later fall when weather conditions favor poor air quality (Balwinder-Singh et al., 2019a). During the peak pollution period in November and early December, rice residue burning is a significant source of PM2.5 in the region. This broadly affects rural and urban communities, including the capital New Delhi, with 3-fold increases in acute respiratory illnesses observed in the most fire-affected districts (Chakrabarti et al., 2019). The spiking of air pollution in the winter months in northern India already constitutes a serious health problem, and may exacerbate the threat of COVID-19 by increasing both infection rates and disease severity. In this paper, we present a spatial ex ante assessment of the potential impacts of labor-induced crop establishment delays in the rice-wheat systems of Northwest India from the perspectives of agricultural production and air pollution. Further, we suggest potential technological solutions which may help cope with the anticipated labour shortages associated with the pandemic while contributing to agricultural sustainability in the region.

Materials and methods

Agricultural systems in Northwestern India are highly dependent on the work of economic migrants (Kaur et al., 2011), and COVID-19 has resulted in an exodus of these workers returning to their home states, creating widespread labour shortages across economic sectors (Gupta, 2020; Mukhra et al., 2020). We developed rice transplanting delay scenarios, based on expert judgement, that represent a range of plausible delays conditioned by anecdotal evidence that dependence on hired labour increases with farm size. Time-series satellite imagery is used to characterize current rice and wheat planting and harvest date trends in the Northwestern Indian states of Punjab and Haryana. We then used farm size data (Fig. 1) to calculate the rice transplanting date distributions on an area basis for each scenario. We then used this information to initialize the APSIM cropping system model (Balwinder-Singh et al., 2019a, 2019b; Balwinder-Singh et al., 2016) to examine the effects of rice sowing date on both rice and wheat productivity, and then used the rice crop duration from the simulated data for each transplanting date to develop rice harvesting date distribution scenarios.

Satellite-based assessments of crop characteristics

Rice and wheat establishment dates, maturity dates, and total sown area were derived using high temporal resolution satellite data. We used the MOD13Q1 (Terra) and MYD13Q1 (Aqua) Vegetation Indices (VI) 16-Day L3 Global 250 m MODIS products from 2018 to 2019. Each image gives the maximum value of the Enhanced Vegetation Index (EVI) over a 16-day compositing window, with an 8-day offset between the two products, yielding EVI estimates at 8-day intervals when used together. To extract crop phenology data, we used 50 images of MODIS EVI data covering the last two rice seasons (2018-2019). The Savitzky-Golay function in TIMESAT was used to smooth the noise and subsequently fit the time-series data on a pixel-by-pixel basis. Following Balwinder-Singh et al., 2019a, 2019b, we separated rice areas from other land uses by using a maximum EVI threshold value (>0.5) and field duration criteria (112 to 152 days) for rice during the monsoon season. Thereafter, the rice transplanting date was estimated on a 250 m pixel basis by assessing when a rice crop achieved 10% of maximum EVI on the ascending limb of the growth curve; actual transplanting is likely to be 2-3 weeks earlier, and we accordingly adjusted our satellite estimates by 15 days (Boschetti et al., 2009). We then used crop maturity criteria (i.e. 10% of peak EVI on the descending limb of the growth curve) to assess crop readiness for harvest.
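The green-up detection logic described above can be captured in a few lines of code. Below is a minimal sketch, in Python, of the transplanting-date estimation for a single pixel; it is illustrative only, substituting scipy's Savitzky-Golay filter for the TIMESAT implementation used in the study, and the function name and default parameters are our own assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def estimate_transplanting_doy(evi, doy, window=7, polyorder=3, offset_days=15):
    """Estimate the rice transplanting day-of-year (DOY) for one pixel
    from an 8-day EVI series: smooth the series, locate the first point
    on the ascending limb where EVI reaches 10% of the seasonal maximum,
    then subtract a fixed offset because satellite green-up lags actual
    transplanting by roughly two weeks."""
    smooth = savgol_filter(np.asarray(evi, dtype=float),
                           window_length=window, polyorder=polyorder)
    peak = int(np.argmax(smooth))
    threshold = 0.10 * smooth[peak]
    ascending = np.where(smooth[:peak + 1] >= threshold)[0]
    if ascending.size == 0:
        return None  # no clear green-up: pixel is probably not rice
    return int(doy[ascending[0]]) - offset_days
```

The same thresholding applied to the descending limb (the first post-peak point at 10% of the peak) would give the maturity date; in the workflow above, pixels would first be screened with the EVI > 0.5 and 112-152 day duration criteria.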
Rice transplanting scenarios in Northwest India (Punjab and Haryana)

A. Business as usual: In this scenario, we assume no delays in rice transplanting, and farmers manage to plant rice on time through family labour, hired labour, or mechanized planting. Rice transplanting was based on the average planting date of the last two years (2018 and 2019) as estimated from satellite data. We chose the last two years over the long-term average because they reflect a recent increase in the use of shorter-duration rice varieties, although many farmers still cultivate medium- or longer-duration cultivars. This transition has been provoked, in part, by a policy mandate by the state governments to delay rice transplanting until the third week of June in order to conserve groundwater resources for irrigation.

B. Medium (4-10 ha) and large (>10 ha) farmers do not get access to sufficient labour and other resources for transplanting: In this scenario, we assume that both medium and large farms (i.e. >4 ha, Fig. 1) are not able to transplant rice on time. We anticipate that paddy transplanting on 50% of the land owned by medium and large farmers will be delayed by one week, and on the remaining 50% by two weeks. Farm size information is only available at the district level; hence, we used the fractional area under the medium and large farm-size classes to estimate the total rice area covered by each delay category in each district. For example, in Ludhiana district in Punjab, the areal extent of >4 ha farms is 33%, which was distributed equally across all transplanting dates from 165 DOY onwards. Districts with large areas under farms >4 ha face a higher risk of transplanting delays. Other farms transplant rice according to scenario A.

C. Semi-medium (2-4 ha), medium and large farmers do not get access to sufficient labour and other resources for transplanting: In this scenario, all assumptions, such as the area distribution under delayed transplanting, are the same as in scenario B, but we assume that the labour shortage effect extends to farm sizes equal to or greater than 2 ha. We assume that, of the total rice area occupied by these farms in each district, 50% of transplanting is delayed by one week and the remaining 50% by two weeks.

D. All categories of farmers do not get access to sufficient labour and other resources for transplanting: In this scenario we assume a delay in transplanting everywhere (i.e. not considering farm size differences in labor availability). These delays, however, are modelled in a staggered manner: 50% of the total area in both states is subjected to a delay of one week, 25% by two weeks, and the remaining 25% by three weeks.
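The area-weighting common to scenarios B-D amounts to shifting part of a district's transplanting-date distribution to later dates. A minimal sketch follows; the function name, the dictionary representation, and the example figures other than the 33% Ludhiana share are hypothetical.

```python
def apply_delay_scenario(area_by_doy, delayed_fraction,
                         delay_split=((7, 0.50), (14, 0.50))):
    """Shift a district's transplanting-date area distribution under a
    delay scenario: `delayed_fraction` of the area on each date (e.g.
    the share under >4 ha farms) is moved later by the given
    (days, share) splits; the rest keeps its scenario-A dates."""
    shifted = {}
    for doy, area in area_by_doy.items():
        shifted[doy] = shifted.get(doy, 0.0) + area * (1 - delayed_fraction)
        for days, share in delay_split:
            late = doy + days
            shifted[late] = shifted.get(late, 0.0) + area * delayed_fraction * share
    return shifted

# Scenario B for a Ludhiana-like district (33% of area on >4 ha farms),
# with a made-up baseline area distribution over three transplanting dates:
baseline = {165: 0.2, 172: 0.5, 179: 0.3}
scenario_b = apply_delay_scenario(baseline, 0.33)
# Scenario D would use delayed_fraction=1.0 and
# delay_split=((7, 0.50), (14, 0.25), (21, 0.25)).
```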
Cropping system simulations

The APSIM model (Holzworth et al., 2014) was used to simulate the effect of the different sowing date scenarios on rice-wheat system productivity. APSIM was calibrated and its performance verified in the same region in previous studies (Balwinder-Singh et al., 2015; Balwinder-Singh et al., 2011) (rice: r² = 0.91, RMSE = 200 kg ha⁻¹; wheat: r² = 0.86, RMSE = 550 kg ha⁻¹), and it has performed well in simulating rice and wheat yields under contrasting environments across Asia (Gaydon et al., 2017). Simulations were run on a silty loam soil for a medium-duration rice variety (135-140 d) and a wheat variety (150 d), using CSISA (www.csisa.org) project data from Karnal, Haryana. The soil has a plant available water capacity (PAWC) of 110 mm over the top 60 cm and 290 mm to a soil depth of 180 cm (Table 1, Supplementary material). The stage 1 soil evaporation parameter (U) was set to 12 mm, and the stage 2 parameter (cona) was set to 4 mm. The saturated percolation rate was set to 20 and 6 mm d⁻¹ for non-puddled (at the time of wheat sowing) and puddled soil, respectively. Soil water and nitrogen values were reset in each simulated year 15 days before rice sowing, to capture the effect of inter-seasonal climate variability on rice-wheat yield. All simulations were conducted using 24 years of weather data from the Ludhiana (Punjab) and Karnal (Haryana) sites. To study the effects of rice transplanting date on rice and the following wheat crop yield (on a pixel basis), and on total system yield, the calibrated model (for the rice and wheat cultivars) was used to evaluate the performance of the rice-wheat system under transplanting dates from 15 June to 5 August at 2-day increments. Rice crop simulations used 25-day-old seedlings transplanted at 33 plants m⁻², and nitrogen was applied at 150 kg ha⁻¹. The crop was irrigated daily after transplanting as needed to maintain continuous ponding (depth 50 mm) for the first two weeks after transplanting. Thereafter, the crop was irrigated three days after the disappearance of the ponded water, and the amount of water added was that required to fill the top two soil layers (0-30 cm) to saturation, plus an additional 50 mm of water. The rice crop was followed by wheat sown 21 days after rice maturity, to allow for the farmers' practice of drying, harvesting, and crop and field preparation for the wheat crop after pre-sowing irrigation. Wheat was sown using conventional tillage, with one pre-sowing irrigation of 70 mm applied 15 days after rice maturity. The wheat variety was PBW343, sown at 150 plants m⁻² with a row spacing of 20 cm. The crop was irrigated when the soil water content (0-60 cm) decreased to 50% of plant available water content, and nitrogen was applied at 150 kg ha⁻¹.

Economic loss

For each scenario, we estimated yield changes over the baseline (scenario A) at the pixel level as a function of transplanting date for rice and wheat. These yield changes were then applied to district-level production data (DACNET, 2018-2019) for rice and wheat to calculate the changes in total production. These two states cultivate both coarse rice and basmati (aromatic) rice; the analysis here focuses only on coarse rice. The area under coarse rice was calculated using the district-level percentage share of basmati rice (Agricultural and Processed Food Products Export and Development Authority (APEDA), 2019). Since this analysis highlights transplanting delays and their effects on the rice-wheat cropping system, the wheat area modelled is the same as the rice area. For estimating economic losses, we used the minimum support price (USD $215 for rice and $278 for wheat, assuming 1 USD = 72 INR) as the basis for the calculations; the minimum support price is an assured procurement price for these commodities in Punjab and Haryana. It should be noted that the evaluated scenarios only consider transplanting delays and assume that all other management factors, including the age of rice seedlings, remain the same across scenarios.
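As described above, the loss estimate is essentially a product of three terms: the scenario-induced fractional yield change, district production, and the minimum support price. Below is a minimal sketch, assuming the stated support prices are per tonne (the unit is not given explicitly in the text) and using hypothetical district numbers:

```python
def economic_loss_usd(yield_change_frac, baseline_production_t, msp_usd_per_t):
    """District-level economic loss for one crop: the fractional yield
    change under a delay scenario (relative to scenario A) applied to
    baseline production, valued at the minimum support price (MSP)."""
    lost_tonnes = yield_change_frac * baseline_production_t
    return lost_tonnes * msp_usd_per_t

# Hypothetical district: a 12% rice yield loss on 1.8 million tonnes of
# baseline production, valued at the stated rice MSP of USD 215:
loss = economic_loss_usd(0.12, 1.8e6, 215)  # ~46.4 million USD
```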
Fire data

The Visible Infrared Imaging Radiometer Suite (VIIRS) fire product from the NASA/NOAA Suomi National Polar-orbiting Partnership (Suomi NPP) satellite, at 375 m spatial resolution, was used to estimate the number of fires per day for the years 2018 and 2019 from 1 October to 31 December. The VIIRS data are more sensitive to smaller and cooler fires (Schroeder et al., 2014), which is often the case for agricultural residue burning. Low-confidence detections in the VIIRS fire products were excluded from the analysis, and the remaining nominal- and high-confidence detections were used to estimate fires per day.

Rice and wheat yield under different rice transplanting dates

APSIM simulations using long-term weather data showed similar yields for all transplanting dates from June until 9 July, with a maximum median yield of about 6.5 t ha⁻¹ (Fig. 2a). Rice yield declined sharply after 9 July, and a further delay in transplanting to 2 August resulted in a significantly lower median yield of 2.4 t ha⁻¹. The highest variance in rice yields was observed under July transplanting dates. Under far later sowings in early August, variability was low but yield levels were also very low. The best combination of high and stable rice yields was observed for transplanting dates up to 9 July (DOY 190). For transplanting after 9 July, rice yield declined at 1.2% per day of delay. The succeeding wheat crop, sown 21 days after rice maturity in every case, was more sensitive to the timing of rice establishment, with yield declines starting after DOY 180 (Fig. 2b). Therefore, on a system basis, rice-wheat productivity declined significantly when rice transplanting occurred after 30 June (DOY 181), at a rate of 0.75% per day of delay; system productivity declined very sharply, at 2.4% per day of delay, for rice transplanting after DOY 190 (Fig. 2c). Satellite data analyzed for the past two years (2018-2019) indicate that in both Punjab and Haryana (Fig. 3), almost 100% of the area is transplanted during the optimum window for rice productivity (i.e. before DOY 190; Fig. 2a). However, in Haryana roughly 35% of the rice area is transplanted after 30 June (DOY 181), the date after which delayed rice establishment increases yield penalties for wheat (Fig. 2b). In Punjab, only 10% of the rice area is planted after 30 June. For transplanting delay scenarios B, C and D, model runs with long-term climate data suggest that 15%, 20% and 51%, respectively, of the total area across both states would incur rice yield losses (transplanting beyond DOY 190) (Fig. 4). However, the wheat area projected to fall within the significant yield loss window (i.e. due to rice transplanting beyond 30 June) is considerably higher, at 38%, 47%, and 87% for scenarios B, C, and D, respectively, across both states (Fig. 4).
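Read as a piecewise-linear response, the simulated decline rates above translate directly into simple yield functions. The sketch below encodes that reading; the linearization (and the flat response before the breakpoints) is our simplification of the simulation results in Fig. 2, not the APSIM model itself:

```python
def relative_rice_yield(transplant_doy):
    """Rice yield relative to on-time transplanting: roughly flat up to
    9 July (DOY 190), then declining at about 1.2% per day of delay."""
    if transplant_doy <= 190:
        return 1.0
    return max(0.0, 1.0 - 0.012 * (transplant_doy - 190))

def relative_system_yield(transplant_doy):
    """Rice-wheat system yield: ~0.75%/day decline for transplanting
    after 30 June (DOY 181), steepening to ~2.4%/day beyond DOY 190."""
    if transplant_doy <= 181:
        return 1.0
    if transplant_doy <= 190:
        return 1.0 - 0.0075 * (transplant_doy - 181)
    return max(0.0, 1.0 - 0.0075 * (190 - 181) - 0.024 * (transplant_doy - 190))

# A two-week delay past DOY 190 leaves ~60% of the on-time system yield:
print(round(relative_system_yield(204), 2))  # 0.6
```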
Simulated yield outcomes under plausible COVID-19 transplanting delay scenarios indicate less impact on rice yields, but the potential for significant losses for wheat. The magnitude of productivity loss, i.e. area × yield change over the baseline, will also vary with rice and wheat planting date (Fig. 2). Among the rice transplanting delay scenarios, the highest rice production loss occurred under scenario D, with Haryana experiencing higher losses (20%) than Punjab (16%) (Fig. 5). On average across both states, projected rice production losses range from 7% to 18% from scenario B to scenario D. Wheat production loss trends in both states are similar to those for rice, but larger in magnitude. At the district scale, which is the functional planning unit for policy implementation in India, reductions in system-level production vary from 13% to 26% in Punjab and from 13% to 35% in Haryana under scenario D, where maximum production losses were observed (Fig. 6). This micro-scale analysis should be highly helpful to planners for targeting investments in immediate coping measures and technologies, and their effective implementation, in this agriculturally important region of the world.

Economic loss

In addition to jeopardizing food security, projected rice-wheat system production losses would come at substantial economic cost. The highest total combined economic loss is around USD $1.48 billion (Punjab: US $1,140 m; Haryana: US $337 m), under scenario D. For scenarios B and C, the combined losses for both states are estimated at US $675 m and US $935 m, respectively. In Punjab, the maximum economic loss is expected in Ludhiana district ($123 m) (Fig. 6).

Impacts on residue burning

Under business as usual (scenario A), about 90% of the rice in Punjab and Haryana is harvested (from simulation data) by the last week of October, resulting in daily fire events and PM2.5 levels peaking in the first week of November (observed data). Average PM2.5 levels during the 2018 and 2019 winter seasons follow the trends of average rice residue burning events during the same period, peaking in the first week of November before declining as fire events also start to decline. Transplanting delay scenarios will result in later rice harvesting and will have a direct impact on the timing of rice residue burning. For scenarios B, C and D, our results (Fig. 7) suggest that between 30% and 80% of rice residues would be burnt later, shifting the residue-burning peak and prolonging the burning season in all the change scenarios. The residue burning would also fall in a cooler time of year (mid to late November) compared with the contemporary situation, likely further exacerbating agriculture's contribution to hazardous air quality.

Potential impacts on food security, air quality, and public health

In the intensive rice-wheat rotation of Northwest India, ensuring timely rice establishment relies heavily on manual transplanting by migratory labour; Punjab alone requires 50 million person-days for the planting of monsoon crops, primarily rice (Dhillon and Vatta, 2020). However, the COVID-19 crisis has led to significant reverse migration of labour. In the absence of viable near-term alternatives to support rice transplanting, such as mechanized rice planters or the repurposing of labour from employment guarantee schemes (e.g. MGNREGA), the transplanting of rice will very likely be delayed in 2020.
Our analysis suggests that this will have very significant implications for food security, through yield losses in both rice and wheat, as well as for air pollution, which may aggravate long-standing public health problems while increasing vulnerability to COVID-19 (McDonald et al., 2020). Our analysis of different rice transplanting scenarios reflecting COVID-19 effects on labour availability indicates that potential delays in rice transplanting would significantly reduce rice production (around 7%-18% loss) in Punjab and Haryana (Fig. 5). Even under conditions where labor is not limiting, the turn-around time between rice harvesting and the optimal planting window for wheat is already very narrow (around 2-3 weeks). Any delay in the planting and harvesting of rice has significant implications for wheat planting and yields, and in the absence of adaptive measures there could be significant wheat production losses. Our scenario analysis also showed that delayed transplanting can lead to significant losses in wheat production (ranging from 10% to 23% under the different scenarios) and in total system production (from 9% to 21%) (Fig. 5). The contribution of Haryana and Punjab to the central food stocks procured by the Food Corporation of India and other agencies is about 65% for wheat and 34% for rice (https://dfpd.gov.in/). Therefore, any significant impact on wheat and rice production in Northwest India would impact national food security. Our analysis indicates that if rice transplanting is delayed by one to three weeks in some areas (scenario D), total rice and wheat production in the kharif and following rabi seasons may go down by 4.6 million tons. In India, rice and wheat provide 60-70% of calorie intake and 50-55% of protein intake (Bishwajit et al., 2013); hence, any reduction in production may directly or indirectly (e.g. through price spikes) affect the food and nutrition security of resource-poor households. The cities that fall within the study region are among the top 10 most air-polluted cities of the world. During November and early December, the peak air pollution period, rice residue burning is a significant source (7-79%) of PM2.5 in the region, broadly affecting rural and urban communities, including the capital New Delhi (Bikkina et al., 2019; Cusworth et al., 2018). Seasonal PM2.5 trends are highly correlated with residue burning in the airshed, i.e. the states of Punjab and Haryana (Liu et al., 2018). Long-term analysis of PM2.5 data over North India showed that stable atmospheric conditions, indicated by low boundary layer depth and low wind speed, result in higher PM2.5 concentrations during the winter months (Chowdhury et al., 2019). In all the transplanting delay scenarios modelled in our analysis, residue burning would be pushed towards a cooler time of year (Fig. 7). This could worsen an already acute air quality crisis. Recent research evidence indicates that living in a district with intense agricultural residue burning during the winter months is associated with a three times higher risk of acute respiratory infection (Chakrabarti et al., 2019), and there is a large overlap between the causes of death of COVID-19 patients and the diseases that are affected by long-term exposure to fine particulate matter (PM2.5) (Ogen, 2020; Wu et al., 2020).
In all scenarios, the spike in air pollution in the winter months, caused in part by rice residue burning, could coincide with an anticipated COVID resurgence in the fall, potentially making the public health impacts more severe by increasing both morbidity and mortality rates (McDonald et al., 2020).

Response options to cope with projected labor bottlenecks

Management practices and technologies exist that can help avoid or alleviate the consequences of labor delays, although there are significant challenges to bringing solutions to scale in the very near term. In descending order of near-term feasibility, the options include:

Delayed/staggered nursery sowing

Any delay in transplanting due to labour shortages can result in the use of old (aged) seedlings. Seedling age at transplanting is an important factor for a uniform stand of rice (Paddalia, 1980) and for regulating its growth and yield (Sarwar et al., 2011). By delaying nursery sowing to better match delays in transplanting, the yield potential of rice can be conserved.

Direct drilling of wheat using the Happy Seeder

Direct seeding of wheat into rice residues using the Happy Seeder (Sidhu et al., 2015) can reduce the turn-around time between rice harvest and wheat sowing by 7-10 days and potentially eliminate the need for residue burning. With COVID-19-mediated delayed transplanting, this window is expected to be further narrowed in winter 2020. A high-level analysis using public and private costs and benefits to explore feasible, affordable and scalable alternatives to crop residue burning suggests that no-till direct drilling using the Happy Seeder is a potential solution (Shyamsundar et al., 2019), in addition to contributing to several sustainable development goals.

Directly sown rice

Timely planting of rice can also be achieved by adopting dry direct seeding of rice (DSR) using mechanized seed-cum-fertilizer planters. In addition to reducing the labor requirement for crop establishment, dry direct seeding allows earlier rice planting owing to its lower water requirement for establishment. Furthermore, direct-seeded rice matures 8-10 days earlier than puddled transplanted rice, leading to an earlier harvest and to the timely establishment and higher yield of the following wheat crop (Balwinder-Singh et al., 2019b; Kumar and Ladha, 2011; Chakraborty et al., 2017).

Crop diversification with maize

Replacing rice with maize in the monsoon season is another option to alleviate the potential shortage of agricultural labour due to COVID-19, since mechanized crop establishment is the prevailing practice for maize. Research evidence generated over the past decade (Choudhary et al., 2018; Gathala et al., 2013) demonstrates that maize, grown with modern agronomic management practices, can provide a profitable and sustainable alternative to rice. Diversification from rice to maize can also contribute to ecosystem services, including conserving groundwater, improving soil health, and reducing air pollution by eliminating residue burning. Nevertheless, maize is more sensitive to waterlogging and soil salinity than rice, and geographic targeting of suitable areas is a must. Proactive policies, such as assured markets and critical infrastructure development, are likely necessary steps to facilitate diversification away from rice.
In addition to the above-mentioned options, the adoption of shorter-duration rice varieties that maintain yield potential even with later transplanting, an increase in the cropped area of basmati rice, and mechanical transplanting can help address the challenges posed by rice establishment delays.

Limitations of the study

This study presented a scenario-based analysis of the implications of COVID-19-driven reverse labour migration for food security and air quality in the breadbasket of India. The transplanting delay scenarios described here, and their effects on crop productivity, are captured through a crop simulation model. The crop simulation model captures the abiotic responses of crop productivity to delay; however, there may also be shifts in pest and disease regimes (Chander and Mohan, 2020; Prasad, 2020) which are not accounted for in this study. Further, the crop response to delay might vary with management practices and biophysical resources, which, owing to the unavailability of datasets, have not been accounted for at this stage.

Conclusion

The intensive and high-yielding rice-wheat system in Northwest India generally occupies an optimum planting window that ensures high yields for both crops. Labor shortages caused directly and indirectly by the COVID-19 pandemic may significantly delay rice establishment, with cascading effects on system-level productivity. Our simulations suggest that, in the states of Punjab and Haryana, highly significant production losses (~24%) and economic losses (US$ 1.5 billion) are possible. In addition, delayed rice transplanting may exacerbate seasonal air pollution associated with agricultural burning, an outcome that may compound a COVID resurgence in the fall. Technological and management innovations can help address emerging constraints, but none can be readily taken to scale without concerted and strategic policy interventions.

Declaration of Competing Interest

None.
Antibody development for preventing the human respiratory syncytial virus pathology
Antibody development for preventing the human respiratory syncytial virus pathology

Human respiratory syncytial virus (hRSV) is the most important etiological agent causing hospitalizations associated with respiratory diseases in children under 5 years of age; together with the elderly, newborns and premature infants, they are the most affected populations. This viral infection can be associated with various symptoms, such as fever, coughing, wheezing, and even pneumonia and bronchiolitis. Owing to these severe symptoms, the need for mechanical ventilation is not uncommon in clinical practice. Additionally, alterations in the central nervous system, such as seizures, encephalopathy and encephalitis, have been associated with cases of hRSV infection. Furthermore, the absence of effective vaccines or therapies against hRSV leads to elevated expenditures by the public health system and increased mortality rates in the high-risk population. Along these lines, vaccines and therapies can elicit different responses to this virus. While hRSV vaccine candidates seek to promote an active immune response associated with the achievement of immunological memory, other therapies, such as the administration of antibodies, provide a protective environment, although they do not trigger the activation of the immune system and therefore do not promote immunological memory. An interesting approach to immunization is the use of virus-neutralizing antibodies, which inhibit the entry of the pathogen into host cells, thereby impairing the capacity of the virus to replicate. Currently, the most common molecule targeted for antibody design against hRSV is the F protein of this virus. However, other molecular components of the virus, such as the G or N hRSV proteins, have also been explored as potential targets for the control of this disease. Currently, palivizumab is the only monoclonal antibody approved for human use. However, studies in humans have shown a protective effect only after the administration of at least 3 to 5 doses, owing to the stability of this antibody. Furthermore, other studies suggest that palivizumab has an effectiveness of only close to 50% in high-risk infants. In this work, we review different strategies addressing the use of antibodies in a prophylactic or therapeutic context and their ability to prevent the symptoms caused by hRSV infection of the airways, as well as of other tissues such as the CNS.

Introduction

Human respiratory syncytial virus (hRSV), recently renamed human orthopneumovirus (Afonso et al. 2016), is the main virus responsible for respiratory diseases in newborns, children under 5 years old, and the elderly. hRSV is the most important viral agent causing acute lower respiratory tract infections (ALRTI) and hospitalizations during the winter season (Nair et al. 2010). The symptoms associated with infection by this virus are mostly age-dependent (Domachowske et al. 2018a), and frequently include coughing, wheezing, fever, apnea, and, in some cases, bronchiolitis or pneumonia. Commonly, afflicted children require supportive care, accompanied by supplemental oxygen and, in extreme cases, the use of mechanical ventilation (Krilov 2011). Remarkably, extrapulmonary symptoms have also been described for this disease, including cardiovascular complications in young infants (Gálvez et al. 2017; Puchkov and Min'kovich 1972; Suda et al. 1993; Donnerstein et al. 1994), hepatitis, associated with liver complications (Gálvez et al. 2017; Eisenhut and Thorburn 2002; Eisenhut et al. 2004),
hyponatremia (Hanna et al. 2007), and alterations in the central nervous system (CNS), such as seizures (Cha et al. 2019), encephalopathy and encephalitis (Bohmwald et al. 2015). Additionally, hRSV infections can result in impaired learning capacities, as described in murine models (Gálvez et al. 2017; Bohmwald et al. 2018; Espinoza et al. 2013). Accordingly, symptoms such as apnea, encephalopathy, seizures, strabismus and status epilepticus have also been reported in humans (Sweetman et al. 2005; Kho et al. 2004; Millichap and Wainwright 2009; Kawashima et al. 2012), adding to the long list of collateral effects of this disease. Further studies analyzing the disease induced by this virus are still required to elucidate its true impact as a possible systemic pathogen and the new relevance that this could have from a clinical perspective. hRSV is associated with an infection rate close to 34 million children under 5 years old per year (Bont et al. 2016). Specifically, hRSV is responsible for nearly 63% of total ALRTI cases and for between 19% and 81% of the total viral infections affecting the lower respiratory tract in children. This wide range derives from a retrospective analysis that covered 20 years of epidemiological data (Bont et al. 2016). One out of ten children infected with hRSV is hospitalized due to the severe symptoms induced by this virus, and the World Health Organization has estimated that 66,000 to 253,000 annual deaths are due to hRSV (Afonso et al. 2016; Bont et al. 2016). Finally, the share of children's hospitalizations due to hRSV-related bronchiolitis can reach 80% in the USA (Peiris et al. 2003). Once hRSV reaches its host, it is able to infect the respiratory tract, mainly targeting epithelial cells of the alveolar epithelium. Here, the glycoprotein (G) anchors the virus to the plasma membrane of its target cell. Then, the fusion protein (F) promotes fusion between the viral envelope and the plasma membrane of the host cell. The fusion process allows the entry of the genetic material, which can be used for replication and transcription once the replicase/transcriptase complex (formed by the N, P, and L hRSV proteins) is assembled (Hacking and Hull 2002; Collins and Melero 2011). Other viral proteins, such as M2.1 and M2.2, are used as cofactors for this replicase/transcriptase complex (Harpen et al. 2009). The genome is replicated into a positive-sense (+) antigenome, which is used for the generation of new genetic material. In parallel, the viral genome is transcribed into (+) mRNA, which is used for protein synthesis (Hacking and Hull 2002). All these processes result in the synthesis of new ssRNA (−) genomes, which eventually serve as templates for the synthesis of new proteins by the host's ribosomes (Hacking and Hull 2002; Collins and Melero 2011; Tsutsumi et al. 1995), originating new viral particles 10-12 h after cell infection (Collins and Karron 2013). Both non-structural proteins, NS1 and NS2, are virulence factors with a key role in the immune evasion mechanisms and the induction of cellular apoptosis elicited by hRSV, undermining the host's defenses (Liesman et al. 2014; Lo et al. 2005; Pretel et al. 2013). Specifically, NS1 and NS2 have been associated with the suppression of the type I IFN pathway, through impaired regulation of STAT2. As a consequence, downstream α/β IFN genes are suppressed, leading to inefficient viral clearance by the host (Lo et al. 2005; Pretel et al. 2013).
Additionally, NS2 has been associated with obstruction of the airways, as it promotes the shedding of epithelial cells into the airways (Liesman et al. 2014). Therefore, both non-structural proteins contribute to the suppression of type I IFN secretion, which is one of the host's first lines of defense for the elimination of viral pathogens. To control the disease caused by hRSV, several vaccines and treatments began to be developed soon after its discovery (WHO PD-VAC 2014; Graham 2016; Modjarrad et al. 2016). However, the numerous vaccine trials have not yielded results convincing enough, regarding both safety and immunogenicity, to allow the approval of a vaccine for use in humans (Graham 2016). One of the first vaccines tested for hRSV was a formalin-inactivated virus vaccine (FI-hRSV), a formulation that exacerbated the detrimental inflammatory response triggered by the virus in infants and regrettably ended with the death of two of the immunized children (Kim et al. 1969; Murphy and Walsh 1988). In this line, recent reports have indicated that differential subsets of CD4+ T cells are responsible for the exacerbated response elicited by this failed vaccine prototype (Knudson et al. 2015). In order to control hRSV's expansion worldwide in a safer way, prophylactic approaches based on anti-hRSV antibodies have been generated. These molecules are generally known to be less immunogenic and hold an acceptable safety record for the control of microbial pathogens. There are significant differences between the development of vaccines and that of antibody-based prophylactic therapies, especially for a pathogen such as hRSV (Wang et al. 2019; Villafana et al. 2017; Simões et al. 2018). Although the main aim of both types of treatment is to achieve a protective response against the virus, active immunization with vaccines usually results in the activation of the adaptive immune response and the generation of immunological memory. Antibody-based prophylactics, in contrast, are preventive strategies that usually promote a protective response without activating the immune system or inducing immunological memory. This type of immune protection relies on the periodic administration of pathogen-specific antibodies and depends on the half-lives of these molecules (Baxter 2007). The antibody-based prophylaxis and other related preventive therapies developed to date against hRSV are discussed in the following sections.

Antibody-based approaches against hRSV for high-risk populations

Following the discovery of hRSV, the development of vaccines and treatments was quickly initiated (Fig. 1). After the detrimental effects elicited by the FI-hRSV vaccine in children (2 months to 9 years) (Kim et al. 1969; Chin et al. 1969), the notion of a prophylactic treatment based on the passive transfer of hRSV-specific antibodies was supported by early studies and reports in cotton rats (Prince et al. 1985). The results reported therein included an extensive description of the properties of these antibodies, such as opsonization, neutralization and the capacity to induce the clearance of some pathogenic agents. This work was considered a starting point for the use of antibodies as a new tool against hRSV (Olszewska and Openshaw 2009). Early studies generated and evaluated almost 25 different hybridomas, used to obtain several anti-P, anti-N, anti-G and anti-F antibodies (Fig. 1).
The authors of this work indicated that optimal results were obtained for only one anti-F and one anti-G antibody in a mouse model. However, one of the most critical caveats of these antibodies was their low neutralizing capacity in murine models. An encouraging discovery of these studies was the identification of specific sites on the F- and G-hRSV proteins that promote the binding of monoclonal antibodies with enhanced neutralizing capacity (Anderson et al. 1986). The use of intravenous immunoglobulin (IVIG), a pool of polyclonal antibodies, was another therapy utilized at one point to prevent lethal hRSV infections in high-risk populations (Fig. 1). In preterm infants and children with cardiac diseases, different doses of IVIG with specificity against hRSV (IVIG-hRSV) were tested (150 mg/kg to 750 mg/kg), and only the highest IVIG-hRSV dose tested elicited significant protection. The highest IVIG-hRSV dose decreased the days of hospitalization, ameliorated the symptoms and reduced the number of ALRTI cases when compared with the lower doses and the placebo-treated control groups (Groothuis et al. 1993). A similar study evaluated a total of 510 children who were either premature at birth or had cardiac diseases. This study showed that monthly administration of both the low and the high IVIG-hRSV doses resulted in beneficial effects, as compared with placebo controls or with children receiving a single dose (Groothuis et al. 1993). These results were independent of the pathology or of recurrence in the development of respiratory diseases, as compared with the children treated with the low dose or the placebo control groups (Respiratory Syncytial Virus (RSV) PREVENT study group 1997). Importantly, the use of IVIG-hRSV as a therapy (RespiGam, Massachusetts Public Health Biologic Laboratories and MedImmune, Inc., Gaithersburg, MD) was approved by the Food and Drug Administration (FDA) in 1996 for hRSV high-risk populations (Committee on Fetus and Newborn 2004). Soon after the approval of RespiGam by the FDA, a humanized IgG1-isotype monoclonal antibody against the F-hRSV protein was produced and baptized MEDI-493, or palivizumab.

Fig. 1 Timeline of antibody therapies since the discovery of hRSV as a human pathogen. Advances and implementation of different strategies that use antibodies to promote the clearance of hRSV since the virus was first discovered in 1956.

Currently, this antibody is the only prophylactic therapy approved and used in high-risk populations to treat and prevent hRSV infections (Simões et al. 2018). Since it showed a greater protective effect than IVIG, the FDA decided to keep it as the only approved therapy (Johnson et al. 1997). Despite this, two other antibodies against the F-hRSV protein, generated by Merck and Sanofi, are currently undergoing Phase I and Phase III clinical trial evaluations, respectively. Interestingly, targeting the N-hRSV protein has been considered as a new approach, as this protein can be found on the surface of hRSV-infected cells (Cespedes et al. 2014). It is thought that anti-N-hRSV antibodies might lead to the killing of infected cells, preventing virus spread, as will be discussed below.

Production of an anti-G monoclonal antibody as an improved immunotherapy against hRSV

One of the first monoclonal antibodies developed after IVIG-hRSV was an anti-G-hRSV antibody (131-2G) that exhibited only partial neutralization capacity (Anderson et al. 1988).
This monoclonal antibody blocks the interaction between the G protein and the CX3C chemokine receptor by recognizing a conserved epitope on the G protein that is required for binding to its receptor (Tripp et al. 2001; Tripp et al. 2003). Although in vitro studies using the 131-2G antibody showed reduced neutralization capacity, in vivo responses showed activation of Fc receptors and a better protective response than other anti-F monoclonal antibodies (Radu et al. 2010; Miao et al. 2009). The pathology induced upon hRSV infection was also decreased when the 131-2G antibody was administered, correlating its neutralizing capacity with a milder pulmonary inflammatory disease (Miao et al. 2009; Haynes et al. 2009). Interestingly, a protective response was observed even when the antibody was administered 5 days after the infection. While the native 131-2G monoclonal antibody was able to favor the development of a Th1-like immune response, inducing the secretion of IFN-γ, a modified version of this antibody consisting of only the F(ab')2 region promoted a Th2-like profile, without optimal viral clearance (Boyoglu-Barnum et al. 2014). Despite these promising data, no further evaluation of this antibody in clinical studies has been published to date. The 131-2G antibody was also tested along with another anti-G monoclonal antibody (130-6D), which recognizes an epitope located in the central conserved region (CCR) of the G-hRSV protein. In this study, the authors showed that the combination of both monoclonal antibodies decreased the lung pathology when compared with the administration of the 130-6D monoclonal antibody alone, without affecting their mutual neutralization effects (Caidi et al. 2012).

Palivizumab: a passive prophylactic method to protect against hRSV infection

Palivizumab (MEDI-493, Synagis; MedImmune, Inc., Gaithersburg, MD) is a commercially distributed, humanized IgG1 monoclonal antibody that binds to the F-hRSV protein (Johnson et al. 1997). The first study to describe the effect of palivizumab in vivo was performed in cotton rats treated 1 day prior to hRSV infection, and showed a decrease in disease parameters when compared with controls (Johnson et al. 1997). From these results, two possible mechanisms arose to explain palivizumab's activity: first, palivizumab is able to prevent the fusion between the viral particle and the host cell membrane; second, it might suppress the formation of syncytia, an effect observed in lung epithelial cells in vitro. Both could be achieved by blocking the interaction between the F protein and proteins found on the host cell surface (Young 2002). Following the experiments performed in animal models, clinical studies of palivizumab were carried out (Subramanian et al. 1998; Sáez-Llorens et al. 1998). These studies showed that a monthly administration of this antibody was necessary to decrease the disease parameters in the population evaluated, and that this dosage kept the monoclonal antibody detectable in serum up to day 30 post-administration (Subramanian et al. 1998; Sáez-Llorens et al. 1998). The use of palivizumab was also tested as a therapy in children hospitalized due to an hRSV infection. Interestingly, a decrease in the number of plaque-forming units (PFU) was found in children treated with palivizumab when compared with the placebo-treated controls. However, the observed decrease in PFUs did not correlate with any change in the cellular immune responses (DeVincenzo et al. 2007).
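The monthly dosing schedule mentioned above follows directly from first-order antibody decay. Below is a minimal sketch, assuming simple exponential elimination with a hypothetical ~25-day half-life and equal peak increments per dose; the numbers are illustrative only and are not the published pharmacokinetics of palivizumab:

```python
import math

def trough_level(dose_peak, half_life_days, interval_days, n_doses):
    """Serum antibody level just before the next scheduled dose under
    repeated administration, assuming first-order (exponential) decay.
    Troughs rise over the first few doses and then approach a plateau,
    which is one way to rationalize regimens of several monthly doses."""
    k = math.log(2) / half_life_days  # elimination rate constant
    level = 0.0
    for _ in range(n_doses):
        level = (level + dose_peak) * math.exp(-k * interval_days)
    return level

for n in (1, 3, 5):
    print(n, round(trough_level(100.0, 25.0, 30.0, n), 1))
# 1 -> 43.5, 3 -> 70.7, 5 -> 75.9: approaching a steady-state trough of ~77
```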
In addition, palivizumab administration promoted a reduction in the number of hospitalizations in this high-risk population. The children treated with palivizumab exhibited shorter hospitalization periods and a decreased requirement for oxygen assistance, along with a less pronounced development of ALRTI, compared with the untreated control groups (Village 1998). The main caveat of palivizumab is its very high cost/effectiveness ratio, since as many as 5 doses might be needed to decrease the probability of a severe or lethal hRSV infection in a high-risk population, given the half-life of the antibody in the host (Village 1998; B. R. 2018; Torchin et al. 2018). The elevated cost of completing an effective treatment is a major burden for health care programs (US$780 per vial of 50 mg and US$1,416 per vial of 100 mg, with a recommended dosage of 15 mg/kg) (Ambrose et al. 2014; Mochizuki et al. 2017). The need for multiple doses reflects the inability of palivizumab to induce a long-lasting immune protection in the individual, making it a passive immunization treatment. Finally, some weak points associated with the use of palivizumab are that both the dosage and the periodicity of administration can influence the effectiveness of the treatment (B. R. 2018). Besides, it has been suggested that children previously exposed to palivizumab exhibit more respiratory problems than children exposed to this antibody for the first time. Nevertheless, the authors of that study suggested that these respiratory problems might not be associated directly with palivizumab, but rather with environmental factors (Lacaze-Masmonteil et al. 2003).

Motavizumab, an improved version of palivizumab

Motavizumab (MEDI-524) is an improved version of palivizumab, with an optimized affinity for the F-hRSV protein achieved by mutating 13 specific amino acids located in the complementarity-determining region (CDR) sequences of the variable region of the antibody (Wu et al. 2007; Wu et al. 2008). Early data derived from the use of motavizumab showed a 70-fold increase in binding to the F-hRSV protein as compared with palivizumab. Interestingly, motavizumab was able to decrease the infection of the upper respiratory tract in a cotton rat model, an effect that was not observed when palivizumab was used as a treatment instead (Wu et al. 2007; Mejías et al. 2005). The suggested mechanism of action of motavizumab as a novel therapy is the inhibition of cell-to-cell fusion, without affecting the attachment of the virus to the target cell. The central hypothesis surrounding this suggested mechanism considers the antibody's capacity to interrupt the conformational change of the F protein at the moment of fusion with the host cell membrane, thereby targeting the pre- and post-fusion F protein (Huang et al. 2010). A Phase II clinical study evaluated the effect of five administrations of either motavizumab only, motavizumab followed by palivizumab (M/P), or palivizumab followed by motavizumab (P/M). As expected, the three groups showed a similar protective response. However, when comparing the adverse events (AEs) induced by these treatments, the highest AE incidence was reported for the M/P-treated children. Although two deaths were reported for the M/P group, according to the authors the deaths and the pulmonary impairment reported were not associated with the treatment (Fernández et al. 2010).
A phase III clinical trial for motavizumab was also performed in children under 6 months old, who were treated with this antibody, and their response was compared to that of children treated with palivizumab (Carbonell-Estrany et al. 2010). The authors observed fewer cases of hospitalization among children treated with motavizumab than among those treated with palivizumab (Carbonell-Estrany et al. 2010). These data suggested that motavizumab is a more efficient prophylactic treatment than palivizumab. However, motavizumab-treated children exhibited more frequent AEs, specifically associated with cutaneous problems such as rashes and skin-related allergies (Carbonell-Estrany et al. 2010). Another phase III clinical trial was performed in a population of 2596 children, either preterm (born at 36 weeks) or under 6 months of age (O'Brien et al. 2015). A positive protective effect was shown for motavizumab for both inpatient and outpatient burdens. This study also demonstrated that children treated with motavizumab exhibited less severe hRSV infections, with a reduction in hospitalization rates and in the need for mechanical ventilation when compared to placebo-treated groups (O'Brien et al. 2015). This study corroborated previously reported observations indicating that motavizumab elicits an enhanced protective capacity against hRSV infection compared to palivizumab (Carbonell-Estrany et al. 2010; O'Brien et al. 2015). Despite all the positive findings made with motavizumab, a phase II clinical trial that analyzed a population of 118 children showed that two different doses of motavizumab were not able to significantly decrease viral loads in treated children (Ramilo et al. 2014). Furthermore, the lack of reduction in viral loads was associated with an absence of clinical improvement in treated children (Ramilo et al. 2014). Follow-up of these children for 12 months after the treatment showed rates of wheezing episodes equivalent to controls (Ramilo et al. 2014). Interestingly, the vast majority of studies using antibody therapies in humans have shown that this type of transfer is not capable of directly decreasing viral loads in the subjects (Millichap and Wainwright 2009; Bont et al. 2016; Tsutsumi et al. 1995). Unfortunately, despite motavizumab's higher efficiency as a therapy against hRSV, the FDA decided not to approve the license for this new antibody and declined to endorse its extensive use in humans. This decision was based on the large number of AEs associated with skin allergies reported in the clinical study of Carbonell-Estrany et al. described above (Carbonell-Estrany et al. 2010).
Development of mucosal antibody-based strategies as a prophylaxis for hRSV
As hRSV infections are mainly associated with the respiratory tract (Nair et al. 2010), the development of strategies focused on mucosal antibodies could improve the treatment of the disease caused by this pathogen. Antibodies are categorized, according to the characteristics of their Fc domain, as IgM, IgG, IgD, IgE, and IgA (Mak et al. 2014). The IgA isotype is especially important as it constitutes one of the first mucosal defense barriers against various infectious agents (Woof and Russell 2011). An early study showed that the intranasal administration of an anti-F-hRSV mouse monoclonal IgA antibody (HNK20) prior to hRSV infection reduced viral titers in the lungs of both mice and rhesus monkeys (Weltzin et al. 1994; Weltzin et al. 1996).
Despite these encouraging data in animal models for the HNK20 antibody, a phase III clinical trial showed unconvincing results, and further development of this antibody was not pursued (Mills et al. 1999). A recent study used the Fab regions of palivizumab and motavizumab to generate recombinant monomeric, dimeric and secretory IgA molecules (Jacobino et al. 2018). The main particularity of these molecules was their capacity to recognize the same epitopes as palivizumab and motavizumab while displaying the functional features of an IgA molecule. This isotype change resulted in decreased efficacy of the recombinant IgA antibodies as compared to the IgG1 palivizumab and motavizumab (Jacobino et al. 2018). Reduced in vitro and in vivo antiviral responses in the mouse model also discouraged further studies of these recombinant IgA molecules (Jacobino et al. 2018). However, it is important to mention that various studies in adult populations have reported high titers of IgA and IgG antibodies, mainly against the G- and F-hRSV proteins (Cortjens et al. 2017; Goodwin et al. 2018). In a study performed by Cortjens et al., the effect and isotype of antibodies produced by memory B cells isolated from healthy donors were evaluated. These memory B cells were used for the generation of hybridomas whose secreted antibodies were evaluated in hRSV-infected cells. However, these antibodies exhibited limited neutralizing capacity, a signature of hRSV-induced antibodies, as this virus is responsible for recurrent infections throughout life (Cortjens et al. 2017). IgA has even been suggested as a possible predictor of hRSV-infection susceptibility after a study with a cohort of 61 healthy volunteers (Habibi et al. 2015). Despite the negative results and the limited number of studies focusing on the development of IgA antibodies as a therapy for hRSV, a study using monoclonal IgA and IgG isotype antibodies against Influenza virus showed that IgAs can promote better prevention of viral infections as compared to IgGs (Muramatsu et al. 2014). A summary of the current advances and the most important developments of antibodies used as therapies is given in Fig. 2, where the main features of each treatment are highlighted.
Fig. 2 — Development of antibody therapies against hRSV infection. The five main types of antibody therapies against hRSV infection are described, shown in order of development, highlighting that the only therapy approved for use in humans to date is palivizumab. An interesting new possibility, a therapy based on the use of a monoclonal anti-N-hRSV antibody, is described at the end of the figure.
Novel hRSV antigen targets for the design of protective antibodies
Currently, there are only a few monoclonal antibodies conceived as a prophylactic treatment under development. Three preclinical candidates have been published on the Program for Appropriate Technology in Health (PATH) website. Two of these antibodies recognize the F-hRSV protein (Arsanis and UCAB (mAbXience)), and one of them is specific for the N-hRSV protein. The anti-N antibody was first evaluated in clinical samples from nasopharyngeal swabs obtained from patients infected with hRSV, showing a high specificity for this protein (Gómez et al. 2014). The protective capacity of this antibody, which is currently under preclinical evaluation in animal models, is based on the induction of antibody-dependent cell cytotoxicity (ADCC) of hRSV-infected cells.
As the N-hRSV protein can be found on the surface of infected cells (Cespedes et al. 2014), an anti-N-hRSV antibody could induce ADCC and complement fixation on cells infected with hRSV. This antibody is yet to be evaluated in humans. The rationale for using an anti-N-hRSV antibody relies on the capacity of this protein to migrate to the membrane of infected cells and on the consequent impairment of the immunological synapse reported for this protein (Cespedes et al. 2014). It is possible that this recently described feature of the N-hRSV protein could contribute to preventing the establishment of an adequate immunological synapse, required for the proper induction of a protective Th1 response, during an hRSV infection. Therefore, the use of this antibody could contribute to restoring the induction of the cytotoxic immune response required to clear this virus.
[Table 1, fragment: 131-2G (anti-G monoclonal antibody) — confers protection prior to or after hRSV infection; widely used to identify hRSV infection in laboratory assays; recognizes a highly conserved epitope associated with receptor binding; does not induce immunological memory; not accepted by the FDA for human use and only approved in animal models (Tripp et al. 2001; Tripp et al. 2003; Radu et al. 2010; Miao et al. 2009; Haynes et al. 2009; Boyoglu-Barnum et al. 2014; Caidi et al. 2012; Young 2002). Other rows cite Anderson et al. 1986, Groothuis et al. 1993 and the RSV PREVENT study group 1997, and note an antibody whose evaluation is in progress in a murine model (Anderson et al. 1988; Aliprantis et al. 2018).]
As stated above, two novel antibodies with the F-hRSV protein as their target are currently undergoing clinical evaluation (Aliprantis et al. 2018; Zhu et al. 2017). The first is known as MK-1654, a human monoclonal antibody that possesses a modification in the Fc region to increase the molecule's half-life (Aliprantis et al. 2018). MK-1654 was developed by Merck™, and the target group for administration is the pediatric population. Currently, this antibody is being evaluated in a clinical trial (Aliprantis et al. 2018). The second anti-F-hRSV antibody (MedImmune, Sanofi), called MEDI8897, is a recombinant human IgG1 monoclonal antibody that recognizes the pre-fusion state of the F-hRSV protein. The pre-fusion state of the F-hRSV protein is a metastable homotrimer, as is typical of the type I fusion proteins of different viruses. A conformational change occurs after an initial cleavage of the inactive precursor of the F protein (F0). Then, after the fusion between the host membrane and the viral membrane, the F-hRSV protein adopts a stable post-fusion conformation (Magro et al. 2010; Ngwuta et al. 2015; McLellan et al. 2011). A study performed in cotton rats showed a 9-fold greater reduction in viral loads in the lungs of infected animals when compared to animals receiving palivizumab. A dose-escalation clinical study of MEDI8897 showed that a single dose of this antibody in healthy preterm infants promoted a safe response with neutralizing capacity at its highest dose (50 mg) (Domachowske et al. 2018b). Another clinical trial using the same antibody had previously confirmed its safety in healthy adults. Currently this antibody is being evaluated in a phase III trial. Despite the similarities between many of these antibodies in structure (and possibly function), minimal changes may prove critical for protection when they are evaluated as treatments.
Some of the main advantages and disadvantages of the monoclonal antibodies discussed above are shown in Table 1.
Concluding remarks
Antibodies have been widely explored as a potent and recurrent strategy to prevent hRSV infection in high-risk populations, especially due to the lack of an effective, safe, and licensed vaccine. Antibody-based approaches have been tested as both prophylactic and therapeutic treatments, with varying results depending on the antibody molecule evaluated. However, despite various efforts and several possible treatments, only one antibody is currently used to prevent hRSV infection, and it is highly expensive and not always effective. For this reason, it is still essential to explore new options that could provide improved cost/effectiveness ratios until a vaccine becomes available and allows the promotion of a protective immune response against hRSV.
Nonadiabatic Landau Zener tunneling in Fe_8 molecular nanomagnets
The Landau-Zener method makes it possible to measure very small tunnel splittings Δ in molecular clusters of Fe8. The observed oscillations of Δ as a function of the magnetic field applied along the hard anisotropy axis are explained in terms of topological quantum interference of two tunnel paths of opposite windings. Studies of the temperature dependence of the Landau-Zener transition rate P give access to the topological quantum interference between excited spin levels. The influence of nuclear spins is demonstrated by comparing P of the standard Fe8 sample with two isotopically substituted samples. The need for a generalized Landau-Zener transition rate theory is shown.
During the last few decades, a large effort has been spent to understand the detailed dynamics of quantum systems that are exposed to time-dependent external fields and dissipative effects [1]. It has been shown that molecular magnets offer a unique opportunity to explore the quantum dynamics of a large but finite spin. These molecules are the final point in the series of smaller and smaller units from bulk magnets to single magnetic moments. They are regularly assembled in large crystals where often all molecules have the same orientation. Hence, macroscopic measurements can give direct access to single-molecule properties. The most prominent examples are a dodecanuclear mixed-valence manganese-oxo cluster with acetate ligands, Mn12 [2], and an octanuclear iron(III) oxo-hydroxo cluster of formula [Fe8O2(OH)12(tacn)6]8+, Fe8 [3], where tacn is a macrocyclic ligand. Both systems have a spin ground state of S = 10 and an Ising-type magneto-crystalline anisotropy, which stabilises the spin states with the quantum numbers M = ±10 and generates an energy barrier for the reversal of the magnetisation of about 67 K for Mn12 and 25 K for Fe8 [2,3]. Fe8 is particularly interesting for studies of quantum tunnelling because it shows a pure quantum regime, i.e. below 360 mK the relaxation is purely due to quantum tunnelling, and not to thermal activation [4]. We showed recently that the Landau-Zener method can be used to measure the very small tunnel splittings Δ in Fe8 [5]. The observed oscillations of Δ as a function of the magnetic field applied along the hard anisotropy axis are explained in terms of topological quantum interference of two tunnel paths of opposite windings, which was predicted by Garg [6]. This observation was the first direct evidence of the topological part of the quantum spin phase (Berry or Haldane phase [7,8]) in a magnetic system. Recently, we demonstrated the influence of nuclear spins, proposed by Prokof'ev and Stamp [9], by comparing relaxation and hole-digging measurements [10] of two isotopically substituted samples: (i) the hyperfine coupling was increased by the substitution of 56Fe with 57Fe, and (ii) decreased by the substitution of 1H with 2H. These measurements were supported quantitatively by numerical simulations taking into account the altered hyperfine coupling [10,11]. In this letter, we present studies of the temperature dependence of the Landau-Zener transition rate P, yielding a deeper insight into the spin dynamics of the Fe8 cluster. By comparing the three isotopic samples we confirm the influence of nuclear spins on the tunneling mechanism and in particular on the lifetime of the first excited states.
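As a quick consistency check on the quoted barriers (our own back-of-the-envelope estimate; the D values below are inferred, not stated in this text): for an easy-axis term −|D|S_z², the reversal barrier is the energy difference between the M = ±S ground states and the M = 0 top of the barrier,

\[
U \simeq |D|\,S^{2}, \qquad U_{\mathrm{Fe_8}} \approx 0.25\ \mathrm{K}\times 10^{2} = 25\ \mathrm{K}, \qquad U_{\mathrm{Mn_{12}}} \approx 0.67\ \mathrm{K}\times 10^{2} = 67\ \mathrm{K}.
\]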
Our measurements show the need for a generalised Landau-Zener transition rate theory taking into account environmental effects such as hyperfine and spin-phonon coupling [12]. All measurements of this article were performed using a new micro-SQUID technique where the sample is directly coupled with an array of micro-SQUIDs [13]. The high sensitivity of this magnetometer allows us to study single Fe8 crystals [14] of the order of 10 to 500 µm. The crystals of the standard Fe8 cluster, stFe8 or Fe8, [Fe8(tacn)6O2(OH)12]Br8·9H2O, where tacn = 1,4,7-triazacyclononane, were prepared as reported by Wieghardt et al. [14]. For the synthesis of the 57Fe-enriched sample, 57Fe8, a 13 mg foil of 95% enriched 57Fe was dissolved in a few drops of HCl/HNO3 (3:1) and the resulting solution was used as the iron source in the standard procedure. The 2H-enriched Fe8 sample, DFe8, was crystallised from pyridine-d5 and D2O (99%) under an inert atmosphere at 5 °C using a non-deuterated Fe(tacn)Cl3 precursor. The amount of isotope exchange was not quantitatively evaluated, but it can reasonably be assumed that the H atoms of H2O and of the bridging OH groups, as well as a part of those of the NH groups of the tacn ligands, are replaced by deuterium, while the aliphatic hydrogens are essentially not affected. The crystalline materials were carefully checked by elemental analysis and single-crystal X-ray diffraction. The simplest model describing the spin system of Fe8 molecular clusters has the following Hamiltonian [3]:

H = −D S_z² + E (S_x² − S_y²) + H_2 + g μ_B μ_0 S·H,   (1)

where S_x, S_y, and S_z are the three components of the spin operator, D and E are the anisotropy constants, H_2 takes into account weak higher-order terms [15,16], and the last term of the Hamiltonian describes the Zeeman energy associated with an applied field H. This Hamiltonian defines hard, medium, and easy axes of magnetisation in the x, y and z directions, respectively. It has an energy level spectrum with (2S + 1) = 21 values which, in first approximation, can be labelled by the quantum numbers M = −10, −9, ..., 10. The energy spectrum can be obtained by using standard diagonalisation techniques on the [21 × 21] matrix describing the spin Hamiltonian with S = 10. At H = 0, the levels M = ±10 have the lowest energy. When a field H_z is applied, the energy levels with M << 0 increase, while those with M >> 0 decrease. Therefore, different energy values can cross at certain fields. This crossing can be avoided by transverse terms containing S_x or S_y spin operators, which split the levels. The spin S is in resonance between two states M and M' when the local longitudinal field is close to such an avoided energy level crossing (|H_z| < 10⁻⁸ T for the avoided level crossing around H_z = 0). The energy gap, the so-called tunnel splitting Δ_M,M', can be tuned by an applied field in the xy-plane via the S_x H_x and S_y H_y Zeeman terms. It turns out that a field in the H_x direction (hard anisotropy direction) can periodically change the tunnel splitting Δ, as displayed in Fig. 1, where H_2 in Eq. 1 was taken from [16]. In a semi-classical description, these oscillations are due to constructive or destructive interference of quantum spin phases of two tunnel paths [6].
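The diagonalisation just described is straightforward to reproduce. Below is a minimal numerical sketch (our own illustration, not the authors' code): it builds the 21 × 21 spin matrices for S = 10, forms the Hamiltonian of Eq. 1 without the weak higher-order terms H_2, and reads off the ground-state tunnel splitting. The D and E values are representative literature-style numbers assumed for illustration.

```python
import numpy as np

S = 10
m = np.arange(S, -S - 1, -1, dtype=float)       # m = 10, 9, ..., -10 (21 states)
Sz = np.diag(m)
# <m+1|S+|m> = sqrt(S(S+1) - m(m+1)); basis ordered by decreasing m
sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
Sx = (sp + sp.T) / 2
Sy = (sp - sp.T) / 2j

D, E = 0.29, 0.046                               # anisotropy constants in K (assumed)
gmuB = 2.0 * 0.6717                              # g * mu_B in K/T

def H(Hx=0.0, Hz=0.0):                           # Eq. 1 without the weak H_2 terms
    return (-D * Sz @ Sz + E * (Sx @ Sx - Sy @ Sy)
            + gmuB * (Hx * Sx + Hz * Sz))

evals = np.linalg.eigvalsh(H())                  # diagonalise the 21 x 21 matrix
delta_ground = evals[1] - evals[0]               # splitting of the M = +/-10 doublet
print(f"ground-state tunnel splitting: {delta_ground:.3e} K")
```

Sweeping Hx in H(Hx=...) and tracking this gap reproduces the periodic oscillation of Δ with transverse field discussed above.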
A direct way of measuring the tunnel splittings Δ_M,M' is by using the Landau-Zener model [17,18], which gives the tunnelling probability P_M,M' when sweeping the longitudinal field H_z at a constant rate over an avoided energy level crossing [19]:

P_M,M' = 1 − exp[ −π Δ_M,M'² / (2ℏ g μ_B |M − M'| μ_0 dH_z/dt) ].   (2)

Here, M and M' are the quantum numbers of the avoided energy level crossing, dH_z/dt is the constant field sweeping rate, g ≈ 2, μ_B is the Bohr magneton, and ℏ is the reduced Planck constant. In order to apply the Landau-Zener formula (Eq. 2), we first cooled the sample from 5 K down to 0.04 K in a field of H_z = −1.4 T, yielding a negative saturated magnetisation state. Then, we swept the applied field at a constant rate over the zero-field resonance transition and measured the fraction of molecules which reversed their spin. This procedure yields the tunnelling rate P_−10,10 and thus the tunnel splitting Δ_−10,10 (Eq. 2). The predicted Landau-Zener sweeping-field dependence of P_−10,10 can be checked by plotting Δ_−10,10 as a function of the field sweeping rate, which should yield a constant value; this was indeed the case for sweeping rates between 1 and 0.001 T/s (Fig. 2). The deviations at lower sweeping rates are mainly due to the hole-digging mechanism [20,21], which slows down the relaxation. The comparison with the isotopically substituted Fe8 samples shows a clear dependence of Δ_−10,10 on the hyperfine coupling (Fig. 2). Such an effect has been predicted for a constant applied field by Tupitsyn et al. [22].
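For concreteness, the following small numerical sketch (our illustration; the splitting and sweeping rates are representative numbers, not the measured data) evaluates Eq. 2 and shows that inverting a measured P at different sweeping rates should return one constant Δ:

```python
import numpy as np

hbar = 1.0546e-34      # J s (reduced Planck constant)
muB  = 9.274e-24       # J/T (Bohr magneton)
kB   = 1.381e-23       # J/K
g    = 2.0
M, Mp = -10, 10        # quantum numbers of the avoided crossing

def lz_probability(delta_J, sweep_T_per_s):
    v = g * muB * abs(M - Mp) * sweep_T_per_s   # energy sweep rate, J/s
    return 1.0 - np.exp(-np.pi * delta_J**2 / (2.0 * hbar * v))

def splitting_from_P(P, sweep_T_per_s):         # inverse of Eq. 2
    v = g * muB * abs(M - Mp) * sweep_T_per_s
    return np.sqrt(-2.0 * hbar * v * np.log(1.0 - P) / np.pi)

delta = 1e-7 * kB                               # a splitting of ~1e-7 K, in joules
for rate in (0.001, 0.01, 0.1, 1.0):            # field sweeping rates, T/s
    P = lz_probability(delta, rate)
    # inverting each "measured" P should return the same constant splitting:
    print(f"{rate:6.3f} T/s  P = {P:.2e}  Delta = {splitting_from_P(P, rate)/kB:.2e} K")
```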
All measurements so far were performed in the pure quantum regime (T < 0.36 K), where transitions via excited spin levels can be neglected. We now discuss the temperature region of small thermal activation (T < 0.7 K), where transitions via excited spin levels should be considered [23]. We make the Ansatz that only ground-state tunnelling (M = ±10) and transitions via the first excited spin levels (M = ±9) are relevant for temperatures slightly above 0.36 K. We will see that this Ansatz describes our experimental data well but, nevertheless, it would be important to work out a complete theory [25]. In order to measure the temperature dependence of the transition rate, we used the Landau-Zener method [17] as described above with a phenomenological modification of the transition rate P (for a negative saturated magnetisation): P = n_−10 P_−10,10 + P_th, where P_−10,10 is given by Eq. 2, n_−10 is the Boltzmann population of the M = −10 spin level, and P_th is the overall transition rate via excited spin levels. n_−10 ≈ 1 for the considered temperatures T < 0.7 K and a negative saturated magnetisation of the sample. Fig. 3 displays the measured transition rate P for stFe8 as a function of a transverse field H_x for several temperatures. The oscillations of P are seen for all temperatures, but the period of oscillation decreases with increasing temperature (Fig. 4). This behaviour can be explained by the giant spin model (Eq. 1) with higher-order transverse terms (H_2). Indeed, the tunnel splittings of excited spin levels oscillate as a function of H_x with decreasing periods (Fig. 1). Fig. 5 displays the transition rate via excited spin levels, P_th = P − n_−10 P_−10,10. Surprisingly, the periods of P_th are temperature independent in the region T < 0.7 K. This suggests that only transitions via the excited levels M = ±9 are important in this temperature regime. This statement is confirmed by the following estimation [26]; see also Ref. [25]. Using Eq. 2, typical field sweeping rates of 0.1 T/s, and tunnel splittings from Fig. 1, one easily finds that the Landau-Zener transition probability of excited levels is P_−M,M ≈ 1 for |M| < 10 and H ≈ 0. This means that the relaxation rates via excited levels are mainly governed by the lifetime of the excited levels and the time τ_res,M during which these levels are in resonance. The latter can be estimated from the corresponding tunnel splitting and the field sweeping rate. The probability per unit time for a spin to pass into the excited level M can be estimated by τ_M⁻¹ e^(−E_10,M/k_B T), where E_10,M is the energy gap between the levels 10 and M, and τ_M is the lifetime of the excited level M. Combining these factors, we obtain [26]

P_th ≈ Σ_{M=9,8} (τ_res,M / τ_M) e^(−E_10,M/k_B T).   (5)

Note that this estimation neglects higher excited levels with |M| < 8 [27]. Fig. 6 displays the measured P_th for the three isotopic Fe8 samples. For 0.4 K < T < 1 K we fitted Eq. 5 to the data, leaving only the level lifetimes τ_9 and τ_8 as adjustable parameters. All other parameters were calculated using Eq. 1 [28]. We obtain τ_9 = 1.0, 0.5, and 0.3 × 10⁻⁶ s, and τ_8 = 0.7, 0.5, and 0.4 × 10⁻⁷ s for DFe8, stFe8, and 57Fe8, respectively. This result justifies our Ansatz of considering only the first excited level for 0.4 K < T < 0.7 K. Indeed, the second term of the summation in Eq. 5 is negligible in this temperature interval. It is interesting to note that this finding is in contrast to hysteresis loop measurements on Mn12 [30], which suggested an abrupt transition between thermally assisted and pure quantum tunnelling [31]. Furthermore, our result clearly shows the influence of nuclear spins, which seem to decrease the level lifetimes τ_M, i.e. to increase dissipative effects. The nuclear magnetic moment, and not the mass of the nuclei, seems to have the major effect on the dynamics of the magnetization. In fact, the mass is increased in both isotopically modified samples, whereas the effect on the relaxation rate is opposite. On the other hand, ac susceptibility measurements at T > 1.5 K showed no clear difference between the three samples [32], suggesting that above this temperature, where the relaxation is predominantly due to spin-phonon coupling [23,24], the role of the nuclear spins is less important. Although the increased mass of the isotopes changes the spin-phonon coupling, this effect seems to be small. We can also exclude that the change of mass in the three isotopic samples has induced a significant change in the magnetic anisotropy of the clusters. In fact, the measurements below T < 0.35 K, where spin-phonon coupling is negligible, have shown that (i) the relative positions of the resonances as a function of the longitudinal field H_z are unchanged [33], and (ii) all three samples have the same period of oscillation of Δ as a function of the transverse field H_x [5], a period which is very sensitive to any change of the anisotropy constants. In conclusion, we presented detailed measurements based on the Landau-Zener method which demonstrated again that molecular magnets offer a unique opportunity to explore the quantum dynamics of a large but finite spin. We believe that a more sophisticated theory is needed which describes the dephasing effects of the environment.
D. Rovai and C. Sangregorio are acknowledged for help with sample preparation. We are indebted to J. Villain for many fruitful discussions. This work has been supported by DRET and Rhone-Alpe.
Fig. 3 — a, b, and c are defined in the inset. The dotted lines labelled a', b', and c' were taken from P_th of Fig. 5; see also [29].
Fig. 5 — Transverse field dependence of P_th, which is the difference between the measured tunnel probability P and the ground-state tunnel probability n_−10 P_−10,10 measured at T = 0.05 K (see Fig. 3). The field sweeping rate was 0.14 T/s. The long dotted lines indicate the minima of P_th, whereas the short dotted lines indicate the minima of P_−10,10.
Fig. 6 — Temperature dependence of P_th at H_x = 0 for the three Fe8 samples. The field sweeping rate was 0.14 T/s. The dotted lines are fits of the data using Eq. 5 [28].
Lessons learnt from prenatal exome sequencing
Abstract
Background: Prenatal exome sequencing (ES) for monogenic disorders in fetuses with structural anomalies increases diagnostic yield. In England there is a national trio ES service delivered from two laboratories. To minimise incidental findings and reduce the number of variants investigated, analysis uses a panel of 1205 genes in which pathogenic variants may cause abnormalities presenting prenatally. Here we review our laboratory's early experience developing and delivering ES, to identify challenges in interpretation and reporting and to inform service development.
Methods: A retrospective laboratory records review from 01.04.2020 to 31.05.2021.
Results: Twenty-four of 116 completed cases were identified as challenging, including 13 resulting in difficulties in analysis and reporting, nine where trio inheritance filtering would have missed the diagnosis, and two with no prenatal diagnosis: one due to inadequate pipeline sensitivity, the other because the gene was not on the panel. In two cases, copy number variants were identified that were not detectable by microarray.
Conclusions: Variant interpretation requires close communication between referring clinicians and the laboratory, with occasional additional examination of the fetus or parents and communication of evolving phenotypes. Inheritance filtering misses ∼5% of diagnoses. Panel analysis reduces but does not exclude incidental findings. Regular review of published literature is required to identify new reports that may aid classification.
• It demonstrates that solely relying on trio inheritance filtering will miss ∼5% of diagnoses.
• Close communication between scientists and referring clinicians is essential to identify evolving phenotypes.
• Regular review of published literature is required to identify new reports that may alter variant classification.
| INTRODUCTION
Traditionally, genetic analysis following the sonographic identification of fetal structural abnormalities involves karyotyping and/or microarray analysis of a sample obtained by an invasive procedure to detect aneuploidy and copy number variants (CNVs). With the decreasing cost of next generation sequencing (NGS), along with the development of rapid analytical pipelines, this has become an increasingly popular technique for rapid prenatal diagnosis. There are many hundreds of single-gene conditions that may present in the prenatal period with anomalies detectable by prenatal imaging. Thus, whole exome sequencing (WES) is an attractive and potentially efficient approach for the clinical identification of disease-causing variants, with diagnostic rates reported between 10% and 80%, 1-6 providing couples with a definitive diagnosis to aid decision-making and pregnancy management. Our laboratory developed rapid prenatal clinical exome sequencing, initially focussing on fetuses with likely skeletal dysplasia. 7 In this cohort we had a diagnostic yield of 86%, with turnaround times falling to around 2 weeks. Following this success, in April 2020 we broadened the inclusion criteria to include any fetus with structural anomalies detected on prenatal imaging suggestive of a monogenic disorder where a diagnosis would impact pregnancy or neonatal management. These eligibility criteria were based on those agreed by the national team for a national service that was subsequently implemented on 1st October 2020 in the English Genomic Medicine Service, and do not include amniotic fluid or placental anomalies. 8
Here we focus on discussing cases that have posed particular issues for reporting, to highlight challenges we encountered during the development and early months of the clinical service, and to discuss how these may be overcome. We also review how many diagnoses would have been missed if inheritance filtering alone had been applied in the analysis. Finally, we report on any diagnoses identified after birth. For all cases described here, trio whole exome sequencing (parents and fetus) was performed either after or concurrently with microarray testing, and sequencing was analysed with a panel of genes developed for conditions that present in the prenatal setting and can be seen by imaging. 9 A panel approach is utilised to minimise the detection of incidental findings; it also allows us to avoid relying solely on inheritance filtering to reduce the number of variants to be investigated. However, if incidental findings are identified that are actionable for the parents or fetus, these are discussed with the referring clinician and then reported, as demonstrated in cases C20-22 (Table 2). Parents are advised during pre-test counselling that such findings may arise occasionally. Here we also discuss how using inheritance filtering in the prenatal setting can miss fetal diagnoses. In this circumstance the analysis pipeline uses the assumption that both parents are unaffected, so if pipelines filter on inheritance patterns, inherited autosomal dominant variants will be filtered out of the dataset and therefore not identified. In the situation where a parent is thought to be affected with the disease, this information can be submitted to the pipeline to prevent an inheritance filter from removing any dominant variants carried by both the proband and the affected parent.
| METHODS
Collection of this data has the ethical approval of the GOSH clinical audit department (ref 2781). A summary of the clinical service and laboratory methodologies used is given below.
| Patients
Referral requests were accepted from Clinical Genetics centres where patients were deemed to meet the following eligibility criteria: fetus with multiple, multisystem, major structural or selected isolated abnormalities detected on fetal imaging, where multidisciplinary review (to include clinical genetics, tertiary fetal medicine specialists, clinical scientists and relevant paediatric specialists) considers a monogenic malformation disorder likely and a molecular diagnosis may influence pregnancy or early neonatal management in the index pregnancy. 8 In practice, the fetal medicine specialist discusses the case with the local genetics team; this can be by phone or face to face in specialist clinics. The local clinical geneticist then refers the case by email to the testing laboratory, where it is reviewed for eligibility by a team comprising clinical geneticists, clinical scientists and a fetal imaging expert. Usually the case is accepted, but where the request is queried or declined, further discussion with the local team occurs to determine eligibility and alternative testing approaches. Fetal ultrasound and, where available, MRI reports were assessed by local clinical geneticists with expertise in fetal dysmorphology to ensure the criteria were fulfilled. Cases were excluded if the reviewers thought the anomalies were unlikely to have a monogenic aetiology, where the result was unlikely to affect pregnancy management, or where the anomalies did not meet the eligibility criteria.
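To make the filtering logic concrete, here is a minimal sketch of a naive trio inheritance filter (our illustration, not the national pipeline's actual code; genotypes are coded as alternate-allele counts):

```python
# Naive trio inheritance filter: keeps a variant only if it fits an expected
# pattern under the assumption that both parents are unaffected.
def trio_filter(fetus, mother, father, parent_affected=False):
    de_novo_dominant = fetus >= 1 and mother == 0 and father == 0
    recessive = fetus == 2 and mother == 1 and father == 1
    # flagging a parent as affected rescues shared heterozygous variants
    inherited_dominant = parent_affected and fetus == 1 and (mother == 1 or father == 1)
    return de_novo_dominant or recessive or inherited_dominant

# A dominant pathogenic variant inherited from a mildly affected,
# apparently "unaffected" mother is silently dropped:
print(trio_filter(fetus=1, mother=1, father=0))                        # False - missed
print(trio_filter(fetus=1, mother=1, father=0, parent_affected=True))  # True  - kept
# A parent mosaic at ~6% is usually genotyped 0, so the variant survives
# as an apparent de novo - but the recurrence risk is then misjudged:
print(trio_filter(fetus=1, mother=0, father=0))                        # True
```

This is, in miniature, why the cases described later (mildly affected parents, parental mosaicism) slip past, or are misclassified by, inheritance-only analysis.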
Once testing was agreed, written informed consent was obtained and parental and fetal samples were collected. Rapid aneuploidy testing was carried out prior to prenatal exome sequencing to rule out the common aneuploidies and also to exclude significant maternal cell contamination. Microarray analysis was carried out in parallel with the exome sequencing due to the need for rapid testing.
| Laboratory methodology
Details of DNA extraction, exome sequencing, data analysis, gene panel and variant confirmation are given in online Appendix 1.
| Variant prioritisation and classification
Whilst recognising that parts of the exome are refractory to sequencing, 10 we perform whole exome sequencing, with analysis subsequently focussing on a panel of 1205 genes where there is deemed sufficient evidence for a prenatal phenotype detectable by imaging (see Genomics England PanelApp for more details 9 ). The contents of this panel are reviewed regularly and updated every 6 months. Variants were classified according to the guidelines set out by the Association of Clinical Genetic Science (ACGS), 11 which are based on the American College of Medical Genetics and Genomics (ACMG) guidelines. 12 Figure 1 shows the steps taken for variant prioritisation. All pathogenic/likely pathogenic variants are screened. If a pathogenic variant that explained the fetal phenotype was found during the initial prioritisation steps, it was discussed with the referring clinical geneticist and, if explanatory of the fetal phenotype, the remaining variants were not analysed. If no pathogenic variant was identified, all variants identified by the pipeline were investigated and classified. In more complex cases, further multidisciplinary discussion with clinical scientists and clinical geneticists occurred. In the majority of cases, only pathogenic and likely pathogenic variants explaining the fetal phenotype were reported. In some cases, variants of uncertain significance (VUS) that might explain the phenotype and required only a single piece of evidence to be upgraded were taken to the multidisciplinary team for discussion, and on occasion further examination of the fetus and/or parents was required. In some cases this allowed upgrading to pathogenic or likely pathogenic; in others, if a VUS had been discussed, it was included on the report to highlight the need for further prenatal surveillance and postnatal investigations if appropriate. All reported variants were confirmed by Sanger sequencing prior to reporting using standard methodology (see online Appendix 1). CNVs were confirmed by quantitative real-time PCR or multiplex ligation probe amplification using conventional methods.
| Selection of cases that have posed challenges
We reviewed our laboratory database to identify those cases that had required complex multidisciplinary team discussion or further examination of the fetus or parents prior to issuing a final report. We also identified those cases that would not have been diagnosed if inheritance filtering had been applied during analysis. Finally, we identified any case where prenatal sequencing did not identify a causative pathogenic/likely pathogenic (P/LP) variant but a molecular diagnosis was subsequently made after birth (Table 1).
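As a compact recap of the prioritisation order described in the Methods above, a hedged sketch (the field names and schema are our assumptions, not the pipeline's):

```python
# Report a phenotype-fitting P/LP variant straight away; otherwise classify
# everything and take "hot" VUS (one evidence item short of an upgrade) to MDT.
def prioritise(variants):
    for v in variants:
        if v["classification"] in ("pathogenic", "likely pathogenic") and v["fits_phenotype"]:
            return ("report", [v])             # remaining variants not analysed
    hot_vus = [v for v in variants if v["classification"] == "VUS" and v["hot"]]
    return ("mdt_review", hot_vus)

variants = [
    {"id": "var1", "classification": "VUS", "fits_phenotype": True, "hot": True},
    {"id": "var2", "classification": "likely benign", "fits_phenotype": False, "hot": False},
]
print(prioritise(variants))                    # ('mdt_review', [{'id': 'var1', ...}])
```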
| RESULTS
We identified 24 (20.9%) challenging cases among 113 sequenced as a trio and two as a duo (C19 and C22), including 13 (11.3%) raising issues in analysis or reporting, nine (7.8%) where trio inheritance filtering would have missed the diagnosis, and two (1.7%) where the diagnosis was missed by our pipeline (Table 1). In 16 of these cases pathogenic variants were ultimately reported, with 13 consistent with a diagnosis that explained (or partially explained) the phenotype (Table 3).
| Challenges in variant interpretation
In three cases (C1, C2 and C3) variants were initially reported as VUS but were upgraded to pathogenic following further information: on the variant from another diagnostic laboratory via its ClinVar entry (C2), the phenotype evolving at a later-gestation scan to align more closely with published reports (C1), and new data published in the literature (C3). A pathogenic variant was reported in one fetus (C4) after multidisciplinary team (MDT) discussion even though the prenatal phenotype only partially matched the recognised phenotype. This case was an early diagnosis of a fetal cardiac anomaly initially presenting as increased nuchal translucency (NT), and the early gestation was thought to preclude detection of other potential associated anomalies. Postnatal examination, however, allowed better characterisation of the cardiac anomaly and revealed additional features consistent with the prenatally reported pathogenic variant (Table 2). In three cases variants were reported as VUS with recommendations for postnatal follow-up (C17, C18 and C19). In one case (C17) the variant arose de novo in the fetus; in another (C18) it was maternally inherited, but known variability in expression did not allow definitive reporting as pathogenic. In the third case (C19) inheritance could not be determined as a paternal sample was unavailable.
| Variants identified in parental DNA for autosomal dominant or recessive conditions that would have been missed by inheritance filtering
In six cases autosomal dominant (AD) conditions were identified where an apparently "unaffected" parent was also found to have the pathogenic variant (Table 2): three cases where the parent was heterozygous for the variant (C5, C7 and C8), and two cases (C9 and C10) where maternal somatic mosaicism was identified at levels of ∼6% and ∼28%, respectively. In C10 the fetus had a lethal phenotype inherited from the mother, who was subsequently found to have mild features of the condition. In the sixth case (C6) there were two diagnoses: a maternally inherited pathogenic variant and a de novo pathogenic variant (Table 2). There were three cases where a single pathogenic variant in a recessive gene compatible with the phenotype had been inherited from one parent, but no second P/LP variant was identified. In these cases the variants were reported, and examination after birth confirmed the diagnosis in one case, but confirmation was not obtained in the other two (Table 2).
| Non-paternity
Analysis of case C16 identified no inherited paternal variants. After further testing and multidisciplinary team discussion this was determined to be due to non-paternity. This was discussed with the referring clinician, who confirmed the situation with the mother.
| Copy number variants
Two multi-exon pathogenic deletions were detected by our analysis pipeline; neither was detectable by microarray (see Discussion).
| Diagnoses missed by our pipeline
In one fetus presenting prenatally with a clover-leaf skull, among other anomalies, no causative variant was identified by our pipeline; this case and a second missed diagnosis are detailed in the Discussion.
| DISCUSSION
Our review has shown that the proportion of cases posing challenges makes this a complex service to run, one that clearly requires multidisciplinary team working, especially under the time pressures required for rapid delivery of results to inform pregnancy management.
| Dangers of solely relying on inheritance filtering
Trio inheritance filtering is performed to reduce the number of variants requiring analysis and so expedite reporting. This analysis relies on the clinical status of the parents being unaffected and will only retain variants of interest if they fit the required inheritance pattern: de novo in the fetus for an AD condition, compound heterozygous or homozygous in the fetus for an autosomal recessive (AR) condition, and hemizygous for X-linked conditions with the mother a carrier (or de novo). In our cohort, relying on this approach for analysis would have missed P/LP variants in nine cases (7.8%) (Table 2): three AD conditions inherited from mildly affected/"unaffected" parents, two due to somatic mosaicism in the parents, and three single pathogenic variants reported in a recessive gene compatible with the phenotype. In the ninth case two pathogenic variants were identified (C6), one de novo and one inherited. If inheritance filtering alone had been used in this case, the inherited diagnosis, with the higher recurrence risk, would have been missed. There have been other cases with a dual diagnosis reported in the literature in prenatal exome cohorts, 14 and therefore a potential second diagnosis should be considered in any analysis pipeline. Imprinted genes also need to be considered. We have not encountered this yet, but there is a report of a MAGEL2 pathogenic variant inherited from a father who is unaffected because the pathogenic variant lies on his maternally inherited allele. 15 Many AD conditions have variable expressivity and age of onset, and parents may often be unaware of carrier status, as with some of our cases (Table 2) and others reported in the literature. 1,2,7,16-19 Further, referrals for fetal exome sequencing come from fetal medicine units, often with limited input from clinical geneticists, and so examination of the parents and ascertainment of the family history may be more limited. Indeed, the father may often not be present at the time of the scan. The occasional identification of parental carrier status is something that should be included in parental pre-test counselling. Parental mosaicism must also be considered when applying filters to exome datasets, as seen in two of our cases (C9 and C10, Table 2). In one case a heterozygous COL1A2 variant was identified that was initially thought to have arisen de novo in the fetus. Parental sequencing reads were inspected at the site of the variant, as per laboratory policy, and the variant was seen in about 6% of reads from the unaffected mother, too low to be called by our variant-calling pipeline.
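A minimal sketch of the kind of manual read-level check just described (our illustration; the error rate and thresholds are assumptions, not laboratory policy):

```python
# Check parental reads at the site of an apparently de novo variant: a
# variant allele fraction well above the sequencing error rate but below
# the caller's ~10% sensitivity threshold suggests parental mosaicism.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

alt_reads, total_reads = 6, 100     # ~6% VAF, as in case C9
error_rate = 0.005                  # assumed per-base sequencing error rate
vaf = alt_reads / total_reads
p_noise = binom_sf(alt_reads, total_reads, error_rate)
if vaf < 0.10 and p_noise < 1e-3:
    print(f"VAF {vaf:.1%}: below the caller's ~10% threshold but unlikely "
          f"to be noise (p = {p_noise:.1e}) - consider parental mosaicism")
```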
| Challenges in variant interpretation
In our laboratory we aim to report only P/LP variants consistent with the fetal phenotype, but VUSs may be reported when they are considered "hot" class 3s (requiring only a single piece of evidence to be upgraded, such as phenotypic fit or publication in the literature) and reporting is agreed at multidisciplinary team meetings. This is in agreement with others. 25 Phenotypes seen prenatally are often incomplete, 1,26 or evolve as pregnancy progresses. In our series this was illustrated by two cases (C14 and C1).
| Incidental findings
In England the current policy for genome sequencing, both pre- and postnatally, is to minimise the identification of incidental findings by using a panel approach to analysis. However, in three of our cases variants were reported in parents but not the fetus, as they may have implications for a parent's health (C20) or risks for future pregnancies (C21 and C22) (Table 3).
| Ethical issues
There was one case of non-paternity in our cohort (C16; see Results).
| Microarrays should be run in parallel
Microarray is still the gold standard for detecting copy number variants in fetuses with structural anomalies, and we perform ES concurrently with microarray. The resolution of microarray depends on the probe capture used; in our laboratory this is 300 kb. In our cohort, we identified two pathogenic multi-exon deletions that were not detected by microarray due to the size of the deletions and the lack of probes in the region. In addition, we were able to accurately map the breakpoints, which is not achievable using microarray, thus allowing a more accurate classification of the deletions. These cases further demonstrate the utility of short-read sequencing to detect CNVs, as reported by others, 31,32 but microarray should still be performed, as CNV detection by ES is limited to exons or regions in close proximity to exons. This may change if whole genome sequencing is performed.
| Diagnoses missed by our analysis approach
In two fetuses, no LP/P variants were identified by our pipeline, but postnatal clinical examination raised suspicion of a genetic condition. In the first case, the diagnosis of Curry-Jones syndrome was missed due to low-level (6%) somatic mosaicism of the pathogenic variant. This is below the known sensitivity of our pipeline, which is 10%. This raises two issues. The first is that it is important to be aware of the limitations of the pipeline and to clearly state sensitivity settings on the report. The second is that mosaicism detected from the analysis of amniocytes may be difficult to interpret prenatally in the absence of a clinical phenotype. The second case had a RAC3 pathogenic variant, but at the time of reporting this gene was not included on the fetal panel in PanelApp, 9 as the phenotype for pathogenic variants in this gene had only recently been described both postnatally 33 and prenatally, 34 and it had therefore not yet been reviewed for addition to the panel. This demonstrates the limitations of a panel-based analysis, the importance of keeping abreast of current literature to regularly update panels applied to ES data, and the value of data sharing. Additionally, the panel version should be clearly stated on the report, so the referrer is able to assess which genes have been analysed. Finally, we recognise that in using a panel approach to analyse the whole exome sequencing we will not identify novel genes, but this approach has been taken to enable a rapid turnaround time with equity of access to testing across the whole country. Having the whole exome available can enable further investigation over time, but not in the time course to influence pregnancy management.
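As an aside on the microarray comparison above, exon-level read-depth analysis is what lets ES pick up multi-exon deletions below array resolution. A minimal sketch (our illustration, with made-up depths and an assumed 0.6 ratio threshold):

```python
# A run of exons at ~half the expected depth flags a candidate
# heterozygous multi-exon deletion.
sample_depth  = {"ex1": 102, "ex2": 55, "ex3": 48, "ex4": 98}    # proband
control_depth = {"ex1": 100, "ex2": 101, "ex3": 99, "ex4": 97}   # batch median

deleted = [exon for exon, d in sample_depth.items()
           if d / control_depth[exon] < 0.6]     # ~0.5 expected for het loss
print("candidate heterozygous deletion spanning:", deleted)      # ['ex2', 'ex3']
```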
| CONCLUSIONS
In conclusion, solely applying inheritance filtering will potentially miss a significant proportion of pathogenic variants. The panel approach to analysing the whole exome reduces but does not eliminate incidental findings, and precludes the identification of novel genes. As the prenatal phenotype is often incomplete or evolving, close communication between referring clinicians and clinical scientists is required for the interpretation of sequence data, with additional detailed examination of the fetus or parents needed in some cases. Finally, close attention to the published literature is required to identify new reports that may aid classification and also to identify new genes for addition to panels so that they stay up to date. Parents and health professionals should also be aware that testing is complex, that further examination of the fetus, parents or neonate may be required to reach a diagnosis, and that this may sometimes have implications for the parents' own health.
Feasibility of Rectal Stent Development for Fecal Diversion: A Porcine Experiment
Objective: Since low rectal anastomosis leakage may cause severe morbidity, surgeons create a diversion stoma to prevent complications. However, a stoma requires additional surgery with its own morbidity. A rectal stent may therefore help prevent these problems. This preliminary report details the development of a new rectal stent in an animal experiment. Thirteen female 12-week-old pigs weighing 30–35 kg each (four in the control group, nine in the experimental group) were included. Under general anesthesia, pigs underwent laparoscopic low anterior resection. In the experimental group, a Niti-S fully covered stent (Taewoong Medical Inc.) was inserted over a guidewire under direct laparoscopic vision and affixed near the anus. All pigs were sacrificed for autopsy. A 10 cm length of bowel including the anastomosis line was obtained, and water-air leak and barium leakage X-ray tests were performed to confirm anastomosis integrity. The 36-mm-diameter stents were all removed naturally, even in the last three subjects in which they were affixed with intra-abdominal stitches. Despite the natural stent removal, there were only two cases of intraoperative leakage, and no gross anastomosis leakage was seen on the water-air leak or barium leakage X-ray tests, suggesting that the stent is safe for the bowel mucosa and anastomosis site despite insertion and detachment. To overcome rectal pressure and fecal bulk, rectal stent development requires further investigation.
Introduction
Low anterior resection is regarded as the globally recognized standard treatment in rectal cancer surgery. For surgeons, one of the most worrisome complications is anastomosis leakage. A short rectal stump makes the risk of anastomosis leakage high. (1) For rectal anastomosis, the double stapled anastomosis method is used worldwide. (2) Several methods are available to confirm anastomosis integrity, including the air leakage test and direct visualization during colonoscopy. (3,4) However, these methods do not guarantee high anastomosis quality. To compensate for this deficiency, surgeons often create a diversion stoma to prevent severe complications. This procedure has been particularly important in patients with a history of preoperative chemoradiotherapy, males with a narrow pelvis, and patients with a high body mass index. (5) However, the diverting stoma may not assist the healing of an anastomosis leakage or prevent other severe complications such as sepsis. (6) Stoma repair is also likely to be required in the near future and may carry further risks of surgery-related morbidity. (7) Therefore, a rectal stent may be an alternative option for preventing low rectal anastomosis disruption. The primary purpose of a rectal stent is to protect against fecal contamination and decrease intraluminal pressure, and it is expected to substitute for a stoma. The rectal stent functions to reinforce the expanding bowel wall and maintain intraluminal space. A new rectal stent was developed in consideration of these factors, particularly the severe complications related to anastomosis leakage. This report details the animal experiment that examines the efficacy and safety of the newly developed rectal stent.
New Stent Development
For the use of rectal stents, there are some vital issues to consider.
A rectal stent's luminal patency may be well maintained, but the stent can induce pain and migrate. (8) It is therefore important to prevent migration after placement. Rectal stents should also be removed easily and safely after use. To solve these issues, several different rectal stent prototypes were developed after extensive discussion. Figure 1 shows the evolution of our invented rectal stents. The very first rectal stents were made of partially covering silicone with a straight shape. Later, in order to increase smoothness and facilitate removal, the stents were made using polytetrafluoroethylene (PTFE) instead of silicone, and the proportion of stent coverage was increased. PTFE and silicone are well known to provide good stent coverage and support the stent functions mentioned above. (9,10) Finally, the stents were fully covered with PTFE and the proximal portion was changed to a funnel shape. The funnel-shaped proximal portion allows more feces to drain through and was expected to protect the anastomosis better than the straight shape, since the space between the stent and the bowel wall could be further blocked. The diameter of the distal part of the stent was also slightly reduced to avoid irritation near the anus.
Preoperative Procedure
This study was approved by the Animal Institutional Review Board of Yonsei University (2015-0051). A total of thirteen 12-week-old female pigs were used as subjects, divided into four in the control group and nine in the experimental group. Each pig weighed approximately 30–35 kg. The facility follows the "Guide for the Care and Use of Laboratory Animals" of the National Research Council. After arriving at the animal facility, the pigs were allowed to acclimate for three to four days, and the colon was prepared using 4 liters of Colyte with electrolyte-rich fluid one day before surgery.
Operative Procedure
Under general anesthesia, each subject was placed in the supine position, sterilely draped, and a Foley catheter was placed. A mid-abdominal incision was made for the camera port and CO2 insufflation was performed. 5- and 12-mm working ports were placed in the middle and lower abdominal quadrants. After identification of the colon, the usual low anterior resection was performed. All but the first subject underwent surgery without disruption of the mesenteric vessels, since pigs have a single-vessel, non-collateralized posterior rectal artery blood supply to the rectum. (11) After reaching the deep pelvis, a 45-mm Endostapler (Covidien Inc.) was used for bowel resection. A mini-laparotomy was performed to insert an anvil (EEA, 25 mm; Covidien Inc.) into the end of the resected proximal colon. The double stapling method was used and a water-air leakage test was performed. The anastomosis was made 10 cm above the anal verge. In the experimental pigs, a Niti-S rectal fully covered stent (Taewoong Medical Inc.) was inserted using a guidewire and unfolded above the anastomosis under direct laparoscopic vision. The stent was affixed near the anus with several Vicryl 3-0 stitches. For the last three subjects, a full-thickness intracorporeal suture was additionally placed above the anastomosis where the unfolded stent was exposed.
Postoperative Procedure
The subjects received an intramuscular injection of antibiotics (amoxicillin-clavulanate, 14 mg/kg) on the first postoperative day, while an oral analgesic (meloxicam, 0.2 mg/kg) was mixed with the diet until the seventh postoperative day. Colyte was mixed with the pig chow until the completion of the experiment. Upon completion of the experiment, a 10-cm length of bowel including the anastomosis line was removed, and water-air leak and barium leakage X-ray tests were performed to confirm anastomosis integrity (Fig. 2).
Table 1 shows the results for the four control subjects (*LAR: low anterior resection). Initially, three subjects were assigned to the control group; however, one subject, porcine no. 10, had excessive feces in the rectum that prevented stent insertion, so porcine no. 10 was reassigned to the control group. All subjects were planned to be sacrificed after Postoperative Day 7. 2 L of Colyte was mixed into the diet to soften the feces. Despite the use of Colyte, the fecal volume was still large and the stools were firmer than expected, because the subjects' meal consumption could not be controlled throughout the experiment. Thus, natural stent removal occurred within no more than six days. Finally, the proximal site of the stent was affixed to the bowel during surgery: the stents in porcine no. 11 and 12 were affixed with a single intracorporeal laparoscopic stitch, while the stent in porcine no. 13 was affixed with three intracorporeal laparoscopic stitches (Fig. 2). Even with a single intracorporeal laparoscopic stitch, the stent was removed naturally by Postoperative Day 5 in porcine no. 11 and 12. Porcine no. 13 also showed natural stent removal, on Postoperative Day 6.
Anastomosis Leakage
Amongst the four subjects in the control group, one subject showed anastomosis leakage with a perirectal abscess, attributable to lack of experience. No leakage was seen in the other three subjects on the water-air leak or barium X-ray tests (Table 1). Amongst the nine subjects in the experimental group, two subjects (porcine no. 6 and 8) showed anastomosis leakage on the intraoperative water-air leakage test. However, after rectal stent insertion, the anastomosis leakage site was fully covered by the stent and no leakage was seen on the subsequent water-air leakage test. The stent was removed naturally on Postoperative Day 5 in porcine no. 6. Porcine no. 8 died of sepsis on Postoperative Day 6. The sepsis appears to have originated from the leakage site, as severe ischemic colitis was verified in the rectum. The leakage site was confirmed after specimens were obtained from both porcine no. 6 and 8. However, no other subjects in the experimental group showed leakage on the water-air leak or barium leakage X-ray tests.
Discussion
There have been several attempts by scientists and colorectal surgeons to protect rectal anastomoses. Ravo and Ger proposed using an intracolonic bypass tube to protect against rectal anastomosis leakage. (12) Ros tried several drainable tubes in colonic anastomoses in rats and found that an intraluminal drainable tube improved survival compared to the control group. (13) A sterilized condom has also been used to protect the rectal anastomosis. (14) All of these trials aimed to prevent fecal contamination and decrease fecal loading, which may contribute to disruption of the intestinal anastomosis.
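For reference, the weight-based dose arithmetic implied by the postoperative protocol above (a trivial sketch; the two weights simply bracket the stated 30–35 kg range):

```python
# Per-animal doses implied by the postoperative protocol described above.
for weight_kg in (30, 35):                 # stated weight range of the pigs
    amox_clav_mg = 14 * weight_kg          # amoxicillin-clavulanate, 14 mg/kg IM
    meloxicam_mg = 0.2 * weight_kg         # meloxicam, 0.2 mg/kg PO daily to day 7
    print(f"{weight_kg} kg pig: {amox_clav_mg} mg amox-clav IM, "
          f"{meloxicam_mg:.0f} mg meloxicam/day")
```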
Also, by using various types of intracolonic bypass materials, they attempted to decrease the intraluminal pressure. (9, 10) A transanal tube showed decreased rectal pressure up to Postoperative Day 5. (15) A recent multi-center randomized trial from the Netherlands used a biodegradable drain called a C-seal, affixed to the anastomosis line by a circular stapler. (16) However, the results were unsatisfactory, as 10% of cases demonstrated anastomosis leakage, 7.7% required re-intervention, and 5% of controls demonstrated anastomosis leakage. Problems related to the C-seal included detachment from the anvil, difficult stapler removal, and anal pain. Learning from these previous studies, it was essential for us to devise a simplified method to protect the anastomosis without the use of a diverting stoma. We therefore went back to the beginning to create a concise intraluminal drainage tube. Among several problems, we found that preventing rectal stent migration was the most important. In practice, even in cases of stent placement to relieve obstruction in rectal cancer, stent migration is one of the most common complications. (17) In our study, the stent was maintained for no more than six days, despite its maximal width of 36 mm and a laparoscopic intracorporeal stitch between the proximal stent and the bowel. Also, to reduce anal pressure, a diet containing Colyte was used to induce the defecation of loose stool. These efforts were not as effective as we had anticipated, as the subjects could not consume the required dose of Colyte the way human patients could. However, the subjects chosen were still considered the best animals to use in this experiment since, unlike some other mammals, they cannot remove the stent themselves. All of the experimental subjects were sacrificed by Postoperative Day 7. Considering the inconvenience of placing Rectal Stents in patients, and since rectal anastomosis collagen density is highest at one week postoperative, (18) a seven-day period appears to be the minimum requirement. However, in our experiment all Rectal Stents were removed naturally within seven days. Despite this natural stent removal, there were no cases of gross anastomosis leakage on the water-air leak or barium leakage x-ray tests. For porcine no. 11-13, we did not expect to find any gross leakage since the subjects' stents were fixed with intracorporeal stitches. These results suggest that the stent is safe for the bowel mucosa and the anastomosis site despite insertion and detachment. Only two subjects demonstrated gross anastomosis leakage during and after surgery; however, neither demonstrated leakage on either leakage test. Although our experiment encountered difficulties due to natural stent removal, proper stent placement may still be achievable in humans. To our knowledge, no trials so far have added perianal fixation of Rectal Stents to the experiment. The Rectal Stent developed by our team has a smooth vinyl component at the end, sufficiently long for affixing to the buttock area. The narrow stent end diameter will also minimize perianal irritation. The vinyl is also strong enough not to tear off easily; although all stents in this study were removed naturally, this was not due to failure of the vinyl but to fecal pressure.

Limitation
Even with early natural stent removal, observing the stent's effect on the anastomosis was a challenging experiment. The best way to maintain the stent would be to make the subjects fast consistently.
Unfortunately, the chosen subjects cannot survive while maintaining a fasting regimen, whereas human patients can tolerate fasting regimens with the help of parenteral nutrition. In this study it was not possible to control the bowel movements of the gilts, but this may be different in patients. Although this study produced insufficient results due to the early natural removal of the Rectal Stents, it might have produced significant outcomes if the subjects' diet and fasting regimens could have been controlled.

History of Stent Evolution
1. Silicone-covered stent: the initial stent, partially covered with silicone and straight in form.
2. Partially PTFE-covered stent: to allow easier removal, the covered portion was increased and the material was changed to PTFE.
3. Fully PTFE-covered stent: the final version, fully covered with PTFE and funnel-shaped, to allow easy removal and to protect the anastomosis effectively.
Digital multiple health behaviour change intervention targeting online help seekers: protocol for the COACH randomised factorial trial
Digital multiple health behaviour change intervention targeting online help seekers: protocol for the COACH randomised factorial trial

Abstract
Introduction: Unhealthy lifestyle behaviours continue to be highly prevalent, including alcohol consumption, unhealthy diets, insufficient physical activity and smoking. There is a lack of effective interventions which have a large enough reach into the community to improve public health. Additionally, the common co-occurrence of multiple unhealthy behaviours demands investigation of efforts which address more than single behaviours.
Methods and analysis: The effects of six components of a novel digital multiple health behaviour change intervention on alcohol consumption, diet, physical activity and smoking (coprimary outcomes) will be estimated in a factorial randomised trial. The components are designed to facilitate behaviour change, for example, through goal setting or increasing motivation, and are either present or absent depending on allocation (ie, six factors with two levels each). The study population will be those seeking help online, recruited through search engines, social media and lifestyle-related websites. Included will be those who are at least 18 years of age and have at least one unhealthy behaviour. An adaptive design will be used to periodically make decisions to continue or stop recruitment, with simulations suggesting a final sample size between 1500 and 2500 participants. Multilevel regression models will be used to analyse behavioural outcomes collected at 2 months and 4 months postrandomisation.
Ethics and dissemination: Approved by the Swedish Ethical Review Authority on 2021-08-11 (Dnr 2021-02855). Since participation is likely motivated by gaining access to novel support, the main concern is demotivation and opportunity cost if the intervention is found to only exert small effects. Recruitment began on 19 October 2021, with an anticipated recruitment period of 12 months.
Trial registration number: ISRCTN16420548.
INTRODUCTION
Behavioural risk factors, such as harmful alcohol consumption, unhealthy diets, insufficient physical activity and smoking, contribute to about one-third of global disability-adjusted life-years, and are leading causes of non-communicable diseases (NCDs), including cardiovascular disease, respiratory disease, cancer and diabetes. 1 2 The WHO has determined that reducing the prevalence of behavioural risk factors should be a priority in many societies to reduce the incidence of NCDs and disability-adjusted life-years. 3 It is therefore important that effective and scalable means of helping individuals to improve their health behaviours are established. The Public Health Agency of Sweden's national public health survey from 2020 4 (n=16 947) reports data on the lifestyle behaviours of Swedish citizens. According to the survey, 16% of respondents report hazardous or harmful alcohol consumption, 35% report being insufficiently physically active, 12% report smoking occasionally or daily and 93% report eating less fruit and vegetables than recommended. Additionally, 52% of individuals report being obese or overweight. Unfortunately, with the exception of smoking, the prevalence rates of these behaviours have not decreased markedly over the past 10 years, with some increasing, testifying to a lost decade for prevention efforts.

STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ Pragmatic recruitment of individuals seeking help online to a factorial trial allows for dismantling of the effectiveness of the components which make up a digital multiple health behaviour change intervention.
⇒ An adaptive trial design reduces the risk of under-recruitment and over-recruitment of participants.
⇒ Despite double-blind procedures, research participation effects may affect self-reported outcomes and introduce bias.
⇒ Single face-valid items used to measure mediators reduce participant burden but may limit the interpretation of findings.

For prevention efforts to have an impact on the general population, they need to have extensive reach among those who may benefit. No single setting will be able to achieve this; for example, only 1%-5% of individuals visiting primary healthcare clinics in Sweden are given advice with respect to their lifestyle, 5 despite many more being in need of such advice. Unhealthy lifestyle behaviours also tend to cluster and interact, 6 7 for example, those who are overweight are more likely to be physically inactive, and excessive alcohol consumption may lead to weight gain. Risks from multiple unhealthy lifestyle behaviours may be multiplicative 8 ; thus, it is of value not only to extend the reach of interventions, but also to investigate tools designed to support change of multiple health behaviours. One way of reaching further into the community with a multiple health behaviour change intervention is to offer digital support tools to those searching online for help. This is especially promising in Sweden, since the internet is used daily by approximately 90% of the population, and the same proportion use smartphones on a regular basis.
9 10 A recent effectiveness trial of a digital alcohol intervention among online help-seekers in Sweden found evidence of positive effects on alcohol consumption, 11 but also that only 13.5% of study participants turned off the support, which indicates that receiving support for behaviour change through digital means is an acceptable method for many. Studies evaluating digital interventions addressing multiple health behaviours have also shown promising results. [12][13][14][15] However, the evidence for these types of interventions in more general populations is lacking, as the majority of studies have been conducted among university students, employees within specific fields, or patients with specific health conditions. In addition, behaviour interventions often consist of several components or modules, yet are commonly evaluated as a whole, 16 leaving a paucity of evidence for the effects of the dismantled components. Increasing our understanding of the effects at the component level, in particular with respect to multiple behaviours, may help move the field of behaviour interventions forward.

Objectives
This study aims to estimate the effects of the components of a digital intervention on multiple health behaviours (alcohol, physical activity, diet and smoking) among individuals seeking help online. The objectives of the study include: detecting interactions among health behaviour changes, for example, whether those who stop smoking also reduce their alcohol consumption, and the degree to which this is moderated by the components of the intervention.

METHODS
A double-blind factorial randomised trial 17 (six factors with two levels each) will be employed to address the objectives of the study. A Bayesian group sequential design will be employed to periodically make decisions to continue or stop recruitment. [18][19][20] This protocol contains relevant items from the Standard Protocol Items: Recommendations for Interventional Trials. 21 The methods of this trial, including the statistical analysis plan, were preregistered on the Open Science Framework before enrolment commenced (https://osf.io/xyj3p/).

Study setting, recruitment and eligibility
We will recruit individuals seeking information about health and behaviour change by advertising on Google, Bing and Facebook (restricted to Sweden), as well as on websites which focus on lifestyle and behaviour change (eg, livsstilsanalys.se). Individuals exposed to the advert will be advised to sign up to the study by sending a text message with a specific code to a dedicated phone number. Within 10 min, individuals will receive a text message with a hyperlink that takes them to a web page with informed consent materials. Consent will be given by clicking on a button at the bottom of the page. All individuals giving informed consent will be asked to complete a baseline questionnaire, which will also assess eligibility for the trial (please see online supplemental appendix A). Individuals will be included in the trial if they fulfil at least one of five conditions (a sketch of this screening logic follows below):
► Weekly alcohol consumption: Consumed 10/15 (female/male) or more standard drinks of alcohol the past week. In Sweden, a standard drink of alcohol is defined as 12 grams of pure alcohol.
► Heavy episodic drinking: Consumed 4/5 (female/male) or more standard drinks of alcohol on a single occasion at least once the past month.
► Fruit and vegetables: Consumed less than 500 g of fruit and vegetables on average per day the past week.
► Moderate-to-vigorous physical activity (MVPA): Spent less than 150 min on MVPA the past week.
► Smoking: Smoked at least one cigarette the past week.
Individuals will be explicitly excluded if they do not fulfil any of the criteria or if they are less than 18 years of age. The trial information and intervention will be entirely in Swedish and delivered to participants' mobile phones; thus, not comprehending Swedish well enough to sign up, or not having access to a mobile phone, will implicitly exclude individuals.
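To make the screening rule concrete, here is a minimal sketch in Python of the eligibility logic described above; the Screening structure and its field names are illustrative assumptions, not the trial's actual codebase.

```python
# A minimal sketch of the baseline eligibility screen. The Screening fields are
# illustrative placeholders, not the trial's actual variable names.
from dataclasses import dataclass

@dataclass
class Screening:
    age: int
    sex: str                     # "female" or "male"
    drinks_last_week: int        # standard drinks (12 g pure alcohol each)
    hed_last_month: int          # occasions with 4/5+ (female/male) drinks
    fruit_veg_g_per_day: float   # average grams per day, past week
    mvpa_min_last_week: int      # moderate + vigorous minutes
    cigarettes_last_week: int

def is_eligible(s: Screening) -> bool:
    if s.age < 18:
        return False                          # explicit age exclusion
    drink_cut = 10 if s.sex == "female" else 15
    criteria = [
        s.drinks_last_week >= drink_cut,      # weekly alcohol consumption
        s.hed_last_month >= 1,                # heavy episodic drinking
        s.fruit_veg_g_per_day < 500,          # fruit and vegetables
        s.mvpa_min_last_week < 150,           # insufficient MVPA
        s.cigarettes_last_week >= 1,          # smoking
    ]
    return any(criteria)                      # at least one unhealthy behaviour

print(is_eligible(Screening(30, "male", 2, 0, 600.0, 200, 0)))  # False: all healthy
```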
Interventions
The digital intervention, which is called Coach, consists of six components which users access using their mobile phones, based on an intervention design we have used previously. 22 23 The intervention is designed around social cognitive theories of behaviour change, with a focus on modifying environment, intention and skills. 24 25 The intervention's components are intended to be used as a toolbox, allowing users to choose which parts of the intervention to interact with and to tailor the support to their needs. Participants eligible for the trial will be allocated to one of 64 factorial conditions, each condition representing a unique combination of the six components, which are either present or absent (2⁶ = 64 conditions). The intervention materials can be accessed at participants' discretion over a 4-month period, and each Sunday afternoon participants will receive a text message with a link and a reminder to access the intervention materials. A summary of the components is presented in table 1, and a detailed description of the six components is available in online supplemental appendix B.

The first component is a weekly check-in with feedback: every Sunday afternoon, participants will receive a text message with a hyperlink which takes them to a questionnaire regarding their current health behaviours. Once complete, feedback on their current behaviour is given in relation to national guidelines. Thereafter users are given access to the rest of the components (depending on allocation). When absent, participants will not be shown the questionnaire but instead only national guidelines without personal feedback.

Goal setting and planning: This component lets participants set a goal for their future behaviour and plan for what to do when they struggle and when they succeed. Participants can also accept challenges for the coming week, for example, to walk for 15 min each day, or to not drink any alcohol this week. Self-composed challenges are also available. Reminders about their goals and challenges are sent to participants via text throughout the week. When absent, this component will not be visible.

Motivation: This component contains information and tools to increase participants' motivation for change. This includes information on negative health consequences, costs induced by certain behaviours, and reflective tasks. If participants choose, they can also activate motivational text messages which are sent to them throughout the week. When absent, this component will not be visible, and text messages will not be available.

Skills and know-how: Concrete tips on how to initiate and maintain change in everyday life are offered in this component. This includes giving participants strategies they can use to say no to alcoholic beverages at parties, how to increase the nutritional value of their breakfast, etc. If participants choose, they can also activate text messages with tips sent to them throughout the week. When absent, this component will not be visible, and text messages will not be available.

Mindfulness: This component aims to increase users' awareness of their own lived experience and strengthen their capacity for a non-reactive, compassionate and less stressful way of being in the world. Mindfulness exercises are offered to participants, including guided meditations. When absent, this component will not be visible, and guided meditations will not be available.

Self-composed text messages: Participants are given the opportunity to compose messages and have them sent to themselves throughout the week (on days and at times of their own choosing). A participant may for instance write a message reminding themselves to eat two fruits each day, to not drink anything on Wednesdays, or to go for a walk with a friend. When absent, this component will not be visible.
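Since each condition is a unique present/absent pattern over the six factors, it can be represented as a 6-bit integer. The Python sketch below illustrates the mapping; the component labels are paraphrased from the descriptions above, the bit ordering is arbitrary, and this is not the trial's software.

```python
# A minimal sketch of the 2^6 = 64 factorial structure: each condition index
# decodes to a unique combination of present/absent components. Labels and bit
# ordering are illustrative.
COMPONENTS = [
    "weekly check-in with feedback",
    "goal setting and planning",
    "motivation",
    "skills and know-how",
    "mindfulness",
    "self-composed text messages",
]

def components_of(condition: int) -> dict[str, bool]:
    """Decode a condition index (0-63) into present/absent flags."""
    assert 0 <= condition < 2 ** len(COMPONENTS)
    return {name: bool(condition >> i & 1) for i, name in enumerate(COMPONENTS)}

# Condition 63 (0b111111) has every component present; condition 0 has none.
print(sum(components_of(41).values()))  # 3 components present for condition 41
```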
This includes giving participants strategies they can use to say no to alcoholic beverages at parties, how to increase the nutritional value of their breakfast, etc. If participants choose, they can also activate text messages with tips sent to them throughout the week. When absent, this component will not be visible, and text messages will not be available. Mindfulness This component aims to increase users' awareness of their own lived experience and strengthen their capacity for non-reactive, compassionate and less stressful way of being in the world. Mindfulness exercises are offered to participants, including guided meditations. When absent, this component will not be visible, and guided meditations not available. Self-composed text messages Participants are given the opportunity to compose messages and have them sent to themselves throughout the week (on days and times of their own choosing). A participant may for instance write a message to themselves reminding them to eat two fruits each day, to not drink anything on Wednesdays, or to go for a walk with a friend. When absent, this component will not be visible. Primary and secondary outcomes Weekly alcohol consumption will be assessed by asking participants the number of standard drinks of alcohol they consumed last week (short-term recall method 26 ). Frequency of heavy episodic drinking will be assessed by asking participants how many times they have consumed 4/5 (female/male) or more standard drinks of alcohol on one occasion the past month. These two outcomes are both part of the proposed core outcome set for brief alcohol interventions, [27][28][29] and represent different risk behaviours which are sometimes found in the same individual and sometimes not. For instance, one may have a high weekly alcohol consumption, and thereby be at risk for negative health consequences, without consuming 4/5 or more drinks on the same occasion. Similarly, having one episode of heavy episodic drinking increases the risk of short-term consequences (such as injury) and long-term health consequences, but does not fulfil the criteria for total weekly consumption. Diet and physical activity will be measured using a questionnaire based on the previously published questionnaire by the National Board of Health and Welfare in Sweden, 7 and was further modified to also include portion sizes. The consumption of fruit and vegetables will be measured using two questions concerning the number of portions (100 g) of fruit and vegetables (respectively) the participants ate on average per day during the past week. Sugary drinks consumption will be measured by a question regarding the number of units (33 cl) of sugary drinks participants consumed the past week, and candy and snacks will be measured using a single question regarding number of servings consumed last week. MVPA will be estimated by summing responses to two questions regarding the number of minutes spent on moderate and vigorous physical activity, respectively, during the past week. BMI will be measured by asking participants to report their weight and height. Four-week point prevalence of smoking abstinence (no cigarettes the past 4 weeks) will be asked as a binary question. This is a suggested measure by the Society of Research on Nicotine and Tobacco. 30 Participants who have smoked any cigarette the past 4 weeks will be asked for the number of cigarettes smoked the past week. 
QoL will be measured using PROMIS Global 10, 31 both to estimate the degree to which intervention components affect QoL and for health economic evaluations. Perceived stress will be assessed using the short-form Perceived Stress Scale-4. 32

Mediation measures
Participants will be asked to report on confidence, importance and know-how, three psychosocial factors believed to be important markers of behaviour change. 24 25 33-35 To reduce participant burden, we will use single face-valid items, acknowledging the limitation of such measures.

Participant timeline and follow-ups
A trial participant timeline is presented in figure 1. Intervention components (depending on allocation) will be made available to participants all at once and will stay available at their own discretion throughout the 4-month period (with weekly reminders). There are three follow-up stages: 1-month, 2-month and 4-month postrandomisation. All follow-ups will be initiated by sending text messages to participants with hyperlinks to questionnaires. The following additional attempts will be made to collect data:
1. A total of two text reminders will be sent 2 days apart to those who have not responded.
2. If there is no response to the mediator questions at the 1-month follow-up, then the questions will be sent in a text message and participants will be asked to respond directly with a text.
3. If there is no response to the 2-month and 4-month follow-ups, then we will call participants to collect responses for the primary outcome measures only. A maximum of five call attempts will be made.

Assignment of interventions
Randomisation will be fully automated and computerised. Block randomisation will be used to allocate participants to the 64 conditions (random block sizes of 64 and 128), as sketched below. Neither research personnel nor participants will be able to influence allocation. Research personnel will be blind to allocation throughout the trial. All participants will have access to the intervention, although with different components; they will not be made aware of the other available conditions and will therefore be blind to allocation.
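A minimal sketch of permuted-block allocation under these block sizes (one or two copies of each of the 64 conditions per block) might look as follows; this is an assumption-laden illustration, not the trial's randomisation software.

```python
# A minimal sketch of permuted-block randomisation over the 64 conditions.
# Block size 64 = one copy of each condition; 128 = two copies. Illustrative only.
import random

def allocation_stream(seed: int = 0):
    """Yield an endless sequence of condition indices (0-63) in permuted blocks."""
    rng = random.Random(seed)
    while True:
        copies = rng.choice([1, 2])          # randomly pick a block of 64 or 128
        block = [c for c in range(64) for _ in range(copies)]
        rng.shuffle(block)                   # permute within the block
        yield from block

stream = allocation_stream(seed=2021)
first_ten = [next(stream) for _ in range(10)]
print(first_ten)  # each block contains every condition equally often
```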
Patient and participant involvement statement
Outcome measures used in the trial are informed by national guidelines in Sweden, as well as those set by the WHO. Also, the Swedish National Board of Health and Welfare 7 has reported that research regarding multiple health behaviour change interventions is lacking. No patients or participants were involved in the planning of this trial or the design of the intervention; however, both have been informed by our previous research involving individuals looking for help to change health-related behaviours.

ANALYSIS
All analyses will be done keeping all participants in the groups to which they were randomised. Analyses will be done using both available data and imputation. Imputation will be done using multiple imputation with chained equations. 36 The implicit missing at random (MAR) assumption underlying this approach will be investigated by two attrition analyses: (1) if data are missing systematically, then it may be the case that early responders (answering without reminders) differ from non-responders (requiring several attempts), and by extension that late responders are more like non-responders; therefore, one attrition analysis will regress primary outcomes against the number of attempts to collect follow-up before a response was recorded; (2) we will further explore the MAR assumption by investigating whether responders and non-responders differ with respect to baseline characteristics.

Groups will be contrasted using multilevel regression models with covariates for group-by-component interactions and participant-level adaptive intercepts. Models of longitudinal data (primary outcomes and perceived stress) will include group-by-time-by-component interactions. We will explore pairwise interactions among components. Bayesian inference will be used to estimate the parameters of the models 37-39 (see Sample Size for priors). For each coefficient of interest, we will report the marginal posterior probability of effect, and the median will be used as a point estimate of the magnitude of the effect. We will also report 50% and 95% compatibility intervals.

Models
Primary and secondary outcomes
Analyses of primary outcomes will be conducted among those fulfilling the respective criteria for inclusion at baseline; for example, weekly alcohol consumption will be analysed among those who reported having consumed 10/15 (female/male) or more units of alcohol the past week. BMI, sugary drinks, candy/snacks, QoL and perceived stress will be analysed among all participants, and the number of cigarettes smoked weekly among baseline smokers. Weekly alcohol consumption, frequency of heavy episodic drinking per month, weekly intake of candy and snacks, number of sugary drinks per week and cigarettes smoked per week are all count variables that are likely skewed and overdispersed. Therefore, these outcomes will be analysed using negative binomial regression; if found not to be overdispersed, we will consider using normal regression (possibly log transformed). Average intake of fruit and vegetables per day, MVPA minutes per week, BMI, QoL and perceived stress will be analysed using normal regression (possibly log transformed). Point prevalence of smoking abstinence will be analysed using logistic regression. All models will be adjusted for age, sex and the mediators (importance, confidence and know-how) at baseline. Primary outcomes and perceived stress will be adjusted for their respective baseline values, except for smoking prevalence, which will be adjusted for the weekly number of cigarettes smoked at baseline. BMI, sugary drinks and candy/snacks will be adjusted for baseline MVPA minutes per week and average intake of fruit and vegetables per day. The number of cigarettes smoked last week will be adjusted for its baseline value. QoL will be adjusted for perceived stress at baseline. In addition to pairwise interactions between components, effect modification will be explored in all models to assess whether any of the baseline characteristics moderate the effects of the components of the intervention.

Mediator outcomes
Mediators will be explored using a causal inference framework, [40][41][42] using Bayesian inference to estimate the natural direct effect and natural indirect effect (as per the definitions of Pearl 42 ). We will report on the posterior distributions of these two estimates, as well as the proportion of the total effect which is accounted for by the natural indirect effect. Four models will be created for each primary outcome measure: three which investigate the mediating factors on their own, and a fourth which incorporates all mediators at once. If any baseline characteristics are found to moderate the effects in the primary analysis, then additional mediator models will be created to include these as moderators.
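For readers unfamiliar with these estimands, Pearl's standard counterfactual definitions (stated here as background, not quoted from the protocol) can be written, for a binary component indicator with potential outcome $Y(a, m)$ and mediator value $M(a)$, as:

$$\mathrm{NDE} = \mathbb{E}\big[Y(1, M(0))\big] - \mathbb{E}\big[Y(0, M(0))\big]$$
$$\mathrm{NIE} = \mathbb{E}\big[Y(1, M(1))\big] - \mathbb{E}\big[Y(1, M(0))\big]$$
$$\mathrm{TE} = \mathbb{E}\big[Y(1, M(1))\big] - \mathbb{E}\big[Y(0, M(0))\big] = \mathrm{NDE} + \mathrm{NIE}$$

The "proportion of the total effect accounted for by the natural indirect effect" is then $\mathrm{NIE}/\mathrm{TE}$.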
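Returning to the count-outcome models described above, here is a minimal sketch using Python and PyMC. It is a deliberately simplified, assumption-laden version: cross-sectional rather than longitudinal, with simulated placeholder data, no participant-level intercepts, and invented variable names; it is not the trial's analysis code.

```python
# A minimal sketch of a negative binomial model for one count outcome (eg, weekly
# drinks) with six component indicators. Simulated data; illustrative only.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 400
X = rng.integers(0, 2, size=(n, 6))      # component present/absent per participant
baseline = rng.poisson(12, size=n)       # baseline weekly drinks (placeholder)
y = rng.poisson(10, size=n)              # follow-up weekly drinks (placeholder)

with pm.Model() as model:
    alpha = pm.Normal("alpha", 0, 2)                 # intercept
    beta = pm.Normal("beta", 0, 1, shape=6)          # component effects, N(0, 1)
    gamma = pm.Normal("gamma", 0, 1)                 # baseline adjustment
    mu = pm.math.exp(alpha + pm.math.dot(X, beta) + gamma * np.log1p(baseline))
    phi = pm.Exponential("phi", 1.0)                 # overdispersion parameter
    pm.NegativeBinomial("y_obs", mu=mu, alpha=phi, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Marginal posterior probability that component 1 reduces consumption (IRR < 1)
b1 = idata.posterior["beta"].sel(beta_dim_0=0).values.ravel()
print("P(IRR < 1) =", float((np.exp(b1) < 1).mean()))
```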
Interactions among health behaviours
Outcome interactions, and determinants of such, will be investigated in an exploratory analysis. For instance, those who quit smoking may also be more likely to reduce their alcohol consumption, and this interaction may be moderated by baseline characteristics. In addition, we will investigate interactions between changes in perceived stress, QoL and behaviour change. Models to detect such interactions will be explored, and findings will be used to create hypotheses for future research.

Sample size
Outcomes analysed using normal regression will be standardised when checking the above criteria. For the effect and harm criteria, we will use a standard normal prior for dummy covariates (mean=0, SD=1.0), and a slightly wider prior will be used for the futility criterion (mean=0, SD=2.0). The criteria should be viewed as targets; thus, at each interim analysis we will evaluate each criterion and decide whether we believe that recruitment should stop or continue. We will continue recruitment until one criterion is fulfilled for each component, for each outcome, at each follow-up interval. We will consider removing factors from the trial if the harm criteria are fulfilled for a component on all outcomes. We will not remove factors for which the effect or futility criteria are satisfied, as collecting additional data will facilitate reducing uncertainty regarding interaction effects. Note that we are estimating each component's effect on each outcome; thus, we are not a priori excluding any combination. If a component is ineffective with respect to a specific outcome, then this will be captured by the futility criteria and will also be reported as a finding. While the final sample size is not determined a priori, we conducted a series of simulations with effect sizes at the minimal values of the above criteria (0.1 Cohen's d for fruit/vegetables and physical activity, 1.1 incidence rate ratios for alcohol and 1.1 ORs for smoking). Simulations suggested that it will be necessary to recruit approximately 1500-2500 participants; however, the criteria will decide, not the simulations. Despite having more conditions than in a traditional two-arm trial (in this case 64 conditions), the factorial design is fully powered for each contrast. 17 This can be understood by observing that half the study population is given access to each individual component (see online supplemental appendix B, table 1); thus, the other half creates a contrast (a type of control). Note that the Bayesian approach allows us to make unlimited looks at the data without worrying about multiplicities and error rates, as would be necessary using a frequentist approach. 43 Also, since no fixed effect size is prespecified, we reduce the risk of stopping recruitment both too early and too late. 20
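To make the interim decision rule concrete, the sketch below evaluates effect, harm and futility probabilities from posterior draws of one standardised component effect. The 0.95 probability thresholds and the futility margin delta are invented for illustration (the protocol does not state them here), and which sign counts as favourable depends on the outcome.

```python
# A minimal sketch of checking the three stopping criteria for one component at an
# interim analysis. Thresholds and the futility margin are illustrative.
import numpy as np

def stopping_criteria(beta_draws, delta=0.1, threshold=0.95):
    """beta_draws: posterior draws of a standardised component effect
    (here, positive values are taken to mean benefit)."""
    return {
        "effect": np.mean(beta_draws > 0.0) > threshold,
        "harm": np.mean(beta_draws < 0.0) > threshold,
        "futility": np.mean(np.abs(beta_draws) < delta) > threshold,
    }

# Draws tightly centred near zero mostly satisfy the futility criterion.
draws = np.random.default_rng(7).normal(0.02, 0.04, size=4000)
print(stopping_criteria(draws))  # {'effect': False, 'harm': False, 'futility': True}
```

In a full interim analysis this check would be repeated for every component, outcome and follow-up interval, with the results informing (not dictating) the decision to continue or stop recruitment.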
DISCUSSION
Maintaining a healthy diet and adequate physical exercise are proven ways to decrease the risk of many NCDs, such as cancer and type II diabetes. More specifically, evidence suggests that the risk of many types of cancer is reduced by a diet which, among other things, includes vegetables and fruits and limits high-calorie foods and sugary drinks. 44 Smoking has been identified as the most prominent risk factor for developing many types of cancer; however, there are indications that more complex connections are at work. For instance, alcohol consumption is a strong risk factor for cancer in and of itself; however, it has a synergistic relationship with smoking in the context of developing certain types of cancer, meaning that the combination of these health behaviours amounts to a bigger risk than the sum of their individual effects. 45 46 Research has provided strong evidence that risk factors for disease such as smoking, alcohol, physical inactivity and poor diet tend to cluster and co-occur in populations. 47 48 Swedish data show a similar tendency, increasing the risk of poor health outcomes in the population and hence providing additional impetus for future studies to use a multibehaviour approach. Furthermore, previous research concludes that future research needs a holistic approach, focusing on multiple and simultaneous interventions for behavioural change. 13 47 49-52 Two meta-analyses reported modest effects of multiple health behaviour interventions in non-clinical 50 and clinical populations, 53 with various suggested reasons, including poor implementation. Some of the limitations of past efforts may be difficult to overcome with traditional face-to-face interventions, due to the large demands on staff and other resources. Only 4 of the 69 trials in one of the meta-analyses 50 investigated the use of interventions delivered via digital technology (eg, email, text messages or websites). These trials were, however, limited by low power or engagement, targeted university students or young individuals, and had questionable external validity. All in all, despite the extended reach which digital interventions may have, there is a lack of evidence for digital multiple health behaviour interventions targeting a more general population.

This factorial trial investigates the components of a novel multiple behaviour intervention. While the aim of the trial is to estimate the effects of the components on behaviour, we plan to conduct exploratory studies of engagement, 54 which in combination with effect estimates will be used to determine future directions of study. Decisions to retain or remove components will therefore not be based solely on the statistical analyses in this study, but rather combined with engagement data and the evidence from the literature more widely. If, for instance, a component is found to exert only small effects but is hardly used, we are more inclined in future studies to understand why it was not used and, based on this, redesign the component. On the other hand, components which are used often but still exert small effects may be candidates for replacement. If some components are found to be effective only for some behaviours, then these may be candidates for inclusion only among those with the corresponding unhealthy behaviours.

Generalisability and limitations
We have adopted a pragmatic recruitment strategy for this trial, using online channels, which closely mimics the way the intervention would be disseminated in a real-world context. The trial should therefore be viewed as estimating the effectiveness of the intervention's components, rather than their efficacy. However, careful consideration should be given to the trial context creating expectations of and from participants, 55 56 and to the fact that those who take part in trials may be systematically different from those who do not. In addition, several limitations of the trial should be considered when interpreting findings.
The factorial design of this trial allows all participants to receive some support, even if some will receive a minimal number of components. Since conditions are unknown to participants, we consider them blinded to allocation, which reduces the risk of bias. 57 58 This does not, however, protect entirely against social desirability bias, as those who are positive about the treatment received may want to support its dissemination by reporting more positive outcomes than they actually experienced, 59 which may be less likely if fewer components of the intervention are received. Compensatory rivalry bias could exacerbate this issue. 60 We will ask questions about participants' perceptions of the support received to support reasoning about the strength of these threats to validity. Condition allocation may be revealed to research personnel when participants are called to collect follow-up data. This may be a source of bias, as non-blinded assessment of subjective measures has been found to bias estimates. 61 Deducing the exact allocation is, however, unlikely, and personnel are instructed not to ask about anything other than the follow-up data. Using phone calls is a strategy employed to reduce the risk of attrition bias, which we believe outweighs the risk of detection bias. Finally, there are two methodological compromises which are important to address. First, we use single face-valid items for mediators to reduce participant burden, which means that any marked mediation effect should be interpreted with care in relation to the full concepts of importance, confidence and know-how. Second, criteria for stopping enrolment are based on the analysis of individual components, which does not consider interactions among components. While it would be advantageous to include criteria for interactions, it is not practical to do so, as it would increase the expected sample size markedly.

ETHICS AND DISSEMINATION
The study was approved by the Swedish Ethical Review Authority on 2021-08-11 (Dnr 2021-02855). Participants are likely to have been motivated to sign up for the trial by the potential of receiving novel support, leading to a risk of opportunity cost if the intervention only exerts small effects on behaviour. However, considering that current prevention efforts seem not to be enough to reduce the prevalence of unhealthy behaviours, and the potential effects and reach a digital multiple health behaviour change intervention could have among those seeking help online, this risk was deemed acceptable. Recruitment began in October 2021, and we anticipate that recruitment will last no more than 12 months. A final dataset will therefore be available in January 2023, and findings will subsequently be submitted for peer review in open access journals.

Contributors
Study objectives and outcomes were decided by MB, ML, PB, PH and HH. MB and KÅ designed the trial and analysis plan. Intervention materials were conceptualised and developed by KÅ, JB, MB, OL, ML, PB, PH and HH, based on an intervention design by MB. MB, KÅ and JB drafted the protocol, which was revised by ML, PB, PH, HH and OL; all authors contributed intellectual content and approved the final version. JB, KÅ and MB will be responsible for data collection and statistical analysis. All authors will be responsible for communication of findings from the trial.

Competing interests
MB and PB own a private company (Alexit AB) that develops and distributes lifestyle behaviour interventions for use in healthcare settings.
Alexit AB had no part in the funding or planning of this trial but is relied upon for a service to send text messages.

Patient and public involvement
Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.

Patient consent for publication
Not applicable.

Provenance and peer review
Not commissioned; externally peer reviewed.
NEWS
NEWS
Competition: School team launches a rocket
Conference: Norway focuses on physics teaching
Science on Stage: Canadian science acts take to the stage
Particle Physics: Teachers get a surprise at CERN
Teaching: Exploring how students learn physics
University: Oxford opens doors to science teachers
Lasers: Lasers shine light on meeting
Science Fair: Malawi promotes science education

In November 2015, the BMGF announced an additional US$ 120 million investment in family planning over the next three years. This is a 25% increase on its current funding level, and aims to meet the Family Planning 2020 goal of giving a further 120 million girls and women voluntary access to birth control. The BMGF will continue to invest in new forms of birth control to expand the range available to women, eg, injectable methods that can be easily delivered by community health workers, or self-administered at home. (Devex, 11 Feb 2016)

In an interview ahead of the publication of the BMGF's annual letter, Bill and Melinda Gates highlighted how, in the wake of the Ebola tragedy, the Zika virus has spurred a faster and more united response. Bill Gates noted that the BMGF has invested in modifying mosquitoes not to carry viruses, and in reducing their numbers, and that the same breed of mosquitoes carries the dengue and Zika pathogens. This year's letter calls for young people's involvement in tackling inequity, focusing on energy and time. It highlights the need for cheap carbon-free energy that would benefit people in developing countries, and the gap in the amount of time spent on unpaid work between men and women. This gap hampers people's rise out of poverty; bringing labour-saving devices to developing countries would help free women to earn money for their families and improve health care and nutrition. (Reuters, 23 Feb 2016)

PATH, the non-profit global health organisation, is opening a Center for Vaccine Innovation and Access, with initial funding of US$ 11 million from the BMGF. PATH will use the funding to accelerate the development and distribution of vaccines to halt deaths from preventable diseases. Currently, PATH has more than 20 vaccines at different stages of development and use, which target the world's leading causes of child mortality - pneumonia, diarrhoea and malaria, plus diseases like polio and meningitis. PATH also plans to use the new Center to address the new threats from diseases like Ebola and Zika. "The new Center will bring together PATH's expertise across the entire vaccine development and introduction process, from pre-clinical trials on novel vaccine concepts to regulatory approval and policy review, from design and conduct of field trials to innovative approaches for new vaccine development," says Mr David Kaslow, the Center's head. (GeekWire, 16 Mar 2016)

The BMGF has made a US$ 5 million equity investment in Amyris Inc., a US-based bioscience company. The investment will fund work on further reducing the cost of a leading malaria treatment, focusing on the continued production of high-quality and secure supplies of artemisinic acid and amorphadiene for use in artemisinin combination therapies (ACTs), which are recommended by the WHO as the primary first-line treatment for malaria. Amyris made its artemisinic acid-producing strains available to Sanofi in 2008 on a royalty-free basis.
Sanofi scaled up this technology to produce artemisinin for ACT treatments, intending to produce enough semi-synthetic artemisinin for up to 150 million treatments by 2014, and to ensure distribution on a "no profit, no loss" principle. (Business Standard, 12 Apr 2016)

The GAVI Alliance
GAVI has signed a US$ 5 million deal with the pharmaceutical company Merck to keep 300 000 Ebola vaccine doses ready for emergency use or further clinical trials. Merck will submit a licensing application by the end of 2017, which would help GAVI prepare a global stockpile. Early trials of the VSV-EBOV vaccine - which combines a fragment of the Ebola virus with another, safer virus - suggest it may give 100% protection, although this is still preliminary. The deal was announced at the World Economic Forum at Davos, Switzerland. Isolated flare-ups of Ebola are still anticipated, and the vaccine could be important in dealing with these, as well as heading off any future epidemics. (BBC, 20 Jan 2016)

Médecins Sans Frontières (MSF) expressed grave concern that the high cost of vaccines is not being given higher priority at the Ministerial Conference on Immunization in Africa. MSF say that the high cost of vaccines affects its ability to provide health care in developing countries, and call for pharmaceutical companies to reduce the cost of three-dose pneumonia vaccinations to US$ 5 per child, amongst others. MSF argue that the lack of immunisation progress in some countries since 2013 is due to high prices - eg, the prices of vaccinations for pneumonia, diarrhoea and HPV have increased 68-fold between 2001 and 2014. MSF are concerned that countries that are not poor enough to qualify for GAVI support have to negotiate prices on their own, risking their coverage rates. They call for GAVI to negotiate better deals with pharmaceutical companies. "From 2001-2014, the US has given GAVI US$ 1.2 billion in direct funding, and has pledged US$ 1 billion for 2015-18. This money can go much further if the vaccines, like Pfizer's pneumonia vaccine, are cheaper." (Humanosphere, 26 Feb 2016)

Nepal passed the country's immunisation bill in January, which aims to improve oversight of immunisation services, set higher standards for vaccine testing and usage, and change how Nepal finances its immunisation programme. Nepal currently relies on financial support from GAVI to fund 60-70% of its purchases. However, Nepal will no longer be eligible for GAVI support when it transitions from low- to middle-income status - expected by 2022 - thus giving a few years to establish its own domestic financing arrangements. The new law sets out two methods for financing immunisation. First, the law commits the government to allocating funds to the National Immunization Fund, levied through general taxation. Second, health partners can contribute to a separate Sustainable Immunization Support Fund - although this will probably require incentives such as tax exemptions to be effective. The law highlights Nepal's commitment to immunisation, and the Chairperson of the Parliamentary Committee on Women, Children, Senior Citizens and Social Welfare, Hon. Ranju Kumari Jha, calls it "a milestone to protect child rights of getting quality immunisation service, increase country ownership and sustain the national immunisation programme by securing adequate funding." (healthaffairs.org, 7 Mar 2016)

Seth Berkley, GAVI's CEO, believes that Ebola and Zika have diverted attention from measles.
He argues that measles should be prioritised, partly because it is highly infectious and kills 115 000 people each year, and also because measles outbreaks act as an early warning system against other threats to global health security. Measles' highly infectious nature means that outbreaks are a useful measure for gauging a health system's ability to cope with potential global epidemics, as 90% immunisation coverage is needed to reach herd immunity, compared to 80-85% for other common diseases. If populations are under-immunised, it is likely that other vital health interventions are lacking, rendering people even more vulnerable to disease outbreaks. Mr Berkley calls for more resources for routine immunisation, supplemented with catch-up campaigns as required. (Devex, 27 Apr 2016)

GAVI is backing a new national drone delivery network to distribute blood supplies in Rwanda. Further tests are planned of its suitability for a wider range of drugs, including vaccines, HIV treatments and treatments for malaria and tuberculosis. This phase will see the drones making up to 150 deliveries of blood to 21 transfusion facilities in western Rwanda - crucial for Africa, which has the world's highest rate of maternal deaths from postpartum haemorrhaging. If successful, Rwanda's drone network could save thousands of lives and be a model for other countries to duplicate. The project uses drones from the Californian robotics company Zipline, and the "global citizenship" arm of the delivery and logistics giant UPS. "It is a totally different way of delivering vaccines to remote communities and we are extremely interested to learn if UAVs [unmanned aerial vehicles] can provide a safe, effective way to make vaccines available for some of the hardest-to-reach children," says Seth Berkley, CEO of GAVI. (Pharma Market Live, 9 May 2016)

The World Bank
Ahead of a World Bank conference on how improved land management can reduce poverty and foster development, Mr Klaus Deininger, the conference organiser, argues that women's right to land creates other benefits. These include improved health and education for children, increased household resources, and fewer child marriages, as daughters are less likely to be married for financial reasons. Women with land rights tend to have bank accounts, and their financial resources can render them less vulnerable to domestic violence. In sub-Saharan Africa, women comprise more than 50% of the agricultural labour force, but fewer than 20% own farms. According to the UN World Food Programme, if women farmers had the same access to land as men farmers, global hunger could be substantially reduced. The conference will focus on women and property, with particular emphasis on gender equality and land rights - key to achieving the Sustainable Development Goals. (Thomson Reuters Foundation, 13 Mar 2016)

Costa Rica, already considered to have one of the best health care systems in Latin America, has been granted a US$ 420 million loan to further strengthen the financial sustainability of its universal health insurance system, and the management, organisation and delivery of its services. This is in line with Costa Rica's strategic health agenda, which was developed by the Costa Rican Social Security Administration to modernise primary health care networks. It will include the expansion of e-health and a 40% increase in screening in areas with a high incidence of colon cancer. The programme will include a 25% advance to ensure progress.
(Public Finance International, 21 Mar 2016)

The World Bank announced a US$ 5 billion loan to Tunisia to support its democratic transition and economic development. Tunisia has been hit by falling tourist revenues - which account for 7% of GDP - after the Islamic militant attacks in 2015, unrest over unemployment, and limited economic reforms despite wider political advances following its 2011 uprising. Economic growth was 0.8% in 2015 and is forecast to increase to 2.5% in 2016, but unemployment is 15.1% and is much higher amongst the country's young people - who comprise more than 50% of the population. The loans will be used to stimulate investment and job creation, and to intensify development in disadvantaged areas. The World Bank agreed that Tunisia's economic reforms to date are headed in the right direction, but more reforms are needed in the financial sector and to increase transparency. The International Monetary Fund and Tunisia are also in talks over a US$ 2.8 billion credit to support economic reform. (Al Arabiya, 25 Mar 2016)

Following on from the annual spring meeting of the World Bank and IMF, five key themes have emerged. First, the World Bank's track record on involuntary resettlement - which has faced severe criticism from human rights groups - was put under the spotlight, as the Bank's Inspection Panel made recommendations for better practice. Second, there were moves towards closer collaboration with other development banks, which could help close the US$ 60-70 billion infrastructure gap in Africa, amongst others. Third, in the wake of the "Panama Papers", World Bank President Jim Yong Kim emphasised how tax avoidance hinders ending poverty, and that world leaders wish to work with the Bank to track down illicit revenue flows. Fourth, the President of the African Development Bank, Akinwumi Adesina, wants Africa's leaders to focus more on nutrition. And finally, UN Secretary-General Ban Ki-moon called for more work on addressing the root causes of conflict behind the global refugee crisis, and for the world to mobilise to ensure the safety and well-being of those crossing borders. (Devex, 20 Apr 2016)

According to a World Bank study, South Asia could create millions of new jobs in the clothing industry by taking advantage of rising manufacturing costs in China, boosting both economic growth and job opportunities for women. With low labour costs and a growing young, working-class population, South Asia is strongly positioned to increase its share of this labour-intensive industry. Women's participation in South Asia's labour market is low, and increasing job opportunities for women is vital for raising marriage ages, reducing birth rates, improving nutrition and school enrolment, and strengthening economic growth. However, the industry has a track record of poor working conditions - highlighted by the collapse of the Rana Plaza building in Bangladesh in 2013 - and growth opportunities will not be fully realised without closer attention to safety and improved conditions, due in part to increased scrutiny from global brands and retailers. (Voice of America, 29 Apr 2016)

United Nations (UN)
The World Food Programme (WFP), a UN agency, is leading a project to boost incomes and improve food security in developing countries. It will help 1.5 million small-scale farmers across Africa, Asia and Latin America with contracts to buy their crops, signed before they are planted, to a value of US$ 750 million.
It aims to enable marginalised farmers to access reliable markets - 50% of the world's 795 million hungry people are farmers, and in some African countries up to 90% of the population are smallholder farmers - so that farmers can move from subsistence to market-oriented production. However, critics warn that the project could fail if it does not prioritise helping poor farmers to adapt to climate change by promoting crops which are more resilient to drought. In addition, there are concerns that farmers will be encouraged to buy hybrid seeds which require chemical fertilisers that deplete soil, and which rely on regular rainfall. The WFP responded by noting that participating farmers can buy seeds from any supplier, but will receive recommendations on which seeds are best for their soil and water conditions. (Thomson Reuters Foundation, 10 Feb 2016)

175 world leaders gathered in New York to sign the Paris climate deal on Earth Day, marking the first steps towards binding the countries to the promises they made to cut greenhouse gases. It will come into effect when 55 countries responsible for 55% of greenhouse gas emissions have ratified the accord, and is set to begin in 2020. China and the USA have agreed to ratify in 2016, and the EU's 28 member countries are expected to ratify within 18 months. The agreement comes as the 2016 El Niño is believed to have caused droughts, floods, severe storms and other extreme weather patterns, and 2016 is set to break global temperature records. In welcoming the agreement, UN Secretary-General Ban Ki-moon said "the era of consumption without consequence is over. We must intensify efforts to decarbonise our economies. And we must support developing countries in making this transition." (Al Jazeera, 22 Apr 2016)

April's UN General Assembly Special Session on the World Drug Problem (UNGASS) did not lead to any radical shifts in drug policy. The central goal of UN global drug policy is the elimination of the sale and use of illegal narcotics. The hard-line interpretation of this policy - used by most countries - does not lead to harm reduction, which underpins the UN conventions on drugs. Countries such as Mexico, Guatemala and Colombia have agreed that this approach has failed and benefits criminals. Whilst UNGASS has not shifted from this policy, there are changes in the language around drug use, which reflects a greater focus on prevention and treatment - albeit falling short of what is required to address the estimated 400 000 drugs-related deaths each year. In moves widely seen as significant, countries such as Mexico and Canada quietly announced at the session that they are moving away from UNGASS policy by introducing their own reforms (eg, on cannabis use and legalisation). Many campaigners call for the full decriminalisation of drug use, although this would not ensure the elimination of violence and corruption around black markets. In summary, whilst there were no radical changes, the session heralds some important first steps in the evolution of global drugs policies. (Huffington Post Australia, 2 May 2016)

The UN has convened the first World Humanitarian Summit (WHS), to take place on 23-24 May, in response to the worst humanitarian crisis since World War II. One UN report found that the average length of displacement is 17 years, and another UN/World Bank report found that 90% of Syrian refugees in Jordan and Lebanon live below the national poverty line; many are unable to legally earn money, and many children cannot access education.
However, commentators have noted that it is unclear what outcomes or actions the summit will produce. Médecins Sans Frontières (MSF) has pulled out of the summit, expressing concerns that the summit will neither improve emergency response nor reinforce impartial humanitarian aid, and that it will not make states accountable or responsible. This decision has added to the debate over creating "better aid", and Care International emphasises the importance of addressing the demand side as well as reactive humanitarian aid. Mr Gareth Price-Jones of Care International notes the nexus between humanitarian aid and development aid, arguing that addressing them together could more effectively tackle complex, long-term crises. (IPS, 9 May 2016)

Turkish security forces have been accused by the UN and the group Human Rights Watch of committing serious human rights violations against Turkish civilians and Syrian refugees. Turkish security forces may have deliberately shot civilians, destroyed infrastructure, carried out arbitrary arrests, and caused displacements in an ongoing military campaign against ethnic Kurdish separatists in the country's southeast. A separate report from Human Rights Watch claims that Turkish border guards have shot and beaten Syrian asylum seekers. The UN said that many Kurdish-majority towns in the southeast have been sealed off "for weeks" and are almost impossible to access, and that there are reports of ambulances and medical staff being prevented from reaching the wounded. This comes in the wake of a deal between the EU and Turkey to halt the flow of migrants to Europe, in exchange for aid and visa-free travel for Turkish citizens. (Washington Post, 10 May 2016)

UNAIDS and The Global Fund

Cambodia's dispute with the Global Fund over travel expenses is now resolved, allowing the country to access millions of dollars in aid money to fight malaria - amidst fears over the rise of drug-resistant malaria. The dispute appears to have arisen over receipts for travel payments - these are difficult to obtain in rural Cambodia and are therefore not a requirement for government officials - and the Global Fund has agreed not to ask for these. However, travel plans must be submitted in advance, spot checks can be carried out to verify travellers' locations, and staff will have to reimburse any "irregular" funds. The agreement means that Cambodia's National Malaria Centre (CNM) can now access a new US$ 12 million grant, plus another (almost untouched) grant of US$ 9 million. Although welcoming the resolution, CNM's director Dr Huy Rekol noted that Village Malaria Workers were not paid during the dispute and stopped alerting the authorities to local malaria cases, and this may be linked to some deaths from malaria. (Phnom Penh Post, 28 Jan 2016)

The Global Fund plans to send an advance supply of antiretroviral drugs to Uganda, after the country ran out of supplies at the end of 2015. In Uganda, 1.5 million people - 1.5% of the population - are HIV positive. The shortages, which began in September 2015, affected 240 000 patients on publicly-funded treatment programmes, forcing them to modify treatment or stop outright. Private-sector clinics were unaffected. The government claimed that a weak currency and insufficient foreign exchange hindered its ability to finance drug imports. However, critics blame high election spending for the financial shortfall.
The Global Fund acknowledged that the advance supply is a "short-term solution" and called for the government to mobilise resources to fill the gaps and find a long-term solution. (Yahoo, 25 Jan 2016)

UNAIDS and Xinhua News Agency have signed an agreement to enhance global co-operation towards ending HIV/AIDS by 2030. The deal builds on an existing agreement from 2011, and new measures include strengthening collaboration in areas such as social media. The two sides will work towards this goal through in-depth co-operation, consultation and information exchange. Mr Michel Sidibé, the Executive Director of UNAIDS, said "combined with the power of media and communication, we could work together to build a legacy in promoting ending AIDS." Xinhua President Cai Mingzhao also stated that "to end AIDS requires the joint efforts from the international community." (Xinhua, 18 Mar 2016)

Ahead of the UN General Assembly Special Session on the World Drug Problem, UNAIDS has released a report which shows that countries which do not adopt health- and rights-based approaches for drug users experience no falls in HIV infections among people who inject drugs. Countries that have implemented health- and rights-based approaches to drugs have reduced new HIV infections in these groups. Examples of successful programmes include the free voluntary methadone programme in China, Iran's integrated services for the treatment of sexually-transmitted infections, injecting drug use and HIV, and a peer-to-peer outreach programme in Kenya on using sterile equipment. A key part of ending the HIV epidemic is reaching 90% of injecting drug users with HIV prevention and harm reduction services by 2020. This would require an annual investment of US$ 1.5 billion in outreach, needle-syringe distribution and opioid-substitution therapy in low- and middle-income countries. However, these programmes are cost-effective and deliver wider benefits, such as lower crime rates and reduced pressure on health services. (Mehr, 17 Apr 2016)

Médecins Sans Frontières (MSF) has called for governments, UN and European agencies, PEPFAR and the Global Fund to develop and implement a fast-track plan to scale up antiretroviral treatment (ART) in countries where coverage reaches less than 33% of those in need, particularly in West and Central Africa. MSF warns that globally-agreed goals to halt the HIV epidemic by 2020 will not be met without this plan. In West and Central Africa - a region of 25 countries - 75% of people who require HIV care cannot access it, equivalent to 5 million people. "The converging trend of international agencies to focus on high-burden countries and HIV 'hotspots' in sub-Saharan Africa risks overlooking the importance of closing the treatment gap in regions with low antiretroviral coverage. The continuous neglect of the region is a tragic, strategic mistake: leaving the virus unchecked to do its deadly work in West and Central Africa jeopardises the goal of curbing HIV/AIDS worldwide", says Dr Eric Goemaere, MSF's HIV referent. (Health24.com, 20 Apr 2016)

UNICEF

UNICEF has warned that 25 000 children are suffering from severe acute malnutrition in North Korea and are in need of urgent treatment. It calls for US$ 18 million to support this, as part of a wider US$ 2.8 billion appeal to help 43 million children in humanitarian emergencies.
In the wake of severe droughts that caused a 20% reduction in North Korea's crop production, UNICEF needs US$ 8.5 million for nutrition, US$ 5 million for water and sanitation, and US$ 4.5 million for health care to help these children. There are often shortfalls in humanitarian funding for North Korea - 70% of North Koreans suffer from food insecurity, and funding fell from US$ 300 million in 2004 to under US$ 50 million in 2014 - due to its restrictions on humanitarian workers and concerns over its nuclear capabilities. According to Ghulam Isaczai, the UN's resident coordinator for North Korea, "[North] Korea is both a silent and underfunded humanitarian situation. Protracted and serious needs for millions of people are persistent and require sustained funding." (International Business Times, 26 Jan 2016)

On the eve of the International Day for Zero Tolerance of Female Genital Mutilation (FGM), UNICEF warned that growing populations in high-prevalence countries are undermining efforts to tackle the practice, which is widely regarded as a serious abuse of human rights. 50% of girls and women subjected to FGM live in Egypt, Ethiopia and Indonesia, and if current trends continue, the number of cases will increase over the next 15 years. Previously, Indonesia was excluded from UNICEF's FGM statistics due to a lack of reliable data - its recent inclusion has led to a sharp upwards revision in the number of global FGM victims - and other countries where FGM is reported are also omitted, such as India, Oman and the United Arab Emirates. However, countries such as Liberia, Burkina Faso and Kenya have experienced steep falls in FGM cases and condemnation is growing, and UNICEF calls for accelerated efforts to eliminate the practice. (Thomson Reuters Foundation, 5 Feb 2016)

UNICEF estimates that one-third of combatants in Yemen's civil war are children, on both the rebel side and among troops fighting for President Abdrabbuh Mansour Hadi. UNICEF believes that children as young as 14 are front-line fighters, despite pledges from both sides to end the practice. The massive destruction of schools and infrastructure encourages children to fight, and the rise of terrorist groups such as Isis and al-Shabaab makes negotiations over child combatants impossible. The situation in South Sudan is graver still, with 16 000 children recruited by both sides in the country's civil war. Anthony Nolan, one of UNICEF's child protection specialists, says many children are driven to join by a lack of resources or a desire to seek revenge for their families, and that their recruitment threatens to prolong the conflict for future generations. (The Independent, 8 Feb 2016)

On the 5th anniversary of the Syrian civil war, a UNICEF report shows the scale of the resultant refugee crisis, with over 2.4 million Syrian children living as refugees outside their country, 200 000 living as refugees within Syria, and a further 306 000 children born as refugees. Over 250 000 people have died in the conflict - at least 400 children were killed in 2014. Twice as many people now live in areas under siege or otherwise hard to reach compared to 2013, and 2 million of those cut off from help are children, with UNICEF reporting children suffering from extreme malnutrition or dying from starvation. There are concerns over increases in the recruitment of child soldiers - both boys and girls - and children have reported being beaten, indoctrinated and forced to commit violence. The psychological effects of living under siege are also devastating. "Children living under siege almost have to re-learn what it's like to be a human being," says Mr David Nott, a trauma surgeon who has worked in Syria. (Business Insider, 14 Mar 2016)

According to UNICEF, more than 700 million women were married before their 18th birthday, and Bangladesh has the world's second highest rate of marriage of girls aged under 15, after Niger. However, a study by the New York-based Population Council shows that child marriage fell by 31% when girls were educated or took classes in critical thinking and decision-making, with further falls when girls received job skills training. In Bangladesh, 75% of girls marry before they are 18 years old. "In Bangladesh, limited evidence exists on what works to delay child marriage. These results are a major leap forward," said Ann Blanc, Vice President of the Population Council. (Thomson Reuters Foundation, 23 Mar 2016)

World Health Organization (WHO)

A WHO report published ahead of the first Ministerial Conference on Immunisation in Africa shows that Rwanda's immunisation coverage is 99%. This success is attributed to improving routine immunisation and the introduction of new vaccines. Dr Matshidiso Moeti, the WHO regional director for Africa, noted that Africa has increased vaccination coverage from 64% in 2004 to 79% in 2014. However, she urged further action from governments at the conference, because only 9 countries have immunisation coverage of 80% or higher, and 1-in-5 children in Africa do not receive basic vaccinations. "We have the tools we need to save children's lives, and all we need is the political will and financial support to deliver," she said. Currently, GAVI funds immunisation in 70% of African countries, but as more African countries move from low- to middle-income status they will become ineligible for GAVI support, so they must prepare to meet immunisation costs from their own budgets. (allafrica.com, 26 Feb 2016)

The WHO, in its role as pharmaceutical watchdog in markets with inadequate regulation, has suspended its approval of tuberculosis drugs made by India's Svizera Laboratories. The company is a major supplier to developing countries, and the move follows concerns over its manufacturing and quality standards. The WHO also recommended that batches of medicine already on the market be retested by independent experts, and that supplies may need to be recalled. The company disagreed with the WHO's decision (which follows earlier warnings on standards at Svizera Laboratories, including dirty surfaces, black mould in a cleaning area, poor hygiene and inadequate record keeping), claiming the WHO had ignored evidence that Svizera's operations were up to standard. India's pharmaceuticals industry supplies cheap generic drugs, but in recent years it has been beset by problems over the quality of its products. (medicaldaily.com, 19 Mar 2016)

The WHO's zika response differs markedly from its 2014 response to the Ebola outbreak. The WHO quickly flagged zika as a public health emergency - despite significantly fewer deaths - whereas it took 5 months and nearly 1000 deaths before it declared Ebola a "public health emergency of international concern."
Although the faster response - intended to jump-start scientific research, vaccine and treatment development, and mosquito control - may partly be explained by a wish to act quickly after the criticisms over Ebola, the overall picture is more nuanced. For example, the WHO's regional office for the Americas (PAHO) had more expertise on zika than the WHO's regional office in Africa had on Ebola, and PAHO came under pressure from the USA - which is more likely to be affected by zika than Ebola - to act decisively. Finally, the impact of zika can be presented in distressing images of newborn babies with microcephaly, whereas Ebola affected wider swathes of society, making it harder to press for action on behalf of a single group. (Chicago Tribune, 5 Apr 2016)

A WHO-led analysis published in The Lancet Psychiatry shows how the global failure to tackle depression and anxiety costs US$ 1 trillion each year in lost productivity and causes "an enormous amount of human misery." It found that without scaled-up treatment, 12 billion working days - 50 million years of work - will be lost to depression and anxiety disorders each year up to 2030. Scaling up treatment would cost US$ 147 billion, meaning that every US$ 1 invested in treatment would lead to a US$ 4 return in better health and ability to work, and the authors argue that both developing and developed countries should improve mental health care. The study notes that common mental health conditions are increasing - the number of people suffering with depression and/or anxiety rose from 416 million in 1990 to 615 million in 2013. 10% of the world's population is affected, and mental disorders account for 30% of the global burden of non-fatal diseases. Treating these disorders would help the world meet the SDG of reducing premature deaths from non-communicable diseases by 33% by 2030. War and humanitarian crises increase this urgency, as the WHO estimates that up to 20% of people suffer from depression and anxiety during emergencies. (The Guardian, 12 Apr 2016)

In April, the WHO launched its global strategy to combat leprosy, with the overall aim of reducing to zero the number of children diagnosed with leprosy. Although prevalence rates for leprosy fell to below 1 per 10 000 population in 2000, worldwide there are still 213 899 new cases a year - and India, Brazil and Indonesia account for 81% of these cases. Key interventions to combat leprosy include targeting detection amongst higher-risk groups via campaigns in highly endemic areas, and improving health care coverage for marginalised groups. Early detection, especially amongst children, is essential for reducing disabilities and transmission. (livemint.com, 21 Apr 2016)
Simultaneous Measurement of Refractive Index and Flow Rate Using a Co2+-Doped Microfiber
Simultaneous Measurement of Refractive Index and Flow Rate Using a Co2+-Doped Microfiber: This paper has proposed and experimentally demonstrated an integrated Co2+-doped microfiber Bragg grating sensor (Co-MFBGS) that can measure the surrounding liquid refractive index (LRI) and liquid flow rate (LFR) simultaneously. The Co-MFBGS provides well-defined resonant modes of core and cladding in the reflection spectrum. By monitoring the wavelength of the cladding mode, the LRI can be measured; meanwhile, by monitoring the wavelength of the core mode, which responds to heat exchange, the LFR can be measured. The LRI and LFR can be distinguished by the wavelength separation between the cladding mode and the core mode. The experimental results show that in aqueous glycerin solution, the maximum measurement sensitivity for LRI detection is −7.85 nm/RIU (refractive index unit), and the LFR sensitivity is −1.93 nm/(µL/s) at a flow rate of 0.21 µL/s.

The liquid flow rate (LFR) measurement is also a paramount index for many applications, such as medical and biological analysis [20]. In optical fiber biosensor applications, because the capture efficiency of a herringbone microfluidic chip depends strongly on the flow rate, accuracy and efficiency can be significantly improved by measuring the flow rate [21,22]. The "hot-wire" device is the best-known microfiber flowmeter [23-25]. For many applications, not only the LFR is needed, but also other parameters, such as the LRI and temperature [19]. Sometimes simultaneous measurements may be necessary, and the measured parameters should not cross-impact each other.

In this paper, an integrated LRI and LFR sensor based on the Co2+-doped microfiber Bragg grating sensor (Co-MFBGS) is presented and experimentally demonstrated, which can measure both parameters simultaneously. The Co-MFBGS reflection spectrum provides a well-defined core mode resonance as well as two cladding mode resonances; the wavelength shifts of the two kinds of resonances have a quadratic non-linear relationship to LRI and LFR. The temperature cross-impact on the LRI measurement can be eliminated by monitoring the change of the wavelength interval between the core mode and the cladding mode. Additionally, due to the non-radiative effect of Co2+-doped fiber, the Co-MFBGS can be heated by a pump laser; by interrogating the core mode's wavelength shift caused by the heat exchange effect, the LFR can be measured. The proposed integrated optical fiber sensor has potential value in chemical, medical and environmental applications.

Fabrication of the Co-MFBGS

In the proposed Co-MFBGS, the sensing device is a section of Co2+-doped optical fiber (CDOF) with an 8.4 µm core diameter and a 125 µm cladding diameter, and a length of 20 mm. At a laser wavelength of 1480 nm, the absorption coefficient of the CDOF section is approximately 31.3 dB/m. The 20 mm long CDOF was spliced to two sections of 1 m long single-mode optical fiber, one at each end. The fabrication of the proposed Co-MFBGS has two steps. First, an FBG with a length of 15 mm was written in the CDOF section using a KrF excimer laser (248 nm) with a phase mask. Second, the CDOF with the FBG was immersed in a 20% aqueous hydrofluoric acid solution for 105 min, then removed and rinsed with ultrapure water to clean off the acid. Thus, a Co-MFBGS with a diameter of 16.7 µm (Figure 1a) was fabricated. With the same process, a Co-MFBGS with a diameter of 6.4 µm (Figure 1b) was also fabricated.
The reflection spectra of the Co-MFBGS with diameters of 16.7 and 6.4 µm were investigated, as shown in Figure 1c. It is clear that the Co-MFBGS with a diameter of 16.7 µm has a core mode and two cladding modes, but the Co-MFBGS with a 6.4 µm diameter has only a core mode in the wavelength range from 1525 to 1565 nm. Accordingly, the 16.7 µm Co-MFBGS was selected as the test sensor for the rest of the experiments. As shown in Figure 1c, the wavelength separation of λ_a (core mode) and λ_b (cladding mode) is 7.1 nm. The spectral characteristics of the Co-MFBGS were analyzed using the numerical mode simulation software COMSOL, and the reflection spectrum compositions of the core mode and one cladding mode, and the amplitude distributions of the transverse electric field, are shown at the bottom of Figure 1. This indicates that, on the Co-MFBGS surface, the evanescent field of the cladding mode is stronger than that of the core mode.

Principle of the Proposed Sensor

It is well known that the FBG center wavelength λ_B and the resonant wavelength λ_cl,k of the kth cladding mode can be written as [18,26]

λ_B = 2 n_eff,co · Λ (1)

λ_cl,k = (n_eff,co + n_eff,cl,k) · Λ (2)

where n_eff,co and n_eff,cl,k are the effective indices of the core mode and the kth cladding mode in the Co-MFBGS, respectively, and Λ is the Bragg grating period. As the core mode is confined in the core of the Co-MFBGS, it is insensitive to the surrounding LRI. Meanwhile, the cladding modes of the Co-MFBGS, owing to their evanescent field, are very sensitive to the LRI. So the Co-MFBGS can be used for LRI sensing, and from Equation (2) the wavelength drift of the cladding mode caused by an LRI change can be written as

Δλ_cl,k(RI) = Δn_eff,cl,k(RI) · Λ (3)

where Δn_eff,cl,k(RI) is the change of the kth cladding mode effective index with LRI. The wavelength interval λ_int is defined as

λ_int = λ_B − λ_cl,k (4)

It is the wavelength difference between the core mode and the kth cladding mode.
Combining Equations (3) and (4), the wavelength interval shift Δλ_int(RI) caused by the change of LRI can be written as

Δλ_int(RI) = Δλ_cl,k(RI) = Δn_eff,cl,k(RI) · Λ (5)

The wavelength drift of an FBG caused by a temperature change is well known [27]; for the core mode and the kth cladding mode it can be written as

Δλ_B(T) = 2 n_eff,co · Λ · ((1/n_eff,co)·(dn_eff,co/dT) + (1/Λ)·(dΛ/dT)) · ΔT = λ_B · α · ΔT (6)

Δλ_cl,k(T) = λ_cl,k · γ · ΔT (7)

where Δλ_B(T) and Δλ_cl,k(T) are the wavelength changes of the core mode and kth cladding mode, and α and γ are the corresponding overall temperature sensitivity coefficients. Cobalt is a readily available light-absorbing material, and the CDOF can be heated by the pump laser as a result of the non-radiative effect; as the temperature of the Co-MFBGS increases, the wavelengths of the core mode and cladding modes show a redshift. When the Co-MFBGS is used as an LFR sensor, part of the sensor heat is carried away by the fluidic sample [28], which leads to a decrease in the sensor's temperature and a blueshift in the wavelengths of the core mode and cladding modes. So the LFR can be measured by detecting the wavelength shift of the Co-MFBGS. According to hot-wire theory, the relationship between the LFR of the fluidic sample and the heat loss is [29]

Q = ΔT · (A + B·ν^i) (8)

where Q is the power absorbed by the CDOF, ν is the LFR, and A, B and i are empirical coefficients. So, Equations (6) and (7) can be rewritten as

Δλ_B(T) = λ_B · α · Q / (A + B·ν^i) (9)

Δλ_cl,k(T) = λ_cl,k · γ · Q / (A + B·ν^i) (10)

Additionally, the wavelength interval shift Δλ_int(T) caused by the change of temperature can be deduced as

Δλ_int(T) = Δλ_B(T) − Δλ_cl,k(T) = (λ_B · α − λ_cl,k · γ) · Q / (A + B·ν^i) (11)

As the difference between α and γ is very small (<0.00324 K⁻¹) [20], Equation (11) can be rewritten as

Δλ_int(T) ≈ 0 (12)

According to Reference [20], when the LFR changes from 0 to 1 µL/s, Δλ_int(T) is quite small (~0.017 nm) - that is, the wavelength interval shift caused by the LFR effect can almost be neglected. Consequently, the LRI and LFR can be measured simultaneously using the cladding mode wavelength and the core mode wavelength of the Co-MFBGS, respectively, and the crosstalk can be resolved using the wavelength interval shift.

Experimental Setup

As shown in Figure 2, the experimental setup includes an amplified spontaneous emission (ASE) laser (wavelength: 1525-1565 nm, output power: 20 mW) and a tunable pump laser (1480 nm); the two lasers are launched into the Co-MFBGS from opposite ends. The reflective spectrum of the Co-MFBGS was interrogated by an optical spectrum analyzer (OSA, YOKOGAWA AQ6370D). At the beginning of the LRI and LFR measurement experiments, a specially designed PDMS-based microfluidic channel was used to hold the proposed Co-MFBGS and impart a slight pre-stretch; then, to avoid the effects of other forces, a UV-sensitive adhesive was used to fix the two ends of the Co-MFBGS fiber, as shown in Figure 2. During the LRI and LFR measurement experiments, aqueous glycerin solutions were injected into the microfluidic channel via a pressure controller whose pressure stability is as high as 10 µbar.

The temperature characteristics of the Co-MFBGS were tested first by putting the sensor into a chamber whose temperature could be adjusted in the range of 20-140 °C in steps of 10 °C with a resolution of 0.1 °C. The measured temperature dependence of the wavelength interval (λ_a-b) between λ_a and λ_b of the Co-MFBGS is shown in Figure 3a, which clearly shows that λ_a-b is almost free from temperature perturbations. So, the temperature cross-sensitivity of LRI measurements can be neglected.
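To make the dual-parameter readout concrete, the following Python sketch shows how measured core and cladding mode wavelengths could be converted into LRI and LFR estimates using the wavelength-interval logic of Equations (5)-(12). It is a minimal sketch: the calibration coefficients, reference wavelengths and function names are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical quadratic calibrations, standing in for the fits of
# Figures 3d and 4b (coefficients are illustrative, not the paper's).
# LRI calibration: interval shift (nm) vs refractive index n (RIU).
LRI_COEFFS = np.array([-17.5, 42.55, -25.674])
# LFR calibration: core-mode shift (nm) vs flow rate v (uL/s).
LFR_COEFFS = np.array([1.0, -2.35, 0.0])

def invert_quadratic(coeffs, y, lo, hi):
    """Numerically invert a quadratic calibration y = f(x) on [lo, hi]."""
    xs = np.linspace(lo, hi, 2001)
    return xs[np.argmin(np.abs(np.polyval(coeffs, xs) - y))]

def demodulate(lam_a, lam_b, lam_a0, lam_b0):
    """Recover (LRI, LFR) from the two resonance wavelengths (nm).

    lam_a: core mode, lam_b: cladding mode; lam_*0 are reference values
    recorded with the pump on, at the lowest calibration RI and zero
    flow. The interval shift tracks the LRI only (Eq. 12 makes it
    insensitive to the common temperature shift), while the core-mode
    shift tracks the hot-wire cooling, i.e. the LFR.
    """
    d_int = (lam_a - lam_b) - (lam_a0 - lam_b0)   # LRI channel
    d_core = lam_a - lam_a0                       # LFR channel
    lri = invert_quadratic(LRI_COEFFS, d_int, 1.32, 1.44)
    lfr = invert_quadratic(LFR_COEFFS, d_core, 0.0, 0.98)
    return lri, lfr

if __name__ == "__main__":
    lri, lfr = demodulate(1549.62, 1542.75, 1549.90, 1542.80)
    print(f"estimated LRI ~ {lri:.3f} RIU, LFR ~ {lfr:.2f} uL/s")
```

The grid-search inversion is deliberately simple; because both calibrations are monotonic over the measured ranges, it returns a unique answer without needing a closed-form quadratic root selection.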
The LRI measurement experiment of the Co-MFBGS was conducted at room temperature (25 °C). In each measurement, aqueous glycerin solutions, with concentrations whose RI changed from 1.32 to 1.44 RIU in steps of 0.02 RIU, were injected into the microfluidic channel. After the aqueous glycerin solution had flowed around the entire Co-MFBGS, the reflection spectrum was interrogated several times by the OSA until almost no change was observed, and then the spectrum was recorded, so that reliable measured values could be obtained. The experimental results for the wavelength shift of the proposed Co-MFBGS at different LRIs are shown in Figure 3b-d, which show the change of λ_a, λ_b and λ_a-b with LRI, respectively. It is clear that the core mode wavelength of the proposed Co-MFBGS does not shift, which means it is insensitive to the LRI. Meanwhile, the cladding mode is exceedingly sensitive to the LRI; its wavelength decreases by 0.69 nm as the LRI changes from 1.32 to 1.44 RIU. So, the change of λ_a-b is almost the same as the change of λ_b, and it can be fit to a quadratic non-linear equation over the LRI range (1.32-1.44 RIU). The fit indicates that the maximum LRI sensitivity of the proposed Co-MFBGS wavelength interval is about −7.85 nm/RIU (Figure 3d).

Before beginning the LFR measurement experiment, the Co-MFBGS was heated using pump laser powers of 250, 350 and 450 mW at a wavelength of 1480 nm, as the Co-MFBGS can be heated by the pump laser due to the light absorption of cobalt. Figure 4f shows the wavelength shift of the core mode (λ_a) at different pump laser powers, and clearly shows that λ_a has a redshift with increasing pump laser power, and that the sensitivity increases significantly with increasing heating power. This means the temperature of the Co-MFBGS is increased; so, when the Co-MFBGS is used for LFR sensing, the heat of the Co-MFBGS will be carried away by the flowing liquid, the temperature of the Co-MFBGS will decrease, and λ_a will show a blueshift.
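As a worked illustration of how the quoted sensitivity could be derived from such data, the sketch below fits a quadratic to calibration points of interval shift versus LRI and reports the local slope. The data values here are invented to mimic the trend of Figure 3d and are not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration points (LRI in RIU, interval shift in nm),
# invented to mimic the trend of Figure 3d; not the paper's data.
lri = np.array([1.32, 1.34, 1.36, 1.38, 1.40, 1.42, 1.44])
d_int = np.array([0.00, -0.08, -0.17, -0.28, -0.40, -0.54, -0.69])

coeffs = np.polyfit(lri, d_int, 2)        # quadratic fit, as in the paper
slope = np.polyder(np.poly1d(coeffs))     # local sensitivity d(shift)/dn

for n in (1.32, 1.38, 1.44):
    print(f"n = {n:.2f} RIU -> sensitivity ~ {slope(n):+.2f} nm/RIU")
# The slope magnitude grows toward high LRI, where the paper reports
# the maximum sensitivity (about -7.85 nm/RIU).
```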
Figure 4b als quadratic non-linear relationship between the value change of λa an During the LFR measurement experiment of the proposed Co-MFBGS, the LFR was varied in 0-0.98 µL/s by using a commercial electronic-controlled syringe pump to inject an aqueous glycerin solution with a constant concentration into the microfluidic channel; meanwhile, the pumped laser power was set to 450 mW. A PZT flow sensor was used to calibrate the LFR. The experimental results are shown in Figure 4. With increasing LFR, both λ a and λ b experience a blueshift due to the decreasing temperature of the Co-MFBGS; reflection spectra corresponding to different flow rates are shown in Figure 4a. Figure 4b shows the value change of λ b and λ a with different flow rates; it can be seen that their wavelengths shift to shorter wavelengths simultaneously, but in Figure 4c it is shown that λ a-b exhibited almost no change. That means the LFR and the LRI can be discriminated by wavelength interval. Figure 4b also shows that there is a quadratic non-linear relationship between the value change of λ a and the LFR; a flow rate of 0.21 µL/s can lead to a sensitivity of −1.93 nm/(µL/s). From Figure 4e, it can be seen that it takes about 10 s for the reading to stabilize when the LFR increases from 0 to 0.39 µL/s; therefore, the response time of the LFR measurement of the proposed Co-MFBGS is 10 s. Conclusions This paper has presented an integrated Co-MFBGS, which can measure the LRI and LFR simultaneously. A single straight Bragg grating was written in a section of Co 2+ -doped fiber and then immersed into aqueous hydrofluoric acid solution to fabricate a Co-MFBGS with a diameter of 16.7 µm. The Co-MFBGS of the proposed sensor provides well-defined resonances in reflection; by detecting the wavelength of the cladding mode and core mode, the LRI and LFR can be measured, respectively, and they can be distinguished by the change of wavelength separation between cladding mode and core mode, and the temperature cross-impact of LRI measurements is also eliminated. The experimental results show that in aqueous glycerin solution the maximum measurement sensitivity for LRI detection is −7.85 nm/RIU, and the LFR sensitivity is −1.93 nm/(µL/s) at a flow rate 0.21 µL/s. The proposed Co-MFBGS can potentially be used in bioengineering, medicine, environmental protection, etc.
Delays in presentation of intussusception and development of gangrene in Zimbabwe
Delays in presentation of intussusception and development of gangrene in Zimbabwe. Introduction: prompt diagnosis and treatment are considered key to successful management of intussusception. We examined pre-treatment delay among intussusception cases in Zimbabwe and conducted an exploratory analysis of factors associated with the intraoperative finding of gangrene. Methods: data were prospectively collected as part of the African Intussusception Network using a questionnaire administered to consecutive patients with intussusception managed at Harare Children's Hospital. Delays were classified using the Three-Delays Model: care-seeking delay (time from onset of symptoms to first presentation for health care), health-system delay (referral time from presentation at the first facility to the treatment facility) and treatment delay (time from presentation at the treatment facility to treatment). Results: ninety-two patients were enrolled from August 2014 to December 2016. The mean care-seeking interval was 1.9 days, the mean health-system interval was 1.5 days, and the mean treatment interval was 1.1 days. The mean total time from symptom onset to treatment was 4.4 days. Being transferred from another institution added 1.4 days to the patient journey. Gangrene was found in 2 (25%) of children who received treatment within 1 day, 13 (41%) of children who received treatment within 2-3 days, and 26 (50%) of children who received treatment more than 3 days after symptom onset (p = 0.34). Conclusion: significant care-seeking and health-system delays are encountered by intussusception patients in Zimbabwe. Our findings highlight the need to explore approaches to improve the early diagnosis of intussusception and prompt referral of patients for treatment.

Introduction

Intussusception is an enteric invagination into an adjacent segment of bowel. Some intussusception cases have been associated with infection by various enteric viruses causing Peyer's patch hypertrophy [1,2]. This assertion was bolstered by some studies finding a seasonal pattern of intussusception cases [3,4]. A slightly increased risk of intussusception of 1 to 6 excess cases per 100,000 vaccinated infants has been observed following rotavirus vaccination in clinical trials in high- and middle-income countries [5]; however, no association was found between rotavirus vaccine and intussusception in a multi-country analysis in sub-Saharan Africa [6]. Intussusception is the most common cause of childhood intestinal obstruction in Zimbabwe [3], and is also the most frequently encountered paediatric surgical emergency [3]. This is similar to the experience in other African countries [7]. It was found to be the most common cause of childhood intestinal obstruction in Nigeria and of acute mechanical obstruction in children in Niger [8,9]. Intussusception is managed surgically, with manual reduction or resection, or non-operatively by air, hydrostatic or contrast enema. In Africa, rates of surgical intervention are higher than rates of non-operative reduction [6,7,10]. Ekenze et al. reported that in south-eastern Nigeria surgical management was performed routinely in cases of intussusception [11]. In contrast, 81% of intussusception patients in a study in Europe had non-operative reduction [12]. Delays in presentation and treatment of serious surgical diseases, including intussusception, are common in low-resource countries due to limited access to care [7,10,13,14]. In a study from Nigeria, only 7.7% of patients presented within 24 hours of onset of intussusception symptoms [15].
Late presentation of intussusception cases is considered a risk factor for gangrene and death, increasing the need for surgery [16-18] and predicting the failure of non-operative reduction [16,19,20]. It also increases the chances of sepsis, multiple organ dysfunction and death [21,22]. In this analysis we describe the time intervals from onset of symptoms to definitive treatment of infants with intussusception in Zimbabwe. As an exploratory analysis, we considered the relationship between delayed presentation and gangrene.

Patient population

All patients < 12 months old admitted and treated for intussusception at Harare Children's Hospital from August 2014 to December 2016 and enrolled as part of the African Intussusception Surveillance Network were included in this analysis. Patients were included if they fulfilled level 1 of the Brighton Collaboration Intussusception Working Group criteria of diagnostic certainty [23]. For this analysis, patients were excluded if they did not have an ileocolic intussusception (Figure 1). Non-ileocolic intussusception is frequently caused by a distinct lead point [24,25], which would confound the effect of embryological mechanical factors.

Study setting

The study was performed at Harare Children's Hospital, a public teaching referral hospital.

Data collection

Data were collected using a structured questionnaire on admission and during the hospital stay. Information regarding age, sex, home address, pertinent dates in the referral journey, method of definitive treatment, intraoperative findings, and procedure performed was collected. Patient codes were used to anonymize the data. Patients with missing time interval and intraoperative data were excluded from statistical analysis (Figure 1).

Description of surgical procedure

Patients were operated on by the paediatric surgical team of 10 experienced surgeons and surgical trainees in the Harare Children's Hospital paediatric theatre. The surgical procedure was performed as per institutional standard and involved initial exploratory laparotomy, with an attempt at reduction made if the bowel was assessed to be viable. Bowel was considered viable if it had good colour, contractility and consistency as well as strong mesenteric pulsations. Bowel was resected with primary anastomosis if it was judged to be gangrenous, based on these four parameters. The viability of the unresected intestines was confirmed by post-operative follow-up. Gangrene of resected intestines was corroborated on histological examination of resected specimens, which is performed routinely for all resections.

Definitions of time intervals

The time from symptom onset to definitive management was split into three time intervals using a modification of the Three Delays Model [26]: the care-seeking interval, the health-system interval and the treatment interval. Composite intervals were added to this model as described below. The care-seeking interval was calculated as the time in days from the date of first symptoms to the date of first contact with the health system at a conventional medical institution. The health-system interval was calculated from the date of first contact with the health system until the date of admission to Harare Children's Hospital. The treatment interval was calculated as the time in days from the date of admission at Harare Children's Hospital to the date of definitive management.
Total time to hospital (TTH) was calculated as the time in days from the date of symptom onset to the date of admission at Harare Children's Hospital; in cases where the child was not transferred from another facility and the first contact with the healthcare system was Harare Children's Hospital, the care-seeking interval and time to hospital were equal. Total time to treatment (TTT) was calculated from the date of symptom onset to the date of definitive treatment.

Statistical analysis

We used descriptive statistics to describe the demographic characteristics and the patient journey time intervals. Sample means and standard deviations were calculated for each interval. A dependent t-test was used to determine whether the care-seeking interval and health-system interval were significantly different from one another. We used chi-square or Fisher's exact tests to investigate whether a relationship existed between time to hospital, time to treatment, or referral status and the intraoperative finding of gangrene. P-values of < 0.05 were considered significant.

Demographics

Ninety-two (92) patients with intussusception were included in this analysis. 59 (64%) were male, with a male to female ratio of 1.8:1. The median age was 6 months and the interquartile range was 5-9 months. All patients were treated with surgery and 41 (45%) developed gangrene.

Geographic factors

Home addresses were used to determine where patients lived at the time of illness onset. Figure 2 shows the distribution of patients according to home address in Zimbabwe and the mean delay for each province. The prevalence ranged from 25.8 per 100,000 live births in Harare to 3.3 per 100,000 live births in Mashonaland Central. There were no cases of intussusception admitted to Harare Children's Hospital from the provinces of Matabeleland North, Matabeleland South, or Bulawayo during the surveillance period. The shortest mean time to hospital was 2.5 days, among children from Harare and Midlands provinces. The longest mean time to hospital was 14.3 days, among children from Mashonaland Central.

Time intervals in the patient journey

Eighty-two (82) patients (89%) were transferred to Harare Children's Hospital from another health institution and 10 patients (11%) came directly from home. Among those who were transferred from another hospital, the mean care-seeking interval, health-system interval and treatment interval were 1.9 days (SD: 3.6), 1.5 days (SD: 1.9) and 1.1 days (SD: 1.2), respectively. No significant difference was observed between the care-seeking interval and the health-system interval (p = 0.501). For patients admitted from home, the mean care-seeking interval was 2.0 days (SD: 2.3) and the treatment interval was 1.1 days (SD: 0.3) (Figure 3). For all patients, the mean treatment interval was 1.1 days (SD: 1.2). The mean time to hospital was 3.3 days (SD: 3.6) and the mean time to treatment was 4.4 days (SD: 3.8). Children who were transferred from another facility to Harare Children's Hospital had on average a 1.4 days longer time to hospital compared to children who were not transferred.

Relationship with development of gangrene

Of the patients who were transferred from another facility, 44% (n = 36) developed gangrene and 56% (n = 46) did not (p = 0.75) (Table 1). Gangrene was found intraoperatively in 38% (n = 9) of children who arrived at hospital within 1 day, 42% (n = 16) of those who arrived at hospital within 2-3 days, and 53% of those who arrived at hospital more than 3 days after symptom onset (p = 0.47).
Similarly, gangrene was found intraoperatively in 25% (n = 2) of children who received treatment within 1 day, 41% (n = 13) of children who received treatment within 2-3 days, and 50% (n = 26) of children who received treatment more than 3 days after symptom onset (p = 0.34).

Complications

Five patients died postoperatively due to multi-organ dysfunction. Three patients died after hospital discharge from unrelated causes. One patient required another laparotomy 1 month postoperatively for adhesive small bowel obstruction.

Discussion

We found significant delays between the onset of intussusception symptoms and reduction among children < 12 months old in Zimbabwe. The mean care-seeking interval was slightly higher than the mean health-system interval, but this difference was not statistically significant. Therefore, both intervals likely contributed equally to delays in reaching definitive treatment. The evidence to date suggests that diagnostic delay plays a larger part in late presentation than socioeconomic factors, as has been reported by other evaluations [27-29]. Barriers to timely care in paediatric surgery were explored by Pilkington et al. and include transport and cost on the part of the patient as well as shortcomings in hospital infrastructure and resources [30]. The mean treatment interval was 1.1 days in our study, which is comparable to the guidelines for wait times in paediatric surgical patients formulated by the Canadian Paediatric Surgical Wait Times Taskforce [31]. It was also much shorter than the average treatment interval in Uganda [30]. This is a surrogate quality measure and shows that definitive management is instituted quickly once the decision has been made. Surgery was used to manage intussusception in 100% of this study population because of the lack of facilities required for enema reduction during the study period. Additionally, when the duration of symptoms is more than 24 hours, surgeons may be tempted to forgo non-operative reduction because of a presumed high rate of failure in these patients. The percentage of patients who received surgery is very high when compared to the much lower rates observed in Europe (19%) [12] and Vietnam (8%) [2]. The provision of facilities for non-operative reduction should be prioritised, since a sizeable percentage of patients may be amenable to this method of treatment even when they present late. While we observed a trend toward increasing rates of gangrene with increasing intervals from intussusception onset to treatment, the results were not statistically significant, likely because of our small sample size. Although some previous studies have found such a relationship [18-20], other studies have not found a relationship between duration of symptoms and success of non-operative reduction or need for surgery [17,32-38]. Gangrene is the major reason for failure of non-operative reduction, and failure of reduction may be considered a proxy for gangrene. This suggests there may be additional factors that influence the development of gangrene. Mechanical factors have been suggested that influence the tension or pressure on mesenteric blood vessels, including abnormalities of intestinal fixation [39-41]. The assertion by Brereton [42], Gil-Vargas [40] and others [43] that an excessively long, loose mesentery may be an etiological factor for intussusception is plausible. It may also protect the bowel from the development of gangrene.
Furthermore, rectal protrusion of intussusception has been thought to represent an excessive delay in presentation [44,45], but it could equally reflect excessive laxity of the mesentery of normally fixed retroperitoneal structures [46]. One patient from Nigeria with rectal protrusion reportedly presented after 28 days and had no gangrene or perforation [47]. Similarly, in our study one patient received definitive treatment 33 days after onset of symptoms and had viable bowel requiring only manual reduction. Further research is needed in this area.

Limitations

A major limitation of this study is that intraoperative clinical judgment was used in the determination of intestinal gangrene, which may have overestimated the presence of gangrene compared to other techniques such as fluorescence or laser Doppler ultrasound [48-50]. However, there was > 95% concordance between histological assessment and clinical judgment in this population, suggesting that clinical judgment was an acceptable method for intraoperative gangrene assessment in this study. The dates of intussusception symptom onset were self-reported by each patient's caregiver and may therefore be subject to recall error. The data show a trend towards higher rates of gangrene when the pre-hospital and pre-treatment delay is longer. The inability to find a statistically significant relationship may have been related to inadequate power of the study to detect differences, considering the low sample sizes in some cells. Future studies with larger sample sizes could help clarify this possibility. Because this was a single-centre study, it may not be generalizable to all of Zimbabwe. Harare Children's Hospital is the only dedicated paediatric hospital in Zimbabwe; however, a small number of patients from the south-west of the country are managed by general surgeons in the region.

Conclusion

Time to hospital for treatment of intussusception in Zimbabwe is longer than commonly accepted benchmarks. The low sample size in this study may not have provided enough statistical power to show significant associations between gangrene and pre-hospital and pre-treatment duration, although these may exist. Advocacy and training among primary care providers to improve the timeliness and accuracy of diagnosis, capacitating small peripheral health institutions, and health education for parents to improve healthcare-seeking behaviour are potential targets for reducing delays in the pre-treatment interval. Future research should investigate mechanical factors and the morphology of the bowel in intussusception.
Contextual Application of Pulse-Compression and Multi-frequency Distance-Gain Size Analysis in Ultrasonic Inspection of Forging
Contextual Application of Pulse-Compression and Multi-frequency Distance-Gain Size Analysis in Ultrasonic Inspection of Forging. Ultrasonic pulse-echo non-destructive testing, combined with Distance Gain Size (DGS) analysis, is still the main method used for the inspection of forgings such as shafts or discs. This method allows the inspection to be carried out while assuring the necessary sensitivity and defect detection capability in most cases. However, when testing large or highly attenuating samples with standard pulse-echo, the maximum achievable signal-to-noise ratio is limited both by the physical attenuation of the beam energy during propagation and by the inherent divergence of any ultrasound beam emitted by a finite geometrical aperture. To address this issue, the application of the pulse-compression technique to the ultrasonic inspection of forgings was proposed by some of the present authors, in combination with the use of broadband ultrasonic transducers and broadband chirp excitation signals. Here, the method is extended by applying DGS analysis to the pulse-compression output signal. Both standard single-frequency/narrowband DGS and multi-frequency/broadband DGS analyses, applied to pulse-compression data acquired on a forging with known defects, are tested and compared. It is shown that the DGS analysis works properly with pulse-compression data collected by using separate transmitter and receiver transducers. Narrowband and broadband analyses provide almost identical results, but the latter exhibits advantages over the traditional method: it allows the inspection frequency to be optimized by using a single pair of transducers and a single measurement. In addition, the range resolution achieved is higher than that achievable in the narrowband case.

Introduction

Ultrasonic NDT is the only technique that allows the inspection of the whole volume of large forgings. Pulse-echo (PuE) is a widespread method, and all the Standards and evaluation procedures have been developed around it. Among these, the analysis based on DGS diagrams is the standard method [1-3]. Various probes at different incidence angles and with different central frequencies are used to guarantee the inspection of the whole sample's volume with adequate sensitivity for each possible type of embedded defect. Although DGS analysis with PuE is effective in most situations, two main critical points emerge: (1) the increasing need for automatic inspection procedures makes the use of many different probes inconvenient; (2) in the presence of high attenuation and/or large dimensions of the forgings, PuE may not guarantee a sufficient SNR level and sensitivity. To address the former point, phased-array systems have been introduced, making automatic inspection easier. In fact, a single phased-array probe can replace several standard probes. Moreover, the sensitivity can be increased where needed to address the latter issue as well, since it is possible to vary the focusing of the ultrasonic beam. However, this is not enough in some critical applications, especially when high sensitivity is required with very weak signals or a high noise level. Some of the present authors proposed to exploit the pulse-compression (PuC) technique in combination with the use of two separate transducers, one transmitter (Tx) and one receiver (Rx), and chirp signals to increase the SNR of the measurement, and in turn the defect detection sensitivity [4,5].
In the present work, the method is improved by developing a numerical simulation tool for calculating DGS curves for an arbitrary Tx-Rx configuration working with both single-element and phased-array probes. The resulting DGS diagrams are used to evaluate the size of known flat bottom hole defects realized on a steel forging. Two different DGS analysis procedures are implemented and compared: one makes use of a narrowband excitation chirp signal and a single-frequency DGS analysis, thus replicating the conditions of a standard single-frequency DGS analysis made in PuE with a single narrowband probe; the other makes use of a broadband chirp signal and a simultaneous multi-frequency DGS analysis. The paper is organized as follows: the basic theory of PuC is summarized in Sect. 2; in Sect. 3, the multi-frequency DGS analysis is introduced; in Sect. 4, experimental results and a comparison between single-frequency and multi-frequency DGS analysis are reported. In Sect. 5, some conclusions and perspectives are drawn.

Pulse Compression Basic Theory

Flaw detection through ultrasonic inspection consists in measuring the impulse response h(t) of the sample under test (SUT) with respect to a mechanical wave excitation. In the standard PuE method, the impulse response h(t) of the system under inspection is estimated by exciting the SUT with a short pulse δ̃(t) and then recording the system's response ĥ(t) = δ̃(t) * h(t), where "*" is the convolution operator. If δ̃(t) is short enough to cover uniformly the whole bandwidth of the transducers, the approximation can be considered very close to the true expected signals. On the contrary, in a PuC measurement scheme an estimate ĥ(t) of h(t) is retrieved by: (I) exciting the system with a coded signal s(t); (II) measuring the output of the coded excitation y(t) = s(t) * h(t); (III) applying the so-called matched filter ψ(t) to the output [6]. The result obtained at the end of the PuC procedure is mathematically described in Eq. (1):

ĥ(t) = y(t) * ψ(t) = h(t) * [s(t) * ψ(t)] = h(t) * δ̃(t) ≈ h(t) (1)

where the "pulse-compression condition" ψ(t) * s(t) = δ̃(t) ≈ δ(t) has been exploited. As in most applications, in the present paper the matched filter is defined as the time-reversed replica of s(t), ψ(t) = s(−t), so that δ̃(t) turns out to be the autocorrelation function of s(t). The PuC condition can therefore be assured by every waveform having a δ-like autocorrelation function. An extended literature is available on this topic (see for example [4,7-9]), even if, as a matter of fact, two main classes of coded excitations are of practical interest and used for various applications, among which NDT: (a) frequency-modulated "chirp" signals and (b) phase-modulated "binary" signals. Chirp signals are sinusoidal signals characterized by a time-varying instantaneous frequency f_ist(t). Chirps can be either linear, i.e. with a linearly varying instantaneous frequency, or non-linear. A notable example of a non-linear chirp is the exponential one, in which the instantaneous frequency changes exponentially with time and which is used in combination with pulse-compression to characterize a large class of non-linear systems [10]. More generally, a non-linear chirp can be defined to reproduce any arbitrary continuous spectrum depending on the application [11]. Phase-modulated coded excitations are instead derived from binary sequences having peculiar mathematical properties, e.g. Barker codes, maximum-length sequences (MLS), Golay codes, etc. [7-9,12,13].
In general such codes have a base-band spectrum, but they can be modulated by a single-frequency signal to obtain a band-pass one. Nevertheless, phase-modulated signals do not have the same flexibility in shaping the excitation spectrum as chirp signals. On the other hand, PuC measurement schemes based on binary phase-modulated coded signals can reach a perfect "pulse-compression condition", as in the case of Golay codes [12], or an almost perfect one, as in the case of MLS [13], whereas the chirp-based ones are affected by the sidelobes of δ̃(t) [14]. Unfortunately, in many NDT applications, and in the case of ultrasonic inspection of large reverberating structures as in the present case, neither the Golay-based nor the MLS-based PuC schemes are suitable. The former needs to combine two measurements to obtain a single impulse response estimate ĥ(t). In the case of large forgings, the ultrasound energy can reverberate inside the SUT for a long time, so a long pause must be placed between the two measurements to avoid any echo from the first excitation being collected, and wrongly interpreted, during the second measurement. The MLS-based PuC scheme instead requires a single measurement, but one in which a periodic excitation (at least two periods) must be used [8,9,14]. Under this condition, the reconstructed ĥ(t) is very close to h(t), provided that the excitation period is longer than h(t). Also in this case, the fact that the ultrasound energy can last for a long time inside forgings, and hence h(t) as well, makes the use of MLS-based PuC unsuitable for forging inspection. On the other hand, PuC schemes relying on chirp signals do not have these issues, and the worsening of the "pulse-compression condition" is largely counterbalanced by an extreme easiness and flexibility of application. Further, most NDT tests operate in the linear regime and use band-pass transducers and measurement systems. In these cases, the use of non-linear chirps provides few advantages with respect to linear ones. For these reasons, the most used waveform in NDT applications of PuC is the Linear Chirp (LC), which is the signal employed also for the present application. The LC is described by the expression [11]:

s(t) = A(t) · sin[Φ(t)] = A(t) · sin[2π(f_1·t + ((f_2 − f_1)/(2T))·t²)], t ∈ [0, T] (2)

where T is the duration of the chirp signal, f_1 is the start frequency, f_2 is the stop frequency, F_c = (f_1 + f_2)/2 is the centre frequency and B_% = (f_2 − f_1)/F_c is the percentage bandwidth of the chirp. Note that T and B are not constrained by each other, so that the duration of the LC can be increased arbitrarily. A(t) is a time-windowing function that modulates the amplitude of the chirp, and Φ(t) = 2π(f_1·t + ((f_2 − f_1)/(2T))·t²) is the chirp phase function that determines the instantaneous frequency of the signal according to f_ist(t) = Φ′(t)/(2π). In the case of a rectangular window, i.e. A(t) = A for t ∈ [0, T], the LC has a constant envelope and an almost flat power spectrum in the spanned frequency range f ∈ [f_1, f_2]. However, a non-constant and usually symmetric A(t) is used to reduce the sidelobes of δ̃(t), and in the present work the Tukey-Elliptical window is used [15]. Experimentally, while PuE requires only one transducer that acts both as Tx and Rx, PuC-based schemes usually employ two separate Tx and Rx transducers in pitch-catch configuration, which allows the excitation signal duration to be extended arbitrarily. The increased complexity of the PuC procedure is justified by the benefits provided in terms of both resolution and SNR enhancement.
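As a numerical illustration of Eqs. (1) and (2), the following Python sketch generates a windowed linear chirp, builds the matched filter as its time-reversed replica, and applies it to a synthetic two-echo response. It is a minimal sketch of the PuC principle only: a plain Tukey window stands in for the Tukey-Elliptical window of [15], and all signal parameters are illustrative, not the paper's experimental settings.

```python
import numpy as np
from scipy.signal import windows

fs = 100e6                    # sampling rate (Hz), illustrative
T = 50e-6                     # chirp duration
f1, f2 = 1e6, 9e6             # start/stop frequencies (broadband LC)
t = np.arange(0, T, 1 / fs)

# Eq. (2): windowed linear chirp; a plain Tukey window is used here
# in place of the Tukey-Elliptical window of the paper.
phase = 2 * np.pi * (f1 * t + (f2 - f1) / (2 * T) * t ** 2)
s = windows.tukey(t.size, alpha=0.2) * np.sin(phase)

# Synthetic impulse response: two reflectors (e.g. defect + backwall).
h = np.zeros(4096)
h[1200], h[3500] = 0.3, 1.0

# Forward model y = s * h, plus additive noise.
rng = np.random.default_rng(0)
y = np.convolve(h, s) + 0.5 * rng.standard_normal(h.size + s.size - 1)

# Eq. (1): matched filter = time-reversed replica of s.
psi = s[::-1]
h_est = np.convolve(y, psi)   # compressed output, ~ h * autocorr(s)

peak = np.argmax(np.abs(h_est))
print("strongest echo recovered at sample", peak - (s.size - 1))
```

The long coded excitation carries far more energy than a short pulse of the same amplitude, so after compression the echoes stand well above the noise floor even though the raw record y(t) looks noise-dominated; this is the SNR gain referred to in the text.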
Indeed, by using two distinct transducers, the excitation signal can be as long as the typical inspection time (a few milliseconds for steel forgings) and therefore thousands of times longer than the typical pulses used in PuE, whose duration is inversely proportional to the transducer bandwidth. This allows more energy to be delivered to the system, increasing in turn the SNR. Moreover, it was found that PuC is optimal for reducing both environmental and quantization noise, the latter introduced by the Analog-to-Digital Converter [12, 13]. Figure 1 summarizes the PuC procedure adopted in this work, while Fig. 2 reports an example of the PuC procedure applied to a benchmark forging sample, on which a flat bottom hole was realized and filled afterward with soldering, so as to simulate a small void close to the backwall surface.

Multifrequency DGS Analysis

As previously mentioned, the standard procedure for forging inspection relies on two pillars: the PuE method and the DGS diagram analysis. In the previous Section, a measurement procedure based on PuC, an alternative to PuE, has been introduced to increase the SNR of the measurement and hence the sensitivity of the inspection. In this section, the procedure for applying DGS analysis in combination with PuC is shown, together with a thorough explanation of how to implement a multi-frequency DGS analysis that can be beneficial when: (1) the optimal inspection frequency is not known, (2) the effect of the inspection frequency on the defect sizing must be considered and (3) an accurate analysis of the defect sizing capability is of interest. To accomplish these aims, first of all it is worth noting that after the application of the PuC procedure, the signals h̃(t) are very similar to those provided by PuE, so the standard DGS analysis can be applied to the h̃(t) provided that: (i) the DGS diagrams for the Tx-Rx configuration are known, (ii) the overall measuring system composed of a linear chirp excitation signal and the Tx-Rx probes exhibits a narrowband nature, centred around the frequency Fc of the employed DGS diagrams and with relative bandwidth B% ≈ 40%. Regarding point (i), a numerical tool was implemented that calculates the DGS diagrams by exploiting the Rayleigh-Sommerfeld integral model. The two probes were modelled as piston transducers, and full interference in the path Tx-defect-Rx was considered [16-18]. Regarding point (ii), usually the narrowband characteristic of the measurement system is guaranteed by utilizing narrowband transducers. This is because there is little control over the excitation power spectrum when using single-pulse or short-burst excitation. Note that 40% is the typical B% value of narrowband probes used for DGS analysis in PuE, e.g. the GE Krautkrämer B2S. Indeed, even if DGS diagrams are calculated by considering a single frequency value, in practical applications B% < 40% implies a low range resolution that could hamper defect detection. Conversely, the excited bandwidth can be shaped with great accuracy and almost arbitrarily by employing an LC as input signal, provided that the so-called time-bandwidth product of the chirp, T · B, is large enough, and this is the usual case for forging inspection. This means that the transducers can also be broadband, B% > 100%, but the generated ultrasonic spectrum is determined by the chirp signal.
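The time-bandwidth product mentioned above also fixes the compression gain that PuC offers over a single pulse of equal peak power; a commonly used rule of thumb (a standard figure of merit, not a formula stated in the paper) is a gain of 10·log10(T·B) dB. With the chirp parameters of Fig. 2:

```python
import math

T = 33e-6            # chirp duration [s] (Fig. 2)
Fc = 5e6             # centre frequency [Hz]
B = 1.80 * Fc        # 180% relative bandwidth -> 9 MHz absolute

TB = T * B                       # time-bandwidth product ~ 297
gain_db = 10 * math.log10(TB)    # ~ 24.7 dB compression gain
print(f"T*B = {TB:.0f}  ->  SNR gain ~ {gain_db:.1f} dB")
```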
Fig. 1 Block diagram of the PuC measurement procedure implemented. A windowed linear chirp is used as the coded excitation signal. The sample output signal y(t), on which Additive White Gaussian Noise is superimposed, is then filtered with the matched filter, corresponding in the present case to the time-reversed replica of the input signal. After the application of the PuC, an estimate h̃(t) of the impulse response, i.e. the reflectogram, is retrieved.

Fig. 2 Generated signal: a windowed linear chirp of duration equal to 33 µs, centre frequency of 5 MHz and bandwidth of 180% was used for the inspection of a steel cylinder, 300 mm long and 70 mm in diameter, shown in the picture. The acquired signal, combined with additive noise, is passed through the matched filter (implementing the pulse-compression procedure) to retrieve the h̃(t) of the sample. The SNR of the measurement process at the backwall echo and the defect locations can be estimated from the envelope of the retrieved h̃(t).

DGS diagrams have been developed so far for PuE inspections, i.e. for single-probe UT measurement setups. Hence, they cannot be applied directly to PuC UT inspections, due to the unique geometry of the dual-probe measurement setup. In the following sections, the results of the numerical tool proposed here for DGS calculation are compared to the standard DGS diagrams, and the procedure of multi-frequency DGS analysis is described as well.

Field Simulation and DGS Calculation

A numerical tool was developed to compute the DGS diagrams for arbitrary shapes and positions of both the transducers and the defects, thus allowing the DGS method to be applied in PuC procedures employing the pitch-catch configuration. The numerical tool solves the Rayleigh-Sommerfeld integral model of wave propagation by considering the "full-interference" case and the piston transducer model [16]. The Tx, Rx and defects are discretized, and the amplitude of the echo due to a defect is calculated by coherently summing up the contributions of all possible paths Tx-defect-Rx, i.e. considering the amplitude and phase of each contribution. The defect is assumed to behave as a perfect reflector, and the surfaces of the transducers are considered perfectly rigid. In Fig. 3, the DGS diagrams produced by using the numerical tool for a GE Krautkrämer B2S probe in PuE configuration are compared with those reported in the probe data sheet.

Fig. 5 Numerically evaluated AVG diagrams of a single probe (PuE inspection method) and dual probes (PuC inspection method). The two methods exhibit the same pattern of amplitude values for both the infinite and the disk reflectors in the far field. In the near field, the PuC sensitivity, and hence the amplitude of the DGS diagrams, is lower due to the partial superposition of the Tx and Rx beams.

The diagram obtained for the infinite reflector and the DGS diagrams for various disk reflectors in the far field matched perfectly with the vendor DGS curves. Instead, in the near field, the numerical DGS curves exhibit a series of local minima and maxima while the standard DGS diagrams are more regular. This phenomenon is well known, and it was first discussed by the Krautkrämer brothers [1, 18]: in numerical curves calculated at a single frequency, constructive and destructive interference is considered, and interference has a high impact especially for small defects in the near field. In practical applications, although sometimes visible, interference phenomena are less relevant due to the finite but non-null bandwidth of the transducers, which implies different interference locations for each frequency value, leading to a resultant averaging effect.
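This frequency-averaging effect is easy to reproduce numerically. The sketch below uses the classic closed-form on-axis pressure of a circular piston radiator as a stand-in for the full Rayleigh-Sommerfeld computation (element radius and wave speed are illustrative): averaging the single-frequency curves over a band around Fc smooths out the near-field minima and maxima, mimicking what a finite transducer bandwidth does in practice.

```python
import numpy as np

def onaxis_piston(z_mm, f_hz, a_mm=12.0, c=5900.0):
    """Closed-form on-axis pressure magnitude of a circular piston of
    radius a (longitudinal waves in steel, c ~ 5900 m/s)."""
    z, a = z_mm * 1e-3, a_mm * 1e-3
    k = 2 * np.pi * f_hz / c
    return 2 * np.abs(np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z)))

z = np.linspace(5, 500, 2000)                 # distance axis [mm]
single = onaxis_piston(z, 5e6)                # strong near-field oscillations

# Averaging over a band around Fc = 5 MHz flattens the interference
# pattern, as described for the datasheet DGS curves
freqs = np.linspace(4e6, 6e6, 41)
averaged = np.mean([onaxis_piston(z, f) for f in freqs], axis=0)
```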
Moreover, it must be considered that real defects are not perfect reflectors, and the propagating medium is not perfectly homogeneous. In addition, as a matter of fact, the equivalent defect size evaluation is made by considering not a single measurement point but a finite inspection area. So, to remove the interference effects in the probes' datasheets, the DGS diagrams in the near field are usually established by experiment or by performing frequency and spatial averaging over the theoretical single-frequency calculated curves. Figure 4 reports an example of this last approach and the related effect on destructive and constructive interference: the averaged diagrams are very close to the experimentally determined ones in the near field. Having verified the robustness of the tool for the calculation of DGS diagrams in PuE configuration, the DGS diagrams for two probes in pitch-catch used in the PuC procedures were evaluated. Figure 5 compares the single-probe DGS diagrams of B2S probes with the DGS curves obtained for a pair of two B2S probes placed side-by-side. Note that here the probes' case dimensions (45 mm diameter) have been considered.

Fig. 8 Example of application of the multi-frequency DGS analysis to data collected with broadband probes centred at 5 MHz (Olympus V108 Videoscan). The broadband signal passes through several narrowband filters and the outputs of these filters undergo a standard single-frequency DGS analysis. The defect detection capability depends on the frequency of the analysis; in this case, it is found to be maximised at 3 MHz.

Fig. 9 Sketches of the two forgings inspected: a cylindrical forging with a D = 3 mm FBB reference defect and b disk-section forging with a D = 1 mm FBB reference defect. Sample (a) was inspected with longitudinal waves and sample (b) with shear waves.

In the near field, the DGS diagrams for the pitch-catch configuration exhibit a lower sensitivity in PuC than in PuE, meaning that the backwall echo or a defect echo gives a signal of lower amplitude in the PuC case. However, the PuC and PuE sensitivity values become closer and closer as the distance of the backwall or defects increases, almost coinciding in the far field. In the case considered here, the sensitivity difference in the near field is very large. This is because the case of the probes is approximately twice the element size. Thus, the ultrasonic beams of the Tx and of the Rx superimpose significantly only beyond a certain distance, and the Tx-Rx beam superposition is strictly related to the DGS diagram values. Only the beams' sidelobes can superimpose at very small distances for a pair of probes whose elements are separated by some gap. This is depicted by the two-dimensional images reported in Fig. 6, which shows the sensitivity for both the PuE and PuC cases in the X-Z and Y-Z planes, wherein the sensitivity map is formed by visualizing pixelwise the amplitude of the echo signal due to a defect of 1 mm placed at the pixel position. For PuE, the Tx and Rx fields coincide, thus the sensitivity is proportional to the beam energy. In addition, for circular probes such as the B2S considered here, the sensitivity in the X-Z and Y-Z planes is the same. On the other hand, in the pitch-catch configuration, the superposition of the Tx and Rx field patterns is not symmetrical, and so the sensitivity is not either. By using fingertip-type probes, the centre-centre distance of the Tx-Rx pair can be minimized, yielding increased sensitivity in the near field.
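A minimal sketch of the coherent Tx-defect-Rx summation behind these sensitivity maps is given below. Tx, Rx and the disk defect are discretized into point elements and a simplified Rayleigh-Sommerfeld-type scalar kernel exp(ikr)/r is summed over all paths; geometry and element counts are illustrative, and the defect is treated as a perfect reflector as in the paper's model.

```python
import numpy as np

RNG = np.random.default_rng(0)

def disc_points(cx, cy, radius, n=256):
    """Crude discretization: n random points on a disc in the z = 0 plane."""
    r = radius * np.sqrt(RNG.random(n))
    th = 2 * np.pi * RNG.random(n)
    return np.c_[cx + r * np.cos(th), cy + r * np.sin(th), np.zeros(n)]

def echo_amplitude(tx, rx, defect_xy, depth, d=1e-3, f=5e6, c=5900.0):
    """Coherent sum over all Tx-point -> defect-point -> Rx-point paths."""
    k = 2 * np.pi * f / c
    defect = disc_points(*defect_xy, d / 2) + np.array([0.0, 0.0, depth])
    total = 0j
    for p in defect:
        r1 = np.linalg.norm(tx - p, axis=1)   # Tx elements -> defect point
        r2 = np.linalg.norm(rx - p, axis=1)   # defect point -> Rx elements
        total += (np.exp(1j * k * r1) / r1).sum() * (np.exp(1j * k * r2) / r2).sum()
    return abs(total)

# Pitch-catch pair: 12.7 mm (0.5 in) elements, centres 17 mm apart
tx = disc_points(-8.5e-3, 0.0, 6.35e-3)
rx = disc_points(+8.5e-3, 0.0, 6.35e-3)
sens = [echo_amplitude(tx, rx, (0.0, 0.0), z) for z in np.linspace(0.01, 0.5, 40)]
```

Sweeping the defect over a grid instead of a depth line yields exactly the pixelwise maps of Fig. 6, and the low near-field sensitivity of the pitch-catch pair emerges from the weak Tx-Rx beam overlap.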
Please note that the experimental results reported later were obtained by using fingertip-type probes.

Multi-frequency DGS Analysis Procedure

In this Section the multi-frequency DGS analysis procedure is introduced and quantitatively compared with the standard single-frequency one. Please note that the single-frequency DGS analysis should more properly be defined as a narrowband analysis: when the standard single-frequency analysis is considered, it refers to the use of narrowband signals, B% = 40% or 60%, exciting broadband transducers (VIDEOSCAN Tx-Rx pairs from Olympus). What are the main reasons for introducing the multi-frequency DGS analysis? One is the possibility of implementing DGS defect sizing at different frequencies simultaneously, so as to increase reliability and accuracy; the second is related to the fact that broadband signals exhibit a higher spatial/range resolution than narrowband ones, and this helps defect detection by reducing grain noise and the possible pile-up of different echoes. In addition, the use of broadband signals is doubly beneficial in PuC, since the larger the bandwidth, the higher the SNR increment, and the larger the bandwidth, the smaller the sidelobes of δ̃(t) [14].

Fig. 13 Results of multi-frequency DGS analysis calculated at the same frequencies as Fig. 12, but using a single broadband chirp excitation signal, employing a pair of Olympus V109 fingertip probes on a 3 mm diameter flat bottom bore defect.

At the same time, the use of broadband signals conflicts with the direct application of the standard DGS procedure, even though this has been recently proposed [19]. We therefore investigated whether the DGS analysis could be extended to the use of broadband signals and transducers. In this paper we propose and test the following procedure:
1. A broadband LC signal and a broadband Tx-Rx transducer pair are used;
2. The retrieved broadband reflectogram h̃(t) is passed through a set of narrowband filters centred at several analysis frequencies f_j (as in Fig. 8);
3. For each frequency f_j, the standard DGS analysis is applied (the physical attenuation is calculated and counterbalanced numerically, and the echo envelope is compared with the DGS diagrams).
The process of standard single-frequency and multi-frequency DGS analysis is further explained in the flow chart in Fig. 7. Moreover, an example of the procedure is depicted in Fig. 8, while in the following Section some results obtained with both narrowband and broadband LCs are reported. In perspective, this method could be further developed by considering a unique broadband DGS diagram. This can be done by considering the spectrum of the input signal and the frequency-dependent attenuation within the sample, thus providing the estimation of the defect size as well as the defect detection sensitivity by exploiting the SNR values and the range resolution of broadband data. It is worth noting that a similar approach has already been considered in calculating standard narrowband DGS diagrams to deal with the real bandwidth of the transducers [19].
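Steps 2 and 3 above amount to a filter bank followed by envelope extraction. A possible sketch (the filter design choices are mine, not the paper's): band-pass the reflectogram at each analysis frequency f_j with a fixed 40% relative bandwidth and take the Hilbert envelope, whose defect-echo peak is then compared against the DGS diagram computed at f_j.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def narrowband_envelopes(h_tilde, fs, freqs, b_rel=0.40):
    """Filter the broadband reflectogram at each analysis frequency f_j
    (relative bandwidth b_rel) and return the echo envelopes."""
    envs = {}
    for fj in freqs:
        lo, hi = fj * (1 - b_rel / 2), fj * (1 + b_rel / 2)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs[fj] = np.abs(hilbert(sosfiltfilt(sos, h_tilde)))
    return envs

# e.g. analyse a reflectogram sampled at 100 MHz from 2 to 5 MHz:
# envs = narrowband_envelopes(h_tilde, fs=100e6, freqs=[2e6, 3e6, 4e6, 5e6])
```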
Experimental Results

To test and compare the single- and the multi-frequency DGS analyses, experimental data were collected on two samples containing reference defects, see Fig. 9. The first sample, (a), was a cylindrical forging of diameter ~600 mm and length ~1450 mm with a flat bottom bore (FBB) defect of diameter D = 3 mm, length L = 20 mm and depth 1430 mm realized on the back flat surface. The second sample, (b), was a section of a disk sample with outer radius ~873 mm and inner radius ~147 mm, in which there was an FBB defect of D = 1 mm and L = 20 mm drilled on one of the cross-section radial flat surfaces, so that the normal incidence condition on its flat surface is attained by using a beam oriented at 45° with respect to the normal of the curved outer surface. Sample (a) was inspected with a pair of Olympus fingertip V109 VIDEOSCAN probes (active element diameter 0.5 in, central frequency 5 MHz, with centre-centre distance in pitch-catch of 17 mm), and with a pair of Olympus V108 VIDEOSCAN probes (active element diameter 0.75 in, central frequency 5 MHz, with centre-centre distance in pitch-catch of 35 mm). Both pairs were used without any wedge, so that longitudinal waves were generated within the sample and the beam axis had a 0° angle with respect to the normal of the inspection surface. Sample (b) was inspected with the same pair of Olympus fingertip V109 VIDEOSCAN probes and with a pair of Olympus C106 CENTERSCAN probes (active element diameter 0.5 in, central frequency 2.25 MHz, with centre-centre distance in pitch-catch of …). On sample (a), both single-frequency and multi-frequency DGS analyses were done. Figures 10 and 12 depict the results obtained using a narrowband linear chirp, B% = 40%, exploiting several measurements at different central frequencies. For the multi-frequency case instead (Figs. 11 and 13), the analysis was carried out by acquiring a single broadband signal and then applying the procedure illustrated in Fig. 7 (bottom). The results obtained by using multi-frequency DGS are almost identical to those achieved by standard narrowband DGS, and even more precise in some cases. This is illustrated in Fig. 14, which summarizes the values of the equivalent defect diameter estimated at various frequencies, by using both narrowband and wideband excitation signals. In subplot (a), the diameter values D_est estimated for the 3 mm defect by using a broadband excitation together with the multi-frequency DGS analysis are compared with the D_est values retrieved by using narrowband chirp signals with B% = 40% and B% = 60%, respectively. It emerges that the results of the multi-frequency analysis applied to a broadband signal at different central frequencies are more precise and accurate than those attained by using a narrowband signal for each frequency. The D_est values estimated for the 3 mm defect at various frequencies by using B% = 180% broadband chirps with different Fc's are compared and shown in subplot (b).

Fig. 17 Same multi-frequency AVG diagrams of the disk sample as Fig. 15, but in this case using a pair of the C106 probes and the broadband chirp signal centred at 2.25 MHz.

The results are almost identical in the three cases, demonstrating that the procedure is robust and does not depend on the effective bandwidth of the excitation, provided that the frequency range of the multiple DGS analyses is covered. The two aspects evidenced by Fig. 14 show that the proposed method provides precise results at different frequencies and a reduction of the inspection time. In addition, a better spatial resolution was also obtained employing the broadband excitation signal with respect to the narrowband one. This is shown in Fig. 15, where a zoom of the defect signal echo envelope and of the backwall echo, which were 20 mm apart, is depicted. The spatial width of the measured defect echoes is smaller when using broadband excitation.
Moreover, broadband signals allow reducing the sidelobes of the backwall echo, which can hide possible defects located at a very short distance from the backwall. For sample (b), only results attained by using broadband excitation are reported. Figure 16 illustrates the results of the pair of V109 probes placed on a 30° wedge; Fig. 17 illustrates the results attained with the pair of C106 probes placed on a 30° wedge. The defect is clearly detected, and its diameter is well estimated, except at the smallest analysis frequency, 1.5 MHz.

Conclusions

An application of the pulse-compression technique to the ultrasonic inspection of forgings is presented. By using broadband probes and broadband excitations, the standard DGS analysis of echograms was extended to perform multi-frequency DGS analysis on a single measurement. The procedure was compared with the use of narrowband signals, also in combination with pulse-compression. Results showed that the defect sizing capability is left unaltered by using broadband signals and then applying filters before the DGS analysis, while this approach can increase the precision and accuracy of the defect sizing and the spatial resolution, while lowering the inspection time. In addition, such a procedure allows the optimal inspection frequency to be established for a given measurement point. The results open the way to further developments in terms of inspection frequency optimization and to the development of a broadband DGS defect estimation procedure, which should benefit from pulse-compression in terms of both SNR gain and spatial resolution. Moreover, the use of such a procedure in combination with 3D imaging protocols (see for instance [20]) could further improve the defect characterization, while providing its location within the sample, which is also relevant in the evaluation of defect impact.
Defining CD4 T helper and T regulatory cell endotypes of progressive and remitting pulmonary sarcoidosis (BRITE): protocol for a US-based, multicentre, longitudinal observational bronchoscopy study
Introduction Sarcoidosis is a multiorgan granulomatous disorder thought to be triggered and influenced by gene–environment interactions. Sarcoidosis affects 45–300/100 000 individuals in the USA and has an increasing mortality rate. The greatest gap in knowledge about sarcoidosis pathobiology is a lack of understanding about the underlying immunological mechanisms driving progressive pulmonary disease. The objective of this study is to define the lung-specific and blood-specific longitudinal changes in the adaptive immune response and their relationship to progressive and non-progressive pulmonary outcomes in patients with recently diagnosed sarcoidosis.
Methods and analysis The BRonchoscopy at Initial sarcoidosis diagnosis Targeting longitudinal Endpoints study is a US-based, NIH-sponsored longitudinal blood and bronchoscopy study. Enrolment will occur over four centres with a target sample size of 80 eligible participants within 18 months of tissue diagnosis. Participants will undergo six study visits over 18 months. In addition to serial measurement of lung function, symptom surveys and chest X-rays, participants will undergo collection of blood and two bronchoscopies with bronchoalveolar lavage separated by 6 months. Freshly processed samples will be stained and flow-sorted for isolation of CD4+ T helper (Th1, Th17.0 and Th17.1) and T regulatory cell immune populations, followed by next-generation RNA sequencing. We will construct bioinformatic tools using this gene expression to define sarcoidosis endotypes that associate with progressive and non-progressive pulmonary disease outcomes and validate the tools using an independent cohort.
Ethics and dissemination The study protocol has been approved by the Institutional Review Boards at National Jewish Hospital (IRB# HS-3118), University of Iowa (IRB# 201801750), Johns Hopkins University (IRB# 00149513) and University of California, San Francisco (IRB# 17-23432). All participants will be required to provide written informed consent. Findings will be disseminated via journal publications, scientific conferences, patient advocacy group online content and social media platforms.
Strengths and limitations of this study
► This is the largest and most geographically diverse longitudinal bronchoscopy study with paired blood analysis performed to date in pulmonary sarcoidosis in the USA.
► The study design will use repeated measures of a comprehensive clinical phenotyping strategy over 18 months to ascertain pulmonary outcomes in patients with recently diagnosed disease, which will be a rich clinical and biological data resource for studies in pulmonary sarcoidosis.
► The experimental design will leverage single cell analysis to define immunophenotypic changes in disease-associated CD4 T cell populations over time and associate findings with progressive and non-progressive pulmonary sarcoidosis clinical phenotypes.
► The experimental plan will also measure transcriptional profiles of isolated CD4 T cell populations using RNA sequencing to enable construction of novel computational prognostic tools via expression deconvolution and association of findings with progressive and non-progressive pulmonary sarcoidosis.
► A potential methodological limitation is the reliance on flow cytometers at each centre for processing of freshly isolated samples, requiring significant optimisation.

INTRODUCTION
Sarcoidosis is a multisystem granulomatous disorder that is thought to be triggered and influenced by gene-environment interactions. 1 This condition affects at least 45-300/100 000 individuals in the USA 2 and has a rising mortality rate. 3 Sarcoidosis affects the lungs in more than 90% of cases. The greatest gap in knowledge in sarcoidosis management is a lack of tools to discern which subjects will develop progressive disease vs stable or remitting disease. In addition, effective advanced therapeutics are lacking, which is linked to a lack of understanding about the pathophysiology driving progressive disease. 4 While the environmental triggers are unknown, 5-8 the inflammatory response is characterised by a predominance of activation of CD4+ T cells, cytokine production, macrophage activation and granuloma formation. 9-18 The immunological determinants of clinical outcomes in sarcoidosis remain poorly understood. Sarcoidosis is traditionally characterised by enhanced production of interferon-gamma (IFNγ) by CD4+ T cells.
Importantly, recent novel findings identified that a majority of IFNγ-producing cells in the bronchoalveolar lavage (BAL) from sarcoidosis patients bear a Th17 phenotype and are more properly classified as Th17.1 cells rather than Th1 cells. 19 In addition, a robust expansion of IFNγ-producing Th17.1 cells was also identified in a European cohort of newly diagnosed sarcoidosis patients, together with an even more striking expansion of interleukin 17-producing Th17 cells in the peripheral blood. 20 Mouse models have revealed functional 'plasticity' in Th17 cells, demonstrating their ability to 'transdifferentiate' to an anti-inflammatory phenotype as defined by their transcriptional profile and regulatory capacity. 21 Other studies support that a deficit in regulatory T cell (Treg) capacity may permit disease activity in sarcoidosis. 22-25 In other words, disease activity in sarcoidosis may depend, in part, on T cell functional plasticity, resulting from transdifferentiation of Th17 cells into pathogenic Th17.1 cells versus anti-inflammatory Tregs, as shown in vivo. 21 As such, our study's hypothesis is that Th17.1 cells and Tregs play significant yet opposing roles in determining clinical outcomes in sarcoidosis, and that the Th17 cell transcriptional profile could reflect a multifunctional (eg, Treg) profile in patients with stable (non-progressive or improving) disease. The BRonchoscopy at Initial sarcoidosis diagnosis Targeting longitudinal Endpoints (BRITE) study is an NIH (National Institutes of Health)-sponsored, multicentre, longitudinal study that will measure changes in the immune response and their relationship to pulmonary outcomes in sarcoidosis patients early in their disease course. This is the first study to investigate longitudinal changes in paired blood and BAL T cell phenotypes. The study has three primary goals. In aim 1, we will measure changes in T cell lineage diversity (Th17.1, Th17, Th1 and Treg) that occur longitudinally as sarcoidosis subjects develop either progressive or non-progressive disease. In aim 2, we will define changes in T cell lineages during the disease course through identification of genome-wide transcriptional profiles of purified Th17.1, Th17, Th1 and Treg cells using low-input RNA sequencing (RNA-seq). In aim 3, we will construct computational tools from T cell lineage diversity and gene expression data to define and/or predict sarcoidosis endotypes using bioinformatic deconvolution and biostatistical methods. We will then extend the genomic findings from the NIH GRADS study (Genomic Research in Alpha-1 anti-trypsin Deficiency and Sarcoidosis) 26 by comparing our genomic T cell signatures with gene expression data from this independent cohort to predict T cell lineages and relate findings to the outcome definitions used in that study. These bioinformatic tools can also be applied to other genomic datasets from US and European cohorts. This study fills a major gap in our current knowledge by focusing on the relationship between longitudinal measures of pulmonary outcomes and T cell lineages using an integrated approach of phenotypic cell-sorting followed by RNA transcriptional profiling, in addition to the measurement of T cell phenotypes in the blood and BAL compartments. Gene expression data obtained in this study will be deconvoluted to create a pipeline of computational tools that can be applied to other datasets to predict disease phenotypes and clinical course.
These tools can also be used to assess clinical responses to emerging immunomodulatory therapies and to identify cellular signalling pathways that could be targeted with existing small molecule inhibitors or biological therapies.

METHODS AND ANALYSIS
Study setting
This study involves four academic universities with established clinical centres in sarcoidosis care and research: National Jewish Health (NJH), University of Iowa, University of California, San Francisco (UCSF) and Johns Hopkins University. Together, the centres represent a geographic spectrum from coast to coast. Each site is responsible for enrolling a similar number of participants (figure 1).

Figure 1 Organisational structure of the multisite study. Participants will be enrolled at four clinical centres: University of Iowa (UI), University of California, San Francisco (UCSF), National Jewish Health (NJH) and Johns Hopkins University (JHU). Each of these centres is responsible for collection of the clinical data and processing of biospecimens. Flow cytometric-sorted CD4 T cell populations will be shipped to the genomics facility at National Jewish Health for RNA isolation and next-generation sequencing.

Eligibility criteria
Participants will have been diagnosed within 18 months of enrolment according to criteria endorsed by the American Thoracic Society. 27 Additional eligibility criteria for enrolment are presented in box 1. Participants with fibrotic (chest X-ray, CXR stage IV) lung disease will be excluded, as this form of lung disease is considered long-standing.

Box 1 Eligibility criteria
Inclusion criteria:
► A histopathological diagnosis of sarcoidosis according to the American Thoracic Society/European Respiratory Society sarcoidosis statement, with the exception of Lofgren's syndrome, which is exempt from a pathological diagnosis.
► Diagnosis of sarcoidosis within 18 months of enrolment.
► Non-smokers (<10 pack-years and no smoking for 6 months prior to enrolment), including vaping and use of marijuana.
► No history of systemic immunosuppression within 2 weeks of enrolment.
► Chest X-ray Scadding stage 0, I, II, III (chest X-ray stage 0 requires lung biopsy confirming granulomatous inflammation).
Exclusion criteria:
► Unable to tolerate study procedures as determined by the site principal investigator.
► Pregnancy.
► Active smoker.
► Systemic immunosuppressive therapy within 2 weeks of enrolment, except for inhaled steroids.
► Scadding stage IV.
► Evidence of active cardiac, neurological or ophthalmic involvement with sarcoidosis requiring systemic immunosuppressive therapy.
► Diagnosis of beryllium sensitisation and/or disease, common variable immunodeficiency, mycobacterial and/or fungal infection or suspected hypersensitivity pneumonitis.
► On anticoagulation, except for aspirin.
► Active bacterial or viral infection, use of antibiotics or immunisation within 4 weeks of enrolment (may reassess for participation after the 4-week period).
► Known medical problems that could affect biological interpretations, including malignancy, autoimmunity, asthma and COPD (chronic obstructive pulmonary disease), and chronic viral infections (hepatitis B, C, herpes virus requiring suppressive medications, HIV (human immunodeficiency virus)).
► History of cancer other than presumed cured non-metastatic skin cancer.
► Currently institutionalised (eg, prison, long-term care facility).
► Other comorbid conditions that increase the risk of complications from bronchoscopy, including uncontrolled hypertension and/or diabetes, unstable coronary artery disease or decompensated heart failure, and active cardiac arrhythmias.

Study procedures
Study participants will undergo several procedures, as depicted in table 1. A history and physical examination will be performed at each study visit to assess disease status and changes in measurements and to confirm the safety of performing bronchoscopy with lavage. An anterior-posterior CXR and pulmonary function testing, including forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1) and a single-breath diffusing capacity for carbon monoxide (DLCO), will be performed at baseline and at 6-month intervals during follow-up. Standard bronchoscopy with lavage will be performed at baseline and 6 months later. At each 6-month study visit, participants will complete a blood draw and three patient-reported outcome measures to numerically quantify dyspnoea (University of California, San Diego Shortness of Breath Questionnaire 28), quality of life (12-item Short Form Survey (SF-12) 29) and fatigue (Patient-Reported Outcomes Measurement Information System 30).

Outcomes
The primary outcome at the completion of the study visits will be progressive pulmonary sarcoidosis. To define this outcome, we will use a composite measure of lung function and radiology measurements. For lung function, we will define progression by declines in lung function using thresholds that have precedent in the interstitial lung disease literature. 31 32 Specifically, participants will meet the definition of progressive lung disease if there is a ≥10% decline in FVC and/or FEV1 or a ≥15% decline in DLCO. For chest imaging, we will define progression by increasing opacities on chest radiography as determined by the interpreting radiologist/investigator. If a participant meets either the lung function definition or the CXR definition, they will be categorised as having progressive lung disease. Participants who do not meet these definitions will be categorised as having non-progressive disease. We will exclude other causes that may mimic progressive disease based on clinical presentation, a radiographic pattern consistent with sarcoidosis and analysis of BAL cell counts and differentials. BAL cultures will be performed based on clinical indications.
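The composite outcome above maps directly onto a simple decision rule. A sketch (variable names are illustrative, not taken from the study database):

```python
def is_progressive(fvc_pct_change, fev1_pct_change, dlco_pct_change,
                   cxr_worsening):
    """Composite outcome from the protocol: >=10% decline in FVC and/or FEV1,
    or >=15% decline in DLCO, or increasing opacities on chest radiography.
    Percentage changes are relative to baseline (negative = decline)."""
    lung_function = (fvc_pct_change <= -10 or fev1_pct_change <= -10
                     or dlco_pct_change <= -15)
    return lung_function or cxr_worsening

# e.g. a participant with a 12% FVC decline and a stable CXR:
# is_progressive(-12, -4, -8, cxr_worsening=False)  -> True
```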
Participant timeline
Study visits and timeline are depicted in figure 2. Procedures at each visit are outlined in table 1. At enrolment (V1) and at 6-month intervals (V3, V5, V6), participants will be assessed with the procedures of lung function, CXR and questionnaires during the 18-month follow-up period. The V2 and V4 visits focus on biospecimen collection for bronchoscopy with lavage and venipuncture and will occur within 1 month of V1 and V3, respectively (figure 2). The second bronchoscopy will occur approximately 5-6 months after the first bronchoscopy. Participants will also undergo venipuncture for blood collection at V5 and V6.

Table 1 Study procedures at each visit: informed consent (one visit); history and physical exam (all six visits); chest X-ray (three visits); spirometry (four visits); diffusing capacity (four visits); bronchoscopy with lavage (two visits); blood draw (four visits); nasal swab for COVID-19 PCR (all six visits); pregnancy test, if applicable (all six visits).

Figure 2 Study design with timeline. Participants will be assessed with the procedures of lung function, CXR and questionnaires at enrolment (visit 1 or V1) and at 6-month intervals (V3, V5, V6) during the 18-month follow-up period. The V2 and V4 visits consist of biospecimen collection for bronchoscopy with lavage and venipuncture and will occur within 1 month of V1 and V3, respectively. The second bronchoscopy will occur approximately 5-6 months after the first bronchoscopy. Participants will also undergo venipuncture for blood collection at V5 and V6. CXR, chest X-ray.

Sample size
Our primary outcome is to determine the distribution of T helper cells and Treg cells in paired blood and BAL samples over time and relate these measurements to clinical outcome (progressive vs non-progressive disease). Based on previous cohort studies from our group, 33 approximately 50% of sarcoidosis patients met the definition for progressive disease. Therefore, we assumed that 50% of the target enrolment sample (ie, 40 participants out of a total target enrolment of 80 participants) will be progressive.
With alpha=0.00625 (0.05/8 for four subsets, two time points), we will have >80% power to detect a 0.82 SD difference in a cell subset percentage between groups; if only 40% progress, we will have >80% power to detect a 0.85 SD difference between groups. For the gene expression analyses, we will have 80% power to detect a 0.95 SD change in expression between progressive and non-progressive participants, or an R² of 0.19 between expression and a quantitative clinical phenotype such as FVC, assuming alpha=0.001.
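This power statement can be checked approximately by treating the comparison as a two-sample t-test; a sketch with statsmodels (the protocol does not specify its exact calculation method, so this is an approximation):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# 40 progressive vs 40 non-progressive, alpha = 0.05/8 = 0.00625 (two-sided)
power = analysis.power(effect_size=0.82, nobs1=40, ratio=1.0,
                       alpha=0.00625, alternative='two-sided')
print(f"power ~ {power:.2f}")   # close to the >80% quoted in the protocol
```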
Data collection methods
Deidentified clinical data will be entered and stored in a central REDCap (Research Electronic Data Capture) database (https://www.project-redcap.org) that was created for the BRITE study and is hosted by UCSF. Data instruments include demographic information, organ involvement using a modified organ assessment tool, 34 medical history, medications, questionnaire responses, pulmonary function measurements, complete blood counts and details related to the biospecimen collection of blood and BAL. The organ assessment tool was developed by Delphi study methodology 34 and is a well-accepted survey to quantify organ involvement by sarcoidosis. 26 The questionnaires will be completed by participants on paper or through a REDCap survey. The dyspnoea and fatigue questionnaires have been validated in patients with interstitial lung disease 28 35 36 and sarcoidosis, 30 37 respectively. The SF-12 instrument is an extensively used quality of life assessment tool that is scored into a Mental Component Summary score and a Physical Component Summary score. 29 Bronchoscopy with lavage will be performed in two subsegments using up to a total of 480 mL sterile saline. Prebronchoscopy and postbronchoscopy assessment and monitoring will be performed as per each institution's standard of care. Each centre will follow recommended ATS/ERS guidelines for test performance for spirometry 38 and DLCO. 39

Biospecimen collection
A complete blood count will be collected at visits 2, 4, 5 and 6 (figure 2). Serum will be collected at visits 2, 4 and 6. PBMCs (peripheral blood mononuclear cells) will be collected at visits 2 and 4 and processed for flow cytometric and gene expression assays. PBMCs will also be collected at visits 5 and 6 and frozen. Excess PBMCs at visits 2 and 4 will also be frozen. Whole blood will be collected into DNA PAXgene tubes and frozen at visit 5, and will be collected into RNA PAXgene tubes and frozen at visits 2 and 6. All frozen samples will be stored for future use as specified in the IRB approvals at each site. Cells isolated from PBMC and BAL samples will undergo multiparameter flow cytometry with sorting of Treg, Th1, Th17.0 and Th17.1 populations. Sorted cells will be lysed with appropriate RNase-inactivating buffers and kept frozen at −80°C until processed. Vials of lysed cells will be shipped to NJH for RNA isolation and RNA-seq next-generation sequencing of each sorted population.

Data management
The data analyst will perform data validation in accordance with the study-wide protocol specifications. Quarterly data check programmes will be run to identify discrepancies in entered data. Study sites will be notified about data discrepancies, and query resolution will be performed and the database updated as appropriate. Discrepancies to be flagged will include inconsistent data, missing data, range checks and deviations from the protocol. There will be a final data validation check of the study-wide dataset, and the database will be locked after approval from all investigators.

Statistical analysis
Our primary outcome will be to determine the distribution of T helper cells and Treg cells at diagnosis and follow-up in paired blood and BAL samples and relate these measurements to clinical course (progressive vs non-progressive disease). The primary statistical analyses will use t-tests or rank sum tests, as appropriate, to compare the percentage of each subset (eg, Th17.1) between participants with progressive and non-progressive disease. The primary analysis will use the enrolment distribution to predict progressive vs non-progressive disease at follow-up; secondary analyses will consider the change in subset frequency between enrolment and follow-up. Exploratory analyses will test for association between the cell subset percentages and measures of disease severity (FEV1, FEV1/FVC, DLCO). Association between expression and clinical outcomes: to determine whether individual genes or combinations of genes are associated with clinical outcomes, a series of analyses will be computed using gene expression from BAL and PBMC samples. The main clinical outcomes are as defined in aim 1 (progressive vs non-progressive). To leverage the ability to infer more than cross-sectional correlation, primary analyses will test whether gene expression at enrolment predicts the primary clinical outcomes at follow-up. Secondary analyses will test whether the significant gene expression changes between enrolment and follow-up identified above are associated with the clinical outcomes at follow-up. Primary inference regarding disease aetiology will be based on gene expression from BAL samples among all sarcoidosis cases. To identify potential gene expression markers of our primary outcome, PBMCs will be used to identify genes whose expression is correlated with the expression of genes in BAL that predict the clinical outcomes. Exploratory analyses will consider other time point combinations and solely use PBMC gene expression to predict clinical outcomes. Differential expression between dichotomous groups (eg, progressive vs non-progressive) will be identified via negative binomial regression using the software DESeq2, 40 adjusting for age and sex. We will also use CIBERSORTx to deconvolve RNA-seq data to predict both cell-type proportions 41 (which we can validate against our flow analysis results) and to compute cell-type-specific gene expression for the cell types we can reliably separate from bulk.
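DESeq2 is R software; as a rough Python illustration of the same per-gene model (negative binomial regression on counts with age and sex covariates and library-size offsets), one could write the following. This is a simplification: DESeq2 additionally shares dispersion information across genes, whereas here the dispersion is a fixed assumption, and all variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def nb_test_one_gene(counts, meta, size_factors):
    """counts: per-sample read counts for one gene; meta: DataFrame with
    'progressive' (0/1), 'age' and 'sex' columns; size_factors: library-size
    normalisation factors."""
    X = sm.add_constant(pd.DataFrame({
        "progressive": meta["progressive"].astype(float),
        "age": meta["age"].astype(float),
        "male": (meta["sex"] == "M").astype(float),
    }))
    model = sm.GLM(counts, X,
                   family=sm.families.NegativeBinomial(alpha=0.5),  # fixed dispersion
                   offset=np.log(size_factors))
    res = model.fit()
    # effect of progression on expression, and its p value
    return res.params["progressive"], res.pvalues["progressive"]
```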
Association between continuous outcomes (eg, FVC) and read count will be tested via regression models with the clinical outcome as the dependent variable and read count as a predictor, also adjusting for age and sex. We will perform these tests using read counts for all genes at enrolment for the primary analyses; the secondary analyses will use the change in normalised read counts from enrolment to follow-up for only those genes differentially expressed overall between enrolment and follow-up. After the progressive versus non-progressive disease differential expression analysis, we will use the significantly associated genes to perform hierarchical clustering to determine the ability of the combination of genes to distinguish between progressive and non-progressive cases. We will perform similar analyses as described above in terms of identifying the pathways involved and the overlap among the significantly associated genes for all the clinical outcomes measured in this study. For aim 3, concordance between known and predicted cell-type proportions or cell-type-specific expression values will be determined by the Pearson correlation coefficient and root mean square error, to measure linear fit and estimation bias, respectively. 42 Empirical p values will be generated to test the null hypothesis that no cell types in the signature matrix are present in a given mixture, using a Monte Carlo sampling strategy for randomised mixture samples. 42 A similar strategy will be used to estimate p values for the cell-type-specific expression data. Clinical outcomes: similar to aim 2, we will first test whether gene expression at specific genes or combinations of genes is associated with clinical outcomes. In this aim, we will be able to use the cell-type-specific expression data to test these associations. We will use the same approach in terms of primary analyses (expression in BAL at enrolment predicting clinical outcomes at follow-up) and secondary analyses (changes in BAL expression between enrolment and follow-up predicting clinical outcomes at follow-up). We anticipate choosing one or two cell-specific subset expression profiles rather than testing all subsets; we will use the results from aim 2 and aim 3 to determine the cell types that appear most likely to be driving any clinical associations. Exploratory analyses will evaluate all the cell subsets and PBMC expression. We will first test individual genes within each subset but will also consider using principal components analysis to reduce dimensionality, in addition to other refinement of the list of genes to test based on our results in aim 2. If we test all genes, we will use a false discovery rate of ≤0.001, similar to aim 2. If we have a smaller subset of genes to test in primary analyses, we will use a Bonferroni correction to determine the type I error rate. The constructed computational tool and identified endotypes from the BRITE study cohort will be tested with an independent cohort of samples (ie, the GRADS datasets).

Study timeline
Recruitment began in February 2020, with a projected completion of study visits by February 2023.

Patient and public involvement
It was not appropriate or possible to involve patients or the public in the design, conduct, reporting or dissemination plans of our research.

ETHICS AND DISSEMINATION
This multicentre study is being conducted in accordance with globally accepted standards of good practice, in agreement with the Declaration of Helsinki and with local regulations.
The study protocol has been approved by the Institutional Review Boards at National Jewish Hospital (IRB# HS-3118), University of Iowa (IRB# 201801750), Johns Hopkins University (IRB# 00149513), and University of California, San Francisco (IRB# 17-23432). All participants will be required to provide written informed consent. All manuscripts resulting from the study will be submitted to peer-reviewed journals. Findings will be disseminated via journal publications, scientific conferences, patient advocacy group online content and social media platforms.
Ecological drivers of bacterial community assembly in synthetic phycospheres
Significance
The regions surrounding living marine phytoplankton cells harbor communities of heterotrophic bacteria that play roles in carbon and energy flux in the microbial ocean and have global-scale carbon cycle implications. Yet, the drivers underlying bacterial community assembly remain unclear. In synthetic systems designed to mimic the chemistry and turnover time of natural phycospheres, bacterial community assembly could be predicted as a simple sum of the assemblages supported by each individual metabolite. For host phytoplankton cells in the ocean, this implies control over bacterial associates through excreted metabolites, a condition that could favor the evolution of marine microbial interactions and influence heterotrophic carbon processing in the surface ocean.

In the nutrient-rich region surrounding marine phytoplankton cells, heterotrophic bacterioplankton transform a major fraction of recently fixed carbon through the uptake and catabolism of phytoplankton metabolites. We sought to understand the rules by which marine bacterial communities assemble in these nutrient-enhanced phycospheres, specifically addressing the role of host resources in driving community coalescence. Synthetic systems with varying combinations of known exometabolites of marine phytoplankton were inoculated with seawater bacterial assemblages, and communities were transferred daily to mimic the average duration of natural phycospheres. We found that bacterial community assembly was predictable from linear combinations of the taxa maintained on each individual metabolite in the mixture, weighted for the growth each supported. Deviations from this simple additive resource model were observed but were also attributed to resource-based factors, via enhanced bacterial growth when host metabolites were available concurrently. The ability of photosynthetic hosts to shape bacterial associates through excreted metabolites represents a mechanism by which microbiomes with beneficial effects on host growth could be recruited. In the surface ocean, resource-based assembly of host-associated communities may underpin the evolution and maintenance of microbial interactions and determine the fate of a substantial portion of Earth's primary production.

phytoplankton-bacteria interactions | community assembly | phycospheres

The ecological interactions that occur between ocean phytoplankton and bacteria are among the most important quantitative links in global carbon and nutrient cycles. Marine phytoplankton are responsible for half of Earth's photosynthesis, and heterotrophic marine bacteria process 40 to 50% of this fixed carbon (1-3). Much of the bacterial consumption of recent photosynthate occurs through uptake of dissolved organic carbon released by host phytoplankton cells into the surrounding seawater by mechanisms such as leakage and exudation from living cells, as well as from mortality via senescence and predation (4). In the diffusive boundary layer immediately surrounding phytoplankton cells, termed the phycosphere, the opportunity for transfer of substrates to bacteria is enhanced. Compared to bulk seawater, where concentrations of labile metabolites are in the low nanomolar to picomolar range, phycosphere concentrations can reach into the hundreds of nanomolar and remain elevated above bulk seawater concentrations for up to hundreds of microns away (5).
The metabolites released by phytoplankton span broad chemical classes, including carboxylic acids, amino acids, carbohydrates, C1 compounds, and organic sulfur compounds (4, 6-8). Yet, the specific mix of metabolites present in a given phycosphere is variable and influenced by phytoplankton taxonomy (8, 9) and physiology (10, 11). The composition of the bacterial communities that consume phytoplankton metabolites impacts the rates and efficiencies of marine organic matter transformation (12-15), with the latter a key factor in ocean-atmosphere CO2 balance. Further, bacterial community composition has cascading effects on food web yield, governed by the susceptibility of bacterial taxa to protist grazing and viral infection (16, 17), and also impacts host biology (18, 19). The ecological mechanisms that influence the assembly of phycosphere microbiomes are not well understood, however, in part because of the micrometer scale at which bacterial communities congregate. It remains unclear whether simple rules exist that could predict the composition of these communities. Phycospheres are short-lived in the ocean, constrained by the 1- to 2-d average life span of phytoplankton cells (20, 21). The phycosphere bacterial communities must therefore form and disperse rapidly within a highly dynamic metabolite landscape (14). We hypothesized a simple rule for assembly in metabolically diverse phycospheres in which communities congregate as the sum of discrete metabolite guilds (22). Each guild is hypothesized to support one to many bacterial species that exploit a metabolite resource either directly or indirectly via intermediate products, and these single-resource guilds form the building blocks for the mixed-resource communities. To the extent that communities assemble in this additive fashion, composition is controlled by the host phytoplankton through the metabolites they release. Deviations from the predictions of this strict resource-based model would indicate the influence of other drivers of community composition, particularly species-species interactions among the congregating heterotrophic bacteria. We tested this resource-based model using laboratory systems that mimic phycosphere metabolite composition and turnover time. The synthetic phycospheres contained from one to five compounds, organized into two suites representative of either diatom or dinoflagellate exometabolite mixtures (6, 8, 19). For each phytoplankton type, resource conditions with varying proportions of the five metabolites were inoculated with a natural assemblage of bacterial and archaeal cells concentrated from coastal seawater. Communities were transferred into fresh media once per day for 8 d, and their composition was assessed after the final growth cycle.
A weighted-sum (WS) model was used to test for resource-controlled community assembly by summing the taxonomic compositions of the single-metabolite guilds after weighting each by the growth it supports.

Results
Synthetic phycospheres were established in a 96-deep-well-plate format using metabolites that are synthesized by phytoplankton and support the growth of associated heterotrophic bacteria (8, 19). One suite of five metabolites represented molecules with higher release rates by the diatom Thalassiosira pseudonana compared to the dinoflagellate Alexandrium tamarense; these were xylose, glutamate, glycolate, ectoine, and dihydroxypropanesulfonate (DHPS) (8). The second suite represented molecules with higher release rates by A. tamarense compared to T. pseudonana; these were ribose, spermidine, trimethylamine (TMA), isethionate, and dimethylsulfoniopropionate (DMSP) (8) (Fig. 1A). The synthetic diatom phycospheres were composed of a single resource (five treatments), a mixture of all five resources (one treatment: A1), or mixtures of four resources (five treatments: A2, A3, A4, A5, and A6); in total, 11 different diatom resource treatments were replicated four times (Fig. 1B). The synthetic dinoflagellate phycospheres similarly contained single resources, a mixture of five (B1), or mixtures of four (B2, B3, B4, B5, and B6) resources (Fig. 1B). The total carbon concentration in each medium was 7.5 mM, established during pilot tests to maximize biomass for downstream sequencing while maintaining aerobic growth in the stirred wells. In natural phycospheres, metabolite concentrations are estimated to reach 240 nM carbon (5), with bacterial biomass scaled down proportionately compared to our synthetic phycospheres. Each metabolite set contained molecules of roughly similar compound classes: organic nitrogen compounds (glutamate and ectoine in the diatom media; spermidine and TMA in the dinoflagellate media), a sugar monomer (xylose; ribose), an organic sulfur compound (DHPS; isethionate and DMSP), and an osmolyte (ectoine; DMSP). The synthetic phycospheres were inoculated with a microbial community concentrated from coastal seawater (0.2- to 2.0-μm size fraction). To initiate the study, ∼6 × 10⁴ bacterial and archaeal cells were added to each well. Phycospheres were incubated for eight sequential 1-d periods, with a 5% inoculum at each transfer (4.3 doublings per growth-dilution cycle; Fig. 1B). The composition of the phycosphere communities after eight growth-dilution cycles (P8) was analyzed by 16S ribosomal RNA (rRNA) sequencing (23), with operational taxonomic units (OTUs) defined based on 97% sequence identity (Fig. 2). The number of OTUs contributing at least 0.1% of the community averaged 21 ± 4 in the P8 diatom phycospheres after normalizing to sequencing depth and 25 ± 5 in the P8 dinoflagellate phycospheres (Fig. 2), indicating that the synthetic phycosphere systems retained considerable diversity even after eight growth-dilution cycles. The seawater inoculum contained 137 (± 7) OTUs contributing at least 0.1% of the community. Archaea and Cyanobacteria together accounted for ∼12% of the inoculum but were less than 0.1% of the P8 communities.
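The 4.3 doublings per cycle quoted above follows directly from the 5% transfer fraction; a quick check:

```python
import math

inoculum_fraction = 0.05                  # 5% carried over at each daily transfer
fold_growth = 1 / inoculum_fraction       # 20-fold regrowth per cycle
doublings = math.log2(fold_growth)        # ~4.32 doublings per growth-dilution cycle
total = 8 * doublings                     # ~34.6 doublings over the 8 cycles
```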
Of the 50 most abundant 16S rRNA sequences from the synthetic phycospheres, 48% had >95% identity to a sequence obtained previously from bacterial associates of marine diatoms or dinoflagellates (SI Appendix, Table S1). Bacterial community assembly was generally coherent among the replicates of single-guild phycospheres (Fig. 2). In pairwise comparisons of the five diatom single-metabolite treatments, 7 of 10 pairs were statistically different (Adonis test, P < 0.05); the three exceptions were the DHPS, ectoine, and glutamate guilds, which were not statistically distinct from each other because of within-treatment variability. For example, in the ectoine communities a unique bacterial OTU (related to Aestuariispria) was present in only two of four replicates (Fig. 2). This OTU was not represented in the ∼100,000 16S rRNA amplicons sequenced from the seawater inoculum, suggesting bottlenecks in inoculation of low-abundance OTUs. Amplicon sequencing of the complete time series of ectoine communities (growth-dilution cycles P2 through P7) showed that the differences among replicates were already evident in the early transfers and stable communities were established by P8 (SI Appendix, Fig. S1). In pairwise comparisons of the five dinoflagellate single-metabolite treatments, all 10 community pairs were different from one another (Adonis test, P < 0.05). Chemical analysis of spent medium from P8 indicated that most single-metabolite communities fully consumed the substrate between transfers; the exceptions were the diatom metabolite DHPS (26% remained unused) and the dinoflagellate metabolites TMA and isethionate (40% and 20%) (SI Appendix, Fig. S2). Overall, we found that the bacterial communities that formed on different single resources were compositionally distinguishable.

Multiple-Resource Communities. Communities assembling on mixtures of metabolites were less distinct from one another than single-resource guilds, as expected from the overlap of substrates across mixtures. In pairwise comparisons among the six diatom mixed-metabolite treatments, only 1 out of 15 pairs was statistically distinguishable; among the dinoflagellate mixed-metabolite treatments, only 6 out of 15 were distinguishable (Adonis test, P < 0.05). Although we anticipated that the multiple substrates available in the metabolite mixtures would support higher community richness than single-substrate treatments, as expected from theory (26, 27) and experiments (28), this was not the case (24 ± 5 OTUs for mixed versus 25 ± 6 for single), nor were there differences in community diversity (Shannon index 1.25 ± 0.38 for mixed versus 1.10 ± 0.51 for single) (SI Appendix, Fig. S3). The OTUs with membership in single-resource guilds but missing in mixed-resource communities were biased toward Gammaproteobacteria (Neptunomonas, Marinomonas, or unclassified) and Flavobacteriia (Flavobacteriales) (SI Appendix, Table S2). Chemical analysis of spent medium from P8 indicated that mixed-metabolite communities fully consumed the substrates with the exception that TMA remained in 9 of the 20 replicates that contained it; these were from media types B1, B2, B4, B5, and B6 and averaged 21% of the initial TMA concentration (SI Appendix, Fig. S2).

Test of a Resource-Based Model for Community Assembly. We predicted the composition of mixed-resource communities according to a WS model.
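To make the WS computation concrete, the sketch below is a minimal illustration of the prediction procedure detailed in the next paragraph; it is not the authors' code, and all data and variable names are hypothetical. It combines single-resource guild compositions weighted by the growth each guild supports (OD600) and by the resource's concentration in the mixed medium, repeating the random replicate selection 1,000 times to build a predicted distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: relative abundances of 4 OTUs in each
# single-resource guild, with 4 replicates per resource (rows sum to 1).
guilds = {
    "xylose":    rng.dirichlet(np.ones(4), size=4),
    "glutamate": rng.dirichlet(np.ones(4), size=4),
    "glycolate": rng.dirichlet(np.ones(4), size=4),
}
# Growth (OD600) of each replicate of each single-resource guild.
od600 = {r: 0.05 + 0.1 * rng.random(4) for r in guilds}
# Carbon concentration (mM) of each resource in the mixed medium.
conc = {"xylose": 2.5, "glutamate": 2.5, "glycolate": 2.5}

def predict_once():
    """One WS prediction: pick a random replicate per guild, weight its
    composition by growth and resource concentration, and renormalize."""
    weighted = np.zeros(4)
    total = 0.0
    for resource, comps in guilds.items():
        i = rng.integers(len(comps))               # random replicate
        w = od600[resource][i] * conc[resource]    # growth x concentration
        weighted += w * comps[i]
        total += w
    return weighted / total                        # predicted relative abundances

# Bootstrap 1,000 predicted communities and summarize per-OTU intervals.
preds = np.array([predict_once() for _ in range(1000)])
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
for otu, (l, h) in enumerate(zip(lo, hi)):
    print(f"OTU{otu + 1}: predicted 95% interval {l:.3f}-{h:.3f}")
```

Renormalizing by the summed weights keeps the predicted abundances on the same relative scale as the observed 16S profiles, so observed and predicted compositions can be regressed directly against each other.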
To encompass the variability within treatments, one replicate from each single-resource guild was randomly selected to include in the predicted community. OTU abundances were normalized to the growth supported by the selected replicate (measured as optical density at 600 nm, OD600) and to the concentration of the resource in the mixed-metabolite medium. A distribution of the predicted relative abundance of OTUs was obtained by generating 1,000 such predicted communities (SI Appendix, Fig. S4). Comparisons of observed versus predicted OTU composition showed that resource availability is an effective predictor of community assembly (Fig. 3 and SI Appendix, Fig. S5), with the dominant taxa in multiple-resource communities successfully predicted based on weighted averages of taxa in single-resource guilds. The linear regression coefficients (R²) of observed versus predicted OTU abundance were statistically significant (P ≤ 0.001) for all mixed-resource communities. There was no difference in the strength of the relationship for communities assembling on five (A1 and B1) versus four (A2 through A6; B2 through B6) resources. The WS model predicts a 1:1 relationship between observed and predicted OTU abundance, and therefore a slope of 1.0 is expected if single-resource guilds fully explain the mixed-resource communities after eight growth-dilution cycles. For 7 out of the 12 mixed-resource communities, the 95% confidence interval of regression slopes of observed versus predicted abundances included 1.0, with slopes ranging from 1.4 to 0.9. All but two regression slopes were >1.0, indicating a trend toward higher observed abundances for some OTUs than predicted by the WS model (Fig. 3 and SI Appendix, Fig. S5). To address the systematic positive deviation from a 1:1 slope, individual OTUs were classified as significant over- or underperformers in the observed communities if the observed mean ± 1 SE fell above or below the 95% confidence intervals of the predicted distribution (SI Appendix, Fig. S4). OTU0001 was overrepresented in 8 of the 12 mixed-resource communities (Fig. 3) and OTU0002 was overrepresented in 5 (SI Appendix, Fig. S6). Overall, slope deviations were driven largely by overperformance of more abundant taxa (SI Appendix, Fig. S5). Thus, while host resources provided good predictability of bacterial community assembly in the synthetic systems, other factors weakened the strict predictions of the WS model. In these simple synthetic systems, these factors could be ecological species interactions occurring among the heterotrophic bacteria, such as competition or mutualism, or additional aspects of resource supply, such as complexity or quality. To differentiate between heterotrophic bacterial interactions versus other resource-based factors in deviations from the WS model, the phycosphere system was reestablished with only one bacterial species as the inoculum. This eliminated the possibility of species interactions and allowed us to ask whether systematic OTU overperformance would nonetheless be observed. An isolate with 100% average nucleotide identity to a metagenome-assembled genome (MAG) of OTU0001, the most abundant and frequently overrepresented OTU in P8 communities (Fig. 3), was used as the single bacterium inoculum. Strain Ruegeria pomeroyi DSS-3 was isolated from seawater collected at the same site as this study's inoculum (29) and has gene content identical to OTU0001 except for 44 genes (out of 4,371 total; SI Appendix, Fig.
S7 and Table S3) (30). As for the seawater-inoculated phycosphere system, R. pomeroyi DSS-3 was introduced into the 22 resource conditions, growth was tracked over eight growth-dilution cycles, and final OD600 was predicted from the WS of OD600 in the single resources. The bacterium achieved significantly higher growth in 10 of the 12 mixed-resource conditions compared to WS predictions from single resources, indicating overperformance of this species in multiple-resource conditions in the absence of other heterotrophic bacterial species (Fig. 4).

Discussion

The release of metabolites from living phytoplankton cells was first described in the 1960s (7, 31, 32) and recognition of the chemical complexity of the phycosphere environment has steadily grown. Phytoplankton metabolites vary based on taxonomy (7–10, 33), physiological state such as nutrient limitation and stress (10, 34), and environmental features such as temperature (34, 35). This variable chemistry offers diverse niches for heterotrophic bacterioplankton but makes predicting the composition of colonizing communities challenging. Nevertheless, we find that simple linear combinations of species abundances on single resources accurately predict assemblies of mixed-resource communities. Similarly, the composition of microbial communities assembling from natural soil, plant, and seawater inocula is highly predictable from resource availability (36, 37). The success of a resource-based model in predicting phycosphere communities does not preclude important effects of interspecies interactions on community composition (38). Rather, it suggests that, if important, they operate largely within single-resource guilds rather than across them. Indeed, the diversity of OTUs making up single-resource guilds (SI Appendix, Fig. S3) suggests ample opportunities for within-guild interactions in the form of competition or mutualisms (39–41). An additional level of microbial interactions occurs in natural phycospheres between bacteria and living phytoplankton that is not represented in this synthetic system; such interactions have been shown previously to encompass both mutualistic and antagonistic relationships (39, 42, 43). The consistent skew of some bacterial taxa away from the 1:1 linear relationship predicted by the WS model could potentially arise from between-guild species interactions. However, our direct test with single-species phycospheres recapitulated the same skew (Fig. 4). Metabolic modeling approaches have suggested that adding resources promotes higher levels of species interactions among microbes (40, 44), but the short half-lives of phycospheres (20), both synthetic and natural, may provide fewer opportunities for between-guild interactions to develop. The observed overperformance of heterotrophic bacteria is consistent with previous evidence of fitness advantages when resources are available together rather than individually (45). Soil microbial ecologists have proposed a priming effect whereby energy from a more labile substrate enhances the ability of heterotrophic microbes to synthesize catabolic enzymes for a less-labile substrate (46). Microbiologists use the term "cometabolism" for a similar phenomenon, although they typically specify that the less-labile substrate does not support biomass (47), which is not the case in our study (SI Appendix, Fig. S8).
Alternatively, species overperformance could arise when high cell numbers achieved on one resource allow procurement of a larger fraction of a second resource that itself only supports slow growth. Finally, overperformance could result when resources share degradation pathways or are regulated through common mechanisms. All three scenarios require energy savings from the simultaneous availability of multiple resources. The first two also require that the less-labile substrate is slowly or incompletely used when provided alone; the three partially consumed metabolites in the single-resource guilds (DHPS, TMA, and isethionate; SI Appendix, Fig. S2) are candidates for such substrates. In a separate analysis of R. pomeroyi growth on single metabolites, the time to maximum specific growth rate (μ) for DHPS, TMA, and isethionate was longer than for the other seven substrates (averaging 58 h versus 25 h) and the maximum biomass achieved was lower (averaging 0.07 OD 600 versus 0.12 OD 600 ) (SI Appendix, Fig. S8). These data support the scenarios of enhanced utilization of less-labile resources when catabolized with more-labile ones. In an analysis of resource use by three other abundant strains isolated from the phycosphere communities (OTU0002, OTU0003, and OTU0006), all could grow at the expense of more than one of the provided resources (SI Appendix, Fig. S8). Finding that the dominant OTU in the synthetic phycospheres was the same bacterial species as isolated 20 y prior from the same inoculum site was unanticipated. We first ruled out contamination based on minor but consistent differences in genome content between the OTU0001 MAG and R. pomeroyi DSS-3 (SI Appendix, Fig. S7 and Table S3). In addition, OTU0001 was present in the seawater inoculum at levels sufficient to add ∼240 16S rRNA genes per well (0.4% of initial 16S rRNA amplicons; Fig. 2). Previous studies of R. pomeroyi DSS-3 in phytoplankton cocultures (8,19) guided the selection of the phycosphere metabolites used in this study (Fig. 1A), and the nearly identical OTU0001 MAG also has the genetic capability to transport and catabolize all 10 metabolites (SI Appendix, Table S4). The importance of phytoplankton resources in the assembly of bacterial associates is thus reinforced by the specific enrichment of a metabolically optimized bacterium from a diverse seawater inoculum. Some mechanisms of phytoplankton metabolite release, such as direct excretion, photosynthetic overflow, photorespiration, and redox balancing (4), are controlled by the host; others, including leakage and predation (48), are not host-controlled. Regardless of the release mechanism involved, resource-based bacterial assembly offers a means by which phytoplankton could influence the taxonomic diversity of associated bacteria, and this could explain the consistency of bacterial communities colonizing natural marine phycospheres (24). Phytoplankton have been shown to accrue a number of benefits from heterotrophic bacteria, for example access to vitamins for which many are auxotrophic (49) or to scarce trace elements (50) or oxidative stress enzymes (39). The ability to control the composition of associated bacteria through metabolite release could amplify these benefits, as has been shown for multicellular photosynthetic hosts such as vascular plants that enrich for beneficial rhizosphere communities via root exudates (51). 
For bacteria, labile compounds occurring in predictable combinations could enhance resource discovery (14) and drive acquisition and coregulation of catabolic pathways that are needed simultaneously. R. pomeroyi/OTU0001 and its marine relatives in the Rhodobacterales are frequently found in association with phytoplankton cells (24, 52, 53) (SI Appendix, Table S1), and their large and well-regulated genomes are proposed to have expanded in content coincident with the diversification of eukaryotic phytoplankton (54). Release of phytoplankton metabolites has been experimentally linked to bacterial-derived benefits in a few cases, such as the polysaccharide fucoidan triggering increased bacterial vitamin B12 production (55) and diatom-derived tryptophan initiating bacterial synthesis of the growth hormone indole-3-acetic acid (56). While our results set the stage for such mutualisms, they could nonetheless be explained without invoking selective mechanisms. For example, bacteria may take advantage of spatially and temporally clumped resources without any underlying coevolutionary relationship with phytoplankton producers. The simple logic of community assembly observed here suggests that changes in the composition of phytoplankton communities in a future ocean (57) will cascade to heterotrophic bacterial communities. Potential impacts on biogeochemical processes include changes in regeneration of macronutrients (2), formation of climate-active organic molecules (58), and mineralization of a major fraction of Earth's net primary production (3).

Methods

The bacterial composition of synthetic phycospheres was analyzed by 16S rRNA amplicon sequencing, performed on the Illumina MiSeq platform. Read analysis was carried out with the Mothur (v.1.39.5) pipeline following https://www.mothur.org/wiki/MiSeq_SOP (accessed 2018-02-08). To predict bacterial community assembly on mixed resources, a WS model was applied to the 50 most abundant taxa (>99.2% of reads). The relative abundance of each taxon on single metabolites, normalized by OD600 (SI Appendix, Fig. S9) and by the metabolite concentration in the medium, was used to calculate predicted relative abundances according to the equation

$$f_p = \frac{\sum_i f_i \, \mathrm{OD}_i}{\sum_i \mathrm{OD}_i}$$

where f_p is the predicted relative abundance of a taxon, f_i is the observed relative abundance of that taxon in single-metabolite treatment i, and OD_i is the measured optical density of that treatment (after the concentration normalization described above); the sum runs over the single metabolites present in the mixture. This step was bootstrapped 1,000 times to generate distributions of predicted relative abundances. Metabolites in spent media samples from the P8 phycospheres were quantified by 1H NMR spectroscopy using a NEO III (Bruker) with a 1.7-mm cryoprobe. Data were acquired by a one-dimensional 1H experiment. A unique peak region for each compound was defined. Full details of sampling and analysis are given in SI Appendix, Supplementary Methods.

Data Availability. DNA sequences are available in the NCBI Sequence Read Archive (project PRJNA553557), under accession nos. SRR9668153–SRR9668338 for 16S rRNA amplicons and SRR9668573 and SRR9668574 for MAGs.

ACKNOWLEDGMENTS. This work was supported by Simons Foundation grants 542391 to M.A.M. and 542385 to J.G. within the Principles of Microbial Ecosystems Collaborative. We thank J. Schreier and C. Smith for assistance, K. G. Ross for statistical advice, and the University of Georgia Complex Carbohydrate Research Center and Georgia Genomics and Bioinformatics Core for instrumentation and services. This is contribution 1083 of the University of Georgia Marine Institute.
Soybean Inoculated With One Bradyrhizobium Strain Isolated at Elevated [CO2] Show an Impaired C and N Metabolism When Grown at Ambient [CO2]
Soybean Inoculated With One Bradyrhizobium Strain Isolated at Elevated [CO2] Show an Impaired C and N Metabolism When Grown at Ambient [CO2] Soybean (Glycine max L.) future response to elevated [CO2] has been shown to differ when inoculated with B. japonicum strains isolated at ambient or elevated [CO2]. Plants inoculated with three Bradyrhizobium strains isolated at different [CO2] were grown in chambers at current and elevated [CO2] (400 vs. 700 ppm). Together with nodule and leaf metabolomic profiles, nodule N fixation and N exchange between organs were characterized through 15N2-labeling analysis. Soybeans inoculated with the SFJ14-36 strain (isolated at elevated [CO2]) showed a strong metabolic imbalance at the nodule and leaf levels when grown at ambient [CO2], probably due to an insufficient supply of N by nodules, as shown by 15N2 labeling. In nodules, due to a shortage of photoassimilate, C may be diverted to aspartic acid instead of malate in order to improve the efficiency of the C source sustaining N2 fixation. In leaves, photorespiration and respiration were boosted at ambient [CO2] in plants inoculated with this strain. Additionally, the free phytol, antioxidant, and fatty acid contents could indicate induced senescence due to oxidative stress and lack of nitrogen. Therefore, plants inoculated with the Bradyrhizobium strain isolated at elevated [CO2] may have lost their capacity to form an effective symbiosis at ambient [CO2], and this was translated to the whole-plant level through metabolic impairment.

INTRODUCTION

Atmospheric carbon dioxide concentration ([CO2]) has increased strongly since preindustrial times (∼280 ppm) to the 412.8 ppm registered in November 2020 (CO2.earth, 2020; www.co2.earth), and a further substantial increase is expected during this century. Carbon dioxide is the major greenhouse gas of anthropogenic origin and has been demonstrated to participate in climate change, whereby global temperature and precipitation patterns will be altered (IPCC, 2013). At the plant level, increasing [CO2] leads to increased photosynthesis while reducing photorespiration and, as a consequence, increases growth and seed yield (Ainsworth et al., 2002). Soybean (Glycine max L.) is the fourth most important food crop and the most cultivated legume, with 349 million tons produced in 2018 (FAOSTAT, 2020), and is the most traded agricultural commodity, accounting for over 10% of the total value of the global exchange. This legume is a rich source of high-quality proteins and oil and contains a considerable amount of carbohydrates, amino acids, and minerals that contribute to its nutritional value (Medic et al., 2014). Soybean physiological responses to elevated [CO2] (e[CO2]) have been extensively studied in both controlled and field environments (Ainsworth et al., 2002; Bishop et al., 2015). Increased photosynthesis due to CO2 fertilization leads to increases in leaf carbohydrate content (Ainsworth et al., 2004; Rogers et al., 2006), radiation use efficiency (RUE) (Sanz-Sáez et al., 2017), biomass production, and seed yield (Morgan et al., 2005), while decreasing stomatal conductance (Ainsworth and Rogers, 2007; Soba et al., 2020b) and leaf respiration (Tcherkez et al., 2008). In some crops, sink limitation and photosynthesis downregulation are sometimes observed in plants grown at e[CO2] as a consequence of sugar overaccumulation in the leaves (Ainsworth and Rogers, 2007; Gutiérrez et al., 2009).
However, soybean, like the rest of the legume crops, forms symbiotic relationships with bacteria of the Rhizobiaceae family, specifically with Bradyrhizobium japonicum, which provide access to atmospheric N2 through biological nitrogen fixation (BNF). According to Kaschuk et al. (2009), Bradyrhizobium bacteria can consume between 4 and 11% of the carbohydrates fixed through photosynthesis and, therefore, increase plant sink capacity and stimulate legume growth under e[CO2], avoiding C sink limitation (Ainsworth et al., 2004). On the other hand, BNF and carbohydrate consumption by the nodule are influenced in part by the strain of Bradyrhizobium (Kaschuk et al., 2009; Sanz-Sáez et al., 2015). In addition, the microbial population structure in the rhizosphere has been shown to be altered by e[CO2] (Wang et al., 2017). Therefore, there is great interest in the isolation and selection of Bradyrhizobium strains adapted to future environments, such as e[CO2], which hypothetically could be more efficient at N2 fixation than unselected or native strains. This hypothesis is reinforced by studies in alfalfa (Bertrand et al., 2007; Sanz-Sáez et al., 2012) and soybean (Bertrand et al., 2007, 2011; Prévost et al., 2010; Sanz-Sáez et al., 2015) which showed that selected strains could improve legume productivity in response to e[CO2] by fixing more N2 and hence consuming more C. Sugawara and Sadowsky (2013) studied different native B. japonicum strains from soybean nodules of plants grown and isolated at ambient [CO2] (a[CO2]) (390 ppm, strain SFJ4-24) and at e[CO2] (550 ppm, strain SFJ14-36), under fully open-field conditions at a FACE site at the University of Illinois at Urbana-Champaign. They observed that the strain SFJ14-36, isolated at e[CO2], significantly overexpressed genes encoding N2 fixation and nodulation in comparison with the strain isolated at a[CO2] and the control strain (USDA110). More recently, Sanz-Sáez et al. (2019) tested whether the same strain isolated under e[CO2] conditions (SFJ14-36) showed higher BNF and nodule number at e[CO2] compared with plants inoculated with USDA110, as suggested by the expression profiles published by Sugawara and Sadowsky (2013); however, no statistical differences were observed between plants inoculated with different strains when grown at e[CO2]. Therefore, the overexpression of genes observed by Sugawara and Sadowsky (2013) was not matched by a higher BNF. In addition, when the plants were grown at a[CO2], those inoculated with the strain isolated at e[CO2] showed lower plant fitness in comparison with USDA110. The authors hypothesized that the strains isolated at e[CO2] may be attracted only by root exudates produced at e[CO2], or that at a[CO2] there are metabolic restrictions in the nodules that reduce the fitness of the symbiotic relationship. To better understand the interaction of C and N metabolism in plants inoculated with different B. japonicum strains and grown under e[CO2] conditions that favor yield under climate change, it is necessary to study why some rhizobium strains respond better to e[CO2] than others, and how this affects plant fitness. Together with, and closely related to, these physiological responses, the plant metabolome is also perturbed under e[CO2] (Aranjuelo et al., 2015; Tcherkez et al., 2020).
The metabolome of the plant is the link between genotype and phenotype, allowing the study of changes in gene expression in response to the environment (Saito and Matsuda, 2008). This makes metabolomic profiling an attractive tool for phenotyping, providing a comprehensive perspective of the environmental changes that influence plants (Obata and Fernie, 2012). In opposition to a traditional metabolic analysis, in which the researcher focuses on a specific class of metabolites related to a metabolic route, the simultaneous profiling of metabolites from biosynthetically unrelated pathways has been demonstrated to increase our understanding of the molecular mechanisms that underlie plant responses to different stresses (Li et al., 2015). While previous studies have explored Rhizobium-legume specificity and the physiological response to e[CO2] (Das et al., 2017; Rabara et al., 2017), to our knowledge, no previous works have analyzed the metabolic profile of nodules and leaves of soybean inoculated with different B. japonicum strains grown under e[CO2] conditions. The aim of this work was to elucidate the metabolic features involved in Bradyrhizobium-soybean specificity under contrasting CO2 conditions. With this purpose, metabolic profiling analyses were carried out under two contrasting levels of CO2 using the same three B. japonicum strains isolated at different [CO2] (Sugawara and Sadowsky, 2013) and whose physiologic and photosynthetic parameters were studied by Sanz-Sáez et al. (2019).

MATERIALS AND METHODS

Plant and Bacterial Material

For this study, the same B. japonicum strains isolated by Sugawara and Sadowsky (2013) were used: SFJ4-24 (serogroup 123) was a strain isolated from nodules of soybean grown at a[CO2] (390 ppm of CO2), while SFJ14-36 (serogroup 38) was a strain isolated from soybean nodules grown at e[CO2] (550 ppm of CO2). As a control, we used the USDA110 strain because it has demonstrated high soybean performance at ambient and elevated [CO2] at the SoyFACE facility at the University of Illinois at Urbana-Champaign (Sanz-Sáez et al., 2015). The soybean cultivar (Glycine max cv. 93B15; Pioneer Hi-Bred) was the same as that used by Sugawara and Sadowsky (2013), in order to avoid problems of compatibility between the soybean cultivar and the Bradyrhizobium japonicum strains. These strains were provided by Prof. Michael Sadowsky (University of Minnesota; SFJ4-24 and SFJ14-36) and by the USDA-ARS Rhizobium Germplasm Resource Collection in Beltsville, MD (USDA110). The different B. japonicum cultures were prepared exactly as explained in Sanz-Sáez et al. (2019).

Plant Growth Conditions, Treatments, and Sampling

Soybean seeds were surface sterilized with sodium hypochlorite (1%) for 10 min and rinsed with sterile water until the smell of bleach disappeared. For the seed inoculation, 200 ml of liquid medium culture containing ≈5 × 10⁹ cells ml⁻¹ of each individual B. japonicum strain was centrifuged for 15 min at 5,100 × g to separate the bacteria from the media. The pellet containing the bacteria was resuspended in 2 ml of sterile deionized water containing 2% polyvinylpolypyrrolidone (PVPP), reaching a concentration of ≈10¹¹ cells ml⁻¹. Then 200 seeds were placed in a 500-ml sterile beaker containing the 2 ml of concentrated inoculum on a rotary shaker overnight, and were immediately planted afterwards. For the liquid inoculation, 1 L of liquid culture containing ≈5 × 10⁹ cells ml⁻¹ was centrifuged and resuspended as above, then adjusted to a final concentration of 10⁸ cells ml⁻¹.
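As a quick sanity check of the inoculum arithmetic above (a sketch under the stated volumes and cell densities; the helper function is ours, not part of the original protocol):

```python
# Check of the inoculum concentrations described above; volumes and
# densities are those stated in the text, the function name is ours.

def resuspend(cells_per_ml: float, volume_ml: float, final_volume_ml: float) -> float:
    """Cell concentration after pelleting a culture and resuspending it."""
    total_cells = cells_per_ml * volume_ml
    return total_cells / final_volume_ml

# Seed inoculum: 200 ml at ~5e9 cells/ml resuspended in 2 ml
# -> 5.0e11 cells/ml, i.e. the ~1e11 order of magnitude given in the text.
print(f"seed inoculum: {resuspend(5e9, 200, 2):.1e} cells/ml")

# Liquid inoculum: 1 L at ~5e9 cells/ml diluted to 1e8 cells/ml
# requires a large final volume, so "resuspended as above" implies dilution.
total_cells = 5e9 * 1000
print(f"final volume for 1e8 cells/ml: {total_cells / 1e8:.0f} ml")
```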
Five inoculated soybean seeds were planted in 10-L pots containing a 1:1:1 (v/v/v) mixture of peat moss, perlite, and vermiculite that was previously sterilized as described in Sanz-Sáez et al. (2015). One week after emergence, plants were thinned to one plant per pot. After the plants emerged, they were inoculated three times, at 2, 9, and 16 days after emergence (DAE), with the liquid inoculum of each B. japonicum strain (USDA110, SFJ4-24, and SFJ14-36). All plants were watered alternately with Evans N-free solution and distilled water to avoid salt accumulation (Moore, 1974). Soybean plants were grown in two growth chambers (Phytotron Service, SGIker) at the University of the Basque Country (UPV/EHU) from the beginning of the experiment, one maintained at a[CO2] (≈400 ppm CO2) and the other at e[CO2] (700 ppm CO2). Both chambers were maintained at 60/70% day/night relative humidity and 25/22°C day/night temperature, with a photosynthetic photon flux density (PPFD) of 1,200 µmol m⁻² s⁻¹ from 7:00 to 22:00 h, until developmental stage V5 (Fehr et al., 1971), when the day length was decreased by 2 h to induce flowering. Every 2 weeks, plants and CO2 treatments were rotated among and within chambers in order to reduce potential chamber effects. Harvest was carried out when plants reached the full-flowering developmental stage (R2). This stage was selected because flowering is the period when N2 fixation is supposed to be at its peak and nodules have not started to senesce (Rogers et al., 2006). At this moment, six plants per inoculation and CO2 treatment were harvested and separated into organ samples: leaves, stems, roots, and nodules. Each organ sample was oven dried at 65°C for at least 72 h and then weighed. The data are presented as the weight of each organ separately (g of dry weight plant⁻¹), stacked to give total dry weight. For metabolic analysis, samples of leaves and nodules from three plants were collected, immediately frozen in liquid nitrogen, and stored at −80°C until analysis.

Nitrogen Isotopic Composition Analyses

Twenty-four hours before harvest, the underground zone was enriched with 15N2, and after harvest the incorporation of labeled 15N [measured as stable 15N isotope composition (δ15N)] in nodule tissue was measured. This parameter has recently been used as a measure of nodule performance in soybean and other legumes (Soba et al., 2020a), showing that the higher the nodule δ15N, the greater the BNF. Three plants (one plant per pot) per inoculation and CO2 treatment were labeled, while three plants were used as unlabeled controls and harvested at the same time as the labeled plants. The 15N2 labeling was accomplished by injecting labeled gas into the root zone using handmade labeling pots following the procedure of Sanz-Sáez et al. (2015). Plants were grown in these pots for the duration of the experiment. On the night preceding the labeling experiment, the pots were sealed with plastic lids in order to avoid the escape of the labeled gas. To perform the 15N2 labeling, 10% 15N2-enriched gas was prepared in Supelco-Inert Foil Gas Sampling Bags (Sigma-Aldrich, St. Louis, MO, United States) by mixing 15N2-labeled gas enriched at 99% with ambient air (δ15N2 at 0‰).
Two hundred milliliters of the 15N2 (10%) gas mixture was injected into the labeling pots using a gas syringe (SGE; Sigma-Aldrich) 2 and 4 h after the lights were turned on, coinciding with the period of greatest N2-fixing activity (Molero et al., 2019). After 24 h, the plants were harvested and separated into nodules, roots, and leaves, then dried at 65°C for at least 72 h. The dried organs were weighed and ground to 1-mm particle size. The samples were analyzed in a Costech 4010 elemental analyzer coupled in continuous flow with a Thermo Fisher Delta V Advantage Isotope Ratio Mass Spectrometer (IRMS; Thermo Scientific, Waltham, MA, United States). The 15N/14N ratio in soybean nodule, root, and leaf material was expressed in δ notation (δ15N) following the equation described in Sanz-Sáez et al. (2019). The amount of 15N fixed during the day of the labeling experiment was calculated as the 15N fixed in each organ following the equation described in Bei et al. (2013).

Estimation of Rate of RuBP Oxygenation From Gas Exchange Measurements

Photorespiratory rates were estimated as the rate of ribulose-1,5-bisphosphate (RuBP) oxygenation (v_o) derived from the measured rates of CO2 uptake by the leaves according to Sharkey (1988) and von Caemmerer (2000), as described in Noctor et al. (2002):

$$v_o = (A + R_d)\,\frac{O{:}C}{1 - 0.5\,(O{:}C)}$$

where A is the measured rate of net CO2 uptake, R_d is non-photorespiratory CO2 release in the light (taken as 50% of the rate measured in the dark in each experiment), and O:C is the ratio of RuBP oxygenation to carboxylation. The ratio O:C was calculated as:

$$O{:}C = \frac{1}{S_{rel}} \cdot \frac{O_c}{C_c}$$

where S_rel is the specificity factor of Rubisco (taken as 110; Keys, 1999), and C_c and O_c are the chloroplastic concentrations of CO2 and O2, respectively. O_c was assumed to be that of water in equilibrium with air at 20°C (276 µM), and C_c was derived from C_i by taking a CO2 transfer conductance through the mesophyll (g_i) of 0.32 mol m⁻² s⁻¹ (Gillon and Yakir, 2000) and assuming that the rate of CO2 uptake affects C_c relative to C_i as in Ruuska et al. (2000):

$$C_c = C_i - \frac{A}{g_i}$$

where C_c was converted to a molar concentration by applying a CO2 solubility constant at 20°C of 0.0392 mol L⁻¹ (von Caemmerer, 2000). The gas exchange parameters A, R in the dark, and C_c were measured exactly as described in Sanz-Sáez et al. (2019).

Metabolic Analyses

Leaf and nodule samples (20 mg of powder from freeze-dried material) were ground in a mortar in liquid nitrogen, and then in 2 ml of 80% methanol, to which ribitol (100 µmol L⁻¹) was added as an internal standard. After centrifugation at 15,000 rpm for 15 min at 4°C, the supernatant was collected and centrifuged again. Then, the supernatants were spin-dried under vacuum and stored at −80°C until analysis. Relative metabolite content was determined by gas chromatography coupled to time-of-flight mass spectrometry (GC-MS) using a LECO Pegasus III coupled to an Agilent 6890N GC system. Sample derivatization and GC-MS analyses were carried out as described in Aranjuelo et al. (2015). Peak identity was established by comparison of the fragmentation pattern with available MS databases (NIST). The integration of peaks was performed using the LECO Pegasus software, and the automated peak integration was verified manually for each compound in all analyses. The quantification was normalized to dry weight (DW) to avoid any discrepancy due to changes in relative water content.
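The v_o estimation described in the RuBP oxygenation subsection above reduces to a few lines of arithmetic. The following sketch chains the three equations; the constants (S_rel, O_c, g_i, and the CO2 solubility) are those stated in the text, while the example gas exchange inputs are invented for illustration only.

```python
# Sketch of the v_o estimation described above. Constants are from the text;
# the example gas exchange values are illustrative, not study data.

S_REL = 110              # Rubisco specificity factor (Keys, 1999)
O_CHLORO = 276e-6        # chloroplastic O2 concentration, mol L-1 (20 degC)
G_I = 0.32               # mesophyll CO2 transfer conductance, mol m-2 s-1
CO2_SOLUBILITY = 0.0392  # dissolved CO2 (mol L-1) per unit mole fraction, 20 degC

def rubp_oxygenation(A: float, R_dark: float, Ci_ppm: float) -> float:
    """Estimate v_o (umol m-2 s-1) from net CO2 uptake A (umol m-2 s-1),
    dark respiration R_dark (umol m-2 s-1), and intercellular CO2 (ppm)."""
    R_d = 0.5 * R_dark                          # non-photorespiratory release in light
    Cc_ppm = Ci_ppm - A / G_I                   # chloroplastic CO2, ppm (Ruuska et al.)
    Cc_molar = Cc_ppm * 1e-6 * CO2_SOLUBILITY   # mole fraction -> mol L-1
    oc_ratio = (1.0 / S_REL) * (O_CHLORO / Cc_molar)  # O:C
    return (A + R_d) * oc_ratio / (1.0 - 0.5 * oc_ratio)

# Example: A = 20, dark respiration = 1.5, Ci = 280 ppm -> v_o of roughly 7
print(f"v_o = {rubp_oxygenation(20.0, 1.5, 280.0):.2f} umol m-2 s-1")
```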
Statistical Analysis

Statistical analyses were performed with IBM SPSS Statistics for Windows, Version 20.0 (IBM Co., Armonk, NY, United States). Differences among the three Bradyrhizobium strains and the two CO2 levels were evaluated by two-way analyses of variance (ANOVA), with strain and CO2 as fixed factors. All data were tested for normality (Kolmogorov-Smirnov test) and homogeneity of variances (Levene's test). For the ANOVA analyses, results were considered significant when p < 0.05. In order to reduce the multivariate data complexity and identify patterns between samples, principal component analysis (PCA) was performed for nodules and leaves, taking all 121 metabolites into account. Heat maps and PCA for the two organs were produced using XLSTAT 2008 (Addinsoft, Paris, France) software. Heat maps were built independently for leaves and nodules with the metabolites showing significant differences between B. japonicum strains, [CO2], and/or their interaction in each tissue. Clustering was based on Pearson's correlation coefficients among the metabolites. In this manuscript, the intensity of the red color is proportional to lower concentration values; conversely, the intensity of the green color is proportional to higher concentration values.

RESULTS

Biomass and Physiologic Measurements

At the R2 stage, strain, CO2 level, and their interaction were found to have a significant effect on the biomass of leaves, stems, and roots (data not shown). In contrast, nodule weight was only significantly affected by CO2 level. Elevated [CO2] significantly increased total biomass in all studied organs for the plants inoculated with the B. japonicum strain isolated at e[CO2] (SFJ14-36), but not for the reference strain (USDA110) or for the strain isolated at a[CO2] (SFJ4-24) (Figure 1). However, the final value at e[CO2] for the SFJ14-36 strain was similar to that of the SFJ4-24 strain. Plants inoculated with USDA110, the reference strain, showed the highest values at both CO2 concentrations (Figure 1).

FIGURE 1 | Effect of [CO2] on soybean biomass in plants inoculated with three different B. japonicum strains. Nodule, root, stem, and leaf biomass (g DW plant⁻¹) of soybean plants grown at a[CO2] (400 ppm) and e[CO2] (700 ppm) and inoculated with three Bradyrhizobium japonicum strains (USDA110, SFJ4-24, and SFJ14-36). Bars correspond to the mean ± SE of n = 6 of the biomass of each tissue. Results of statistics for total biomass (the sum of nodule, root, leaf, and stem) are shown (two-way ANOVA, P < 0.05). Different letters indicate significant differences (Tukey post hoc test, P < 0.05).

For nodule 15N isotope labeling (δ15N), the greatest value was found in nodules of plants inoculated with the USDA110 strain grown at a[CO2], which was significantly higher than in plants grown at e[CO2] (Figure 2). Interestingly, nodules of SFJ4-24 and SFJ14-36 did not show significant differences between CO2 levels, and δ15N was significantly greater in nodules of SFJ14-36 when compared with SFJ4-24 (Figure 2). For a more holistic vision, we calculated the amount of 15N per organ (mg 15N organ⁻¹) (Figure 3). The results showed a clear significant effect of CO2 for strain SFJ14-36 in the three studied tissues (leaf, root, and nodule), in contrast with the USDA110 and SFJ4-24 strains. The estimated rate of photorespiration (v_o) from gas exchange measurements showed significant differences between CO2 levels.
Whereas both SFJ4-24 and SFJ14-36 showed a significant decrease of photorespiration under e[CO2], USDA110 did not (Figure 4).

FIGURE 4 | Photorespiratory estimations from gas exchange measures. Estimated rate of RuBP oxygenation (v_o) in soybean leaves grown at a[CO2] (400 ppm) and e[CO2] (700 ppm) and inoculated with three Bradyrhizobium japonicum strains (USDA110, SFJ4-24, and SFJ14-36). Bars correspond to the mean ± SE of n = 6. Results of statistics are shown (two-way ANOVA, P < 0.05). Different letters indicate significant differences (Tukey post hoc test, P < 0.05).

Metabolite Patterns

Metabolite profiling was performed by gas chromatography coupled with time-of-flight mass spectrometry (GC-MS), and 121 different metabolites were identified by reference to their MS data. These metabolites were classified into eight chemical groups (organic acids, amino acids, sugars, fatty acids, polyols, nucleotides, secondary metabolites, and others), and their relative abundance is shown in Figure S1. The statistical analysis (PCA and heat map hierarchical clustering) of the leaf and nodule metabolomic profiles indicates that a clear differentiation between organs could be made (Figures 5, 6). While in the nodule metabolome the B. japonicum strain was found to have the main effect, with little impact of CO2 level (except for SFJ14-36) (Figure 5A), in the leaf metabolome CO2 level had the main effect (Figure 5C). Nodule PCA analysis revealed that the two principal components explained 73.5% of the total variation between strains and CO2 levels (Figure 5A). The six combinations of strains and CO2 levels were grouped by strain, without differences between treatments except for SFJ14-36, which showed significant differences between CO2 levels. The loading plot revealed that the discrimination of samples by PC1 (48.2% of the total variance) was in part due to sugars such as maltose and trehalose, whereas amino acids like lysine, methionine, threonine, leucine, glycine, and glutamine contributed to the separation of samples by PC2 (25.3% of the total variance). It is interesting to note that Krebs cycle-related metabolites are grouped in the lower-left quadrant of the loading plot (Figure 5B). On the other hand, leaf PCA analysis showed a more unrelated distribution of strains and treatments, but a clear separation between Bradyrhizobium strains grown at ambient and elevated [CO2] could be traced (Figure 5C). The two principal components explained 60.6% of the total variation between strains and CO2 levels. No distinctive pattern of metabolites was detected to explain this variation; nevertheless, important metabolites such as D-glucose-6-phosphate, glycine, serine, the polyamines putrescine and spermidine, and shikimic acid were important contributors to the separation of samples by PC2 (Figure 5D). Of the 121 analyzed metabolites, 64 were significantly affected by CO2, strain, or their interaction in nodules and 54 in leaves (Figure 6). In order to provide a better understanding of this variation, metabolite profiling representation (heat map) and hierarchical clustering analysis were undertaken separately for each organ (Figure 6).
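For readers who want to reproduce this kind of analysis outside XLSTAT/SPSS, the sketch below shows an equivalent PCA and Pearson-correlation clustering workflow in Python (scikit-learn/SciPy); the metabolite table here is randomly generated, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(1)

# Hypothetical metabolite table: rows = samples (3 strains x 2 CO2 levels x
# 3 replicates), columns = 121 metabolites (relative abundance, DW-normalized).
X = rng.lognormal(mean=0.0, sigma=0.5, size=(18, 121))

# PCA on standardized metabolite abundances, as in the profiling analysis.
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)          # sample coordinates for the score plot
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))

# Hierarchical clustering of metabolites using correlation distance
# (1 - Pearson r), mirroring the Pearson-based heat map clustering.
Z = linkage(Xs.T, method="average", metric="correlation")
print("first merge heights:", np.round(Z[:3, 2], 3))
```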
In nodules, the hierarchical clustering of the six combinations (two levels of CO2 and three Bradyrhizobium strains) formed three clusters, one for each strain, with little difference between CO2 treatments except for the strain SFJ14-36, where differences between CO2 levels were larger (Figure 6A), as seen in the PCA analysis (Figure 5A). On the other hand, the hierarchical clustering of the 64 significant metabolites allowed the grouping of metabolites into two major clusters. Cluster 1 consisted mostly of metabolites at higher concentration in plants inoculated with the USDA110 strain as compared with the other strains and could be subdivided into two subclusters; 1A: sugars (glycolysis pathway: sucrose, glucose, fructose, mannose, etc.) and 1B: N-related compounds such as ureides and urea cycle metabolites (allantoin, uric acid, ornithine, aspartic acid, citrulline, urea, etc.). On the contrary, cluster 2 was formed of metabolites at lower concentration in the USDA110 strain and mostly included Krebs cycle metabolites (fumarate, malate, citrate, isocitrate, α-ketoglutarate, etc.) (Figure 6A). Contrary to nodules, in leaves the hierarchical clustering of CO2 level and Bradyrhizobium strain combinations was grouped by CO2 level into two clusters (Figure 6B).

DISCUSSION

Physiologic Parameters

Our study showed that the [CO2] effect on soybean biomass was tightly dependent on the B. japonicum strain analyzed. While in SFJ4-24 and USDA110 changes between CO2 levels were not significant, soybeans inoculated with the SFJ14-36 strain showed an increase of 322% in their total biomass under e[CO2] (Figure 1). Both N2 fixation and photosynthetic performance were involved in these different responses. When we analyzed δ15N in nodules of SFJ14-36, used as a measure of the BNF-specific rate, no significant differences were found. However, significant differences in nodule, root, and leaf were found when the total amount of labeled 15N in this strain was calculated, which relates to the total N fixed by BNF and its translocation to other tissues (Figures 2, 3). The greater 15N content under e[CO2] in SFJ14-36 compared with a[CO2] contrasted with the lack of significant differences between CO2 levels for the other two strains in roots and leaves. This difference may be due to a better translocation of fixed N to the aboveground tissues, as shown by the nodule metabolite data (greater accumulation of aspartic acid (Asp) and allantoin in nodules of plants inoculated with SFJ14-36 at a[CO2]) (Figure 7). Briefly, in the plants inoculated with the strain isolated at e[CO2] (SFJ14-36), even with a similar BNF-specific rate at both CO2 levels, the greater nodule biomass at e[CO2] leads to a greater amount of fixed N, and this, together with a better translocation to the aerial parts, allows better biomass growth under e[CO2]. However, nodulation in soybeans inoculated with the SFJ14-36 strain seems to be restricted or delayed when grown at a[CO2], as previously shown by Sanz-Sáez et al. (2019), compromising whole-plant fitness due to lower N availability. None of these facts seems to occur in the model strain (USDA110) or in the one isolated at a[CO2] (SFJ4-24). In addition to BNF, gas exchange measurements also contributed to explaining the different responses of plants inoculated with different B. japonicum strains to e[CO2]. As previously shown by Sanz-Sáez et al.
(2019), photosynthesis was reduced when plants grew at a[CO2], but this reduction was significantly greater in the SFJ14-36 strain when compared with the other two. One of the most important parameters affecting C fixation through photosynthesis is photorespiration. Zhu et al. (2008) estimated a reduction of gross C3 photosynthesis efficiency by 48% at current [CO2] and temperature conditions, mainly associated with the consumption of fixed C and energy in the glycolate recycling process. As observed in Figure 4, both SFJ4-24 and SFJ14-36 reduced the estimated v_o (by 32.5 and 40.7%, respectively) when grown at e[CO2]; in contrast, values in USDA110 did not change. Therefore, in SFJ4-24 and SFJ14-36, the increase in the photosynthetic rate previously observed under e[CO2] could be due in part to a reduction in the photorespiratory rate, especially in SFJ14-36, as validated below with the metabolic data that show a decrease in intermediates of the glycolate cycle (glycolic acid, glycine, and serine, among others) (Figure 6) in leaves grown under e[CO2]. The decreased nodulation and reduced symbiotic fitness of the plants inoculated with SFJ14-36 and grown at a[CO2] could be due to changes in the quantity and/or quality of the phenolic substances excreted by the roots (Sugawara and Sadowsky, 2013; Wang et al., 2017). At e[CO2], roots excrete more, and different, phenolic compounds that attract rhizobium species (Sugawara and Sadowsky, 2013). As SFJ14-36 was isolated under e[CO2], effective nodulation may have been dependent on phenolic substances only emitted under those circumstances, whereas at a[CO2] these compounds may have changed (Sugawara and Sadowsky, 2013), reducing nodule formation and the amount of N that was fixed and fed to the plant. Another reason for the lower nodulation of plants inoculated with the SFJ14-36 strain at ambient [CO2] could be a lack of synergy between the Bradyrhizobium strain and the soybean cultivar. This is not likely, because this study used the same soybean cultivar (cv. 93B15; Pioneer Hi-Bred) used by Sugawara and Sadowsky (2013) when the strain was isolated at e[CO2] at the SoyFACE facility in Illinois. Together with physiologic parameters, plant metabolomics can give us valuable information about the status of the soybean-Bradyrhizobium symbiosis under current and future [CO2]. As shown, the [CO2] effect on physiology was dependent on the B. japonicum strain, and these differential responses are expected to be reflected in the accumulation of specific metabolites. Therefore, metabolomics is a valuable tool to decode the physiologic differences between plants inoculated with different strains and grown at different CO2 levels, helping us understand why the plant's symbiosis with the strain that was isolated at e[CO2] does not perform well at a[CO2], or why USDA110 is more effective than the other two strains at a[CO2].

Carbon and Nitrogen Metabolism

Nitrogen fixation in legume nodules is fueled by C fixed through photosynthesis. The current experiment showed that the amount of sucrose in nodules was affected by CO2 level and by Bradyrhizobium strain. Plants inoculated with USDA110 showed the highest sucrose levels (especially at e[CO2]); on the contrary, SFJ4-24 showed almost no sucrose at both ambient and elevated [CO2] when compared with USDA110 (Figure 7). This strain-specific sucrose content was at odds with the lack of statistically significant photosynthetic differences observed by Sanz-Sáez et al. (2019) between USDA110 and SFJ4-24.
Such results point to the fact that other reasons may exist behind this contrasting sucrose content between plants inoculated with the two strains. More specifically, the data suggest a greater use of sucrose in the leaf, as observed in the metabolites involved in glycolysis (glucose, fructose) in leaves of SFJ4-24 at a[CO2] (Figure 8), in accordance with the higher respiration rates observed in leaves of plants inoculated with this strain (Sanz-Sáez et al., 2019), which could limit sucrose export to nodules. On the other hand, nodules of plants inoculated with SFJ14-36 showed a greater amount of sucrose under e[CO2] than under a[CO2], probably due to a better photosynthetic rate at e[CO2]. Also, metabolites involved in glycolysis (fructose, glucose, and glucose-6-phosphate) were significantly affected by both Bradyrhizobium strain and CO2 level (Figure 7). These observations suggest a poor supply and/or rapid consumption of C for respiration and C skeletons in nodules of plants inoculated with SFJ4-24 and SFJ14-36 under a[CO2] (Aranjuelo et al., 2014). Surprisingly, when we observed the content of malate, the main dicarboxylic acid formed after glycolysis (Lodwig and Poole, 2003), and of other organic acids involved in the Krebs cycle, we saw that they were affected by Bradyrhizobium strain but not by CO2 level. On the other hand, malate content in SFJ4-24 nodules was similar to USDA110 and higher than in SFJ14-36. These two observations could suggest that, in nodules of plants inoculated with SFJ4-24, much of the glycolyzed carbon was directed to maintaining the nodule energy supply through malate production. Meanwhile, in USDA110, although malate content was similar to that observed in SFJ4-24, the preceding substrates (sucrose, fructose, and G-6-P) were at much higher concentrations, indicating that part of this C was diverted to the production of other compounds, such as the phenolic compounds that were at higher concentration than in the other strains (Figure 6), suggesting an active carbon metabolism in soybeans inoculated with USDA110.

FIGURE 7 | Effects of Bradyrhizobium japonicum strain and [CO2] on soybean nodule metabolism. Bar charts showing the relative abundance of metabolites at ambient [CO2] (white) and elevated [CO2] (black) in soybean plants inoculated with three different Bradyrhizobium japonicum strains. Red, yellow, and blue lightning signs indicate a significant effect of strain, CO2, and their interaction, respectively. Bold metabolites indicate metabolites analyzed that were found to have a significant effect of strain, CO2, and/or their interaction; italic metabolites indicate metabolites analyzed that were not found to have a significant effect of strain, CO2, or their interaction; metabolites neither bold nor italic were not analyzed.
This depletion in Krebs cycle intermediates could be replenished through the GABA shunt, allowing the synthesis of succinate that can enter the Krebs cycle (Saiz-Fernández et al., 2020), maintaining its high activity and the levels of malate observed with this strain. This role of the GABA shunt would be supported by the increased levels of GABA and polyamines (spermidine and spermine) observed in USDA110 under e[CO2] (Figure 6), as proposed by Saiz-Fernández et al. (2020) in corn plants. The low malate content in plants inoculated with SFJ14-36 also points to this C diversion in plants infected by this strain. A significant portion of the carbon entering the Krebs cycle in bacteroids is diverted, via anaplerotic reactions, into some amino acids: alanine (Ala), through pyruvate (Igamberdiev and Kleczkowski, 2018); aspartic acid (Asp), through oxaloacetate (OAA) (Melzer and O'Leary, 1991); and glutamic acid (Glu), through α-ketoglutarate (α-KG) (Streeter, 1987, 1992; Figure 7). Aspartic acid and Glu were by far the two most abundant amino acids in nodules; by contrast, the content of Ala was low (Figure 7). Aspartic acid is a common amino acid and, in nodulated soybean, has been shown to be a form of N transport, especially under N-stress conditions (Lima and Sodek, 2003). In our work, the highest content of Asp was found in nodules of plants inoculated with SFJ14-36 at a[CO2], especially when compared with the SFJ4-24 strain. However, if a great part of OAA is transaminated to Asp, the Krebs cycle will be shut down because malate and citrate cannot be synthesized (Hayes, 2001), explaining the low content of Krebs cycle metabolites observed in the SFJ14-36 strain. Therefore, in nodules, the observed differences between Bradyrhizobium strains in glycolysis metabolites and in the organic acids involved in the Krebs cycle may be due to a diversion of C to Asp production via anaplerotic reactions; as a result, fewer organic acids were available for energy production for N2 fixation in plants inoculated with the SFJ14-36 strain. In the case of USDA110, the diversion of C from the Krebs cycle to phenolic compounds could be replenished through the GABA shunt. Although soybean is a ureide exporter, previous studies have revealed that plants with impaired N2 fixation show an enhancement of Asp transport through the xylem sap, as a precursor of the products of NH4+ assimilation (Puiatti and Sodek, 1999; Lima and Sodek, 2003). Additionally, when photoassimilate transport to nodules is restricted, as seems to be the case in plants inoculated with the SFJ14-36 strain grown at a[CO2], plants may use C, N, and energy in a more efficient way through the carboxylation of phosphoenolpyruvate (PEP) into OAA, required for Asp synthesis, instead of entering the Krebs cycle through malate. This mechanism has been shown mainly in indeterminate nodules (pea); however, Silvente et al. (2003) showed that in ureide-exporting nodules (bean), aspartate aminotransferase (AAT) may act as an important switching enzyme in driving the metabolic flow of fixed N through amide or ureide synthesis, helping to explain the contrasting concentrations of Asp and Krebs cycle metabolites in nodules of plants inoculated with the SFJ14-36 strain at a[CO2] (Figure 7).
This means that, while at first the nodule is fixing CO2, later it is losing it. In this sense, nodule CO2 fixation may represent a C-saving mechanism, particularly under limited C availability, such as in plants infected with the SFJ14-36 strain. The Asp produced may be exported through the xylem or used for glutamic acid formation and, consequently, for the production of glutamine by glutamine synthetase (GS) and of ureides (Lima and Sodek, 2003). The high levels of uric acid and allantoin in nodules of plants inoculated with the SFJ14-36 strain at a[CO2] could support the idea of Asp as an intermediate metabolite in N assimilation under limited C supply to the nodule. However, more studies are necessary to prove this N assimilation route in determinate nodules. These results are in accordance with the 15N labeling data (Figures 2, 3A), which showed good SFJ14-36 nodule performance (measured as nodule δ15N) at both CO2 levels, similar to that observed in the reference strain (USDA110) at e[CO2]; however, due to deficient nodulation in SFJ14-36 at a[CO2], the total amount of N2 fixed by nodules was very low (Figure 3A). On the other hand, the accumulation of ureides, uric acid, and allantoin (Serraj et al., 2001; Ladrera et al., 2007), and of Asp (King and Purcell, 2005), has been proposed to take part in the modulation of symbiotic activity in soybean nodules, acting as an N-feedback mechanism. In our study, allantoin content, like Asp, was strongly affected by CO2 level, especially in plants infected with the SFJ14-36 strain, suggesting that the greater content of this N-transporting compound under a[CO2] could be related to a decline in shoot N demand (Serraj et al., 1999); however, an N-feedback effect by these compounds was not observed, since nodule performance (measured as δ15N) was not reduced at a[CO2] (Figure 2). Additionally, the levels of organic acids involved in the Krebs cycle (malate, fumarate, citrate, α-KG) were similar to those observed in plants with this strain grown at e[CO2], suggesting active nodule N fixation but poor N transport to aerial tissues in SFJ14-36 grown at a[CO2]. All these data could imply that the poor plant fitness observed in SFJ14-36 at a[CO2] was ultimately caused by N stress. The N deficiency was due to poor nodule establishment in terms of biomass, to insufficient C import from the leaves (low levels of sucrose), and to poor transport of nodule-fixed N, which affects N status at the whole-plant level, reducing C fixation through photosynthesis and thus total biomass. Nevertheless, these nodules consumed all the sucrose from the aerial part (Figure 7) and tried to fix N in a more efficient way through the carboxylation of PEP to produce Asp; however, more work is needed to confirm this last hypothesis.

Photorespiration Enhancement at a[CO2] Causes a Reorchestration of the Krebs Cycle and Glutamic Acid Production in Leaves

Carbon dioxide and O2 are competitive substrates for ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco), and their ratio at the site of catalysis affects the rates of ribulose-1,5-bisphosphate (RuBP) carboxylation and oxygenation (Farquhar et al., 1980; Rachmilevitch et al., 2004). For this reason, increasing atmospheric [CO2] is expected to promote photosynthesis (C reduction) over photorespiration (C oxidation), as shown in C3 plants (Lin and Wang, 2002; Woodward, 2002) and seen here with the RuBP oxygenation (v_o) estimation (Figure 4A).
Our results showed higher concentrations of the leaf metabolites involved in the photorespiratory pathway (glycolate, Gly, Ser, and glycerate) in plants infected with SFJ4-24 and SFJ14-36 grown at a[CO2], although the difference between CO2 levels was greater in SFJ14-36. Additionally, all four metabolites showed a similar trend in the combined response to Bradyrhizobium strain and CO2 level, as shown in the heat map where they were grouped in the same cluster (Figure 6B), indicating a clear relationship between them. These metabolic results were in accordance with the estimation of vo, where significant reductions in vo between CO2 levels were found for plants inoculated with the SFJ4-24 and SFJ14-36 strains but not for USDA110 (Figure 4A), and with previous work on soybean (Booker et al., 1997; Rogers et al., 2006; Ainsworth and Rogers, 2007). As in the case of photorespiration, some studies have shown that under a[CO2], respiration, and therefore the Krebs cycle, is up-regulated compared with plants grown at e[CO2] (Tcherkez et al., 2008; Soba et al., 2019a,b). Sanz-Sáez et al. (2019) showed that leaves of soybean inoculated with SFJ14-36 displayed a significant increase in respiration at a[CO2] compared with e[CO2], whereas plants inoculated with USDA110 and SFJ4-24 did not show respiratory differences between CO2 levels. Nevertheless, under a[CO2], respiration has been suggested to be inhibited by the high levels of mitochondrial NADH, from photorespiration, and ATP/ADP, from photosynthesis (Padan et al., 2005). On the contrary, as noted above, larger quantities of Glu are demanded with increased photorespiration rates and, therefore, an increase in 2-oxoglutarate coming from the Krebs cycle is expected. Consequently, a complex respiratory homeostasis between two opposing forces, mitochondrial energy requirements and photorespiratory Glu demand, is observed in leaves. Our data showed a general increase in the organic acids involved in the Krebs cycle (fumarate, succinate, citrate, and malate) under a[CO2] that was especially marked in the case of the SFJ14-36 strain, which is in accordance with the previous respiratory data shown by Sanz-Sáez et al. (2019) and with the enhanced Glu demand from photorespiration in SFJ14-36 under a[CO2]. Interestingly, α-KG was not significantly affected by the CO2 treatment in SFJ14-36, which may be because it is involved in the synthesis of Glu and Gln and is, therefore, diverted at a[CO2] to the production of these two amino acids. In summary, the observed enhancement of photorespiration in plants inoculated with the SFJ14-36 strain grown at a[CO2] alters the leaf respiratory homeostasis between the downregulation caused by more NADH and the upregulation driven by greater Glu demand. Our data suggest that at a[CO2], plants inoculated with the SFJ14-36 strain showed an upregulation of the Krebs cycle to compensate for the demand for C skeletons for Glu production. We hypothesize that the energy imbalance between production (through photosynthesis) and consumption (photorespiration and respiration) observed in SFJ14-36 at a[CO2] was compensated by greater fatty acid synthesis, as seen in the significant increase of all free fatty acids analyzed (Figures 6B, 8). This could also indirectly reflect a diversion of photoassimilates to the synthesis of organic acids caused by lower sucrose export to the nodule, especially at a[CO2].
Soybean Inoculated With the SFJ14-36 Strain Shows Prematurely Induced Leaf Senescence When Grown at a[CO2], Probably Caused by N Deficiency In addition to the ontogenic leaf senescence that occurs during normal aging, prematurely induced senescence can occur when plants are subjected to abiotic/biotic stresses (Troncoso-Ponce et al., 2013). One of the first events to occur is chlorophyll degradation; as a result, free phytol is produced, which can be used as a biomarker of the rapid loss of chlorophyll associated with chloroplast degeneration under stress (Lim et al., 2007). In our case, the free phytol content in plants inoculated with the SFJ14-36 strain at a[CO2] was five times greater than at e[CO2] and than in the other inoculation treatments (Figure 8), suggesting an additional stress in this treatment, probably due to a poor supply of N by the nodules. The resulting free phytol is highly toxic to proteins and membranes, so a large proportion of it is incorporated into α-tocopherol, fatty acid phytyl esters, and triacylglycerol (Peisker et al., 1989; Ischebeck et al., 2006; Figure 8). α-Tocopherol is the most important lipophilic antioxidant in leaves (Munné-Bosch, 2007), protecting membrane lipids against lipid peroxidation. In our work, the α-tocopherol content observed in leaves of SFJ14-36 plants at a[CO2] was not significantly higher than at e[CO2], despite the greater phytol content observed at a[CO2]. Two possible explanations could be: (1) poor α-tocopherol synthesis from phytol, in contrast to previous work (Lippold et al., 2012; von Dorp et al., 2015; Mach, 2015), or (2) a rate of tocopherol degradation that exceeds its synthesis. The latter could occur when the stress is too severe and lipid peroxidation consequently increases (Munné-Bosch, 2005), and it is likely what happened in our case. In addition to α-tocopherol, salicylic acid (SA) increased 6.3-fold at a[CO2] in comparison with e[CO2] in SFJ14-36 plants. Several investigations have indicated that SA increases the accumulation of phenolics (Kováčik et al., 2008) and enhances oxidative stress tolerance (Li et al., 2014) through stimulation of enzymatic and non-enzymatic antioxidant pathways (El-Esawi et al., 2017). In accordance with this, phenolic acids such as caffeic acid, ferulic acid, and coumaric acid were also upregulated at a[CO2] in SFJ14-36 plants (Figure 6B). Phenolic acids have been reported to be antioxidants implicated in the scavenging of free radicals (Ghasemzadeh and Ghasemzadeh, 2011). These results indicate that the antioxidant system was activated due to a severe oxidative stress specifically in leaves of soybeans inoculated with the SFJ14-36 strain at a[CO2], but not in the other treatments. In plants, increased production of reactive oxygen species and the accumulation of lipid peroxidation products have been associated with the oxidation of membrane lipids and membrane catabolism during environmental stresses or senescence (Barclay and McKersie, 1994; Berjak and Pammenter, 2008). The amounts of all the free fatty acids analyzed (capric, lauric, palmitic, palmitoleic, oleic, elaidic, arachidic, and linoleic acid) increased significantly at a[CO2] compared with e[CO2] in plants inoculated with the SFJ14-36 strain. In the other two strains, however, the values remained unaltered between CO2 levels.
During maturation, aging, and senescence, catabolic enzyme activities become activated, and free fatty acid levels have been found to increase considerably (Mishra et al., 2006). Together with free fatty acids, sterols (β-sitosterol and stigmasterol) were enhanced at a[CO2] only in SFJ14-36 plants, remaining unchanged in the other treatments. Sterols have also been found to be enhanced in senescing leaves (Duperon et al., 1984; Bouvier-Navé et al., 2010; Li et al., 2016), where they appear to participate in recycling the fatty acids released from senescing cell membranes, forming sterol esters for subsequent transport to other tissues (Holmer et al., 1973; Chen et al., 2007). All these metabolomic data suggest that in leaves of soybeans inoculated with the SFJ14-36 strain grown at a[CO2], chlorophyll degradation, and therefore lower photosynthetic ability, was likely occurring at the same time as thylakoid membrane degradation. This was previously proposed by Li et al. (2016) for leaf senescence; in our case, it was probably due to enhanced oxidative stress and N deficiency. However, this happened only when the plants were grown at current CO2 conditions, not at e[CO2], showing a soybean strain-specific interaction with atmospheric [CO2]. As N is essential for the synthesis of Rubisco and chlorophylls, in soybean plants inoculated with the SFJ14-36 strain, insufficient N fixation at a[CO2] (Figure 3C) decreased leaf N concentration and, as a consequence, photosynthetic rates decreased in plants inoculated with this B. japonicum strain (Sanz-Sáez et al., 2019). In this respect, some studies have shown that a sufficient supply of N through BNF increases photosynthetic rates and delays leaf senescence in soybean (Abu-Shakra et al., 1978; Kaschuk et al., 2010). Therefore, the induced leaf senescence observed in SFJ14-36 may be due to an insufficient supply of N from BNF to the leaves, as already observed in soybean (Egli et al., 1978). As stated before, the poor supply of N from nodules to leaves was likely due not to low nodule BNF efficiency (Figure 2) but to deficient nodulation, as evidenced by the significant reduction in nodule biomass in SFJ14-36 grown at a[CO2] compared with e[CO2]. CONCLUSION In this study, by integrating metabolomics with physiological measurements in soybean nodules and leaves, we revealed alterations in the metabolite response to CO2 fertilization in plants inoculated with different Bradyrhizobium strains. This analysis clearly demonstrated that soybeans inoculated with the strain isolated at e[CO2] (SFJ14-36), when grown at a[CO2], suffer changes in metabolic pathways that negatively affect plant growth and development, such as restricted photoassimilate content (sucrose, glucose, fructose). We hypothesize that under these conditions, a more efficient use of C and N occurred in the nodule through the carboxylation of PEP to produce Asp, instead of the decarboxylation of pyruvate to produce the Krebs cycle organic acids. In this way, nodule CO2 fixation may represent a C-saving mechanism, and Asp can be used as an N-export compound or to produce ureides. At the leaf level, plants inoculated with SFJ14-36 and grown at a[CO2] showed a complete rearrangement of processes such as photorespiration and the Krebs cycle.
This metabolic change at a[CO2] in plants inoculated with SFJ14-36, which was originally isolated at e[CO2], was probably due to poor nodulation caused by a change in plant root exudates between elevated and ambient [CO2], affecting the legume-bacteria interaction and, as a result, reducing N2 fixation and affecting N metabolism. In parallel, induced senescence likely occurred as a result of N deprivation, as shown by the enhanced levels of free phytol, free fatty acids, and other compounds related to chlorophyll and membrane degradation, which may be caused by oxidative stress due to the poor N status. The metabolism in leaves of the plants inoculated with the SFJ14-36 strain and grown at a[CO2] seems to shift from C assimilation to the catabolism of chlorophyll and of macromolecules such as fatty acids, driven by N deficiency. However, more research is needed with other strains isolated at e[CO2] and with more soybean cultivars in order to confirm these observations. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS DS: formal analysis and writing the original draft. IA: conceptualization, resource managing, review, and editing. UP-L: experimentation, data curation, review, and editing. AM-P: experimentation, data curation, review, and editing. AM-R: resource managing, experimentation, review, and editing. ML: conceptualization, resource managing, experimentation, supervision, review, and editing. AS-S: conceptualization, experimentation, data curation, formal analysis, project administration, supervision, and writing the original draft. All authors contributed to the article and approved the submitted version.
VPS33B suppresses lung adenocarcinoma metastasis and chemoresistance to cisplatin
VPS33B suppresses lung adenocarcinoma metastasis and chemoresistance to cisplatin The presence of VPS33B in tumors has rarely been reported. Downregulated VPS33B protein expression is an unfavorable factor that promotes the pathogenesis of lung adenocarcinoma (LUAD). Overexpressed VPS33B was shown to reduce the migration, invasion, metastasis, and chemoresistance to cisplatin (DDP) of LUAD cells in vivo and in vitro. Mechanistic analyses indicated that VPS33B first suppresses epidermal growth factor receptor (EGFR) Ras/ERK signaling, which further reduces the expression of the oncogenic factor c-Myc. Downregulated c-Myc expression reduces the rate at which c-Myc binds the p53 promoter and weakens its transcriptional inhibition; decreased c-Myc therefore stimulates p53 expression, leading to a decreased epithelial-to-mesenchymal transition (EMT) signal. NESG1 has been shown to be an unfavorable indicator of non-small-cell lung cancer (NSCLC). Here, NESG1 was identified as an interacting protein of VPS33B. In addition, NESG1 was found to exhibit mutual stimulation with VPS33B via reduced RAS/ERK/c-Jun-mediated transcriptional repression. Knockdown of NESG1 activated EGFR/Ras/ERK/c-Myc signaling and further downregulated p53 expression, which thus activated EMT signaling and promoted LUAD migration and invasion. Finally, we observed that nicotine suppressed VPS33B expression by inducing PI3K/AKT/c-Jun-mediated transcriptional suppression. Our study demonstrates that VPS33B, as a tumor suppressor, is significantly involved in the pathogenesis of LUAD. Introduction Lung cancer is the most commonly occurring cancer worldwide and is the leading cause of cancer-related death. 1 The number of deaths related to lung cancer alone exceeds the total number of deaths caused by the next three most prevalent cancers (colon cancer, breast cancer, and prostate cancer). 2 Lung cancer is classified into two histological types: small-cell lung cancer and non-small-cell lung cancer (NSCLC). As a subtype of NSCLC, lung adenocarcinoma (LUAD) is the most common pathologic type of lung cancer and has a poor prognosis. 3,4 To date, the molecular mechanisms underlying its initiation and development remain unclear. Carcinogenesis results from an imbalance between tumor-activating genes and tumor suppressors. 5-10 The VPS33B gene is a member of the Sec-1 domain family and encodes the human ortholog of rat VPS33B, which is homologous to the yeast class C VPS33 protein. 11 It is a core component of two sorting complexes: the class C core vacuole/endosome tethering complex and the homotypic fusion and vacuole protein sorting complex. 12-14 Previous studies have indicated that VPS33B is correlated with renal dysfunction, cholestasis syndrome, and platelet activation. 15-17 Two studies have reported that VPS33B is a tumor suppressor in hepatocellular carcinoma (HCC) and nasopharyngeal carcinoma (NPC). 18,19 However, the role and molecular basis of VPS33B in tumor metastasis have yet to be determined. NESG1 (CCDC19), expressed in the nasopharyngeal epithelium and trachea, was cloned and revised in previous investigations. 4,20,21 Reduced NESG1 is an unfavorable factor that promotes the pathogenesis of NPC and NSCLC. 22,23 However, the molecular basis of NESG1-mediated metastatic suppression in LUAD remains unclear. Nicotine, a major component of cigarette smoke, is extremely hazardous and causes various types of cancer, including gastric cancer, colorectal cancer, and lung cancer.
24-30 Approximately 90% of deaths caused by lung cancer are attributable to cigarette smoking. 31 Nicotine induces lung cancer cell proliferation and angiogenesis via nicotinic acetylcholine receptors and β1-arrestin (ARRB1) or IGF2 exocytosis. 32-34 Thus, nicotine is a significant factor in inducing the pathogenesis of lung cancer. However, whether nicotine modulates VPS33B has not yet been determined. The current study demonstrates that reduced VPS33B protein promotes the pathogenesis of LUAD. VPS33B is also identified as a downstream target that is negatively regulated by nicotine. VPS33B interacts with NESG1 to modulate the EGFR/Ras/ERK/c-Myc/p53-mediated EMT signal, thereby suppressing cell metastasis and chemoresistance to cisplatin (DDP) in LUAD cells. These data present the detailed mechanisms underlying the function of VPS33B as a tumor metastasis suppressor that prevents the pathogenesis of LUAD. Materials and methods Immunohistochemistry Two paraffin-embedded tissue arrays with different lung adenocarcinoma samples were purchased from Shanghai Outdo Biotech. Co., Ltd. For the use of these clinical materials for research purposes, prior consent was obtained from the patients, along with approval from the Ethics Committee of Shanghai Outdo Biotech. Co., Ltd. The tissue arrays were deparaffinized, and antigen retrieval was performed in citrate buffer for 3 min at 100 °C. Endogenous peroxidase activity and nonspecific antigens were blocked with a peroxidase blocking reagent, followed by incubation with primary antibody overnight at 4 °C. The antibody dilutions and sources are listed in Table S3. After washing, the sections were incubated with a biotin-labeled secondary antibody and subsequently with streptavidin-conjugated horseradish peroxidase. The peroxidase reaction was developed using a 3,3′-diaminobenzidine (DAB) chromogen solution in a DAB buffer substrate (Maixin, Fuzhou, China). The sections were visualized with DAB and counterstained with hematoxylin, mounted in neutral gum, and analyzed by bright-field microscopy. To evaluate the VPS33B staining, a semiquantitative scoring criterion was used in which both the staining intensity and the percentage of positive cells were recorded. A staining index (ranging from 0 to 7) was obtained from the intensity of VPS33B staining (0 = negative, 1 = weakly positive, 2 = moderately positive, 3 = strongly positive) multiplied by the proportion of immunopositive tumor cells (<10% = 1, 10% to <50% = 2, 50% to <75% = 3, ≥75% = 4). A score of 6 or greater was classified as indicating VPS33B overexpression. For scoring, two independent pathologists were blinded to the clinicopathological information. Cell culture The lung cancer cell lines A549 and H1975 were purchased from The Cell Bank of Type Culture Collection of the Chinese Academy of Sciences and maintained in RPMI 1640 medium supplemented with 10% newborn calf serum (ExCell, Shanghai, China). A549-DDP cell lines were constructed in-house by progressively increasing the concentration of DDP. The cells were maintained at 37 °C in a humidified atmosphere containing 5% CO2. Lentivirus production and infection Lentiviral particles carrying the VPS33B cDNA and GFP vector were constructed by GeneChem (Shanghai, China). A549 and H1975 cells were infected with the lentiviral vectors, and the levels of VPS33B were measured using reverse transcriptase qPCR and Western blot analysis.
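For illustration, the semiquantitative scoring described above can be expressed as a short function; this is our sketch of the stated rule (intensity score multiplied by proportion score, with a threshold of 6), not the authors' software.

# Minimal sketch (our illustration) of the semiquantitative IHC scoring
# described above: staining index = intensity score x proportion score,
# with a score of 6 or greater classified as VPS33B overexpression.

def proportion_score(percent_positive):
    """Map % immunopositive tumor cells to a proportion score (1-4)."""
    if percent_positive < 10:
        return 1
    if percent_positive < 50:
        return 2
    if percent_positive < 75:
        return 3
    return 4

def staining_index(intensity, percent_positive):
    """intensity: 0 negative, 1 weak, 2 moderate, 3 strong."""
    return intensity * proportion_score(percent_positive)

def is_overexpressed(intensity, percent_positive, threshold=6):
    return staining_index(intensity, percent_positive) >= threshold

# Example: moderate staining in 80% of tumor cells -> index 2*4 = 8.
print(staining_index(2, 80), is_overexpressed(2, 80))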
RNA isolation, reverse transcription, and qRT-PCR RNA isolation, reverse transcription, and qRT-PCR were performed in LUAD cell lines according to the instructions provided by TAKARA Co., Ltd. The specific qPCR primers for VPS33B, c-Jun, p53, c-Myc, and NESG1 are listed in Table S1. All experiments were repeated at least three times. In vitro cell migration and invasion assays For cell migration assays, 1 × 10^5 cells in 100 μL of serum-free medium were seeded onto a fibronectin-coated polycarbonate membrane inserted in a Transwell apparatus (Corning, USA). On the lower surface, 500 μL of RPMI 1640 medium containing 10% fetal bovine serum was added as a chemoattractant. After incubating the cells for 10 h at 37 °C in a 5% CO2 atmosphere, Giemsa-stained cells adhering to the lower surface were counted under a microscope in five predetermined fields (100×). All assays were repeated independently at least three times. The procedure for the cell invasion assays was similar to that of the cell migration assay, except that the Transwell membranes were precoated with 24 μg/mL Matrigel (R&D Systems, USA). In vivo metastasis in nude mice For in vivo metastasis assays, 50 μL of A549 or H1975 cells (5 × 10^6) stably overexpressing VPS33B (or an equal number of their respective control cells) were injected under the liver capsule of each mouse (5 mice per group). All mice were sacrificed 4 weeks later. The liver and lungs were subjected to fluorescence image detection, which visualized primary tumor growth and metastatic lesion formation. The mice were maintained in a barrier facility with high-efficiency particulate air (HEPA)-filtered racks and fed an autoclaved laboratory rodent diet. All animal studies were conducted in accordance with the principles and procedures outlined in the Southern Medical University Guide for the Care and Use of Animals. MTT cytotoxicity assay DDP (Qilu Pharmaceutical Co., Ltd, Jinan, China) was resuspended in PBS (0.5 mg/mL) and stored at -20 °C. Drug sensitivity was determined by MTT assay. The cells were seeded in 96-well plates in 100 μL of RPMI-1640 medium supplemented with 10% FBS at 2 × 10^3 cells/well. Once attached, the cells were treated with 2.5, 5, 10, 20, or 40 μM DDP (from the 0.5 mg/mL stock) and incubated at 37 °C in 5% CO2 for 48 h. Experiments were conducted three times. In vivo DDP sensitivity experiment To establish a LUAD mouse model, 6 × 10^5 VPS33B-overexpressing A549 cells or their controls were injected intraperitoneally into nu/nu mice (n = 40) aged 4 weeks. Tumors were allowed to grow for 3 d, and the animals were randomized into four groups treated with either normal saline (NS) or DDP injected intraperitoneally every 3 d (NC + NS, NC + DDP, VPS33B + NS, and VPS33B + DDP; n = 10/group). The survival time of the nude mice was observed, and survival curves were analyzed by Kaplan-Meier analysis. Transient transfection with small-interfering RNAs and plasmids siRNAs for VPS33B and NESG1 were designed and synthesized at RiboBio Inc. (Guangzhou, China). The sequences of each siRNA, mimic, and inhibitor are listed in Table S2. The VPS33B and NESG1 cDNAs were constructed (in-house) in pCMV vectors and contained an HA or Myc tag, respectively. The c-Jun, c-Myc, and p53 pcDNA3.1 plasmids were purchased from Vigene Biosciences (Shandong, China). A549 and H1975 cells were plated in 6-well and 96-well plates (Nest Biotech, China) at 30%-50% confluence 24 h prior to transfection.
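As an illustration of how drug sensitivity is typically summarized from such MTT data, the following sketch (ours, not the authors' analysis code) fits a Hill curve to inhibition rates at the DDP concentrations used above and reports an IC50; the data points are invented.

# Minimal sketch (our illustration): estimating an IC50 from MTT
# dose-response data by fitting a Hill (log-logistic) inhibition curve.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fraction of growth inhibition at a given DDP concentration (uM)."""
    return 1.0 / (1.0 + (ic50 / conc) ** slope)

dose = np.array([2.5, 5.0, 10.0, 20.0, 40.0])          # uM DDP, as above
inhibition = np.array([0.30, 0.49, 0.67, 0.81, 0.90])  # invented example

(ic50, slope), _ = curve_fit(hill, dose, inhibition, p0=[5.0, 1.0])
print(f"estimated IC50 = {ic50:.2f} uM (Hill slope {slope:.2f})")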
Subsequently, the siRNAs, miRNAs, or plasmids carrying different genes were transfected or co-transfected at a working concentration of 100 nM using TurboFect siRNA Transfection Reagent (Fermentas, Vilnius, Lithuania) and Lipofectamine 2000 Transfection Reagent (Thermo Fisher Scientific, Waltham, USA) according to the protocols provided by the manufacturers. Cells were collected after 48-72 h for further experiments. Western blot analysis, reagents, and antibodies Western blot analyses were performed as described in a previous study, with primary antibodies against VPS33B, EGFR, NESG1, PI3K, p-PI3K, AKT, p-AKT, ERK, p-ERK, K-Ras, c-Myc, E-cadherin, N-cadherin, Vimentin, Snail, and p53. β-actin and GAPDH were used as loading controls. Dilutions and sources of the antibodies are listed in Table S3. Images were captured using a Minichemi Chemiluminescence Imaging System (Beijing Sage Creation Science Co., Ltd., China). Luciferase reporter assay To evaluate the effects of c-Jun on the NESG1 and VPS33B promoters, as well as of c-Myc on p53 promoter activity, fragments containing the c-Jun binding sites in the NESG1 and VPS33B promoters, as well as the c-Myc binding sites in the p53 promoter, were cloned into the pGL3-Basic luciferase reporter vector. In addition, c-Jun binding site mutation vectors and c-Myc binding site mutation vectors were constructed. These vectors, the c-Jun plasmid, and the c-Myc plasmid were co-transfected into A549 and H1975 cells. The luciferase activity of these promoters was examined 48 h after transfection. Co-Immunoprecipitation (Co-IP) Co-IP was conducted using the Pierce Co-Immunoprecipitation Kit (Thermo Scientific, USA) according to the instructions provided by the manufacturer. Total proteins were extracted and quantified. A total of 3000 μg of protein in 200 μL of supernatant was incubated with 10 μg of anti-HA and anti-Myc antibodies for 12 h at 4 °C. The resin was washed and the sample was eluted in sample buffer, followed by boiling for 10 min at 100 °C. Immune complexes were subjected to Coomassie Brilliant Blue staining and Western blot analysis. Confocal laser scanning microscopy A549 cells were co-transfected with the VPS33B plasmid carrying an HA tag and the NESG1 plasmid carrying a Myc tag in a six-well plate. The cells were cultured overnight and then fixed with 3.5% paraformaldehyde and permeabilized with 0.2% Triton X-100 at room temperature. The cells were incubated with anti-HA and anti-Myc antibodies for 30-45 min at 37 °C. After incubation for 30-45 min at 37 °C with secondary antibody, the coverslips were mounted onto slides using a mounting solution containing 0.2 μg/mL DAPI. Images were captured with a Zeiss LSM 800 laser confocal microscope. Chromatin immunoprecipitation assay A ChIP assay was performed as previously described 38,39 using the ChIP Assay Kit (Millipore, Catalog: 17-371) to determine whether c-Jun binds the NESG1 and VPS33B promoters and whether c-Myc binds the p53 promoter in A549 and H1975 cells. Crosslinked DNA was sheared to 200-1000 base pairs in length by sonication and then subjected to immunoselection with an anti-c-Jun antibody. Finally, qPCR was conducted to measure the enrichment of DNA fragments at the putative transcription factor binding sites in these gene promoters using specific primers. The specific primers are listed in Table S1.
Electrophoretic mobility shift assay (EMSA) The binding activity on the promoter regions of p53, NESG1, and VPS33B was detected using an EMSA Kit (Roche, Switzerland) according to the instructions provided by the manufacturer. The probes used are listed in Table S4. Preformed c-Jun-recognized probes (Biosense Bioscience Co., Ltd., Guangzhou, China) were used as a positive control, and samples without nucleoprotein were used as negative controls. For competition experiments, a 100-fold excess of specific oligonucleotide competitor (unlabeled wild-type or mutant gene probes) was added to the binding mixture 10 min before the labeled probe was added. Visualized bands were analyzed using a BioSens Gel Imaging System (BIOTOP, China). The EMSA analysis was performed at Biosense Bioscience Co., Ltd. (Guangzhou, China). Yeast two-hybrid screening NESG1 bait screening was conducted using the Mate&Plate Library-Universal Human and Screening Kit (Clontech) according to the guidelines provided by the manufacturer. The NESG1 bait constructed in the DNA-binding domain (DBD)-containing pGBKT7 vector was transformed into the yeast haploid strain Y2H. Y2H was mated with yeast Y187 cells (which contain a human cDNA library) to form diploid yeast colonies. DNA was isolated from individual colonies after selection on media lacking leucine, tryptophan, histidine, and adenine (SD/-Trp/-Leu/-His/-Ade). The isolated yeast DNA was transformed into electrocompetent DH5α cells. NESG1-interacting partners were determined by sequence analysis using the pGADT7-Rec 3′ internal sequencing primer (upstream 5′-AGGCTGAGCTGCGAAAAAG-3′) and BLAST searches against the human genome (NCBI: http://blast.ncbi.nlm.nih.gov/Blast.cgi). Statistical analysis Statistical analyses were performed using SPSS ver. 20.0 (SPSS Inc., Chicago, IL, USA) and GraphPad Prism v5.0 (GraphPad Software, Inc., La Jolla, CA, USA). The data are expressed as the mean ± SD of at least three independent experiments. Differences were considered statistically significant at P < 0.05 by Student's t-test for two groups, one-way ANOVA for multiple groups, and a parametric generalized linear model with random effects. Chi-squared testing was used to examine the correlation of VPS33B expression with clinical features. Survival analysis was performed using the Kaplan-Meier method. All statistical tests were two-sided, and asterisks indicate statistical significance. Results Reduced VPS33B protein expression acts as an unfavorable prognostic factor Immunohistochemical staining was conducted to determine VPS33B protein expression in 155 LUAD cases with prognosis information (and in 42 lung tissues). The results showed cytoplasmic VPS33B expression in LUAD and matched lung tissues (Fig. 1A). VPS33B protein expression in LUAD tissues was significantly downregulated relative to the control lung tissues (P = 0.001) (Table 1). VPS33B protein expression also exhibited a significant positive correlation with the overall survival time of the patients (Fig. 1B) (P = 0.016). These data demonstrate that reduced VPS33B protein is an unfavorable factor promoting LUAD pathogenesis. VPS33B suppresses cell metastasis and chemoresistance to DDP To explore the biological role of VPS33B in LUAD, endogenous VPS33B expression was detected in H460 and H446 cells, whereas the other three lines (A549, SPCA1, and H1975) showed undetectable or very low levels of endogenous VPS33B; these cells were selected for subsequent experiments (Fig. S1A).
Lentivirus carrying the VPS33B cDNA was transduced into A549 and H1975 cells, which have low VPS33B mRNA and protein expression. Quantitative polymerase chain reaction and Western blot analysis showed markedly higher VPS33B mRNA and protein expression levels in the transduced A549 and H1975 cells than in their respective empty vector control cells (Fig. S1B and C). Transwell (Fig. 2A; Fig. S1D) and Boyden assays indicated that overexpressed VPS33B markedly reduced migration and invasion in A549 and H1975 cells relative to their respective controls. The in vivo metastasis assay indicated that intrahepatic dissemination and lung metastasis were significantly lower in nude mice injected under the liver capsule with Lv-VPS33B-GFP A549 and H1975 cells than in their respective control groups (Fig. 2C, Fig. S1F). In a subsequent study, we observed that A549 and H1975 cells stably overexpressing VPS33B had significantly increased sensitivity to DDP in vitro (Fig. S1G). The inhibition rates 48 h after treatment with DDP at different concentrations were evaluated for cells transfected with VPS33B and its empty control. The 50% inhibitory concentration (IC50) of DDP decreased from 5.13 μM to 2.42 μM in A549 cells after VPS33B transfection. A similar IC50 reduction, from 20.25 μM to 8.83 μM, was observed in H1975 cells (Fig. S1L). These decreases were verified in vivo using VPS33B-overexpressing A549 xenografts in nude mice. Kaplan-Meier survival analysis confirmed that DDP treatment (NC + DDP) or VPS33B overexpression (VPS33B + normal saline (NS)) alone extended the lifespan of the mice relative to that of the untreated normal controls (NC + NS). However, overexpressed VPS33B coupled with DDP treatment (VPS33B + DDP) prolonged survival markedly beyond that of the other three groups (Fig. 2D). Knockdown of VPS33B via a specific small interfering RNA (siRNA) in VPS33B-overexpressing A549 and H1975 cells showed that silencing VPS33B reverses its suppressive effect on EGFR protein expression (Fig. S2A) and on cell migration and invasion, as determined by Transwell (Fig. S2B) and Boyden assays (Fig. S2C) in LUAD cells. To clarify the molecular mechanism by which VPS33B functions as a tumor suppressor in LUAD, key regulators of the cell cycle and epithelial-to-mesenchymal transition (EMT) signals were analyzed by Western blot. VPS33B overexpression was found to downregulate c-Myc and c-Jun expression; in addition, it enhanced p53 expression in A549 and H1975 cells. In the EMT pathway, overexpressed VPS33B suppressed Snail, N-cadherin, and Vimentin and upregulated E-cadherin. We also observed that EGFR, p-ERK, and K-Ras levels decreased in VPS33B-overexpressing A549 and H1975 cells, whereas total ERK remained unchanged (Fig. 2E). Taken together, these data demonstrate that VPS33B acts as a metastasis suppressor by modulating the EGFR/Ras/ERK, p53, c-Myc, c-Jun, and EMT pathways. C-Myc directly suppresses p53 To explore whether c-Myc directly suppresses p53, c-Myc was first knocked down by siRNA (Fig. 3A). The data showed that the p53 mRNA expression level was markedly increased in A549 and H1975 cells (Fig. 3B). Notably, c-Myc was predicted to be a transcription factor binding the p53 promoter using the PROMO program and the UCSC online database (Fig. 3C). ChIP and EMSA assays indicated that c-Myc was bound to the p53 promoter (Fig. 3D-F).
We then found that c-Myc transduction markedly decreased the luciferase activity of the p53 promoter relative to that of the control group (Fig. 3G). These results confirmed that c-Myc directly suppresses p53 expression by binding to its promoter. EGFR antagonizes VPS33B to modulate Ras/ERK/c-Myc/p53 To investigate the role of EGFR in VPS33B-mediated suppression, EGFR was transfected into VPS33B-overexpressing LUAD cells. Increased EGFR levels restored the expression of the Ras/ERK/c-Myc signal and reduced p53 protein expression (Fig. S3A). Elevated EGFR increased the binding of c-Myc to the p53 promoter (Fig. S3B) and reduced p53 mRNA expression (Fig. S3C). Finally, overexpressed EGFR antagonized the VPS33B-mediated repression of migration and invasion in LUAD cells (Fig. S3D and E). These data indicated that EGFR antagonizes VPS33B to modulate Ras/ERK/c-Myc/p53 in LUAD. VPS33B interacts with NESG1 to suppress EGFR/RAS/ERK/c-Jun To explore whether NESG1 interacts with VPS33B, the yeast two-hybrid (Y2H) system was used to identify interacting proteins of NESG1. The coding sequence (CDS) of the NESG1 gene (Fig. S4A) was cloned into a pGBKT7 Y2H bait expression vector (Fig. S4B). PCR was used to identify successful recombinants of pGBKT7-NESG1 (Fig. S4C). The recombinant bait plasmid exerted no self-activating effect and showed no toxicity to the Y2H cells (Fig. S4D). The human cDNA library in yeast Y187 cells was mated with pGBKT7-NESG1 Y2H cells (Fig. S4E) and further screened on SD/-Ade/-His/-Leu/-Trp (QDO)/X-α-Gal/AbA plates (Fig. S4F and G). Finally, 30 positive clones were obtained (Fig. S4H) and analyzed by sequencing. Four potential NESG1-interacting proteins, including RNF2, VPS33B, ENKUR, and CCDC65, were identified. In subsequent co-immunoprecipitation experiments, we confirmed the interaction of NESG1 with VPS33B (Fig. 4A and B) but not with RNF2, ENKUR, or CCDC65 (data not shown). Confocal laser scanning microscopy verified the colocalization of NESG1 and VPS33B in the cytoplasm and vesicles of LUAD cells (Fig. 4C). Overexpressed VPS33B upregulated the expression of NESG1 mRNA (Fig. S4I). Similarly, overexpressed NESG1 upregulated the expression of VPS33B mRNA (Fig. S4J). Bioinformatic analyses indicated the presence of binding sites for c-Jun in the promoter regions of NESG1 and VPS33B (Fig. S4K). Increased c-Jun expression significantly reduced VPS33B and NESG1 mRNA expression (Fig. S4L). Moreover, ChIP, EMSA, and luciferase activity assays revealed that c-Jun bound the NESG1 (Fig. 4D-G) and VPS33B (Fig. 4H-K) promoters and reduced the activity of these promoters. Transfecting EGFR cDNA into VPS33B-overexpressing LUAD cells markedly restored K-Ras/ERK/c-Jun signaling and thus downregulated NESG1 protein expression (Fig. S4M). These results demonstrated that VPS33B interacts with NESG1 to reduce the EGFR/RAS/ERK/c-Jun signal. NESG1 knockdown reverses VPS33B-mediated migration and invasion inhibition of LUAD cells To investigate whether NESG1 mediates VPS33B-induced suppression, siRNAs were used to reduce NESG1 expression (Fig. 5A) in VPS33B-overexpressing LUAD cells. NESG1 knockdown markedly restored the migration and invasion ability of VPS33B-overexpressing LUAD cells, as shown by Transwell (Fig. 5B) and Boyden (Fig. 5C) assays. We further observed that the VPS33B-modulated signals were reversed after silencing NESG1 in VPS33B-overexpressing LUAD cells.
These reversed signals included the upregulation of the EGFR-induced Ras/ERK signal, c-Myc, and EMT-related protein factors (such as N-cadherin, Vimentin, Snail, and c-Jun), and the downregulation of p53 and E-cadherin (Fig. 5D). In addition, reduced p53 mRNA levels were also observed (Fig. 5E). Finally, ChIP assays indicated that the ability of c-Myc to bind the p53 promoter was enhanced in VPS33B-overexpressing LUAD cells upon NESG1 knockdown (Fig. 5F). These data showed that knocking down NESG1 reversed the VPS33B-induced inhibition of migration and invasion in LUAD cells. Nicotine downregulates VPS33B via PI3K/AKT/c-Jun signaling To determine whether nicotine modulates VPS33B and its mediated signaling, LUAD cells were incubated in nicotine-containing media. VPS33B mRNA levels were measured in LUAD cells treated with nicotine for 72 h at different concentrations (0.1, 1, 10, and 100 μmol/L) and over different lengths of time (48, 60, 72, 100, 132, and 144 h at 10 μmol/L). The results showed that VPS33B mRNA was markedly reduced when the nicotine concentration was increased up to 10 μmol/L and when the treatment time at 10 μmol/L nicotine was extended to 72 h (Fig. 6A). Moreover, PI3K/AKT/c-Jun signaling was reduced, whereas VPS33B protein expression (Fig. 6B) and mRNA expression (Fig. 6C) were elevated, in nicotine-treated LUAD cells exposed to the PI3K-specific inhibitor LY294002 compared with nicotine-treated cells, reaching levels similar to those of the control cells. The interaction of c-Jun with the VPS33B promoter was significantly decreased when nicotine-treated LUAD cells were exposed to LY294002, becoming similar to that in control cells (Fig. 6D and E). These results demonstrated that nicotine downregulates VPS33B expression by inducing PI3K/AKT/c-Jun-mediated transcriptional suppression. Discussion In a previous study, Wang et al used a VPS33B-knockout mouse model to report that VPS33B acts in a tumor suppressor role during the carcinogenesis of hepatocellular carcinoma. Subsequently, we found that VPS33B suppresses NPC growth, which further supports VPS33B as a potential tumor suppressor. In the current study, we explored the role of VPS33B in LUAD. Immunohistochemical staining indicated that VPS33B protein in LUAD tissues was significantly downregulated relative to that in lung bronchial epithelium tissues. Reduced VPS33B protein expression also indicated a poor prognosis for LUAD patients. These data suggest that VPS33B protein has a potential suppressive role in LUAD. Previous studies have indicated VPS33B as a tumor suppressor involved in hepatocarcinogenesis and NPC growth. However, whether VPS33B suppresses tumor metastasis of LUAD cells was still undetermined. In the current study, we observed that VPS33B acts as an antitumor-metastasis factor that reduces cell migration, invasion, and metastasis in vitro and in vivo. Furthermore, the overexpression of VPS33B also markedly reduced LUAD cell chemoresistance to DDP in vitro and in vivo. These findings suggest that VPS33B protein is a potential suppressor of tumor metastasis in LUAD. EGFR influences the initiation and promotion of tumor pathogenesis. 4,35-37 Most non-small-cell lung cancers express epidermal growth factor receptor and its natural ligand. The distribution and expression of EGFR in lung cancer have been well established.
In 2002, Piyathilake et al 38 detected the expression of EGFR in tumor tissues and normal lung tissues of 60 lung cancer patients by immunohistochemistry. The expression of EGFR in lung cancer tissues was significantly higher than that in normal lung tissues, and expression is progressively elevated from premalignant lesions to lung cancer tissues. The activation of EGFR-induced Ras/ERK and its downstream EMT signals is a key element promoting metastasis and resistance to chemotherapy. 39-41 In this study, we examined the changes in EGFR/Ras/ERK signaling and its downstream c-Myc, p53, c-Jun, and EMT signaling in VPS33B-overexpressing LUAD cells. Notably, the overexpression of VPS33B reduced EGFR-induced Ras/ERK signaling, which further suppressed its downstream c-Myc and EMT pathways, including upregulated expression of E-cadherin and downregulated expression of Snail, N-cadherin, and Vimentin in LUAD cells. Furthermore, p53 was elevated in VPS33B-overexpressing LUAD cells. Finally, the expression of the oncogenic transcription factor c-Jun was also reduced in VPS33B-overexpressing LUAD cells. Taken together, these findings suggest that VPS33B is a suppressor of tumor metastasis that downregulates EGFR/Ras/ERK signaling and its downstream c-Myc, c-Jun, and EMT signaling and induces p53 expression. C-Myc is a key oncogenic factor and has been reported to suppress p53 expression in tumors. 42-44 However, the detailed molecular basis had not been reported. In this study, bioinformatics analysis predicted that c-Myc could bind to the p53 promoter. Further, we confirmed that c-Myc did indeed bind to the p53 promoter and suppressed its expression at the transcriptional level. Finally, EGFR transfection led to decreased p53 and upregulated the EGFR/Ras/ERK/c-Myc pathway in VPS33B-overexpressing LUAD cells. These findings demonstrate that VPS33B suppresses EGFR/Ras/ERK/c-Myc signaling and thus induces p53 expression. P53 is a classical tumor suppressor and has been reported to bind Snail protein, thereby suppressing EMT signaling. 45 Thus, VPS33B suppresses EMT signaling by reducing the EGFR/Ras/ERK/c-Myc-induced transcriptional suppression of p53. Protein-protein interactions are vital when exploring cellular signals. 46-48 In prior studies, we cloned and revised the NESG1 coding sequence and identified this protein as a tumor suppressor in NPC and NSCLC. 22,23 In the current study, we used the yeast two-hybrid approach to screen for interacting proteins of NESG1. Interestingly, VPS33B was found to be a potential interacting protein of NESG1. We then validated the interaction of VPS33B and NESG1 and co-localized the complex in the cytoplasm and vesicles using Co-IP and laser confocal fluorescence microscopy in LUAD cells. In addition, we found that VPS33B and NESG1 mutually stimulate each other's expression by reducing Ras/ERK signaling and thereby downregulating c-Jun expression. C-Jun, an oncogenic transcription factor, was found to bind directly to the VPS33B and NESG1 promoters and thus suppressed the expression of both genes. Knockdown of NESG1 in VPS33B-overexpressing LUAD cells markedly reversed the VPS33B-modulated signaling, including upregulating the expression levels of EGFR/Ras/ERK/c-Myc, EMT signals, and the oncogenic transcription factor c-Jun, and decreasing p53 expression. These findings demonstrate that NESG1 interacts with VPS33B and participates in VPS33B-mediated suppression of LUAD metastasis.
Cigarette smoke is considered a high-risk factor for various cancers, including lung cancer, gastric cancer, and pancreatic cancer. 49 Nicotine is the most important component of cigarette smoke. After cigarette smoke inhalation, nicotine is rapidly absorbed into the lungs, resulting in a relatively high nicotine concentration in the blood leaving the heart. A previous study showed that blood or plasma nicotine concentrations sampled in the afternoon in long-term smokers generally range from 10 to 50 ng/mL (0.06-0.31 μM). 50 Stable low nicotine concentrations continuously damage the bronchial epithelium and lead to the loss of expression of some genes that maintain the normal function of bronchial epithelial cells, which eventually induces lung cancer pathogenesis. In prior immunohistochemistry assays, we confirmed that VPS33B protein is highly expressed in human bronchial epithelial cells. We speculated that long-term smoking causes the accumulation of harmful substances found in tobacco, including nicotine, in the respiratory tract, which would destroy bronchial epithelial cells and inhibit VPS33B expression. In the current study, due to the lack of a smoking-induced lung cancer model, we observed the effect of nicotine on the expression of VPS33B at the cellular level. In contrast to the long-term impact of low nicotine concentrations in humans, we used higher nicotine concentrations to treat LUAD cells so that the impacts could be observed quickly. The mRNA and protein levels of VPS33B were downregulated in nicotine-treated LUAD cells through stimulation of PI3K/AKT/c-Jun-mediated transcriptional suppression. These findings demonstrate that a reduction of VPS33B is involved in the smoking-induced pathogenesis of LUAD and suggest that patients who are regular smokers with low VPS33B expression levels should quit smoking to possibly improve their survival prognosis. Conclusion Together, these findings indicate that downregulated VPS33B protein is an unfavorable factor in LUAD and that VPS33B is suppressed by nicotine. VPS33B interacts with NESG1 and modulates EGFR/Ras/ERK/c-Myc/p53 and its downstream EMT signals, thus reducing cell migration, invasion, metastasis, and chemoresistance to DDP in LUAD (Fig. 7). Our study is the first to provide insights into the significance of VPS33B as an antitumor-metastasis factor in LUAD.
Repository Approaches to Improving the Quality of Shared Data and Code
Repository Approaches to Improving the Quality of Shared Data and Code Sharing data and code for reuse has become increasingly important in scientific work over the past decade. However, in practice, shared data and code may be unusable, or published results obtained from them may be irreproducible. Data repository features and services contribute significantly to the quality, longevity, and reusability of datasets. This paper presents a combination of original and secondary data analysis studies focusing on computational reproducibility, data curation, and gamified design elements that can be employed to indicate and improve the quality of shared data and code. The findings of these studies are sorted into three approaches that can be valuable to data repositories, archives, and other research dissemination platforms. Introduction Research data, defined as collected or generated information used as evidence for original research findings [1], have become a vital component of the scholarly record and a primary asset of new inquiry. The increased value of scientific data and concerns over a reproducibility crisis [2-4] have led funders and journals to require data sharing as a condition of grant funding and publication. Data repositories are considered a primary venue for data sharing as they implement systematic stewardship to foster curation, dissemination, access, and preservation of research data [5-7]. However, published data will only be reused if a researcher trusts in its quality [8]. Data quality is a broad term that includes many elements and can be defined from many different perspectives. In 2015, Cai & Zhu developed a framework for evaluating data quality along five axes: availability, usability, reliability, relevance, and presentation quality [9]. While some of the dimensions defined in their framework largely refer to intrinsic qualities of data files, such as accuracy, integrity, and completeness, several important features can be improved by data repositories. These features include presentation quality, documentation, metadata, and accessibility, which contribute to the overall quality of a published dataset. Similarly, Martin et al. enumerate a series of data quality properties that emphasizes the importance of intrinsic data quality, contextual data quality, metadata quality, characteristics of data and data users, platform promotion, and user training [10], many of which are within the influence of data repositories. Table 1 shows an approximate alignment of these properties described in Refs. [9,10], together with examples of typical data repository features and functionalities. This paper presents three approaches, or categories of repository feature enhancements, beyond those highlighted in Table 1, that repository staff can apply to improve the overall quality of data published on their platforms. [Table 1, not reproduced here, aligns the quality properties of Cai & Zhu and of Martin et al. with examples of common data repository features; its first row covers availability (accessibility, timeliness, authorization).] Data repositories are designed to meet the needs of different scientific communities, and as such, can be broadly classified as either domain-specific or general-purpose. The former are located within domain communities, such as physics (CERN Analysis Preservation (CAP)) or genetics (GenBank).
The latter emerged with the increased demand for data repositories to support the "long tail" of science, where a large number of relatively small labs and individual researchers collectively produce the majority of results [5,11,12]. Examples of generalist repositories are Harvard Dataverse, Figshare, Dryad, and Zenodo. The heterogeneous nature of collected data contributes to this long-tail effect [13,14] and creates a need for versatile data repositories such as the Harvard Dataverse repository. Harvard Dataverse is a multi-disciplinary research data repository that allows members of the worldwide scientific community to deposit, publish, and share their datasets. The repository infrastructure supports various file formats and requires that depositors provide citation-level metadata, including a dataset title, author, and date. Additional features, including support for subject-specific metadata built on community and domain standards, file versioning, persistent object identifiers, and custom rights statements, contribute to the quality of published data by making them easier to discover, understand, and reuse. There are more than 60 independent Dataverse repository installations worldwide at the time of writing this article. Each installation runs a version of the open-source Dataverse software platform developed and maintained by Harvard's Institute for Quantitative Social Science (IQSS) and open-source contributors. Domain-specific repositories are often required for sharing large or very complex data that generalist repositories may not have the infrastructure, specialized curatorial skills, or domain expertise to support. Optimizations for data description, file formats, storage, and exploration features are not feasible or necessary for generalist repositories, which must support heterogeneous collections. Large-scale "Big Science" datasets (measured in terabytes and petabytes) are often produced by large coordinated teams with extensive instrumentation and are designed to be shared with many researchers for a variety of purposes and projects. CERN Analysis Preservation (CAP) is a good example of such a domain-tailored repository that offers specialized research data management services [15]. The platform maps the research workflows of its four largest experiments in customized analysis description and submission templates, thereby easing and supporting the documentation, sharing, and reuse of research conducted within those experiments. One of the major benefits of publishing research datasets in a domain repository is the built-in designated community of users who understand the jargon, descriptions and metadata, collection protocols, and potential errors or flaws of a given dataset. Users looking for data within a specific domain can better evaluate a dataset described with precise domain metadata than one described in general terms or with specific descriptions from an unfamiliar field. Nevertheless, communities and audiences are porous and often ill-defined. As research becomes increasingly interdisciplinary, researchers establish their agendas at the nexus of multiple disciplines and communities. Communities of practice are formed around data, but also around software packages, methodologies (i.e., computational, experimental, theoretical), geographic regions, and more [16]. Therefore, it is important to consider more intersectional research communities and interdisciplinary standards to enable data reuse in a larger variety of research contexts.
Published research data are routinely used by these diverse communities for calibration, control, comparison, testing, and conducting meta-analyses [17], but they are not always well-documented, understandable, and reusable. A lack of data-sharing conventions, incentives, and infrastructure to support publishing of long-tail, heterogeneous data has historically affected the quality of data as a research output [11,18-20]. Additionally, the irreproducibility of published results is often caused by missing files or documentation and has increased the importance of transparent and available research datasets, which recently have come to include data, code, documentation, and other supplementary files [2-4,21]. This paper addresses the following questions: how can data repositories improve data and code quality, and how can they signal data and code quality to external researchers? While there are important elements of data quality that repositories cannot affect, we focus on specific features of published research datasets, including data, code, metadata, documentation, and their presentation, that data repositories can identify, strengthen, and highlight. Improving and signaling the quality of research data along the axes identified by Cai & Zhu and Martin et al., by enhancing data curation, code completeness, and data publishing incentives in data repositories, will contribute to the transparency, reproducibility, and reuse of research products. Approaches for Advancing Dataset Quality We present three approaches to improve and signal the quality of published datasets in research data repositories, based on a combination of original and secondary studies from computer science, information science, and human-computer interaction. In this paper we use the term "approach" to collect a set of activities into a general strategy designed to improve the quality of a dataset. We analyze data from these studies to evaluate effects on data and code quality. These approaches are designed to support data repository managers and curators in identifying and effectively stewarding high-quality datasets. In particular, we explore applying our proposed approaches to Dataverse repositories. Ensure Research Code Completeness Shared research code is increasingly a common element of many datasets, and it should be comprehensible, executable, and reusable to be of high quality. However, disseminating such code can be complex, as it is often written for a specific environment (software, operating system, or hardware), meaning that it will not execute unless all required dependencies are available. Even a small change in these dependencies can sometimes result in errors or discrepancies in the execution outputs. Computing methods and artifacts are often not sufficiently documented in data repositories, which later hinders reproducibility and reuse. To illustrate the challenge of code re-execution that is necessary for reproducibility and reuse, we conducted an original study in which we re-executed Python code files from Harvard Dataverse [22]. We successfully retrieved 92 publicly available replication datasets that contain Python files. 1 The re-execution study was carried out in a clean Anaconda environment in Docker containers running Debian GNU/Linux 10, in the following steps: 1. We look for files such as "requirements.txt" or "environment.yml" inside the dataset, because these filenames are common conventions for documenting needed code dependencies for Python.
If such files are not found, we scan the Python code for the libraries used and create a new requirements file (see the sketch below). We then attempt to install all libraries from the requirements file. 2. We automatically (naively) re-execute the Python files, first with Python 2.7 and then with Python 3.5, with a time limit of 10 minutes per Python file. If a file executes successfully in the allocated time, we record a success; if it crashes with an error, we record the error; and if it exceeds the allocated time, we record 'time limit exceeded' (TLE), a null result (ignored in the success analysis, as we cannot be certain whether the file would eventually execute successfully or not). Our results show that about 27% of the files (102 out of 379) are re-executable using either Python 2.7 or Python 3.5. The success rate with each of the versions independently is lower than the combined result (see Figure 1), showcasing the importance of reusing code with the right software version. In particular, it is likely that older code was more compatible with Python 2.7 and more recent code with Python 3.5. In that vein, we observe that the most common errors in Python 3.5 execution were Syntax Errors (missing parentheses in print or invalid syntax), which appeared in 110 cases (28%), and Import Errors. The most common errors in Python 2.7 execution were Import Errors (unavailable library) and Syntax Errors. The types of errors further emphasize the differences between the two Python versions (notably syntax) and the importance of recording the version with the preserved code. The observed high rate of Import Errors attests to how hard it is to reconstruct a working runtime environment even when all used libraries are pre-identified, because the versions of the used libraries are essential for code re-execution. We observe a significantly higher re-execution success rate, of 38% (17 out of 44 files), in datasets where a requirements file (likely containing the library versions) was present. However, files that could accurately reconstruct the runtime environments (such as environment.yml, requirements.txt, and Dockerfile) were rarely present (6 out of 92, or 6%). Another possible explanation for the low re-execution rate might be the order in which Python files should be executed. Sometimes each Python file in a dataset recreates one published result or figure and can be executed independently; in other cases, multiple Python files must be executed in a sequence to obtain the right result. In our study, the files were executed in a random order, which would favor the first type of dataset. We aggregate the obtained results and label a dataset successful if at least one of its Python files executes with success. The success rate of about 44% signals that some of the Python files in the datasets were likely meant to be executed in a sequence. It also shows that about 56% of the datasets do not contain a single Python file that is easily re-executable. This result points to the lack of code support in data repositories, the existence of common code errors (like fixed paths), or the need for another version of Python. Finally, we examined other code quality indicators, like documentation, within the replication datasets. Data and code are likely to be more understandable and reusable if a user-friendly instructions file is available. We observe that 57 out of 92 (62%) replication datasets contain a README, codebook, or instructions file (Table 2).
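The dependency-scanning step described in the pipeline above can be illustrated with a minimal sketch (ours, not the study's actual code); it naively collects top-level imports from a dataset's Python files and writes a requirements file, leaving out the mapping of module names to PyPI package names and the version pinning that a real pipeline needs.

# Minimal sketch (our illustration) of the dependency-scanning step:
# collect top-level module names imported by a dataset's Python files
# and write a naive requirements file.
import ast
import pathlib
import sys

def imported_modules(py_file):
    """Return top-level module names imported in one Python file."""
    tree = ast.parse(pathlib.Path(py_file).read_text(errors="ignore"))
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def write_requirements(dataset_dir, out="requirements-generated.txt"):
    mods = set()
    for py in pathlib.Path(dataset_dir).rglob("*.py"):
        try:
            mods |= imported_modules(py)
        except SyntaxError:
            pass  # e.g., Python 2 syntax parsed under a Python 3 parser
    mods -= set(sys.stdlib_module_names)  # drop stdlib (Python 3.10+)
    pathlib.Path(out).write_text("\n".join(sorted(mods)) + "\n")
    return sorted(mods)

print(write_requirements("replication_dataset/"))  # hypothetical path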
Half of the datasets (46 out of 92) contained code in other programming languages (Stata, R, Java, C++, Matlab, SAS, Ruby), and we observe a higher re-execution rate in datasets containing only Python code (68 out of 196 files, or about 35%) than in those also containing files in other languages (34 out of 183, or about 18%). In the latter case, the Python code might depend on the output of code in other programming languages, which may explain the difference in re-execution rates. The average number of files in a replication dataset is 43, and the average dataset size is 248 MB. Our Python study shows that further support is needed to adequately document research code and its computing environment when publishing in data repositories. Several approaches could help facilitate the reproducibility of research code. First, we observe an increased re-execution rate when a requirements file is present in a dataset; it would therefore be helpful to encourage depositing such a file to capture the needed dependencies. Second, we observe that documentation files are sometimes missing, which repositories could address. In practice, data repositories could support depositing these files (requirements, environment, README, codebook, and others) either through the User Guide or through a pop-up window triggered when Python files are detected. Finally, reproducibility could be achieved by using virtual containers, which have been deemed indispensable for capturing the runtime environment. Reproducibility platforms such as Code Ocean, Whole Tale, or Renku natively provide research portability through virtual containers and the cloud. These platforms automatically capture the runtime environment and often facilitate code automation. These proposed approaches are being considered by the Dataverse open-source community, and there is already ongoing work in the Dataverse software project that aims to capture virtual containers through integration with these reproducibility platforms [23].
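As a concrete illustration of such a deposit-time prompt, the sketch below flags missing documentation and environment files when Python code is detected. This is an illustrative mock-up, not Dataverse's actual API or behavior, and the file-name conventions it checks are assumptions.

```python
from pathlib import Path

# Illustrative mock-up of a deposit-time completeness check, flagging missing
# documentation and environment files when Python code is detected. This is
# not Dataverse's actual API; the file-name conventions are assumptions.

DOC_FILES = {"readme", "readme.md", "readme.txt", "codebook", "codebook.pdf"}
ENV_FILES = {"requirements.txt", "environment.yml", "dockerfile"}

def deposit_warnings(files):
    names = {Path(f).name.lower() for f in files}
    warnings = []
    if any(n.endswith(".py") for n in names) and not names & ENV_FILES:
        warnings.append("Python code found but no requirements.txt, "
                        "environment.yml, or Dockerfile.")
    if not names & DOC_FILES:
        warnings.append("No README or codebook describing the dataset.")
    return warnings

print(deposit_warnings(["analysis.py", "data.csv"]))
# -> both warnings fire for this example deposit
```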
Encourage Use of Curation Features and Pre-Submission Dataset Review Though anyone can deposit data in an unmediated, self-service fashion to their Dataverse collection in Harvard Dataverse, groups such as journals, laboratories, and project teams often restrict who can contribute to their collections and actively curate new datasets deposited to those collections. The Dataverse software supports pre- and post-publication data curation workflows that allow curators to ensure that deposited datasets meet group-defined expectations for characteristics such as metadata completeness, approved file formats, or accompanying code or documentation. A number of academic journals perform reproducibility verification through the curation workflow. For instance, the American Journal of Political Science (AJPS) requires its authors to provide all necessary research materials for verification and research reproducibility. Upon a paper's acceptance, the research datasets are reviewed to confirm that they produce the reported analytic results before they are published in the AJPS collection on Harvard Dataverse. The Harvard Dataverse features for data curation can be classified into three categories (Table 3) for convenience of discussion. The Dataverse platform automatically enforces a baseline of curation (Category I) through features such as required metadata fields to support data citations and smart defaults for components like data use agreements. Dataverse tools do not, however, automatically inspect the contents of data files to, for instance, confirm that data values are valid or are not missing, leaving that responsibility to data depositors. Data depositors may also use optional features to improve data curation quality from basic to Category II, which includes the use of custom metadata blocks, dataset versioning, and supporting documentation. The use of managed data curation processes, together with the reputation and reliability of a repository, can influence researchers' perception of data quality [24]. Therefore, the extent to which depositors and data curators use optional repository features can be considered an indicator of the overall quality of a dataset: the more extensive the use of optional features, the more FAIR (Findable, Accessible, Interoperable, and Reusable) [25] the dataset is likely to be. Category III data curation requires that the dataset contents be inspected, either manually or using software tools, to ensure that they meet subject-area standards for data sharing and reuse, as demonstrated by the AJPS data curation workflow mentioned above. To learn about the impact of review and curation services, we conduct an analysis of previously collected data [26] that captures the presence of the following characteristics [27] in Harvard Dataverse datasets: 1. Optional metadata blocks. A well-curated dataset should have at least one optional metadata block to support its discoverability and reuse. 2. Keywords. A well-curated dataset should also have at least one keyword. 3. Description. A well-curated dataset should have a description; like keywords, descriptions help facilitate discovery and reuse. 4. Open file formats. A well-curated dataset should use open file formats where possible. 5. Discipline standard file formats. Not all disciplines use open standards, but at a minimum, datasets should adhere to best practices for discipline file formats. 6. Supplemental files. A well-curated dataset should have either a codebook or a README file that provides insight into the dataset's internals, such as descriptions of its variables. 7. Submission review. A well-curated dataset may undergo an additional review by the collection owner prior to publication; in contrast to the previous six, this characteristic might be considered a direct indicator of dataset quality [24]. The presence of these characteristics suggests that a dataset depositor or curator has taken additional steps to facilitate sharing and reuse, and it therefore indirectly signals that the dataset may be more FAIR and have higher quality and greater fitness for use.
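A simple way to operationalize these indicators is to score each dataset record against them. The sketch below does this with hypothetical field names that do not correspond to the actual Dataverse metadata schema; it is meant only to show how the seven characteristics above could be combined into one signal.

```python
# A minimal sketch that scores a dataset record against the seven curation
# characteristics listed above. The field names are hypothetical and do not
# correspond to the actual Dataverse metadata schema.

CHECKS = {
    "optional_metadata_blocks": lambda d: bool(d.get("metadata_blocks")),
    "keywords":                 lambda d: bool(d.get("keywords")),
    "description":              lambda d: bool(d.get("description", "").strip()),
    "open_file_formats":        lambda d: d.get("has_open_formats", False),
    "discipline_formats":       lambda d: d.get("has_discipline_formats", False),
    "supplemental_files":       lambda d: d.get("has_readme_or_codebook", False),
    "submission_review":        lambda d: d.get("reviewed_before_publication", False),
}

def curation_score(dataset: dict) -> float:
    """Fraction of the seven indicators satisfied (0.0 to 1.0)."""
    return sum(bool(check(dataset)) for check in CHECKS.values()) / len(CHECKS)

record = {"description": "Survey microdata and replication code.",
          "keywords": ["survey"], "has_readme_or_codebook": True}
print(f"{curation_score(record):.2f}")   # 0.43: three of seven checks pass
```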
For instance, we observe that most datasets had a text description (n = 24,661, 84.2%); this field became mandatory in 2015, leading to a drastic increase in its use, to 99% and 100% in the subsequent years. The keywords field remains optional, which explains why the number of datasets with keywords (n = 14,593, 49.8%) is close to the number without keywords (n = 14,702, 50.2%). A summary of all data characteristics is shown in Table 4. By examining the datasets that had a prior review, we find lengthier descriptions, higher keyword counts, more versions, and greater use of optional metadata than in datasets released without review (Figure 2). In prior review (or submission review), the collection owner or manager may inspect a dataset's metadata for completeness, ensure that Supplementary Materials, data files, and code adhere to best practices, or assess how well the dataset meets other established publication criteria. For example, both metadata records and descriptive data fields [28] are essential inputs to the indexers and web crawlers used by search engines like Google's Dataset Search to make data discoverable across domains. Therefore, our result suggests that, on average, prior review effectively improved the curation quality of a deposited dataset. Though only 23% of the datasets were linked to a publication (had the "related publication" field), datasets with prior review again performed better than the rest. A paper publication is often seen as primary documentation for open data and as "the official version of record, as officially peer-reviewed and published, that will explain background, context, methodology, and possibilities for further analysis in the best possible way, and express the intentions of the person who helped collect the data" [29]. Without the unstructured but necessary context provided by the literature, researchers may reject data rather than risk misinterpretation [30]. Therefore, a reference to the original publication is essential for external researchers evaluating data for reuse. We find that data depositors often do not adequately document their datasets. Prior review and mandatory fields can improve the quality of curation and thus, likely, the quality of deposited datasets. Though dataset descriptions, keywords, and optional metadata are not ubiquitously used in Harvard Dataverse, their use could also be improved with curation review. Harvard Dataverse provides advanced curation services (shown in the three categories) and publication curation workflows that other repositories can emulate. It is important to establish a curation baseline to ensure that all published datasets comply with a minimum quality standard. Finally, we find that articles linked to published data often include contextual information that metadata cannot sufficiently capture and transmit. Also, academic literature remains the primary avenue through which researchers find and evaluate secondary data [16,29]. By building adequate citation infrastructure, repositories can encourage bidirectional linking between publications and datasets to facilitate direct access between them; citing datasets across the scholarly record makes them both more findable and better documented. Incorporate Gamified Design Elements Gamification, defined as the use of game design elements in non-game contexts [31], is a promising approach that can be used to improve data sharing and signal high-quality data. Badges, points, and leaderboards are some of the most common game design elements [32]. They are used to motivate actions and behaviors across a wide range of domains, from health applications [33] to work environments [34]. Gamification in science has traditionally been used in teaching [35] and in citizen science [36,37]. However, when gamified design is implemented in concert with researchers' values and interests, it can be a powerful tool for encouraging open science practices such as dataset sharing [38,39]. To showcase the potential of gamification to improve the quality of shared research data, we conduct a secondary interpretation of published results.
A study at CERN [40], carried out to drive the design of the CAP portal, investigated how scientists perceive the use of various gamification elements. Two interactive prototypes were designed for the study (Figure 3): one (Simple Game Elements Design) making use of the most common game design elements, including points and leaderboards, and the other (Rational-Informative Design) focusing on communication (i.e., a group activity log), resource sharing, and providing an overall research dataset management status for the group. The study found that some of the gaming elements were more desirable than others. In particular, several participants opposed the use of leaderboards, as they could encourage comparisons and competition. Gamified badges were identified as the most suitable elements to incentivize dataset sharing. This result is corroborated by the successful use of the Open Science Badges (OSB), which were shown to incentivize data sharing for submissions to a medical and health journal [41]. Rowhani-Farid et al. [42] even concluded, based on their systematic review in the health and medical domain, that OSB is the "only one evidence-based incentive to promote data sharing." Gamified badges thus make it possible to promote best practices that the community considers highly important while still providing attainable goals for authors. They represent an incentive for researchers, as papers with such rewards (badges) may have improved visibility and a higher citation rate. Game design elements such as badges motivate research dissemination by providing recognition to the authors, but they can also be used to identify resources of high quality within an available resource pool. An example of this type of gamification is the GitHub repository star system, where software developers "star" a repository if they find it valuable or, conversely, assess its quality based on the number of existing stars [43]. A similar design element could be implemented in data repositories to provide a more nuanced peer assessment of a dataset; for example, such an element would allow a dataset to be rated 'novel', 'educational', 'fundamental', or similar. In a similar vein, Harvard Dataverse displays the number of downloads for each dataset, which may appear to be a comparable peer-enabled reuse metric. Reuse, however, is a flawed method for assessing the quality of a dataset. A high number of downloads signals the popularity of a dataset and may seem to confer high scientific, educational, or reuse value. Data products from Big Science endeavors are significant investments and are intended to serve a broad audience, yet small datasets, which are much more common, may only be reused by a few specialists. Download and reuse counts therefore do not reliably measure the value or quality of a given dataset. Improving the reusability of data improves its quality, but reuse metrics cannot give a complete picture of the quality or value of an individual dataset. This problem will be mitigated once the data repository community starts using reliable and standard counts of data citations for their datasets, an effort being addressed by the Make Data Count project in collaboration with DataCite and CrossRef. But there is not yet a widespread scholarly practice of citing data in published articles, so data citation counts do not yet reflect how a dataset is reused.
In addition to providing data citation counts, data repositories could improve the quality of datasets by implementing gamification elements that create incentives for researchers to be more open and thorough when sharing data. Through our secondary study, we find that scientific badges have high potential, as they not only motivate authors to obtain them but also present a positive signal, or quality indicator, to external researchers looking to reuse data. Finally, elements that allow peer assessment, such as the 'star' system on GitHub, can also serve as resource quality indicators. Such a system could be employed at a later stage, after the resource is published, as it does not require direct input from the original authors. Conclusions Data repository features and services can contribute significantly to the quality and reusability of shared datasets. They may also advertise datasets to multiple communities through the quality indicators proposed in this paper. The three presented approaches for data repositories are based on three different studies that provide guidance on how datasets may be improved. Repository infrastructure can encourage the deposit of runtime environment components alongside code to improve research reproducibility. Repositories can support a deposit workflow with prior review of dataset submissions, which, as we have shown, often results in better-curated data. Finally, including gamification elements such as badges and peer assessment in a repository system promotes data sharing by providing recognition for authors and useful metrics for data reusers. When authors are incentivized to share data in a repository and are held accountable for its quality through open metrics and peer evaluation, the resulting data products are often of better quality. We defined data quality along a number of axes, highlighting the importance of both intrinsic elements and features that data repositories can affect. Each study investigated a suite of strategies, combined into three more general "approaches," in order to determine whether the activities impacted overall dataset quality. The approaches discussed identify three categories of repository features and services that improve the overall quality of a published dataset: code reproducibility, data curation, and quality incentives. Developing strategies to implement aspects of these approaches depends on various repository constraints and community needs, but each likely contributes to improved dataset quality. As data repositories and data sharing practices continue to evolve, the connection between data quality and repository infrastructure presents significant possibilities for further research.
A Public-Private Partnership Develops and Externally Validates a 30-Day Hospital Readmission Risk Prediction Model
A Public-Private Partnership Develops and Externally Validates a 30-Day Hospital Readmission Risk Prediction Model Introduction: Preventing the occurrence of hospital readmissions is needed to improve quality of care and foster population health across the care continuum. Hospitals are being held accountable for improving transitions of care to avert unnecessary readmissions. Advocate Health Care in Chicago and Cerner (ACC) collaborated to develop all-cause, 30-day hospital readmission risk prediction models to identify patients who need interventional resources. Ideally, prediction models should encompass several qualities: they should have high predictive ability; use reliable and clinically relevant data; use rigorous performance metrics to assess the models; be validated in the populations where they are applied; and be scalable in heterogeneous populations. However, a systematic review of prediction models for hospital readmission risk determined that most performed poorly (average C-statistic of 0.66) and that efforts to improve their performance are needed for widespread usage. Methods: The ACC team incorporated electronic health record data, utilized a mixed-method approach to evaluate risk factors, and externally validated their prediction models for generalizability. Inclusion and exclusion criteria were applied to the patient cohort, which was then split for derivation and internal validation. Stepwise logistic regression was performed to develop two predictive models: one for admission and one for discharge. The prediction models were assessed for discrimination ability, calibration, and overall performance, and then externally validated. Results: The ACC Admission and Discharge Models demonstrated modest discrimination ability during derivation, internal validation, and external validation post-recalibration (C-statistics of 0.76 and 0.78, respectively), and reasonable model fit during external validation for utility in heterogeneous populations. Conclusions: The ACC Admission and Discharge Models embody the design qualities of ideal prediction models. The ACC plans to continue its partnership to further improve and develop valuable clinical models. Introduction Curbing the frequency and costs associated with hospital readmissions within 30 days of inpatient discharge is needed to improve the quality of health care services (1-3). Hospitals are held accountable for care delivered through new payment models, with incentives for improving discharge planning and transitions of care to mitigate preventable readmissions (4,5). Consequently, hospitals must reduce readmissions to avoid financial penalties from the Centers for Medicare & Medicaid Services (CMS) under the Hospital Readmissions Reduction Program (HRRP) (6). In 2010, Hospital Referral Regions (HRRs) in the Chicago metropolitan area had higher readmission rates for medical and surgical discharges than the national average (7) and were among the top five HRRs in Illinois facing higher penalties (8). Although penalizing high readmission rates has been debated since the introduction of the policy (9), there has been consensus on the need for coordinated and efficient care for patients beyond the hospital walls to prevent unnecessary readmissions. Augmenting transitions of care during the discharge process and proper coordination between providers across care settings are key drivers needed to reduce preventable readmissions (10-12).
Preventing readmissions must be followed up with post-discharge and community-based care interventions that can improve, as well as sustain, the health of the population to decrease hospital returns. While several interventions have been developed that aim to reduce unnecessary readmissions by improving the transition of care process during and post-discharge (13-17), there is a lack of evidence on which interventions are most effective at reducing readmissions on a broad scale (18). One approach to curtailing readmissions is to identify high-risk patients needing effective transition of care interventions using prediction models (19). Ideally, the design of prediction models should offer clinically meaningful discrimination ability (measured using the C-statistic); use reliable data that can be easily obtained; utilize variables that are clinically related; be validated in the populations in which use is intended; and be deployable in large populations (20). For a clinical prediction model, a C-statistic of less than 0.6 has no clinical value, 0.6 to 0.7 has limited value, 0.7 to 0.8 has modest value, and greater than 0.8 has discrimination adequate for genuine clinical utility (21). However, prediction models should not rely exclusively on the C-statistic to evaluate the utility of risk factors (22); they should also consider bootstrapping methods (23) and incorporate additional performance measures (24). Research also suggests that prediction models should maintain a balance between including too many variables and model parsimony (25,26). A systematic review of 26 hospital readmission risk prediction models found that most tools performed poorly with limited clinical value (average C-statistic of 0.66), that about half relied on retrospective administrative data, that few used external validation methods, and that efforts were needed to improve their performance as usage becomes more widespread (27). In addition, a few parsimonious prediction models were developed after this review: one was created outside the U.S. and yielded a C-statistic of 0.70 (28); the other did not perform external validation for geographic scalability and had a C-statistic of 0.71 (29). A major limitation of most prediction models is that they are developed using administrative claims data. Given the myriad of factors that can contribute to readmission risk, models should also consider including variables obtained from the Electronic Health Record (EHR). Fostering collaborative relationships and care coordination with providers across care settings is needed to reduce preventable readmissions (18). Care collaboration and coordination are central to the Health Information Technology for Economic and Clinical Health (HITECH) Act in promoting the adoption and meaningful use of health information (30). Therefore, health care providers should also consider collaborating with information technology organizations to develop holistic solutions that improve health care delivery and the health of communities. Advocate Health Care, located in the Chicago metropolitan area, and Cerner partnered to create optimal predictive models that leveraged Advocate Health Care's population risk and clinical integration expertise with Cerner's health care technology and data management proficiency.
The Advocate Cerner Collaboration (ACC) was charged with developing a robust readmission prevention solution by improving the predictive power of Advocate Health Care's existing manual readmission risk stratification tool (C-statistic of 0.69) and building an automated algorithm, embedded in the EHR, that stratifies patients at high risk of readmission needing care transition interventions. The ACC developed its prediction models taking into consideration recommendations documented in the literature for creating and assessing model performance, and performed an external validation for generalizability using a heterogeneous population. While previous work relied solely on claims data, the ACC prediction models incorporated patient data from the EHR. In addition, the ACC team used a mixed-method approach to evaluate risk factors for inclusion in the prediction models. Objectives The objectives of this research project were to: 1) develop all-cause hospital readmission risk prediction models for use at admission and prior to discharge to identify adult patients likely to return within 30 days; 2) assess the prediction models' performance using key metrics; and 3) externally validate the prediction models' generalizability across multiple hospital systems. Methods A retrospective cohort study was conducted among adult inpatients discharged between March 1, 2011 and July 31, 2012 from 8 Advocate Health Care hospitals located in the Chicago metropolitan area (Figure 1). An additional year of data prior to March 1, 2011 was extracted to analyze historical patient information and prior hospital utilization. Inpatient visits through August 31, 2012 were also extracted to account for any readmissions occurring within 30 days of discharge after July 31, 2012. Encounters were excluded from the cohort if they were observation stays; inpatient admissions for psychiatry, skilled nursing, hospice, or rehabilitation; maternal and newborn visits; or encounters in which the patient expired during the index admission. Clinical data were extracted from Cerner's Millennium® EHR software system and Advocate Health Care's Enterprise Data Warehouse (EDW). Data from both sources were then loaded into Cerner's PowerInsight® (PIEDW) for analysis. The primary dependent variable for the prediction models was hospital readmission within 30 days of the initial discharge. Independent variables were segmented into 8 primary categories (Figure 2: ACC Readmission Risk Prediction Conceptual Model). Risk factors considered for analysis were based on literature reviews and a mixed-method approach using qualitative data collected from clinical input. Qualitative data were collected during site visits at each Advocate Health Care hospital through in-depth interviews and focus groups with clinicians and care managers, respectively. Clinicians and care managers were asked to identify potential risk factors that caused a patient to return to the hospital. Field notes were taken during the site visits, and the information gleaned was used to identify emerging themes that helped inform the quantitative analyses. All quantitative statistical analyses were conducted using SAS® version 9.2 (SAS Institute). Descriptive and inferential statistics were performed on the primary variable categories to identify the main features of the data and any causal relationships, respectively. The overall readmission rate was computed using the entire cohort.
For modeling, one consecutive encounter pair (index admission and readmission encounter) was randomly sampled from each patient to control for bias due to multiple admissions. Index encounters were restricted to at least a month prior to the study period's end date to capture any readmissions that occurred within 30 days (Figure 3: Multiple Readmission Sampling Methodology). To develop and internally validate the prediction models, the cohort was then split into a derivation dataset (75%) and a validation dataset (25%). Model fitting used a bootstrapping method that randomly sampled two-thirds of the data in the derivation dataset; the procedure was repeated 500 times and the averaged coefficients were applied to the validation dataset. Stepwise logistic regression was performed, and predictors that were statistically significant (p ≤ 0.05) were included in the model. Two predictive models were developed, one at admission and one prior to discharge, each using readily available patient data. The admission prediction model included baseline data available once a patient is admitted to the hospital. The discharge prediction model was more comprehensive, including additional data that became available prior to discharge. The performance of each prediction model was assessed by 3 measures. First, discrimination ability was quantified by sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve, or C-statistic, which measures how well the model can separate those who do and do not have the outcome. Second, calibration was assessed using the Hosmer-Lemeshow (H&L) goodness-of-fit test, which measures how well the model fits the data, i.e., how well predicted probabilities agree with actual observed risk; a p-value > 0.05 indicates a good fit. Third, overall performance was quantified using the Brier score, which measures how close predictions are to the actual outcome. External validation of the admission and discharge prediction models was also performed using Cerner's HealthFacts® data. HealthFacts® is a de-identified patient database that includes over 480 providers across the U.S., with the largest share located in the Northeast (44%), 27% having more than 500 beds, and 63% being teaching facilities. HealthFacts® encompasses encounter-level demographic information, conditions, procedures, laboratory tests, and medication data. A sample consistent with the derivation dataset was selected from the HealthFacts® data. The fit of both prediction models was assessed by first applying the derivation coefficients and then recalibrating and re-estimating the coefficients with the same set of predictors using the HealthFacts® sample. The performance of the models was then compared.
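To make the derivation workflow concrete, the following is a minimal Python sketch of the bootstrap coefficient averaging together with two of the three performance measures. The study itself used SAS 9.2 with stepwise variable selection, so this is illustrative only; the data are synthetic stand-ins, and the H&L calibration test could be added by grouping predicted probabilities into deciles and comparing observed with expected counts via a chi-square statistic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in cohort: 8 candidate predictors and a ~7% outcome rate,
# roughly matching the sampled readmission rate. The actual study used SAS 9.2.
X = rng.normal(size=(5000, 8))
p_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.5)))
y = (rng.random(5000) < p_true).astype(int)

# 75% derivation / 25% internal validation split, as described above.
X_der, X_val, y_der, y_val = train_test_split(X, y, train_size=0.75,
                                              random_state=0)

# Bootstrap fitting: sample two-thirds of the derivation data, repeat 500
# times, and apply the averaged coefficients to the validation set.
coefs, intercepts = [], []
n_boot = int(len(X_der) * 2 / 3)
for _ in range(500):
    idx = rng.choice(len(X_der), size=n_boot, replace=True)
    m = LogisticRegression(max_iter=1000).fit(X_der[idx], y_der[idx])
    coefs.append(m.coef_[0])
    intercepts.append(m.intercept_[0])

beta, b0 = np.mean(coefs, axis=0), np.mean(intercepts)
p_val = 1 / (1 + np.exp(-(X_val @ beta + b0)))   # averaged-coefficient risk

print("C-statistic:", round(roc_auc_score(y_val, p_val), 3))     # discrimination
print("Brier score:", round(brier_score_loss(y_val, p_val), 3))  # overall
```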
Results A total of 126,479 patients comprising 178,293 encounters met the cohort eligibility criteria, of which 18,652 (10.46%) encounters resulted in readmission to the same Advocate Health Care hospital within 30 days. After sampling, 9,151 (7.25%) encounter pairs were defined as 30-day readmissions. Demographic characteristics of the sample cohort are shown in Table 1. External validation of the ACC Admission and Discharge Models resulted in C-statistics of 0.76 and 0.78, H&L goodness-of-fit statistics of 6.1 (p=0.641) and 14.3 (p=0.074), and Brier scores of 0.061 (8.9% improvement over random prediction) and 0.060 (9.1% improvement over random prediction), respectively, after recalibrating and re-estimating the coefficients using HealthFacts® data. The ACC Admission and Discharge Models' performance measures are presented in Table 3. The probability threshold for identifying high-risk patients (11%) was determined by balancing the trade-off between sensitivity (70%) and specificity (71%), maximizing the area under the ROC curves for the prediction models (Figure 4).
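One common way to pick such an operating point, sketched below on toy data with scikit-learn, is to maximize Youden's J (sensitivity + specificity - 1) along the ROC curve. This is illustrative and not necessarily the exact procedure the ACC team used.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Illustrative threshold selection by maximizing Youden's J
# (sensitivity + specificity - 1) along the ROC curve; a common way to
# balance the trade-off described above, not necessarily the ACC team's
# exact procedure.

def best_threshold(y_true, p_pred):
    fpr, tpr, thresholds = roc_curve(y_true, p_pred)
    k = int(np.argmax(tpr - fpr))               # index of maximal Youden's J
    return thresholds[k], tpr[k], 1 - fpr[k]    # threshold, sens., spec.

# Toy risk scores with a ~9% event rate, loosely mimicking the cohort.
rng = np.random.default_rng(1)
p = rng.beta(2, 20, size=2000)
y = (rng.random(2000) < p).astype(int)

thr, sens, spec = best_threshold(y, p)
print(f"threshold={thr:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```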
Discussion We observed several key findings during the development and validation of our ACC Admission and Discharge Models. Both of our all-cause models performed better than most predictive models reviewed in the literature for identifying patients at risk of readmission (27-29). Both models yielded a C-statistic between 0.7 and 0.8 during derivation, internal validation, and external validation after recalibration, a modest value for a clinical predictive rule. When comparing C-statistics between the Admission (C-statistic of 0.76) and Discharge Models (C-statistic of 0.78), the Discharge Model's discrimination ability improved because the Conditions and Procedures, LOS, and Discharge Disposition variables were included. These helped further explain a patient's readmission risk: medical conditions and surgical procedures account for immediate health needs, LOS represents severity of illness, and discharge to a post-acute setting that does not meet the patient's needs could result in a return to the hospital. We also observed the same C-statistic for our ACC Discharge Model on the development and external validation samples, suggesting that it performs well both in the intended population and on a heterogeneous dataset. Our ACC Discharge Model also had a somewhat higher C-statistic during derivation than during internal validation (C-statistic of 0.77); predictive accuracy is typically higher when assessed on the derivation dataset used to develop the model (21). Our ACC Admission and Discharge Models also demonstrated reasonable model fit during external validation after recalibrating the coefficient estimates. A non-significant H&L p-value indicates that the model adequately fits the data. However, caution must be used when interpreting H&L statistics because they are influenced by sample size (31). Our models did not demonstrate adequate model fit during derivation and internal validation due to the large sample, yet during external validation with a smaller sample, the H&L statistics for both the ACC Admission and Discharge Models improved to a non-significant level. Since the H&L statistic is influenced by sample size, the Brier score should also be taken into account when assessing prediction models because it captures both calibration and discrimination features. The closer the Brier score is to zero, the better the predictive performance (24). Both of our prediction models had low Brier scores, with the ACC Discharge Model's 0.06 representing a consistent (9.1%) improvement over random prediction during derivation, internal validation, and external validation after recalibration. There was concern that too many independent variables would increase the possibility of building an over-specified model that performs well only on the derivation dataset, making it challenging to validate a comprehensive model on an external dataset and replicate the derivation results. Our findings indicate that performance diminished slightly during the ACC Admission Model's external validation compared with the more comprehensive ACC Discharge Model. When we externally validated our ACC Admission Model, the C-statistic was 0.74 on the development dataset, decreased to 0.66 when using the initial derivation coefficients on the external dataset, and then increased to 0.70 after recalibrating the coefficients. The C-statistic for our ACC Discharge Model decreased from 0.78 on the development dataset to 0.71 using the unchanged derivation coefficients, and then increased back to 0.78 after recalibration using the external validation sample. Performance is expected to decrease from derivation to validation, but our models showed no more than 10% shrinkage from derivation to the validation results (32). We further tested our ACC Admission Model using only baseline data available for a patient (e.g., demographic and utilization variables). The C-statistic for this more parsimonious admission model was 0.74 on the development dataset, decreased to 0.66 when using the derivation coefficients on the external dataset, and then increased to 0.70 after recalibrating the coefficients. Our findings suggest that a model including additional variables is likely to generalize better than a parsimonious model during external validation post-recalibration. Overall, our Admission and Discharge Models' performance indicates modest discrimination ability. While other studies relied on retrospective administrative data, our models incorporated data elements from the EHR, and we utilized a mixed-method approach to evaluate clinically related variables. Our models were internally validated in the intended population and externally validated for utility in heterogeneous populations. Our Admission Model offers a practical solution with data available during hospitalization; our Discharge Model has higher predictability according to the C-statistic and improved performance according to the Brier score once more data become accessible at discharge. Creating a highly accurate predictive model is multifaceted and contingent on numerous factors, including, but not limited to, the quality and accessibility of data, the ability to replicate findings beyond the derivation dataset, and the balance between a parsimonious and a comprehensive prediction model. To facilitate external validation, we discovered that a compromise between a parsimonious and a comprehensive model was needed when developing logistic regression prediction models. We also found that a mixed-method approach was valuable, and that additional effort is needed to select risk factors backed by high-quality, easily accessible data that generalize across multiple populations. We also believe that bridging statistical acumen and clinical knowledge is needed to further develop decision support tools of genuine clinical utility, by soliciting support from clinicians when the statistics do not align with clinical intuition. Limitations Our findings should be considered in light of several limitations. Additional research on readmission risk tools may have been conducted after the systematic review performed by Kansagara. Additional readmission risk prediction models have been developed (33,34), but their performance statistics were not published in a way that allows comparison with our prediction models. Our readmission rate was limited to visits occurring at the same hospital; readmission rates based on same-hospital visits can be unreliable and dilute the true hospital readmission rate (35).
One promising approach is using a master patient index (MPI) to track patients across hospitals. Using our own method to create an MPI match, we performed preliminary analysis and were able to identify 5% more readmissions across the other Advocate Health Care hospitals, increasing the readmission rate by approximately 1%. We also assessed the utility of claims data for matching encounters at other hospitals with Millennium® encounters to gauge a more representative readmission rate; the claims data allowed us to track approximately 8% more readmissions, again increasing the readmission rate by approximately 1%. Overall, using both approaches, we identified a more representative readmission rate that increased from 10.46% to 12.5%, and we are currently working to see how this impacts our models' performance. Data captured through EHRs are growing but remain incomplete with respect to hospital readmission prediction, and the lack of standard data representations limits the generalizability of predictive models (36). As a result, we could not include certain data elements in our models due to data quality issues, a large percentage of missing data, and the difficulty of gleaning some of the information. Therefore, we could not include social determinants identified by clinicians and care managers during the qualitative interviews, such as social isolation (i.e., living alone) and living situation (e.g., homelessness), which are known to be salient factors tied to hospital readmissions (37,38). Initially, we mined only a single source in the EHR for this information; however, new data sources have since been identified in the EHR, and the utility of these risk factors is currently being assessed in our prediction models. Additional factors are also being considered in our models, such as functional status (37,39), medication adherence, and the availability of transportation for post-discharge follow-up visits (40). Our prediction models do not distinguish potentially preventable readmissions (PPR) (41,42). We performed preliminary analysis and found that the overall PPR rate for Advocate Health Care in 2012 was about 6% of all admissions, and we estimated that around 60% of all readmissions were avoidable. This is higher than the median proportion of avoidable readmissions (27.1%) but falls within the reported range of 5% to 79% (43). We plan to further assess PPR methodology and test our models' ability to recognize potentially avoidable readmissions to help intervene where clinical impact is most effective. Our initial analysis plan proposed including observation patients (n=51,517) in the inpatient cohort. Preliminary analysis showed that the overall readmission rate increased to 10.72%, but the C-statistics for our Admission and Discharge Models decreased to 0.75 and 0.77, respectively. Our models' discrimination ability probably diminished because improved logic is needed to distinguish situations where observation status changes to inpatient and vice versa. Further assessment of observation patients is needed to better understand their importance in an accountable care environment. Steps are underway to mitigate these limitations and continue to improve the clinical utility of our readmission risk prediction models. Data are being linked across hospitals and to outside facilities through the MPI and claims data.
Additional data sources in the EHR that encompass social determinants and other risk factors have been identified and are being assessed for use in our models. We are also researching potentially preventable readmissions so that the models can focus on cases where clinical impact is most needed. Conclusions The ACC Admission and Discharge Models exemplify the design qualities of ideal prediction models. Both models demonstrated modest predictive power for identifying high-risk patients early during hospitalization and at hospital discharge, respectively. Performance assessment of both models during external validation post-recalibration indicates reasonable model fit, suggesting that they can be deployed in other population settings. Our Admission Model offers a practical and feasible solution with the limited data available on admission; our Discharge Model offers improved performance and predictability once more data become available at discharge. The ACC partnership offers an opportunity to leverage the proficiency of both organizations to continue developing valuable clinical prediction models, building a framework for future prediction model development that achieves scalable outcomes.
Spatiotemporal Variations in Water Flow and Quality in the Sanyang Wetland, China: Implications for Environmental Restoration
Spatiotemporal Variations in Water Flow and Quality in the Sanyang Wetland, China: Implications for Environmental Restoration : Spatiotemporal modeling of wetland environments’ hydrodynamics and water quality characteristics is key to understanding and managing these ecologically important areas’ physical and environmental properties. We developed a two-dimensional numerical model based on the MIKE 21 module to analyze flow and pollution dynamics in the island-dominated Sanyang wetland of eastern China. Three simulation periods representing annual precipitation cycles were used to model freshwater discharge and water quality in the wetland. The results showed that the flow velocity in the study area had hydrodynamic characteristics typical of such a setting, with an average monthly flow velocity ranging from 0.01 to 0.04 m/s, contributing to an increased risk of serious eutrophication. The water quality problems (represented by ammonia nitrogen, NH 3 -N, and total phosphorus, TP, levels) peaked during the early summer peak rain season, followed by a gradual decline during a later flood period and the lowest values during the fall/winter dry period. Moreover, the spatial distribution of NH 3 -N and TP levels decreased from northwest to east, reflecting the influence of a highly polluted source. Our results provide a useful context for restoration efforts in the Sanyang wetland and other similar areas. Introduction Wetlands, sometimes described as "the kidneys of the landscape" [1,2], play an important role in providing ecological services to both humans and wildlife. These include shelter, habitats, food, protection from catastrophic flooding, irrigation, and carbon sequestration [3][4][5]. However, wetlands have rapidly degraded globally due to intensifying anthropogenic activities, such as pollutant loading from agricultural practices and industrial wastewater discharge [6,7]. The development of new, powerful, and data-based techniques is fundamental to urgent wetland restoration efforts. Rapidly developing computer technology, databases, and information/image analysis techniques have been applied to the planning and restoration of new and degraded wetlands. For example, Klemas [8] reviewed remote sensing techniques used in wetland management and found that analysis of satellite and aircraft imagery, combined with on-the-ground observations, allowed researchers to accurately and cost-effectively determine short-term changes and long-term trends in wetland vegetation and hydrology. Weston et al. [9] used GIS (Geographic Information System) to optimize hydraulic analysis of macro-and micro-scale flow paths for wetland restoration, showing that this approach was capable of developing a useful representation of physical sites for further modeling. Huang et al. [10] used remote sensing and GIS techniques to determine regions most suited for wetland preservation/restoration. This approach was especially helpful in guiding the conversion of farmland to wetland. However, although many studies have shown that hydrology is a critical factor in wetland restoration, the hydrodynamics and water quality characteristics of many wetlands are not clearly defined or understood due to limited datasets for numerical modeling. Traditional methods used to investigate the spatiotemporal characteristics of water quality are based on field measurements and laboratory analysis, but such data are often limited in scope while being time-consuming and costly to collect. 
Thus, various numerical models have been developed to study the hydrodynamic and environmental characteristics of wetlands and lakes. For instance, Somes et al. [11] developed a two-dimensional depth-averaged model based on MIKE 21 to examine factors controlling flow in different wetland zones; this approach reproduced wetland flow distributions far more efficiently than field investigations could. Hossainzadeh et al. [12] studied the spatiotemporal distribution characteristics of wetland chemical oxygen demand using the MIKE 21 hydrodynamic and pollutant convection-diffusion model. Gargallo et al. [13] developed a mechanistic model for treating eutrophic water in free water surface constructed wetlands to simulate the removal of total suspended sediment and its relationship to phytoplankton and total phosphorus (TP). Wester et al. [14] proposed an enhanced quasi-two-dimensional modeling strategy that can accurately simulate river and wetland dynamics over large wetland areas to better understand their hydrodynamics. Dou and Jia [15] established a two-dimensional coupled hydrodynamic and water quality model using MIKE 21 to evaluate different engineering measures aimed at improving wetland water quality. Although such research has verified that numerical models improve the scientific understanding of water quality variations in different types of wetlands, few studies have reported on the spatiotemporal characteristics of water quality in highly channelized wetlands with many islands. In this study, we focused on the island-dominated Sanyang wetland in coastal China, using a two-dimensional hydrodynamic and water quality MIKE 21 model to investigate the unique hydrodynamic characteristics of this distinct setting and better define the area's spatiotemporal distribution of water quality. Our study addresses the gap left by previous work on the spatial and temporal characteristics of water quality in highly channelized wetlands with many islands. It provides an example for research on the water quality characteristics of similar island-dominated wetlands around the world, can help identify the factors driving water quality deterioration in the management of these wetlands, and provides a scientific context for further local wetland management. Study Area The Sanyang wetland (120°40′-120°44′ E and 27°56′-27°58′ N, Figure 1) is located in the southeastern region of Wenzhou City, Zhejiang Province, a location well known for its fast economic development over the last two decades. The wetland covers 12.5 km², and there are about 161 islands in the complex river network. The average water depth is ~2.5 m with a flat bed gradient. In terms of biodiversity, there are 83 families and 168 species in the wetland; the number of woody plant species is about 106, and most aquatic plants are emergent species. The Sanyang wetland is often called the West Lake Cultural Landscape of Wenzhou, as it plays an important role for local people, for example by providing sightseeing opportunities and edible fruits. The potential value of the Sanyang wetland has been estimated at 55,332 yuan ha⁻¹ yr⁻¹ [16]. However, with rapid economic development, a large amount of pollutants from agriculture and industry has been discharged into the wetland, and the wetland environment has been severely damaged [17]. The region is located in a subtropical monsoon climate zone with alternating winter and summer monsoons. It has a moderate climate, sufficient sunshine, four distinct seasons, and abundant rainfall (annual precipitation and evaporation are 1113-2494 mm and 1468.7 mm, respectively), and it is occasionally impacted by typhoons, heavy rain, and floods.
The peak rain season in late spring and early summer (April 16 to July 15) typically has the largest number of consecutive precipitation days in the year. The flood period in summer and autumn (July 16 to October 15), which is dominated by Pacific subtropical high pressure, has the highest rainfall. The dry period from October 16 to April 15 is controlled by cold high pressure. The frost-free period is 226-241 days, and the annual amount of sunshine ranges between 1442 and 2264 h. Numerical Models The Sanyang wetland is a large, shallow wetland: vertical mixing of the water column is relatively uniform, while the horizontal spatial distribution is markedly uneven. Therefore, to reflect the overall variation of water quality in the study area, depth-averaged two-dimensional equations were used to describe the water quality dynamics of the Sanyang wetland. In this study, the MIKE 21 model was selected to construct the mathematical model of the water environment in the Sanyang wetland. The MIKE 21 model, developed by the Danish Hydraulic Institute, is a widely used hydrodynamic model [18].
The model is based on the cell-centered finite volume method, preferring an unstructured flexible mesh of triangular grid elements over a fixed grid system (i.e., quadrilateral elements of equal dimensions). It can provide variable grid resolutions to represent the much smaller dimensions of the study area. It includes hydrodynamic, transport, ecological/oil spill, particle tracking, mud transport, sand transport, and inland flooding modules [19]. MIKE 21 also allows longer time steps, which can greatly reduce the calculation time when very high accuracy is not required. In this study, the MIKE 21 model was preferred in order to simplify the modeling process, ensure the robustness of the simulation method, and reduce the computational cost. Equations The mathematical model was based on the two-dimensional numerical solution of the shallow-water equations, obtained by depth-averaging the three-dimensional incompressible Reynolds-averaged Navier-Stokes equations subject to the Boussinesq hypothesis and the assumption of hydrostatic pressure. The finite volume method and an unstructured computational grid were used, along with a flexible boundary surface and dynamic complex terrain. The following equations were used: (1) Hydrodynamic control equations. Continuity equation:

$$\frac{\partial \xi}{\partial t}+\frac{\partial p}{\partial x}+\frac{\partial q}{\partial y}=\frac{\partial d}{\partial t}$$

Momentum equations:

$$\frac{\partial p}{\partial t}+\frac{\partial}{\partial x}\left(\frac{p^{2}}{h}\right)+\frac{\partial}{\partial y}\left(\frac{pq}{h}\right)+gh\frac{\partial \xi}{\partial x}+\frac{gp\sqrt{p^{2}+q^{2}}}{C^{2}h^{2}}-\frac{1}{\rho_{w}}\left[\frac{\partial}{\partial x}\left(h\tau_{xx}\right)+\frac{\partial}{\partial y}\left(h\tau_{xy}\right)\right]-\Omega q-fVV_{x}+\frac{h}{\rho_{w}}\frac{\partial P_{a}}{\partial x}=0$$

$$\frac{\partial q}{\partial t}+\frac{\partial}{\partial y}\left(\frac{q^{2}}{h}\right)+\frac{\partial}{\partial x}\left(\frac{pq}{h}\right)+gh\frac{\partial \xi}{\partial y}+\frac{gq\sqrt{p^{2}+q^{2}}}{C^{2}h^{2}}-\frac{1}{\rho_{w}}\left[\frac{\partial}{\partial y}\left(h\tau_{yy}\right)+\frac{\partial}{\partial x}\left(h\tau_{xy}\right)\right]+\Omega p-fVV_{y}+\frac{h}{\rho_{w}}\frac{\partial P_{a}}{\partial y}=0$$

(2) Water quality equation:

$$\frac{\partial(hC)}{\partial t}+\frac{\partial(uhC)}{\partial x}+\frac{\partial(vhC)}{\partial y}=\frac{\partial}{\partial x}\left(hE_{x}\frac{\partial C}{\partial x}\right)+\frac{\partial}{\partial y}\left(hE_{y}\frac{\partial C}{\partial y}\right)+S+hF(C)$$

where h(x, y, t) is the water depth (m); d(x, y, t) is the water depth variation over time (m); ξ(x, y, t) is the free surface elevation (m); p and q(x, y, t) are the discharges per unit width in the x and y directions (m³/(s·m)); g is the gravitational acceleration (m/s²); C(x, y) is the Chezy resistance (m^(1/2)/s); f = 2Ω sin φ is the Coriolis coefficient (Ω is the angular velocity of the Earth's rotation and φ is the latitude); V is the flow velocity (m/s); u and v are the depth-averaged flow velocities in the x and y directions (m/s); V_x and V_y are the flow velocity components in the x and y directions (m/s); Ω(x, y) is the coefficient of the Coriolis force; ρ_w is the water density (kg/m³); P_a(x, y, t) is the atmospheric pressure (kg/(m·s²)); x and y make up the Cartesian coordinate system; τ_xx, τ_xy, and τ_yy are the tangential stresses; C is the pollutant concentration (mg/L); E_x and E_y are the turbulent diffusion and dispersion coefficients in the x and y directions (m²/s); S denotes the source or sink terms for flow; and F(C) is the biochemical reaction term.
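To illustrate how the water quality equation behaves, the following is a minimal one-dimensional explicit finite-difference sketch with upwind advection, dispersion, and a first-order decay term F(C) = -KC, using the NH3-N degradation coefficient and initial concentration given below. The grid, dispersion coefficient, and boundary treatment are illustrative assumptions; MIKE 21 itself solves the full two-dimensional equations on an unstructured finite-volume mesh.

```python
import numpy as np

# Minimal 1-D explicit sketch of the depth-averaged transport equation with a
# first-order decay term, F(C) = -K*C. Illustrative only: the grid, dispersion
# coefficient, and boundary treatment are assumptions, not the MIKE 21 setup.

nx, dx = 200, 10.0           # number of cells and cell spacing (m)
dt = 60.0                    # time step (s); CFL = u*dt/dx ~ 0.08, stable
u = 0.013                    # depth-averaged velocity, annual mean (m/s)
E = 0.5                      # assumed dispersion coefficient (m^2/s)
K = 0.0069 / 86400.0         # NH3-N degradation, 0.0069/d converted to 1/s

C = np.full(nx, 5.2)         # initial NH3-N concentration (mg/L), as in the model

for _ in range(10000):       # ~7 days of simulated time
    adv = -u * (C - np.roll(C, 1)) / dx                         # upwind advection
    dif = E * (np.roll(C, 1) - 2 * C + np.roll(C, -1)) / dx**2  # dispersion
    C = C + dt * (adv + dif - K * C)
    C[0] = 5.2               # fixed inflow boundary (polluted source river)
    # note: np.roll wraps the domain ends; acceptable for this illustration

print(f"concentration at the outlet cell after ~7 days: {C[-1]:.2f} mg/L")
```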
Conditions and Parameters Conditions and parameters were set based on default values and the available observational data, as follows. (1) Boundary conditions: Rivers feeding and draining the wetland represent sources and sinks of water and pollution, respectively. To simplify and shorten the computational time, 19 main rivers (Table 1, Figure 2) were selected for use in the model with reference to the continuity equation and the mass conservation equation of pollutants. The location of these rivers relative to the study's monitoring and analysis points is shown in Figure 2. (2) Initial conditions: The initial water level, discharge, and water quality were determined from 2016 monitoring data. The initial velocity of the flow field was set to zero, the initial water level was set to 4.7 m, and the initial concentrations of ammonia nitrogen (NH3-N) and TP were set to 5.2 and 0.31 mg/L, respectively. (3) Hydraulic parameters: The critical Courant-Friedrichs-Lewy number was set to 0.8 to ensure stable operation of the model. The Manning number in the rivers, determined by the sediment particle size and water depth of the river bed, was set to 38 m^(1/3)/s. The eddy viscosity coefficient was calculated using the Smagorinsky formula, with the Smagorinsky factor Cs set to 0.28:

$$\mu_{t}=\left(C_{s}\Delta\right)^{2}\sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \Delta=\sqrt{\Delta_{x}\Delta_{y}}$$

where μ_t is the turbulent viscosity at the sublattice scale; Δ_i is the mesh dimension along axis i; and C_s is the Smagorinsky factor, related to the Kolmogorov constant C_k by C_s = (1/π)(3C_k/2)^(-3/4). Water levels at the water-land boundary were determined from the difference between dry and flooded conditions: if the calculated water level fit the observed level well, it was used in the calculation; otherwise, it was discarded. The dry, flood, and wetting depths were set to 0.005, 0.05, and 0.1 m, respectively. The wind friction coefficient is a weak function of wind speed: for moderate and strong winds in open seas, a value of 0.0026 produces good results, but a smaller coefficient is needed for gentler breezes, and if wind speed changes are included in a model, the friction coefficient must be set as a varying coefficient. In this case, as the average wind speed in the area is 2.3 m/s, the friction coefficient was set to 0.0026. (4) Water quality parameters: The degradation coefficients K_NH3-N and K_TP of NH3-N and TP were set to 0.0069/d and 0.001/d, respectively, according to previous studies in similar wetlands [20-23]. Hydrodynamic Model Validation The hydrodynamic parameters were calibrated and verified against measured water level data from the Sanyang wetland. Water level monitoring point C13 (Figure 2) is the only water level monitoring point in this area, so this point was selected for comparing observed and simulated water level values (Figure 3). Due to data collection limitations, the calibration period was 1 January to 31 December 2016. The relative error (δ) was selected to evaluate the fit between the simulated and measured values:

$$\delta=\frac{\Delta}{L}\times 100\%$$

where δ is the relative error, Δ is the absolute error, and L is the true (measured) value.
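The relative-error check can be expressed directly; below is a minimal sketch with illustrative water level values, not the actual C13 monitoring record.

```python
import numpy as np

# Minimal sketch of the relative-error check used for calibration:
# delta = |simulated - observed| / observed * 100%. The water level values
# below are illustrative, not the actual C13 monitoring record.

observed = np.array([4.68, 4.71, 4.73, 4.70])    # measured levels (m)
simulated = np.array([4.69, 4.70, 4.73, 4.71])   # modeled levels (m)

delta = np.abs(simulated - observed) / observed * 100.0
print(f"mean relative error: {delta.mean():.2f}%")  # ~0.16% for these values
```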
Hydrodynamic Model Validation

The hydrodynamic parameters were calibrated and verified against the measured water level data from the Sanyang wetland. The water level monitoring point C13 (Figure 2) is the only water level monitoring point in this area, so we selected this point for the comparison of observed and simulated water level values (Figure 3). Owing to the limitations of data collection, the calibration period chosen was 1 January to 31 December 2016. The relative error (δ) was selected to evaluate the fit between the simulated and measured values:

δ = (Δ/L) × 100%

where δ is the relative error, Δ is the absolute error, and L is the true value. The result shows that δ = 0.03% for the water level validation. This suggests that the model produced a well-fitted curve that accurately reflects the hydrodynamic characteristics of the Sanyang wetland and can be used with confidence for further water quality simulations.

Water Quality Model Verification

The observed and simulated values for NH3-N and TP throughout 2016 were verified at points C1, C3, C5, C11, and C13, with an error level below 20% (Figure 4). These points were selected because each represents a different direction, ensuring that all directions are covered. The relative errors calculated for the water quality validation are shown in Table 2. The relative error of each point is essentially controlled within 20%, suggesting that the simulation results of the model are good. The model accurately reflected the hydrodynamic and water quality trends and could thus be applied to the simulation and analysis of the water environment in the study area.
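A minimal sketch of the relative-error check applied at the monitoring points follows; all observed and simulated values below are hypothetical placeholders, not the study's data.

```python
# For each monitoring point, delta = |simulated - observed| / observed * 100%,
# flagged against the 20% acceptance level used in the validation above.
observed = {"C1": 4.1, "C3": 4.6, "C5": 5.0, "C11": 4.3, "C13": 4.8}
simulated = {"C1": 4.4, "C3": 4.2, "C5": 5.6, "C11": 4.0, "C13": 5.1}

for point, obs in observed.items():
    delta = abs(simulated[point] - obs) / obs * 100
    status = "OK" if delta <= 20 else "exceeds 20%"
    print(f"{point}: relative error = {delta:.1f}% ({status})")
```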
Hydrodynamic Characteristics

The flow velocity in the Sanyang wetland was 0-0.150 m/s (mean: 0.013 m/s), 0-0.154 m/s (mean: 0.014 m/s), and 0-0.593 m/s (mean: 0.022 m/s) during the dry, peak rain, and summer flood periods, respectively. The annual mean velocity was relatively slow (0.013 m/s). The higher flow velocities were located on the southwest side of the wetland, and the flow pattern showed the typical characteristics of a river flow regime. The results showed small current circulations in some of the open water areas, while the flow was stagnant in some channels without outlets (Figure 5). The main flow direction of the Sanyang wetland runs from west to northeast, with a certain amount of circulation inside the wetland.

The seasonal pattern within a year is consistent across the historical record for the Sanyang wetland, so we analyzed the results based on the 2016 simulation. The period division is shown in Table 3. Figure 6 shows the spatial distribution of the flow velocity; each panel uses the same color band, with one color representing one flow velocity and one panel representing one period. Overall, the average flow velocity in the Sanyang wetland did not change significantly throughout the year, varying within 0.5 cm/s (Figure 5). The flow patterns are consistent with the regional precipitation patterns. The flow velocity was lowest during the dry period (Figure 6a), increased during the peak rain season (Figure 6b) to its peak during the summer flood period (Figure 6c), and then gradually decreased and remained relatively stable during the next dry period. The maximum flow velocity of the Sanyang wetland is about 5.7 cm/s, and the flow velocity in most waters is below 2.5 cm/s. The velocity is higher in the western entrance area, at the interchange of rivers in the middle area, and in the eastern exit area. In reaches with severely insufficient hydrodynamics (flow velocity below 0.5 mm/s), the water body is easily affected by the input of organic matter from outside, which may cause local water quality deterioration.

Ammonia Nitrogen (NH3-N)

The monthly mean NH3-N concentration in the Sanyang wetland varied from 3.47 to 7.05 mg/L, indicating serious eutrophication (Figure 7). The NH3-N concentration fluctuated within a narrow range from January to March, reached its maximum value in May, decreased steadily to its minimum value in October, and then slowly increased again. Figure 8 shows the spatial distribution of the NH3-N concentration; each panel uses the same color band, with one color representing one NH3-N concentration and one panel representing one period. The NH3-N concentration in the Sanyang wetland is at a high level of 3.8-6.4 mg/L, and the variation across periods is obvious: during the peak rain season, the NH3-N concentration in the water body is high, mostly above 5.6 mg/L (Figure 8b); during the wet season, it is relatively low, mainly in the range of 3.8-4.4 mg/L (Figure 8c); and during the dry season, it is also relatively low, mainly in the range of 3.8-4.4 mg/L (Figure 8a). The NH3-N concentration in the Sanyang wetland is highest in the peak rain season, and the concentration in the seriously polluted area is at a higher level in every period.
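The period-based statistics behind Table 3 and Figures 6-8 amount to a simple per-period aggregation, sketched below. The month-to-period mapping and the monthly values are hypothetical placeholders, since Table 3 itself is not reproduced here; only the overall ranges echo the reported results.

```python
import pandas as pd

# Assign each month of 2016 to a hydrological period and compute
# per-period means of flow velocity and NH3-N concentration.
month_to_period = {1: "dry", 2: "dry", 3: "dry", 4: "peak rain",
                   5: "peak rain", 6: "peak rain", 7: "flood", 8: "flood",
                   9: "flood", 10: "dry", 11: "dry", 12: "dry"}  # hypothetical

df = pd.DataFrame({
    "month": range(1, 13),
    "velocity_m_s": [0.012, 0.012, 0.013, 0.015, 0.016, 0.018,
                     0.022, 0.021, 0.019, 0.014, 0.013, 0.012],  # hypothetical
    "nh3n_mg_L": [3.6, 3.7, 3.8, 5.9, 7.05, 6.2,
                  4.3, 4.1, 3.9, 3.47, 3.8, 4.0],                # hypothetical
})
df["period"] = df["month"].map(month_to_period)
print(df.groupby("period")[["velocity_m_s", "nh3n_mg_L"]].mean().round(3))
```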
Total Phosphorus (TP)

The spatiotemporal distribution of TP was similar to that of NH3-N. The monthly mean concentration ranged from 0.38 to 0.49 mg/L, again confirming serious eutrophication (Figure 9). The TP concentration in the dry period remained at around 0.38 mg/L, began to increase at the onset of the peak rain season in April, reached its maximum value in May, and then slowly decreased. The TP concentration remained stable during the summer flood period and decreased during the transition into the dry season. Figure 10 shows the spatial distribution of the TP concentration; each panel uses the same color band, with one color representing one TP concentration and one panel representing one period. Similar to NH3-N, TP reached its widest spatial distribution during the peak rain season (Figure 10b), followed by the flood period (Figure 10c) and the dry period (Figure 10a). During the peak rain season, the TP concentration in the water body was high, exceeding 0.4 mg/L. During the wet season, the TP concentration was moderately high, mainly in the range of 0.32-0.36 mg/L. During the dry season, the TP concentration was relatively low, mainly in the range of 0.24-0.30 mg/L. The TP concentration in the Sanyang wetland was high in the peak rain and wet seasons, and the TP concentration in the severely polluted area and the wetland outlet area was at a higher level in every period.
Discussion

The results showed that our model reflected the flow patterns and water quality in the Sanyang wetland well. In contrast to traditional field monitoring, the model provided flow and water quality information over a much longer time scale and a more comprehensive spatial scale, which can help managers better understand wetland conditions and design better and more economical restoration schemes [24,25].

The flow velocity was low in the Sanyang wetland, indicating that the residence time in the wetland is long. As previous studies have shown, hydrodynamic characteristics are essential factors in the transport of pollutants, and less water exchange and longer residence times increase the risk of eutrophication [26-28]. The main reason for the low flow and weak circulation in the wetland is that the river network is densely interconnected by channels surrounding the numerous islands. Moreover, the topography is relatively flat, and the wind in this area is not strong [29].

Water quality is an important factor in managing wetlands.
Our results indicated that the water quality status in the Sanyang wetland is serious, which corroborates previous findings that the water quality in the study area is extremely poor [30]. This suggests that N and P sources from the inlet rivers need to be better controlled. Both NH3-N and TP concentrations in the Sanyang wetland were highest in the northwest and lowest in the east, mainly because the predominant water supply comes from the Wenruitang River network to the west [31]. The Wenruitang River causes the wetland water body to become black and emit a strong, foul odor [17]. N and P concentrations gradually decreased in the inner areas of the wetland, possibly because the wetlands are able to remove some pollutants from the flow, similar to a pattern documented in China's Taihu Lake [32].

The water quality in the wetland was poorer in the peak rain season, when increased N and P inputs raised the NH3-N and TP concentrations. The NH3-N and TP levels in the central part of the wetland were lower, which may partly result from heavy rainfall over a short period diluting the NH3-N and TP concentrations. This result is similar to that of Ji et al. [33], who found that copious rainfall played a significant role in improving the water quality of the Wenruitang River by decreasing N and P concentrations. Rainfall can also increase the dissolved oxygen in the water and contribute to reducing pollutant concentrations. Similarly, the NH3-N and TP concentrations were lower during the dry season, which might result primarily from lower pollutant loads entering the wetland.

The results from our modeling study revealed spatiotemporal variations in the wetland environment. The flow velocity of the wetland was small, and this finding can help managers to select mitigation methods, such as the diversion of water from other rivers or lakes, as has been done in other studies [24,34-37]. This approach can increase the water velocity and shorten the water retention time; for example, the studied wetland could be connected with the nearby Oujiang River (Figure 1) to shorten the water retention time [38]. In addition, the spatiotemporal variations in NH3-N and TP can indicate where and when measures should be carried out to improve wetland water quality. We can analyze the hydrodynamic and water quality characteristics of wetlands from the simulation results and help identify the key causes of and factors leading to the pollution. According to the spatiotemporal variation of NH3-N and TP, targeted measures can be taken to improve wetland water quality, identifying exactly when and where reductions need to be made and to which pollutants. For example, in stagnant water areas, where NH3-N and TP concentrations easily become too high, measures such as ecological floating beds [39-41] and water-lifting aeration can be applied. In this sense, the modeling results provide valuable hydrodynamic and water quality information on the Sanyang wetland, which can help wetland managers implement targeted restoration measures accurately.
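As a rough check on the discussion above, first-order decay with the reported degradation coefficients shows how little NH3-N, and especially TP, is removed over plausible residence times, which is why shortening retention time and controlling inflow loads matter more than in-wetland degradation alone. The residence times used below are hypothetical; only the coefficients and initial concentrations come from this study.

```python
import numpy as np

# First-order decay C(t) = C0 * exp(-K * t) with the reported coefficients.
K = {"NH3-N": 0.0069, "TP": 0.001}     # degradation coefficients (1/d)
C0 = {"NH3-N": 5.2, "TP": 0.31}        # initial concentrations (mg/L)

for t in (10, 30, 90):                 # residence times (days), hypothetical
    for s in K:
        c = C0[s] * np.exp(-K[s] * t)
        print(f"{s}: after {t:3d} d -> {c:.3f} mg/L "
              f"({100 * (1 - c / C0[s]):.1f}% removed)")
```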
Conclusions

In this study, the MIKE 21 model was adopted to analyze the spatiotemporal distribution of water and pollutants (NH3-N and TP) in the Sanyang wetland during 2016. The results showed that the model can improve understanding of the hydrodynamics and water quality of the wetland. The spatiotemporal characteristics of the flow pattern could be useful for carrying out restoration measures and identifying the main reasons for poor water quality. However, water quality is related to many factors (e.g., temperature), and further research should therefore take more of them into account. Future work should also incorporate more basic ecological data, such as temperature, humidity, and species composition, for a more powerful model setup that can be even more useful for restoring degraded wetlands.
Cartesian-closedness and subcategories of (L, M)-fuzzy Q-convergence spaces
Cartesian-closedness and subcategories of (L, M)-fuzzy Q-convergence spaces In this paper, we first construct the function space of (L, M)-fuzzy Q-convergence spaces to show the Cartesian-closedness of the category (L, M)-QC of (L, M)-fuzzy Q-convergence spaces. Secondly, we introduce several subcategories of (L, M)-QC, including the category (L, M)-KQC of (L, M)-fuzzy Kent Q-convergence spaces, the category (L, M)-LQC of (L, M)-fuzzy Q-limit spaces, and the category (L, M)-PQC of (L, M)-fuzzy pretopological Q-convergence spaces, and investigate their relationships.

Introduction

In general topology, function spaces of topological spaces cannot be constructed in a satisfactory way. This means that the category of topological spaces with continuous mappings as morphisms is not Cartesian-closed. In order to overcome this deficiency, the concept of filter convergence spaces (convergence spaces for short) was proposed and discussed (Choquet 1948; Fischer 1959; Kent 1964; Kowalsky 1954). In Preuss (2002), Preuss gave a systematic collection of convergence structures, including function spaces and subcategories of convergence spaces as well as their connections with topological spaces. With the development of fuzzy set theory, many mathematical structures have been generalized to the fuzzy case (Arqub and Al-Smadi 2020; Arqub et al. 2016, 2017; Li and Wang 2020; Xiu 2020; Zhang and Pang 2020). In the theory of fuzzy topology (Chang 1968; Kubiak 1985; Šostak 1985), many types of fuzzy convergence structures have been proposed, such as the stratified L-generalized convergence structure (Jäger 2001, 2016b; Jin 2012, 2014; Pang 2018; Xu 2001; Yao 2009), the L-convergence tower structure (Flores et al. 2006; Jäger 2016a; Pang 2019), the L-ordered convergence structure (Fang 2010a, b), the (enriched) (L, M)-fuzzy (Q-)convergence structure (Pang 2014a, b; Pang and Zhao 2016, 2017), the ⊤-convergence structure (Yue 2017, 2021; Jin et al. 2019; Yu and Fang 2017; Yue and Fang 2020), and so forth. Fuzzy convergence structures are usually discussed from two aspects. On the one hand, the categorical relationship between fuzzy convergence structures and fuzzy topologies is discussed. For example, Yu and Fang (2017) showed that the category of strong L-topological spaces can be embedded in the category of ⊤-convergence spaces as a reflective subcategory and that the category of topological ⊤-convergence spaces is isomorphic to that of strong L-topological spaces. On the other hand, the categorical properties of fuzzy convergence spaces are investigated. Zhang et al. (2019) showed the monoidal closedness of the category of L-generalized convergence spaces. Pang and Zhao (2017) established the categorical properties among subcategories of enriched (L, M)-fuzzy convergence spaces. Recently, Pang (2018, 2019) discussed the Cartesian-closedness, extensionality, and productivity of quotient mappings of subcategories of L-fuzzifying convergence spaces and stratified L-generalized convergence tower spaces. In the theory of fuzzy convergence spaces, many researchers show the Cartesian-closedness of fuzzy convergence spaces by constructing the corresponding function space, i.e., the power object in the category of fuzzy convergence spaces. Actually, there are different approaches to showing the Cartesian-closedness of a category (Preuss 2002). For example, a topological category A is Cartesian-closed if and only if the functor A × − : A −→ A : B −→ A × B preserves final epi-sinks for each object A in A.
Using this approach, Pang and Li showed the Cartesian-closedness of the categories of (L, M)-fuzzy convergence spaces (Pang 2014b) and L-fuzzy Q-convergence spaces (Li 2016), respectively. Later, Pang and Zhao (2016) introduced the concept of stratified (L, M)-fuzzy Q-convergence spaces and proved that the resulting category is Cartesian-closed. From a theoretical aspect, Cartesian-closedness of a category ensures the existence of its corresponding function space. However, the researchers did not construct the corresponding function space, although they showed the Cartesian-closedness of the categories of their corresponding fuzzy convergence spaces (Li 2016; Pang 2014b; Pang and Zhao 2016). Motivated by this, we will focus on the function space of (L, M)-fuzzy Q-convergence spaces (called stratified (L, M)-fuzzy Q-convergence spaces in Pang and Zhao (2016)), which is an essential part of the theory of (L, M)-fuzzy Q-convergence spaces. Concretely, we will construct the concrete form of the corresponding function space of (L, M)-fuzzy Q-convergence spaces. Moreover, as generalizations of Kent convergence spaces, limit spaces, and pretopological convergence spaces, we will introduce several types of (L, M)-fuzzy Q-convergence spaces, including (L, M)-fuzzy Kent Q-convergence spaces, (L, M)-fuzzy Q-limit spaces, and (L, M)-fuzzy pretopological Q-convergence spaces, and then study their mutual relationships from a categorical aspect.

This paper is organized as follows. In Sect. 2, we recall some necessary concepts and notations. In Sect. 3, we construct the function space of (L, M)-fuzzy Q-convergence structures to show the Cartesian-closedness of the resulting category. In Sects. 4-6, we propose the concepts of (L, M)-fuzzy Kent Q-convergence spaces, (L, M)-fuzzy Q-limit spaces, and (L, M)-fuzzy pretopological Q-convergence spaces and investigate their categorical relationships.

Preliminaries

Throughout this paper, both L and M denote completely distributive lattices, and ′ is an order-reversing involution on L. The smallest element and the largest element in L (M) are denoted by ⊥_L (⊥_M) and ⊤_L (⊤_M), respectively. For a, b ∈ L, we say that a is wedge below b, denoted by a ≺ b, if for every subset D ⊆ L, ∨D ≥ b implies a ≤ d for some d ∈ D. For a nonempty set X, L^X denotes the set of all L-subsets on X. L^X is also a complete lattice when it inherits the structure of the lattice L in a natural way, by defining ∨, ∧, and ≤ pointwise. The smallest element and the largest element in L^X are denoted by ⊥_L^X and ⊤_L^X, respectively. For each x ∈ X and a ∈ L, the L-subset x_a, defined by x_a(y) = a if y = x and x_a(y) = ⊥_L if y ≠ x, is called a fuzzy point. The set of nonzero coprime elements in L^X is denoted by J(L^X). It is easy to see that J(L^X) = {x_λ | x ∈ X, λ ∈ J(L)}. We say that a fuzzy point x_λ quasi-coincides with A, denoted by x_λ q A, if λ ≰ A′(x). The family of all (L, M)-fuzzy filters on X is denoted by F_L^M(X).

Example 2.2 (Pang 2014a) For each x_λ ∈ J(L^X), we define q̂(x_λ) : L^X −→ M as follows: q̂(x_λ)(A) = ⊤_M if x_λ q A, and q̂(x_λ)(A) = ⊥_M otherwise. Then q̂(x_λ) is an (L, M)-fuzzy filter on X.

On the set F_L^M(X) of all (L, M)-fuzzy filters on X, we define an order by F ≤ G if F(A) ≤ G(A) for all A ∈ L^X. Then for a family of (L, M)-fuzzy filters {F_j | j ∈ J}, the infimum is given pointwise by (⋀_{j∈J} F_j)(A) = ⋀_{j∈J} F_j(A) for all A ∈ L^X. For an (L, M)-fuzzy Q-convergence structure q on X (Definition 2.3), the pair (X, q) is called an (L, M)-fuzzy Q-convergence space.
A continuous mapping between (L, M)-fuzzy Q-convergence spaces (X, q_X) and (Y, q_Y) is a mapping f : X −→ Y such that x_λ ≤ q_X(F) implies f(x)_λ ≤ q_Y(f⇒(F)) for all x_λ ∈ J(L^X) and F ∈ F_L^M(X). It is easy to check that (L, M)-fuzzy Q-convergence spaces and their continuous mappings form a category, denoted by (L, M)-QC.

Example 2.4 (Pang and Zhao 2016) Let X be a nonempty set. (1) Define qc* : F_L^M(X) −→ L^X; it is easy to verify that qc* is an (L, M)-fuzzy Q-convergence structure on X. (2) Define qc_* : F_L^M(X) −→ L^X; it is easy to check that qc_* is an (L, M)-fuzzy Q-convergence structure on X.

In order to provide an example from the aspect of fuzzy topology, we first recall the following definition.

Definition 2.5 (Höhle and Šostak 1999) A stratified (L, M)-fuzzy topology on X is a mapping τ : L^X −→ M satisfying the axioms of Höhle and Šostak (1999). For a stratified (L, M)-fuzzy topology τ on X, the pair (X, τ) is called a stratified (L, M)-fuzzy topological space.

Example 2.6 (Pang and Zhao 2016) Let (X, τ) be a stratified (L, M)-fuzzy topological space and define qc_τ : F_L^M(X) −→ L^X from τ accordingly.

Notice that (L, M)-fuzzy Q-convergence structures in Definition 2.3 are exactly the stratified (L, M)-fuzzy Q-convergence structures of Pang and Zhao (2016). In this paper, we will focus on this kind of fuzzy convergence structure and explore the concrete form of its function spaces as well as its subcategories.

Definition 2.7 (Pang and Zhao 2016) Let {(X_j, q_j)}_{j∈J} be a family of (L, M)-fuzzy Q-convergence spaces and {p_k : X −→ X_k}_{k∈J} a family of mappings.

Function space of (L, M)-fuzzy Q-convergence spaces

In this section, we will construct the function space of (L, M)-fuzzy Q-convergence spaces. By means of the constructed function space, we will show the Cartesian-closedness of (L, M)-QC. In order to guarantee the existence of the product of (L, M)-fuzzy filters, we assume that ⊥_L is prime in this section. We denote two subsets of L accordingly. In order to show that q_[X,Y] is an (L, M)-fuzzy Q-convergence structure on [X, Y], the following lemma is necessary. For (1), take each μ ∈ J(L) with μ ≤ λ and a ∈ L with μ q a. Then it follows that λ q a, which means f_λ q a. By (1) and (2), there exists ν_a ∈ J(L) such that ν_a q a and the corresponding condition holds for each choice below ν_a. This shows the continuity of ev, where the third equality holds since p_Y ∘ x̂ = id_Y. Now for each A ∈ L^X with x_μ q A, i.e., μ ≰ A′(x), it follows from y_μ ≤ q_Y(G) and (LMQC3) that G(A(x)) = ⊤_M. Then, where the second equality holds by the above, we obtain that f_x = f ∘ x̂ (as the composition of the two continuous mappings x̂ and f) is continuous, as desired.

Theorem 3.7 The category (L, M)-QC is Cartesian-closed.

Actually, Pang and Zhao (2016) showed the Cartesian-closedness of the category of (L, M)-fuzzy Q-convergence spaces (called stratified (L, M)-fuzzy Q-convergence spaces in Pang and Zhao (2016)). However, they did not construct the corresponding function spaces. In this section, we have provided the concrete form of the corresponding function spaces, which gives an answer to the question proposed in Pang and Zhao (2016).

(L, M)-fuzzy Kent Q-convergence spaces

In this section, we will generalize the notion of Kent convergence spaces to the (L, M)-fuzzy case and study its relationship with (L, M)-fuzzy Q-convergence spaces. For an (L, M)-fuzzy Q-convergence structure q on X, define q^r : F_L^M(X) −→ L^X accordingly. Then q^r is an (L, M)-fuzzy Kent Q-convergence structure on X.

Proof It is enough to show that q^r satisfies (LMQC1)-(LMQC3) and (LMKQC). Indeed, (LMQC1) and (LMQC2) are straightforward. (LMQC3) Take each x_λ ∈ J(L^X), F ∈ F_L^M(X), and a ∈ L such that x_λ ≤ q^r(F) and λ q a.
This implies that there exists λ_a ∈ J(L) such that λ_a q a and there exists G ∈ F_L^M(X) such that x_{λ_a} ≤ q(G) and G ∧ q̂(x_{λ_a}) ≤ F. Since q satisfies (LMQC3), it follows from x_{λ_a} ≤ q(G) and λ_a q a that G(a) = ⊤_M, and further F(a) ≥ G(a) ∧ q̂(x_{λ_a})(a) = ⊤_M.

Next we claim that id_X : (X, q) −→ (X, q^r) is the (L, M)-KQC-bireflector. For this, it suffices to verify:

(1) id_X : (X, q) −→ (X, q^r) is continuous.
(2) For each (L, M)-fuzzy Kent Q-convergence space (Y, q_Y) and each mapping f : X −→ Y, the continuity of f : (X, q) −→ (Y, q_Y) implies the continuity of f : (X, q^r) −→ (Y, q_Y).

For (1), it is easy to verify that q(F) ≤ q^r(F) for each F ∈ F_L^M(X). For (2), take each x_λ ∈ J(L^X) and F ∈ F_L^M(X) such that x_λ ≤ q^r(F). This implies f(x)_μ ≤ q_Y(f⇒(F)) for each μ ≺ λ. By the arbitrariness of μ, we obtain f(x)_λ ≤ q_Y(f⇒(F)). This proves the continuity of f : (X, q^r) −→ (Y, q_Y).

For an (L, M)-fuzzy Q-convergence structure q on X, define q^c : F_L^M(X) −→ L^X accordingly. Then q^c is an (L, M)-fuzzy Kent Q-convergence structure on X.

Proof (LMQC1) and (LMQC2) are easily verified and omitted. (LMQC3) Take each x_λ ∈ J(L^X), F ∈ F_L^M(X), and a ∈ L such that x_λ ≤ q^c(F) and λ q a. It follows that there exists λ_a ∈ J(L) such that λ_a q a and, for each μ ≺ λ_a, x_μ ≤ q(F ∧ q̂(x_μ)). Since λ_a q a, there exists μ_a ≺ λ_a such that μ_a q a. This implies x_{μ_a} ≤ q(F ∧ q̂(x_{μ_a})) and μ_a q a. Since q satisfies (LMQC3), we obtain F(a) = ⊤_M. (LMKQC) Take each ν ∈ J(L) with ν ≺ λ. Then, there exists λ_1 ∈ J(L) such that ν ≤ λ_1 and, for each μ ≺ λ_1, x_μ ≤ q(F ∧ q̂(x_μ)). Thus, for each μ ∈ J(L) with μ ≺ ν, it follows that x_μ ≤ q((F ∧ q̂(x_λ)) ∧ q̂(x_μ)). This implies ν ≤ q^c(F ∧ q̂(x_λ))(x). By the arbitrariness of ν, we obtain λ ≤ q^c(F ∧ q̂(x_λ))(x), that is, x_λ ≤ q^c(F ∧ q̂(x_λ)), as desired.

Next we claim that id_X : (X, q^c) −→ (X, q) is the (L, M)-KQC-bicoreflector. For this, it suffices to verify:

(1) id_X : (X, q^c) −→ (X, q) is continuous.
(2) For each (L, M)-fuzzy Kent Q-convergence space (Y, q_Y) and each mapping f : Y −→ X, the continuity of f : (Y, q_Y) −→ (X, q) implies the continuity of f : (Y, q_Y) −→ (X, q^c).

For (1), it is easy to show that q^c(F) ≤ q(F) for each F ∈ F_L^M(X). For (2), take each G ∈ F_L^M(Y) and y_λ ∈ J(L^Y) such that y_λ ≤ q_Y(G). Then, for each μ ≺ λ, it follows that f(y)_μ ≤ q(f⇒(G) ∧ q̂(f(y)_μ)). From the definition of q^c, we get λ ≤ q^c(f⇒(G))(f(y)). This shows f(y)_λ ≤ q^c(f⇒(G)), as desired.

Lemma 4.6 (Preuss 2002) Suppose that A is a topological category. If B is a bicoreflective (full and isomorphism-closed) subcategory of A which is closed under the formation of finite products in A, then B is Cartesian-closed whenever A is Cartesian-closed. Consequently, the category (L, M)-KQC is Cartesian-closed.

(L, M)-fuzzy Q-limit spaces

In this section, we will propose the concept of (L, M)-fuzzy Q-limit spaces, which is a generalization of limit spaces in general topology. Then, we will study its relationship with (L, M)-fuzzy Kent Q-convergence spaces from a categorical aspect.

Definition 5.1 An (L, M)-fuzzy Q-convergence structure q on X is called an (L, M)-fuzzy Q-limit structure if it satisfies the limit axiom: x_λ ≤ q(F) and x_λ ≤ q(G) imply x_λ ≤ q(F ∧ G). For an (L, M)-fuzzy Q-limit structure q on X, the pair (X, q) is called an (L, M)-fuzzy Q-limit space. The full subcategory of (L, M)-QC consisting of (L, M)-fuzzy Q-limit spaces is denoted by (L, M)-LQC.

In order to show the further relationship between (L, M)-fuzzy Kent Q-convergence spaces and (L, M)-fuzzy Q-limit spaces, we first give the following lemma.

Lemma 5.2 Let (X, q) be an (L, M)-fuzzy Kent Q-convergence space and define q^l : F_L^M(X) −→ L^X accordingly. Then q^l is an (L, M)-fuzzy Q-limit structure on X.

(LMQC3) Take each F ∈ F_L^M(X), x_λ ∈ J(L^X), and a ∈ L such that x_λ ≤ q^l(F) and λ q a. Then q^l(F)(x) q a. By the definition of q^l(F), there exists λ′ ∈ J(L) such that λ′ q a and there exist F_1, . . . with the corresponding properties, from which the claim follows. For the limit axiom, take each μ ∈ J(L) with μ ≺ λ; it follows that μ ≺ q^l(F)(x) and μ ≺ q^l(G)(x). Then, there exist λ_1, λ_2 ∈ J(L) and F_1, F_2, . . . with the corresponding properties. By the arbitrariness of μ, we obtain λ ≤ q^l(F ∧ G)(x), that is, x_λ ≤ q^l(F ∧ G), as desired.
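The displayed axioms for Kent and limit structures did not survive in the text above; the following LaTeX sketch states the forms they appear to take, inferred from how they are used in the preceding proofs. The exact formulations (and the label LMQL) are assumptions, not a verbatim restoration of the published paper.

```latex
% Inferred (assumed) forms of the axioms used in the proofs above.
% (LMKQC), the Kent axiom: convergence survives meeting with the point filter.
\[
x_\lambda \le q(\mathcal{F}) \;\Longrightarrow\;
x_\lambda \le q\bigl(\mathcal{F} \wedge \hat{q}(x_\lambda)\bigr)
\tag{LMKQC}
\]
% (LMQL), the limit axiom: convergence is stable under finite meets of filters.
\[
x_\lambda \le q(\mathcal{F}) \ \text{and}\ x_\lambda \le q(\mathcal{G})
\;\Longrightarrow\;
x_\lambda \le q(\mathcal{F} \wedge \mathcal{G})
\tag{LMQL}
\]
```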
Theorem 5.3 (L, M)-LQC is a bireflective subcategory of (L, M)-KQC.

Proof Let (X, q) be an (L, M)-fuzzy Kent Q-convergence space. By Lemma 5.2, we know that q^l is an (L, M)-fuzzy Q-limit structure on X. Next we claim that id_X : (X, q) −→ (X, q^l) is the (L, M)-LQC-bireflector. For this, it suffices to verify:

(1) id_X : (X, q) −→ (X, q^l) is continuous.
(2) For each (L, M)-fuzzy Q-limit space (Y, q_Y) and each mapping f : X −→ Y, the continuity of f : (X, q) −→ (Y, q_Y) implies the continuity of f : (X, q^l) −→ (Y, q_Y).

For (1), it follows immediately from q(F) ≤ q^l(F) for each F ∈ F_L^M(X). For (2), take each F ∈ F_L^M(X) and x_λ ∈ J(L^X) such that x_λ ≤ q^l(F). Then, for each μ ≺ λ, there exists λ_μ ∈ J(L) such that μ ≤ λ_μ and there exist F_1, . . . with the corresponding properties.

Next we discuss the Cartesian-closedness of (L, M)-LQC. To this end, the following two lemmas are necessary.

Lemma 5.7 (Preuss 2002) Suppose that A is a topological category. If B is a bireflective (full and isomorphism-closed) subcategory of A which is closed under the formation of power objects in A, then B is Cartesian-closed whenever A is Cartesian-closed.

Theorem 5.8 The category (L, M)-LQC is Cartesian-closed.

Proof It follows immediately from Theorems 2.8, 5.3, and 5.6, and Lemma 5.7.

Remark 5.9 It is required that ⊥_L should be prime in several conclusions. This requirement seems strong. However, the real unit interval I = [0, 1] at least fulfills this requirement. Moreover, I fulfills the assumption of being a completely distributive lattice with an order-reversing involution.

(L, M)-fuzzy pretopological and topological Q-convergence spaces

In this section, we will introduce the concept of (L, M)-fuzzy pretopological Q-convergence spaces and discuss its relationship with (L, M)-fuzzy Q-limit spaces and (L, M)-fuzzy topological Q-convergence spaces (Pang and Zhao 2016). For this, we first recall the following notation. Then F_q^{x_λ} is an (L, M)-fuzzy filter on X satisfying F_q^{x_λ} ≤ q̂(x_λ).

Definition 6.1 An (L, M)-fuzzy Q-convergence structure q on X is called pretopological if it satisfies (LMPQC): x_λ ≤ q(F_q^{x_λ}) for each x_λ ∈ J(L^X). For an (L, M)-fuzzy pretopological Q-convergence structure q on X, the pair (X, q) is called an (L, M)-fuzzy pretopological Q-convergence space.

Lemma 6.3 Let (X, q) be an (L, M)-fuzzy Q-limit space and define q^p : F_L^M(X) −→ L^X by q^p(F)(x) = ∨{λ ∈ J(L) | F_q^{x_λ} ≤ F}. Then q^p is an (L, M)-fuzzy pretopological Q-convergence structure on X.

Proof (LMQC1) and (LMQC2) are straightforward. (LMQC3) Take each F ∈ F_L^M(X), x_λ ∈ J(L^X), and a ∈ L such that x_λ ≤ q^p(F) and λ q a. It follows that there exists λ_a ∈ J(L) such that F_q^{x_{λ_a}} ≤ F and λ_a q a. This implies F(a) = ⊤_M. (LMPQC) For each x_λ ∈ J(L^X) and F ∈ F_L^M(X) with x_λ ≤ q^p(F), take each μ ∈ J(L) such that μ ≺ λ. It follows that there exists ν ∈ J(L) such that μ ≤ ν and F_q^{x_ν} ≤ F. This implies F_q^{x_μ} ≤ F_{q^p}^{x_λ}. By the arbitrariness of μ, we get λ ≤ q^p(F_{q^p}^{x_λ})(x), i.e., x_λ ≤ q^p(F_{q^p}^{x_λ}), as desired.

Theorem 6.4 (L, M)-PQC is a bireflective subcategory of (L, M)-LQC.

Proof Let (X, q) be an (L, M)-fuzzy Q-limit space. By Lemma 6.3, we know that q^p is an (L, M)-fuzzy pretopological Q-convergence structure on X. Next we claim that id_X : (X, q) −→ (X, q^p) is the (L, M)-PQC-bireflector. For this, it suffices to verify:

(1) id_X : (X, q) −→ (X, q^p) is continuous.
(2) For each (L, M)-fuzzy pretopological Q-convergence space (Y, q_Y) and each mapping f : X −→ Y, the continuity of f : (X, q) −→ (Y, q_Y) implies the continuity of f : (X, q^p) −→ (Y, q_Y).

For (1), take each x_λ ∈ J(L^X) and F ∈ F_L^M(X) such that x_λ ≤ q(F). Then it follows that F_q^{x_λ} ≤ F, which means x_λ ≤ q^p(F). This shows q(F) ≤ q^p(F). For (2), take each F ∈ F_L^M(X) and x_λ ∈ J(L^X) such that x_λ ≤ q^p(F). Then, for each μ ∈ J(L) with μ ≺ λ, it follows that μ ≺ q^p(F)(x). This means there exists ν ∈ J(L) such that F_q^{x_ν} ≤ F and μ ≤ ν. Then it follows that f(x)_μ ≤ q_Y(f⇒(F)). By the arbitrariness of μ, we obtain f(x)_λ ≤ q_Y(f⇒(F)). This proves that f : (X, q^p) −→ (Y, q_Y) is continuous.
Next, let us recall the definition of (L, M)-fuzzy topological Q-convergence structures in Pang (2014b). For an (L, M)-fuzzy topological Q-convergence structure q on X, the pair (X, q) is called an (L, M)-fuzzy topological Q-convergence space.
Scanning the Global Literature
Scanning the Global Literature In each issue of Global Advances in Health and Medicine, we publish summaries of and commentaries on select articles from journals our editors and other contributors to the journal are reading.

The association of chemotherapy and quality of life near death was investigated in a longitudinal, multi-institutional cohort study. 1 Three hundred twelve patients with end-stage solid cancer (progressive, previously treated, metastatic) and a life expectancy of 6 months or less were asked to provide sociodemographics, health status, and performance status (ECOG). Postmortem, the caregiver most knowledgeable about each patient was interviewed about the patient's quality of life before death. Chemotherapy use was determined from chart review. At baseline, patients receiving chemotherapy were younger, more likely to be treated in an academic medical center, more often had pancreatic or breast cancer, and had better performance status than patients not receiving chemotherapy (multiple logistic regression). Patients with an initially good performance status (ECOG 1) who had received chemotherapy were found to have a significantly lower quality of life toward the end of life than patients with an initially good performance status who had not received chemotherapy (OR 0.35, 95% CI,). The quality of life of patients with an initially poorer performance status (ECOG 2 and 3) did not differ with regard to chemotherapy. Chemotherapy was not associated with better survival.

Commentary by Gunver Kienle, Dr med

The trend of patients receiving aggressive chemotherapy in the terminal stages of cancer continues to increase, far beyond recommendations and evidence. 2 Patients with good performance status are the ones most likely to receive and to be referred for palliative chemotherapy. However, astonishingly, particularly in these patients, chemotherapy seems to impair quality of life. This is reminiscent of the randomized controlled trial by Temel et al wherein patients with newly diagnosed non-small cell lung cancer received either early palliative care along with standard oncological care or standard oncological care alone. 3 Here, patients who received early palliative care not only had significantly less aggressive chemotherapy, they also had significantly better quality of life and lived significantly longer. For many patients' friends and family members, chemotherapy may be equated with hope and fighting cancer, and this perception may lead to great pressure for inappropriate or even harmful treatments. Placing patients in a supportive, integrative, holistic cancer care environment that addresses not only the cancer but also vitality and emotional, mental, and spiritual issues, and that focuses on the unmet needs of cancer patients, may reduce the pressure that could result in decisions that do more harm than good.

The Effect of Whole-Body Massage on the Process and Physiological Outcome of Trauma Intensive Care Unit Patients: A Double-Blind Randomized Clinical Trial

Hospital patients treated for traumatic injuries in intensive care units may experience emotional stress and anxiety that can negatively affect hemodynamic stability, resulting in increased blood pressure (BP), increased heart and respiration rates, and behaviors such as restlessness and agitation. This study examined the effects of a single session of massage therapy on a convenience sample of patients in a trauma intensive care unit (ICU) at an Iranian academic medical center.
Patients hospitalized more than 7 days, with a Glasgow Coma Scale (GCS) score of 7 to 12, intracranial pressure below 20 mmHg, and hemodynamic stability, and without infectious disease, hepatitis, skin conditions, a history of psychiatric disorder, or contraindication to changes of body position were eligible. Of those, 108 patients were randomly assigned to standard care alone or to standard care plus massage therapy, resulting in an equal number (54) of participants per group. Exclusion criteria were recent loss of consciousness or cardiac event. The 45-minute massage treatment, encompassing the back, neck and shoulders, arms and hands, legs and feet, and chest and abdomen, was provided by family members who had undergone training. Outcomes measured included systolic and diastolic BP, temperature, heart and respiratory rates, GCS score, and arterial blood gases. All data were recorded by a nurse before the intervention, at 1 hour, and at 3 hours after the intervention. Six participants became ineligible following enrollment and were excluded from data analysis.

Results showed statistically significant changes in systolic BP and GCS score at 1 and 3 hours post-intervention and in diastolic BP and heart and respiratory rates at 1 hour post-intervention. In arterial blood gases, statistically significant differences were observed in blood pH, oxygen saturation (a measure of hemoglobin transport of oxygen), and partial pressure of oxygen (a measure of oxygen dissolved in the blood). No significant differences were observed in bicarbonate or partial pressure of carbon dioxide.

Commentary by Martha Menard, PhD, LMT

Massage appeared to be a safe intervention with temporary benefits for ICU patients in this study. The use of family members to provide massage in the ICU is a potentially cost-effective method to make massage therapy more available to hospitalized patients, and one that could also reduce the sense of helplessness caregivers may experience. Future studies should, however, describe in more detail the content and duration of the training given to the family members who provide massage interventions, as well as provide a more thorough description of the massage intervention itself. Other research has shown conflicting evidence regarding the effects of massage therapy on vital signs in healthy volunteers and across different disease conditions, possibly due to variations in the massage protocols used across studies. A strength of this study was the use of arterial blood gases as an outcome measure, which has seldom been included in previous studies.

Reference: Hatefi M, Jaafarpour M, Khani A, Khajavikhan J, Kokhazade T. The effect of whole body massage on the process and physiological outcome of trauma ICU patients: a double-blind randomized clinical trial. J Clin Diagn Res. 2015 Jun;9(6):UC05-8.

Cost-Effectiveness of Tai Chi for Fall Prevention in Parkinson's Disease 1

Clinical research supports that tai chi, along with other conventional forms of exercise, offers multiple potential benefits for people with Parkinson's disease (PD), including improved balance and reductions in the rate of falls. However, little research has evaluated the relative cost-effectiveness of tai chi vs other exercise-based programs for preventing falls. This study represents a secondary analysis of a 3-arm randomized controlled trial of exercise for PD.
2 The original trial consisted of a 6-month active intervention period (with 60-minute classes conducted 2 times weekly) and a 3-month post-intervention follow-up and compared tai chi and resistance training to a stretching control. The study included individuals who were diagnosed with mild to moderate PD. This new study presents findings based on a cost-effectiveness analysis of the tai chi program compared with the stretching and resistance training programs on the primary outcome of falls prevented; it also evaluated relative impacts on quality-adjusted life years (QALY) gained as a secondary outcome. Over the 9-month study period, 526 falls were documented on the basis of participant monthly self-reports. Participants in tai chi had the lowest average number of falls (87 vs 172 vs 267, compared with resistance training and stretching, respectively; P=.01) and the lowest fall incidence rate (per 100 person-months, P=.005). Tai chi also had the lowest average per-person use costs ($1238) compared with stretching ($1721) and resistance training ($1368). Not surprisingly, economic analyses supported that, when compared with stretching, tai chi cost an average of $175 less for each additional fall prevented and produced a substantial improvement in QALY gained at a lower cost. Because the resistance training program was inferior in both cost (ie, more costly) and effectiveness (less effective), it was removed from the analyses. The authors conclude that tai chi represents a cost-effective strategy for optimizing spending to prevent falls and maximize health gains in people with Parkinson's disease. They also appropriately caution that while these results are promising, they warrant further validation.

Commentary by Peter M. Wayne, PhD

Building on a landmark study published in The New England Journal of Medicine, 2 this study extends findings on the clinical benefits of tai chi for balance and fall prevention to the pragmatic domain of cost-effectiveness. This study represents one of only a handful of cost-effectiveness analyses of mind-body exercise for secondary rehabilitation or management of chronic or degenerative conditions. In combination with ongoing basic research informing mechanisms underlying the therapeutic effects of mind-body interventions, pragmatic cost-effectiveness research as exemplified in this study is essential to inform the translation and integration of such practices into healthcare and to inform the policies that will guide this integration.
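A small sketch of the incremental cost-effectiveness comparison described above. The per-person costs are the reported averages; the per-person fall counts are hypothetical placeholders (the summary gives group totals without group sizes), chosen only to illustrate the calculation.

```python
# Incremental cost-effectiveness ratio (ICER) for tai chi vs stretching.
cost = {"tai chi": 1238, "stretching": 1721, "resistance": 1368}   # USD, reported
falls_per_person = {"tai chi": 1.3, "stretching": 4.1}             # hypothetical

d_cost = cost["tai chi"] - cost["stretching"]            # negative => cheaper
d_falls = falls_per_person["stretching"] - falls_per_person["tai chi"]
icer = d_cost / d_falls   # cost difference per additional fall prevented
print(f"tai chi vs stretching: {d_cost:+.0f} USD, "
      f"{d_falls:.1f} falls prevented per person, ICER = {icer:.0f} USD/fall")
# A negative cost difference with more falls prevented means tai chi
# "dominates" stretching (cheaper and more effective), consistent with the
# reported ~$175 saved per additional fall prevented.
```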
Breeher et al conducted this public health investigation after identifying an index case of a person with elevated blood lead levels (BLLs) associated with Ayurvedic medicine use. The individual lived in a small community in Iowa where Ayurvedic medicine use was common. Ayurvedic medicines were typically obtained either while traveling in India or through direct importation from an Indian clinic. The investigators placed advertisements in local newspapers to recruit potentially affected individuals. One hundred fifteen participants responded and subsequently underwent blood testing for lead and other heavy metals using atomic absorption spectroscopy through the University of Iowa State Hygienic Laboratory. Forty percent (n=46) of the individuals had BLLs ≥10.0 µg/dL. Thirty percent had BLLs ≥25.0 µg/dL. In addition, individuals were asked to submit their Ayurvedic medicines for analysis. Of 182 Ayurvedic supplements submitted for testing, 27.5% had lead levels exceeding California and US Food and Drug Administration maximum permitted limits. BLLs were also found to be associated with intake of the Ayurvedic medicines containing lead. Individuals with BLLs ≥10.0 µg/dL were estimated to consume a mean 0.03 g lead/day, compared to only 0.001 g/day for participants with BLLs <10.0 µg/dL (P<.0001).

Commentary by Robert Saper, MD, MPH

Rasashastra is a class of Ayurvedic medicines that contain compounds called bhasmas. Bhasmas are made from minerals, metals, and/or gems through elaborate ancient preparation protocols. Metals such as lead, mercury, arsenic, gold, iron, and zinc are commonly and intentionally used. Ayurvedic experts claim that any toxic properties of metals such as lead are removed in the preparation process if it is done appropriately and that bhasmas are safe and therapeutic. Bhasmas can be formulated to be taken alone or in combination with herbs. This investigation is consistent with previous investigations of Ayurvedic medicines that found that approximately one-fifth of Ayurvedic medicines contain potentially elevated levels of lead. Prior to this investigation, more than 100 case reports of lead toxicity associated with Ayurvedic medicine use had been reported since the 1970s in North America, Europe, Asia, Africa, and Oceania. This cluster of 46 patients is the largest group of patients with elevated BLLs associated with Ayurvedic medicine reported to date. Lead is a well-established toxin to multiple organ systems. Consequences of lead toxicity are many and include developmental delay, intellectual and cognitive impairment, seizures, anemia, renal insufficiency, hypertension, constipation, and abdominal pain. Mean lead levels in the US general population are approximately 2 µg/dL. Deleterious impacts of even relatively modestly elevated BLLs in the 5 to 10 µg/dL range are well documented. Given the preponderance of evidence of harm from lead, the intentional use of lead compounds in traditional medicine preparations, no matter how they are prepared, is unacceptable and should be stopped.
Long-Term Study of Corneal Stroma and Endothelium on Structure and Cells After Genipin Treatment of Rabbit Corneas
Long-Term Study of Corneal Stroma and Endothelium on Structure and Cells After Genipin Treatment of Rabbit Corneas Purpose To study the long-term safety of genipin treatment using a vacuum device, with or without epithelial cells, at different crosslinking times. Methods Twenty-five healthy New Zealand white rabbits were separated into five treatment groups: 0.25% genipin with epithelial cells for 5 minutes (G1), 0.25% genipin without epithelial cells for 5 minutes (G2), 0.25% genipin without epithelial cells for 10 minutes (G3), ultraviolet A-riboflavin collagen crosslinking (UVA), and controls (C). Before and 2, 4, 6, and 8 weeks after crosslinking treatment, anterior segment optical coherence tomography (ASOCT), in vivo confocal microscopy (IVCM), and the Pentacam system were used to evaluate the right eyes. Results A demarcation line (DL) was observed in the corneal stroma in the G2, G3, and UVA groups. The DL depths in the G2 and G3 groups were stable but decreased in the UVA group over time. The density of keratocytes in these groups increased. Endothelial cell density decreased in the UVA group. There were no differences in the endothelium before and after treatment in the G1, G2, G3, and C groups. Densitometry, as determined using the Pentacam system, significantly increased in the G2, G3, and UVA groups and was positively correlated with keratocyte densities. Conclusions A vacuum ring assisting local genipin immersion crosslinking without corneal epithelium can activate the keratocytes in the corneal stroma and was safe enough for thin corneas. Translational Relevance Genipin can not only crosslink the collagen fibers but also activate the keratocytes, and it may even promote collagen fiber secretion.

Introduction

Over the past 20 years, strengthening the biomechanical properties of the cornea by producing new chemical bonds through corneal crosslinking has become one of the most effective treatments to stop the progression of keratoconus and corneal ectasia after refractive corneal surgery, 1 and even to treat some forms of resistant infectious keratitis. 2 This clinical method includes light-initiated crosslinking, such as ultraviolet (UV)-riboflavin crosslinking 3-5 and the Rose Bengal and green light method, 6-8 and chemical-initiated crosslinking, such as genipin. 9-11 Genipin, obtained from geniposide, induces intramolecular and intermolecular crosslinking of cyclic structures within collagen fibers by spontaneously reacting with amino acid chains or proteins. 12 As it demonstrates superior biocompatibility, 13,14 lower cytotoxicity, 9,15,16 and better crosslinking properties, 11,13 it has been used for crosslinking the cornea 11 and sclera. 17 Genipin crosslinking is still at the experimental stage. Previous studies, including our own, showed that genipin could significantly strengthen the stiffness of porcine cornea and sclera in vitro and in vivo. 18-21 Corneal collagen crosslinking induced with genipin in porcine cornea in vitro produced a significant increase in biomechanical strength and resistance to bacterial collagenase, 13,22 and the effect of crosslinking increased with dose. 13 Genipin has also been found to be similar to ultraviolet-riboflavin crosslinking (UV-CXL): the cytotoxic effect on endothelial cells is similar for the two methods. 9 In vivo, genipin induced corneal flattening in rabbit eyes after 60 days, with a mean flattening of the corneas of 4.4 D. 11
In conclusion, genipin might have potential for the management of corneal ectasia and keratoconus. Although genipin has good prospects for corneal crosslinking, previous research, including our own, has focused mainly on short-term safety evaluation of genipin crosslinking; there have been few long-term evaluations. The modes of genipin administration are soaking the cornea in vitro, 13,22 eye drops in vivo, 15,16 or topical injection. 23 Genipin is a natural crosslinker for molecules with primary amino groups, which are widely distributed in tissue, so the crosslinking effect could act on tissues around the cornea. In addition, some questions remain open, such as the effect of epithelial cells on the crosslinking process. Avila et al. 9 stated that genipin's effect was similar in corneas with or without epithelium, with similar biomechanical effects, but they did not provide data to support this opinion. In their follow-up studies, the corneal epithelium of the experimental animals was scraped off during genipin crosslinking. Therefore, the effect of the epithelium on genipin crosslinking needs to be studied. Avila et al. 11 used a vacuum device designed to prevent the solution from dripping onto the conjunctiva, providing a new approach for genipin crosslinking. But their in vivo experiment included only slit-lamp evaluation and intraocular pressure, and they chose only 5 minutes as the treatment time. More details, such as the keratocytes and endothelium, need to be evaluated. In this study, based on our previous research, 15,16 we chose 0.25% genipin solution as the effective and safe working solution. We used a topical soaking method with a vacuum ring to study the long-term effect of genipin crosslinking on rabbit cornea at different treatment times, with or without epithelium.

Animals

All of the animal experiments were performed in accordance with the Chinese Ministry of Science and Technology Guidelines on the Humane Treatment of Laboratory Animals (Vgkfcz-2006-398) and the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. This study was approved by the Laboratory Animal Ethics Committee of Peking University First Hospital (J201425). Twenty-five healthy female New Zealand white rabbits (3.0-3.5 kg) were used in the study. All of the animals were provided by the Peking University First Hospital Animal Center. The animals were subdivided into five groups: 0.25% genipin crosslinking with epithelial cells for 5 minutes (G1 group), 0.25% genipin crosslinking without epithelial cells for 5 minutes (G2 group), 0.25% genipin crosslinking without epithelial cells for 10 minutes (G3 group), ultraviolet A-riboflavin collagen crosslinking (UVA group), and a control group (C group, in which only the epithelial cells were scraped), with five rabbits in each group. Right eyes were the experimental eyes.

Genipin Crosslinking

Rabbits were anesthetized with intravenous injections of 5% pentobarbital. Genipin (Wako Pure Chemical Industry, Osaka, Japan) was dissolved in an isotonic medium (phosphate-buffered saline; ZSGB-BIO, Beijing, China) to a concentration of 0.25%. In the G1 group, the corneal epithelial cells were kept intact. In the G2 and G3 groups, the right eye of each rabbit was deepithelialized. The experimental eyes were treated with 500 μL of 0.25% genipin in a custom vehicle (Fig. 1) for the corresponding time at room temperature (20°C), using a vacuum device to prevent the solution from diffusing into the conjunctiva.
After the surgery, the genipin solution was removed with cotton swabs and the corneas were rinsed with 0.9% sodium chloride solution, followed by application of levofloxacin gel (Sinqi Pharmaceutical, Shenyang, China) to the operated eye to protect the cornea from infection.

UVA Crosslinking

Rabbits were anesthetized with an intravenous injection of 5% pentobarbital. The central 8 mm of corneal epithelium was removed by scraping the corneal surface, and then 0.1% riboflavin (Sigma-Aldrich, Darmstadt, Germany) dissolved in 20% dextran (Adamas, Shanghai, China) was applied to the cornea as a droplet every 5 minutes for 30 minutes, followed by 30 minutes of UVA exposure (365 ± 5 nm, 3 mW/cm²) using a light-emitting diode (Lamplic Technology, Shenzhen, China). After surgery, 0.9% sodium chloride solution was used to wash the corneal surface and conjunctival sac. Levofloxacin gel was then applied to the operated eye to protect the cornea from infection.

Control Group

Rabbits were anesthetized in the same manner as in the other groups, and only the corneal epithelium was removed. Levofloxacin gel was then applied to the operated eye to protect the cornea from infection.

Measurements

Before and 1 day after surgery, we observed the rabbit corneas, and we then observed the animals every 2 weeks. All rabbits underwent anterior segment optical coherence tomography (ASOCT) (Heidelberg Engineering, Heidelberg, Germany), in vivo confocal microscopy (IVCM) (HRT3 RCM; Heidelberg Engineering), and Pentacam (Oculus Optikgeräte GmbH, Wetzlar, Germany) scans in vivo to evaluate changes in corneal morphology before and after genipin and UVA crosslinking treatment every 2 weeks. If a well-defined demarcation line (DL) was observed on the ASOCT images, the depth from the corneal surface to the DL at the central cornea was measured with the ASOCT software. Three to five nonoverlapping images of the corneal stroma and endothelial cells were selected from IVCM for quantitative analysis. The average cell counts of keratocytes and endothelial cells were calculated with the IVCM software. Densitometry and the thinnest corneal thickness were obtained with the Pentacam software.

Statistical Analysis

The depth of the DL, corneal stromal cell density, endothelial cell density, corneal densitometry, and thinnest corneal thickness had normal distributions by the W test. The data are presented as the mean ± standard deviation, analyzed using SPSS software (version 20; IBM Corp., Armonk, NY, USA). The differences among groups were assessed by one-way analysis of variance (Bonferroni analysis). Repeated-measures single-factor analysis of variance was used to analyze the differences before and after treatment, and multiple comparisons were performed using the least significant difference (LSD) method. Correlation analysis used the Pearson method. P < 0.05 indicated a statistically significant difference.
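As a sketch of the statistical pipeline just described (one-way ANOVA across groups, Bonferroni-corrected pairwise comparisons, and Pearson correlation), the following Python example uses synthetic data; none of the numbers are the study's measurements, and SciPy is assumed in place of SPSS.

```python
import numpy as np
from scipy import stats

# Synthetic placeholder data: one measurement per rabbit, five per group.
rng = np.random.default_rng(0)
groups = {g: rng.normal(mu, 5, size=5)
          for g, mu in [("G1", 30), ("G2", 45), ("G3", 48),
                        ("UVA", 42), ("C", 29)]}

# One-way ANOVA across the five groups.
F, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

# Pairwise t-tests with a Bonferroni-adjusted significance level.
names = list(groups)
n_pairs = len(names) * (len(names) - 1) // 2
alpha_bonf = 0.05 / n_pairs
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, pt = stats.ttest_ind(groups[names[i]], groups[names[j]])
        if pt < alpha_bonf:
            print(f"{names[i]} vs {names[j]}: p = {pt:.4f} (significant)")

# Pearson correlation, e.g., densitometry vs keratocyte density (synthetic).
densitometry = rng.normal(20, 3, 20)
keratocytes = densitometry * 25 + rng.normal(0, 40, 20)
r, pr = stats.pearsonr(densitometry, keratocytes)
print(f"Pearson r = {r:.2f}, p = {pr:.4f}")
```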
ASOCT
Under ASOCT, 8 weeks after treatment, the corneas of group C showed a smooth arch, with uniform thickness, intact epithelium, a uniform grayish-white stroma, and a smooth endothelial surface (Fig. 3d). There was no significant difference between the G1 and C groups. However, high-reflective crosslinking demarcation lines in the corneal stroma could be seen in the experimental animals of the G2, G3, and UVA groups (Fig. 3), of which the G3 group was more reflective, with a slightly blurred border. [Figure 3 caption: There was no significant difference in performance between group G1 (a) and group C (d). Crosslinking demarcation lines were visible in groups G2 (b), G3 (c), and UVA (e), but the crosslinking demarcation lines of group G3 showed significant reflection.] After determining the midpoint of the pupil, we chose the ASOCT tomographic image that passed through, or was closest to, the midpoint of the pupil, and used the ASOCT software to measure the distance from the epithelium to the crosslinking line or cross-band boundary at the middle of the pupil. The demarcation line depth (DL depth, in μm), measured at the center of the cornea at each time point, is shown in Table 1. The DL could be seen only in the G2, G3, and UVA groups. Comparison of depth among the three groups showed G2 > UVA > G3, with statistically significant differences in DL depth at weeks 6 and 8. From the change in DL depth over time, the crosslinking line in the G2 group was relatively stable, with the DL depth still maintained at 239.20 ± 37.53 μm at the eighth week. The DL depth of the G3 group decreased slightly compared with earlier time points and remained at 164.00 ± 19.00 μm. In the UVA group, the DL gradually became shallower as time progressed, decreasing from 227.40 ± 48.35 μm in the second week to 171.60 ± 40.56 μm in the eighth week.

IVCM
The corneal stroma structure and stromal cell morphology of each experimental group were observed 2 weeks after treatment, as shown in Figure 4. In the C group, the distribution of the superficial stromal cells was slightly uneven, and the number of cells was relatively increased; no obvious abnormalities were found in the deep stromal structure (Figs. 4a, 4b). In the G1 group, the background reflection of the superficial and deep stroma increased, the density of stromal cells increased, and the reflection of the filamentous highly reflective structures between the cells increased slightly (Figs. 4c, 4d). There was no significant difference between groups C and G1. In the G2 group, the background reflection of the superficial and deep cornea increased, and the stromal cells increased in density and aggregated into highly reflective cell clusters. A large number of irregular high-reflective structures could be seen in the stroma, resembling thick rods or clouds with unclear boundaries; the vaguely visible stromal cells (arrow in Fig. 4h) were indistinguishable from the surrounding highly reflective material. In the G3 group, the background reflection of the superficial and deep stroma increased, and a large number of amorphous high-reflective structures could be seen in the stroma, resembling pine needles, thick rods, or broad bands, with unclear borders and faintly visible stromal cells (arrow in Fig. 4j) that were not easy to distinguish from the surrounding highly reflective material. [Figure 4 caption: Changes in corneal stroma in each group 2 weeks after treatment. Compared with the normal corneal structure, the density of stromal cells in group C (a, b) increased slightly.
In group G1 (c, d), there were more filamentous hyperreflective structures between stromal cells. In the G2 group, the stromal cells increased in density and aggregated into highly reflective cell clusters (e-g); a large number of amorphous highly reflective structures (h) were visible, and stromal cells were faintly visible (arrow in h). In the G3 group, a large number of amorphous high-reflective structures could be seen in the shallow (i, j) and deep (k, l) stroma, and stromal cells could be seen (arrows in j). The shallow (m, n) and deep (o) stroma of the UVA group had a relatively uniform low-reflective structure; only occasionally was a structure similar to the stromal cell activation morphology seen (p).] As the depth increased, the arrangement of the highly reflective material gradually became more organized. In the UVA group, the background reflection of the corneal stroma gradually darkened from superficial to deep layers; the stroma was a relatively uniform low-reflective structure, and the cell structure was invisible. Occasionally, a structure similar to the activated morphology of stromal cells (Fig. 4p) was observed, but the nuclei of the stromal cells were not seen. In summary, 2 weeks after treatment, no stromal cell activation was observed in the G1 and C groups; a large number of highly reflective structures and stromal cell aggregation were seen in the G2 and G3 groups; and the shallow and deep stroma of the UVA group were relatively low reflective, with stromal cell structure missing.

The corneal stroma structure and stromal cell morphology changes at 8 weeks after treatment in each experimental group are shown in Figure 5. [Figure 5 caption: Changes in corneal stroma in each group 8 weeks after treatment. The corneal stroma structure of group C (a, b) and group G1 (c, d) basically returned to normal. In the G2 group (e, f, g), the density of the corneal stromal cells increased, and in the superficial layer (g) a hyperreflective group of stromal cells was seen. In the G3 group (h, i, j), the density of corneal stromal cells increased, with hyperreflective clusters formed by the aggregation of stromal cells visible in the deep layer (i, j), and rod-like and filamentary highly reflective structures were seen between the stromal cells. In the UVA group, a cell-free structure area (k) and a relatively normal area (l) could both be observed, and a deep reflective cell mass (m) could be observed in the deep stroma.] The corneal stroma structures of groups C and G1 basically returned to normal. In the G2 group, the density of corneal stromal cells increased, stromal cells with high-reflective masses were visible in the superficial layer, and rod-like and filamentary high-reflective structures were seen between the stromal cells. In the G3 group, the density of corneal stromal cells increased, and the superficial stromal structure was basically normal; in the deep layer, high-reflective clusters formed by the aggregation of stromal cells were visible, and rod-like and filamentary high-reflective structures were seen between the stromal cells. In the UVA group, areas with no cell structure and areas with relatively normal cell structure were both observed, but the number of cells was less than that in the G2 and G3 groups, and high-reflective cell clusters were observed in the deep stroma.
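The stromal and endothelial cell densities reported below were derived by averaging counts over three to five non-overlapping confocal frames per eye. A minimal sketch of that count-to-density conversion follows, assuming the HRT3 confocal frame covers about 400 × 400 μm (0.16 mm²); that frame size is our assumption about the instrument, not a parameter stated in the Methods, and the counts are hypothetical.

```python
# A minimal sketch of converting per-image IVCM cell counts to densities.
from statistics import mean

FRAME_AREA_MM2 = 0.4 * 0.4  # assumed 400 x 400 um frame, in mm^2


def cell_density(counts_per_image: list[int]) -> float:
    """Average cells/mm^2 over the 3-5 non-overlapping images per eye."""
    return mean(counts_per_image) / FRAME_AREA_MM2


# Hypothetical keratocyte counts from four frames at the 100-um depth.
print(f"{cell_density([110, 98, 105, 121]):.0f} cells/mm^2")
```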
As an acellular area was still observed at the eighth week in the UVA group, stromal cell counting was difficult there, and only the G1, G2, G3, and C groups were counted for stromal cell density. Based on the ASOCT measurements of the crosslinking line depth range, 100 μm (about half the DL depth) and 200 μm (about the full DL depth) were selected as target depths for stromal cell density counting. For each experimental animal, we selected images within ±20 μm of the target depth, preferring the depth closest to the target, and then chose three to five pictures for each target depth and calculated the average value to represent the stromal cell density. The corneal stromal cell densities, and their changes before and after treatment, at 100 μm and 200 μm depth in each group are shown in Tables 2 and 3. Comparing the groups, there was no significant difference in the density of corneal stromal cells at 100 μm and 200 μm depth before treatment, whereas the differences after treatment were statistically significant (100 μm: P = 0.001; 200 μm: P = 0.004). Corneal stromal cell density changes in each group are shown in Figures 6a and 6b. In the G1 group, at 100 μm and 200 μm depth, there was no significant change in the density of corneal stromal cells before and after treatment at 8 weeks. [Figure 7 caption (beginning truncated): … and G3 (c) groups were basically normal, and the borders of the corneal endothelial cells in the G3 group were slightly blurred. The UVA group showed diverse appearances: damaged and swollen endothelial cells (e); endothelial cells whose enlarged hexagonal morphology had disappeared, with an uneven endothelial surface (f); depressions left after endothelial cells were damaged and shed (g); endothelial cell morphology not intact, with only a bumpy reflective interface (h). At 8 weeks, the morphology and structure of corneal endothelial cells in groups C (i), G1 (j), and G2 (k) were basically normal, and the border of corneal endothelial cells in group G3 (l) was slightly blurred. In the UVA group, relatively normal cell morphology was seen (m), but endothelial cells were still damaged, swollen, and enlarged (n).] In the G2 group, at 100 μm and 200 μm depth, the density of corneal stromal cells increased significantly after treatment, and the difference was statistically significant (100 μm: P = 0.028; 200 μm: P = 0.003). In the G3 group, at 100 μm and 200 μm depth, the density of corneal stromal cells likewise increased significantly after treatment (100 μm: P = 0.023; 200 μm: P = 0.015). In group C, at a depth of 100 μm, the density of corneal stromal cells increased mildly after treatment, and the difference was statistically significant (P = 0.037).

The changes in corneal endothelial cells in each group at 2 weeks are shown in Figures 7a-h. The corneal endothelial cells in groups C, G1, and G2 were arranged closely, and their morphology was basically normal. The border of the corneal endothelial cells in group G3 was slightly blurred, but relatively normal hexagonal cells could be seen. The corneal endothelial cells in the UVA group showed different individual characteristics: relatively normal cell morphology could be observed, but the boundaries of the endothelial cells were blurred, and enlarged, damaged, and expanded endothelial cells could be seen (Fig. 7e).
In some subjects, endothelial cells were significantly damaged, the enlarged hexagonal morphology of the endothelial cells disappeared, and the endothelial surface was uneven (Fig. 7f). The depressions left after endothelial cells were damaged and detached could be observed (arrow in Fig. 7g). In severe cases, the morphology of the endothelial cells was completely absent, and only an uneven reflective interface was seen (Fig. 7h). The changes in corneal endothelial cells in each group at 8 weeks after treatment are shown in Figures 7i-n. The corneal endothelial cells in groups C, G1, and G2 were arranged closely, and their morphology was basically normal. The border of the corneal endothelial cells in group G3 was slightly blurred, but relatively normal hexagonal cells could be seen. In the UVA group, corneal endothelial cells could be relatively normal, but some endothelial cells had been significantly damaged: the cells were swollen and enlarged, the hexagonal morphology had disappeared, and the endothelial surface was uneven (Fig. 7n). The corneal endothelial cell density of each group before and after treatment is shown in Table 4. Comparing these groups, there was no significant difference in corneal endothelial cell density before treatment, whereas the difference after treatment was statistically significant (P < 0.001). The G1, G2, G3, and C groups had no significant changes in endothelial cell density before and after treatment, but in the UVA group, the corneal endothelial cell density 8 weeks after treatment had decreased significantly compared with that before treatment (P = 0.010).

Pentacam
The corneal optical densitometry of each group before and 8 weeks after treatment is shown in Table 5. The densitometry of the G2, G3, and UVA groups increased significantly 8 weeks after treatment, and the differences were statistically significant; there was no significant change in the densitometry of the G1 and C groups. Correlation analysis of stromal cell density and densitometry at 8 weeks after treatment found that stromal cell density at 100 μm and 200 μm depth was positively correlated with densitometry; the corresponding P values and correlation coefficients r are shown in Figures 6c and 6d. The thinnest corneal thickness measured by Pentacam before and 8 weeks after treatment in each group is shown in Table 6. Comparing the groups, the difference in the thinnest corneal thickness before treatment was not statistically significant, whereas the difference after treatment was statistically significant (P = 0.002). After treatment, the thinnest part of the cornea in the G3 and UVA groups became significantly thinner (G3: P = 0.031; UVA: P = 0.009), the thinnest part of the cornea in group C was slightly thickened (P = 0.046), and there was no significant difference in the G1 and G2 groups before and after treatment.

Discussion
Genipin, an aglycone derived from an iridoid glycoside called geniposide, present in the fruit of Gardenia jasminoides Ellis, has long been used as a traditional Oriental medicine for the treatment of hepatic disorders and inflammatory diseases. 24 Sung et al.
25,26 found that genipin-crosslinked gelatin mixtures have better biocompatibility and lower cytotoxicity, induce less inflammatory response, and allow sooner recovery than common chemical crosslinking agents such as formaldehyde, glutaraldehyde, and epoxy resin. Research on the application of genipin crosslinking covers several areas. On the one hand, genipin-crosslinked biomacromolecular materials have potential pharmaceutical applications, especially in controlling drug delivery from diverse formulations. 27 On the other hand, genipin-crosslinked acellular biological tissues or biomacromolecules can be used as bioreplacement materials, such as genipin-crosslinked acellular porcine corneal stroma for cosmetic corneal lens implants, 28 genipin-fixed vascular grafts 29 and pericardium, 30 and genipin crosslinking of cartilage to enhance resistance to biochemical degradation and mechanical wear. In addition, in ophthalmology, genipin has also been used to directly crosslink biological tissues to enhance their biomechanical properties. Studies have found that crosslinking scleral and corneal tissue can strengthen their biomechanical strength, 11,18-21 which can prevent pathologic axial elongation of the eye and ectatic expansion of corneal tissue, and thereby provide new treatment options for pathologic myopia and corneal ectatic diseases. At present, direct soaking is the main method used to perform genipin crosslinking of biomaterials. Researchers have mainly considered the biocompatibility, durability, cytotoxicity, and inflammatory response of the crosslinked materials, and in these respects genipin has shown excellent crosslinking ability and safety. 27,28,31-34 Genipin crosslinking includes ex vivo crosslinking and in vivo crosslinking. For ex vivo corneal and scleral tissues, researchers usually choose the direct immersion method. 13,18,35 For in vivo crosslinking, researchers choose different methods depending on the location of the tissue. Liu and Wang 23 and Wang and Corpuz 21 injected genipin solution directly under Tenon's capsule for scleral crosslinking. In our previous study, we applied genipin as a droplet to crosslink the deepithelialized cornea. 15,16 These methods show that genipin has broad application prospects for improving the biomechanical properties of the sclera and cornea, but using genipin for direct crosslinking requires consideration of many factors. The direct immersion method is easy to perform in vitro but difficult to apply in vivo. Proteins and amino acids are the biochemical substrates for genipin crosslinking, so the crosslinking effect is not selective: droplets and injections inevitably expose surrounding nontarget tissues to the genipin solution, causing irritation of surrounding tissues and crosslinking of nontarget tissues, with obvious side effects. In addition, exudate from the conjunctiva can dilute the drug concentration, while evaporation from the ocular surface can concentrate it; the predictability of the crosslinking effect is therefore poor, as the genipin effect is concentration dependent. 13 Avila et al. 11 used a vacuum device to keep the solution off the conjunctiva and to increase the permeability of the drug. Since the genipin solution did not contact other surrounding tissues and the crosslinking time was only 5 minutes, side reactions were slight. This method provides a new approach for genipin crosslinking.
In this study, a corneal vacuum ring was used as an auxiliary drug delivery device. The genipin solution was confined to the central 8 mm of the cornea, so that genipin was in full contact only with the cornea. At the end of the treatment, the liquid was first fully absorbed with a cotton swab, and then the negative-pressure ring was removed and the eye flushed with saline to maximize the isolation of the genipin solution from nontarget tissues. The entire crosslinking process took only 5 to 10 minutes, with less pain; consequently, the ocular surface reaction after surgery was slight, and the corneal epithelium recovered quickly. The classic UVA-riboflavin crosslinking method takes about an hour, and postoperative epithelial recovery is slow; in our study, one animal eventually developed a corneal scar due to delayed healing of the epithelium. Therefore, this genipin crosslinking method is superior to traditional UVA-riboflavin crosslinking in terms of patient tolerance and postoperative recovery. In the clinic, after UVA-riboflavin crosslinking, a crosslinking boundary in the corneal stroma can be seen under a slit lamp, on ASOCT, and with corneal confocal microscopy. 36 Under the slit lamp, it appears as a gray-white dividing line in the corneal stroma; on ASOCT, as a high-reflective arc-shaped band in the stroma; and under confocal microscopy, it is generally considered a transition zone between stroma with and without stromal cells. Its depth may indicate the position and degree achieved by the crosslinking. 36 In this study, ASOCT showed that after genipin crosslinking, a high-reflective arc-shaped band appeared, similar to UVA-riboflavin crosslinking, and the DL depth of the G2, G3, and UVA groups was similar. Dynamic observation of the DL depth found that the G2 and G3 groups were relatively stable, while the UVA group gradually became shallower. This also helps confirm the effectiveness of genipin crosslinking. The composition of the DL after genipin crosslinking is currently unknown; we found a large number of highly reflective structures by IVCM, which we speculate are related to the DL, but histologic examination and molecular biological testing are required in the future. Under IVCM, 2 weeks after treatment, a large number of highly reflective cell clusters and highly reflective amorphous substances were seen in the corneal stroma of the G2 and G3 groups, showing various forms that were not easy to distinguish from other highly reflective structures (corneal stromal cells, nerves). At the end of observation, highly reflective cell clusters and rod-like and filament-like highly reflective structures could still be seen in the G2 and G3 groups. Similarly, 8 weeks after treatment, we also found highly reflective stromal cell clusters in the deep stroma of the UVA group. Mazzotta et al. 37 found similar changes in corneal confocal microscopy scans of patients after UVA-riboflavin crosslinking and speculated that such highly reflective substances may represent new collagen fibers and extracellular matrix components produced by recovered stromal cells. Since the superficial and middle corneal stromal cells quickly die and disappear after UVA-riboflavin crosslinking, it takes 2 to 3 months for the stromal cells to recover from deep to shallow layers. 38 Accordingly, most of these highly reflective structures are first found in the deep layers of the stroma and only later in shallower layers, which is consistent with our findings.
Corneal stromal cells can secrete collagen fibers and proteoglycans, assist in the assembly of collagen fibers and the formation of collagen lamellae, and play an important role in the formation of the corneal stroma and the maintenance of its structure. 39 Corneal stromal cells are still present 24 hours after genipin crosslinking. 15 In this study, stromal cells were intermixed with highly reflective material. Therefore, we speculate that the highly reflective amorphous material observed in the genipin crosslinking groups is related to corneal stromal cells and may be newly formed collagen fibers or matrix produced by stromal cells activated by the crosslinking stimulus. No research has yet elucidated the composition of the highly reflective substances observed in the genipin crosslinking groups; further research is needed from the perspectives of biochemistry, biomechanics, and ultrastructure. Haze is a common side effect after UVA-riboflavin corneal crosslinking. Greenstein 40 found that haze densitometry after crosslinking was inversely related to postoperative vision. However, haze does not persist after crosslinking, and most of it disappears 6 to 12 months after lamellar remodeling. 40,41 Densitometry measurement using Pentacam is currently used to detect haze after UVA-riboflavin crosslinking that is difficult to discern by eye. 42 Avila et al., 11 who used Pentacam to measure the corneal densitometry of rabbit corneas ex vivo after genipin solution crosslinking, found that the densitometry of the cornea depended on the genipin concentration. In this study, Pentacam was used to assess the changes in corneal optical densitometry of each group before and 8 weeks after treatment, and the densitometry of the G2, G3, and UVA groups was found to have increased significantly at 8 weeks. In addition to crosslinking surgery, a similar haze occurs after photorefractive keratectomy (PRK). Studies have shown that haze after PRK is associated with corneal stromal cell-mediated damage repair, during which the density of stromal cells increases, generating new extracellular matrix components. 43 Some studies suggest that haze after crosslinking is also associated with corneal stromal cells. 40 Corneal stromal cells contain crystallin proteins, which have the same refractive index as the corneal stroma; changes in these crystallins and their refractive index in activated corneal stromal cells cause light scattering that forms haze. 44 We analyzed the correlation between corneal stromal cell density and densitometry and found a positive correlation between them. We speculate that corneal stromal cells may play an important role in the genipin crosslinking process. The safety of the various crosslinking methods for corneal endothelial cells is a focus of our attention. UVA-riboflavin crosslinking carries the possibility of damaging corneal endothelial cells. 45,46 Therefore, for patients with progressive keratoconus and patients with corneal ectasia after refractive surgery whose corneal thickness is less than 400 μm, UVA-riboflavin crosslinking is generally not recommended. 47 The thinnest corneal thickness of the UVA group before treatment in this study was only 316.20 ± 39.26 μm, and those of the G2 and G3 groups were 301.40 ± 50.76 μm and 329.20 ± 24.95 μm, respectively.
After treatment, the corneas in the G3 and UVA groups became thinner: 251.20 ± 67.36 μm and 277.00 ± 45.03 μm, respectively. Studies have suggested that the thinning of the cornea after crosslinking may be due to a denser corneal stroma and lamellar compression caused by the new fiber connections. 48 IVCM found that the density of corneal endothelial cells in each genipin-crosslinked group had no significant change compared with that before treatment, whereas the corneal endothelial cell density in the UVA-riboflavin crosslinked group decreased significantly. Morphologically, the corneal endothelial cells were not significantly abnormal 2 weeks after treatment in any genipin group, whereas obvious morphologic changes of the endothelial cells, including swelling and expansion, were seen in the UVA-riboflavin crosslinked group, consistent with the damaged morphology of endothelial cells after UVA-riboflavin crosslinking observed under scanning electron microscopy in our previous study. 15 Up to 8 weeks after treatment, normal endothelial cell morphology was seen in all groups treated with genipin, while endothelial cells in the UVA-riboflavin crosslinked group still showed edema and enlargement. Therefore, for thin corneas, especially ultra-thin corneas of less than 350 μm, the genipin crosslinking method has irreplaceable advantages. Because corneal stromal cells play an important role in the formation of the corneal stroma and the maintenance of its structure, 37 we counted corneal stromal cells in each experimental group 8 weeks after treatment, selecting target depths of 100 μm (approximately half the depth of the crosslinking line) and 200 μm (approximately the full depth of the crosslinking line). In the UVA group, there were still areas of corneal stroma with no cell structure at 8 weeks after the operation, making stromal cell counting difficult, so this group was not included in the statistical analysis. At 8 weeks after treatment, the density of corneal stromal cells in the G2 and G3 groups at 100 μm and 200 μm depth had increased significantly compared with that before treatment. This further illustrates that genipin crosslinking has no toxic effect on corneal stromal cells. It is well known that UVA-riboflavin crosslinking clearly injures corneal stromal cells, and this significant change was also observed in our study. In contrast, genipin is safer for the cellular components of the corneal stroma. In our previous study, we found vacuole-like structures in the stromal cells under transmission electron microscopy 24 hours after genipin crosslinking. 15 The cells were expanded and deformed by the vacuole structures, but the cell membranes and organelle structures were normal. The present study also found increased corneal stromal cell density after genipin crosslinking. We therefore infer that genipin may activate corneal stromal cells, an effect that begins within 24 hours after crosslinking, although its specific mechanism needs further exploration. Avila et al. 9 mentioned that the crosslinking effect of genipin was not affected by the presence of corneal epithelial cells, but they did not give any experimental data, and subsequent studies removed the corneal epithelium. In this experiment, a group with intact epithelium was included to study the effect of the corneal epithelium on genipin crosslinking.
The results showed that in this group no crosslinking lines were observed on ASOCT, no changes in densitometry were observed on Pentacam, and no changes in the stroma or its cellular components were observed under IVCM. The corneal epithelium is rich in lipids, which fat-soluble substances pass through easily. Genipin, the aglycone of an iridoid glycoside, contains multiple hydrophilic chemical groups, such as hydroxyl and carboxyl groups, and is easily soluble in water. 49,50 It can therefore be concluded that a 0.25% genipin solution cannot penetrate the corneal epithelium into the corneal stroma to crosslink it within a short time. In conclusion, this study adopted a vacuum ring local immersion crosslinking method, using UVA-riboflavin crosslinking as an active control, and observed the corneal stromal structure after immersion in 0.25% genipin solution for 5 or 10 minutes, with or without epithelial removal. With respect to effects on cellular components, this study further confirmed the safety of genipin crosslinking, especially in the field of thin-cornea crosslinking. The method is quick, easy to perform, and has good application prospects. At the same time, we found that corneal stromal cells may play an important role in genipin crosslinking, pointing the way for further exploration of genipin corneal crosslinking. This study still has some limitations. Most importantly, it focuses mainly on morphologic observations; the mechanisms behind many of the morphologic changes after genipin crosslinking remain unclear, including the DL observed on ASOCT, the composition of the highly reflective material seen under the confocal microscope, and the increase in densitometry found by Pentacam. More studies, such as histologic, cell biological, and molecular biological research, are needed.
Genomic landscape of gastric cancer: molecular classification and potential targets
Gastric cancer imposes a considerable health burden worldwide, and its mortality ranks as the second highest among all cancers. The limited knowledge of the molecular mechanisms underlying gastric cancer tumorigenesis hinders the development of therapeutic strategies. However, ongoing collaborative sequencing efforts facilitate molecular classification and unveil the genomic landscape of gastric cancer. Several new drivers and tumorigenic pathways in gastric cancer, including chromatin remodeling genes, RhoA-related pathways, TP53 dysregulation, activation of receptor tyrosine kinases, stem cell pathways and abnormal DNA methylation, have been revealed. These newly identified genomic alterations await translation into clinical diagnosis and targeted therapies. Considering that loss-of-function mutations are intractable, synthetic lethality could be employed when devising feasible therapeutic strategies. Although many challenges remain to be tackled, we are optimistic regarding improvements in the prognosis and treatment of gastric cancer in the near future.

INTRODUCTION
With more than 900,000 new cases being reported every year, gastric cancer has become the fourth most commonly diagnosed cancer in the world (Jemal et al., 2011), and its death rate ranks as the second highest worldwide (Siegel et al., 2013). People in Asia, Eastern Europe and South America present the highest mortality of gastric cancer due to its high incidence (Siegel et al., 2013). In recent years, despite improvements in prognosis after the application of cisplatin- and fluoropyrimidine-based chemotherapies, surgery remains the only curative therapy (Ryu et al., 2014). Unfortunately, highly frequent relapse, as well as distant metastases, means that the five-year survival of gastric cancer rarely exceeds 10% (Group et al., 2010; Siegel et al., 2013). Therefore, more effective therapeutic approaches are urgently needed. A better understanding of the mechanisms underlying gastric cancer tumorigenesis is of crucial significance for conquering the disease. Major molecular biological advances have resulted in better survival of gastric cancer patients. Small subsets of gastric cancers are defined by biomarkers, including overexpression of the HER2 (human epidermal growth factor receptor 2) protein and amplification of its gene, ERBB2. These biomarkers have led to the first targeted treatment approach for gastric cancer. A clinical trial of trastuzumab, an anti-HER2 antibody, showed that the use of trastuzumab for the treatment of HER2-overexpressing gastric cancer patients improved their overall survival compared with standard platinum- and fluoropyrimidine-based chemotherapy (Gravalos et al., 2011). Moreover, functional genomic alterations, e.g., c-MET activation, have been identified as additional biomarkers that would benefit these personalized treatments. However, the increasing knowledge of gastric cancer etiology has given rise to the realization that gastric cancer is characterized by molecular complexity. Recently published comprehensive genomic analyses of gastric cancer not only challenge the traditional clinical classification of gastric cancer but also provide much-needed new targets for drug development and therapeutic strategies (Kakiuchi et al., 2014; Liang et al., 2012; Wang et al., 2011, 2014; Zang et al., 2012).
Because advancing genomic technologies continue to refine the molecular biology of gastric cancer, this review focuses on new insights into the genetics of gastric cancer revealed by recent next-generation sequencing studies. We describe functional genetic alterations in gastric cancer and provide several rational strategies that may broaden the range of clinical therapeutic approaches for gastric cancer.

GENOMIC LANDSCAPE AND MOLECULAR CLASSIFICATION
In 2014, The Cancer Genome Atlas (TCGA) project analyzed the genomic landscape of 295 primary gastric adenocarcinoma tumor tissues (Cancer Genome Atlas Research, 2014) through genome sequencing and comprehensive molecular evaluations. These analyses led to the proposal of a new molecular classification for gastric cancer. Gastric adenocarcinomas have traditionally been divided, in terms of histological heterogeneity, into intestinal-type gastric carcinomas (IGCs) and diffuse-type gastric carcinomas (DGCs) according to the Lauren classification system (Lauren, 1965). In 2010, the World Health Organization proposed a division of gastric cancer into papillary, tubular, mucinous (colloid) and poorly cohesive carcinomas. However, these classification systems show little clinical therapeutic utility. Fortunately, new molecular classifications that were recently confirmed by genome sequencing analysis provide a guide to targeted agents that should be evaluated through clinical trials for distinct gastric cancer patients. Gastric cancer is divided into four subtypes according to the new molecular classification (Figure 1): tumors positive for Epstein-Barr virus (EBV), tumors with microsatellite instability (MSI), genomically stable tumors (GS), and tumors with chromosomal instability (CIN) (Cancer Genome Atlas Research, 2014). These molecular subtypes show distinct genomic features. EBV-infected tumors are chromosomally stable but present a significantly enriched EBV burden, showing extensive genome-wide hypermethylation and minimal demethylation (Cheng et al., 2015; Strong et al., 2013). In addition, EBV-infected tumors display frequent ARID1A, BCOR and PIK3CA mutations, 9p chromosome amplification, and a lack of TP53 mutations, which contrasts with the high TP53 mutation frequency observed in CIN and MSI tumors (Cancer Genome Atlas Research, 2014; Wang et al., 2014). MSI tumors additionally exhibit a high prevalence of DNA promoter hypermethylation, such as at the MLH1 promoter, which is different from EBV-associated DNA hypermethylation (Leite et al., 2011; Park et al., 2013). MSI tumors exhibit elevated mutation frequencies of genes encoding targetable oncogenic proteins (TP53, KRAS, ARID1A, PIK3CA, ERBB3, PTEN and HLA-B) (Cancer Genome Atlas Research, 2014; Wang et al., 2014). Although few clear targets are observed, GS tumors are enriched in the diffuse histological variant and in mutations of CDH1 and RHOA or fusions of RHO-family GTPase-activating proteins. CIN tumors are characterized by high frequencies of TP53 mutation (71%), CDH1 mutation (37%), marked aneuploidy, and focal amplification of receptor tyrosine kinases that are clinically relevant therapeutic targets (Cancer Genome Atlas Research, 2014). Genomic landscaping and molecular classification may provide a valuable adjunct to histopathology and a roadmap for gastric cancer patient stratification and trials of targeted therapies.
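As an illustrative (and deliberately non-clinical) sketch of how the four-subtype call above can be made sequentially — EBV positivity first, then MSI status, then genomically stable versus chromosomally unstable — the toy classifier below encodes that decision order. The field names, the copy-number-burden metric and its cutoff are invented for the example and are not parameters defined by the TCGA analysis.

```python
# A toy, sequential subtype assignment; thresholds and fields are invented.
from dataclasses import dataclass


@dataclass
class TumorProfile:
    ebv_positive: bool
    msi_high: bool
    somatic_cna_burden: float  # fraction of genome with copy-number change


def classify(t: TumorProfile, cna_cutoff: float = 0.2) -> str:
    if t.ebv_positive:          # EBV positivity is checked first
        return "EBV"
    if t.msi_high:              # then microsatellite instability
        return "MSI"
    # remaining tumors split by degree of chromosomal instability
    return "CIN" if t.somatic_cna_burden >= cna_cutoff else "GS"


print(classify(TumorProfile(False, False, 0.05)))  # -> GS
print(classify(TumorProfile(False, False, 0.45)))  # -> CIN
```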
Chromatin remodeling genes
Nearly half of gastric cancers harbor mutations in chromatin remodeling genes (Zang et al., 2012). ARID1A, the AT-rich interacting domain containing protein 1A, is one of the most commonly mutated chromatin remodeling genes and has been reported to exhibit a mutation frequency ranging from 8% to 27% in gastric cancer samples (Abe et al., 2012; Inada et al., 2015; Wang et al., 2012). The majority of ARID1A mutations are frameshift or nonsense mutations that ultimately reduce the expression of ARID1A protein in the cells (Zang et al., 2012). The function of ARID1A is mainly involved in DNA mismatch repair (Inada et al., 2015); therefore, the loss of ARID1A may lead to genomic instability. Although the tumor suppressor role of ARID1A has been confirmed (Guan et al., 2011; Zang et al., 2012), it remains difficult to restore the weakened expression of ARID1A in patients, and effective therapeutic approaches to target cancers with ARID1A mutations have yet to be elucidated. Fortunately, synthetic lethality provides new insights regarding approaches to target cancer cells with ARID1A aberrations. Synthetic lethality exploits the fact that many cancer cells acquire defects in DNA repair pathways and become dependent on a compensatory mechanism to survive (Farmer et al., 2005; Jekimovs et al., 2014); inhibition of the compensatory DNA repair pathway selectively kills cancer cells with a defect in a particular DNA repair pathway. Because ARID1A plays a key role in chromatin remodeling (Inada et al., 2015), the loss of its function makes cells rely on other compensatory pathways for the maintenance of genomic stability and the promotion of survival. Therefore, it may be feasible to identify pathways that compensate for the reduced expression of ARID1A. EZH2 methyltransferase, the catalytic subunit of polycomb repressive complex 2, was recently identified as one such factor that shares a synthetic lethal relationship with ARID1A aberrations (Bitler et al., 2015). EZH2 helps maintain genomic stability by generating the lysine 27 trimethylation mark on histone H3 (H3K27me3) through its catalytic SET domain (Cao and Zhang, 2004). EZH2 inhibition causes regression of ARID1A-mutated ovarian tumors in vivo; although this effect has not been confirmed in ARID1A-mutated gastric cancer, EZH2 inhibition may nevertheless be a rational and promising therapeutic approach for gastric cancer treatment. In addition to EZH2 methyltransferase, the PI3K/Akt pathway is another key pathway that acts in a synthetically lethal manner in tumors with defective ARID1A (Zang et al., 2012). ARID1A mutations were found to be significantly associated with tumors harboring PIK3CA-activating mutations (Yamamoto et al., 2012) or loss of PTEN expression (Bosse et al., 2013), and ARID1A-deficient cells show increased phosphorylation of Akt at the Ser473 site (Liang et al., 2012). These data suggest that loss of ARID1A expression sensitizes cancer cells to PI3K or Akt inhibitors (Samartzis et al., 2014). Because PI3K inhibitors are currently under clinical evaluation in gastric cancer (Fuereder et al., 2011), it will be necessary to pre-select patients with respect to their ARID1A status prior to treatment with PI3K or Akt inhibitors. Apart from ARID1A, other members of the SWI-SNF complex (ARID1B, PBRM1 and SMARCC1), the ISWI complex (SMARCA1) and the NuRD complex (CHD3, CHD4 and MBD2), as well as other genes encoding histone-modifying proteins (SIRT1 and SETD2), are also mutated in 59% of gastric cancers (Zang et al., 2012).
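To make the synthetic lethality logic above concrete, the toy sketch below encodes the two-pathway survival rule: a cell survives if either the primary function (here, ARID1A) or the compensatory activity (here, EZH2) is intact, so an inhibitor of the compensatory pathway kills only the mutant cells. This is an illustrative model, not a biological simulation; the fields and the survival rule are invented for the example.

```python
# A toy illustration of synthetic lethality, not a biological model.
from dataclasses import dataclass, replace


@dataclass
class Cell:
    arid1a_functional: bool  # lost in ARID1A-mutant tumor cells
    ezh2_active: bool        # the compensatory activity a drug can block


def survives(cell: Cell) -> bool:
    # Either pathway alone is enough to keep the cell alive.
    return cell.arid1a_functional or cell.ezh2_active


def treat_with_ezh2_inhibitor(cell: Cell) -> Cell:
    return replace(cell, ezh2_active=False)


normal = Cell(arid1a_functional=True, ezh2_active=True)
tumor = Cell(arid1a_functional=False, ezh2_active=True)

for name, c in [("normal", normal), ("ARID1A-mutant", tumor)]:
    after = treat_with_ezh2_inhibitor(c)
    print(f"{name}: before={survives(c)}, after inhibitor={survives(after)}")
# Only the ARID1A-mutant cell dies: the inhibitor is selectively lethal.
```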
Moreover, histone methyltransferase genes and epigenetic modifier genes are also mutated at a relatively high frequency in gastric cancer, and other genes involved in the maintenance of genomic stability may likewise be inactivated. BRCA1 or BRCA2 inactivation, which occurs in approximately 10% of gastric cancer patients, is closely correlated with PARP1 function (Farmer et al., 2005). This synthetic lethal relationship has led to the exploitation of PARP1 inhibitors in the treatment of BRCA1-inactivated breast cancer (Bryant et al., 2005). Further investigation of synthetic lethality may provide additional avenues that will broaden the horizon of targeted therapeutic strategies for gastric cancer.

RhoA-related pathways
Rho GTPases are subsets of the Ras superfamily that regulate and coordinate cell motility, the cell cycle, cytoskeleton remodeling and other cellular processes (Iden and Collard, 2008; Lu et al., 2009b; Narumiya et al., 2009; Sahai and Marshall, 2002). RhoA, an important member of the Rho GTPase family, plays a critical role in stress fiber formation, which is involved in cell invasion, metastasis and tumorigenesis through its downstream effectors, including ROCK1, protein kinase N, mDia and citron (Lu et al., 2009b; Narumiya et al., 2009). In breast cancer, esophageal squamous cell carcinoma, colon cancer and other solid tumors, RhoA protein is overexpressed and serves as a quantitative marker for prediction of progression stage and prognosis in molecular detection strategies (Bellizzi et al., 2008; Fritz et al., 1999; Fukui et al., 2006; Malissein et al., 2013; Zhang et al., 2013). RasV12 and loss of p53 synergistically induce RhoA activity (Xia and Land, 2007). More recently, several studies have discovered novel mutations of RHOA (14.3% to 25.3%) and somatic genomic alterations of RhoA-related Rho-GAPs in GS tumors with diffuse-type histological characteristics, whereas these have rarely been observed in other gastric cancer subtypes (Cancer Genome Atlas Research, 2014; Kakiuchi et al., 2014; Wang et al., 2014; Zhou et al., 2014), suggesting that the RhoA-related pathway might be a novel signaling driver in GS tumors. Recurrent inactivating mutations in the RhoA GTPase have been reported in T cell lymphoma (Cools, 2014; Palomero et al., 2014; Sakata-Yanagimoto et al., 2014; Yoo et al., 2014). In GS tumors, by contrast, mutant RhoA affects known functional domains related to effector interaction or GTP binding and works in a gain-of-function manner (Kakiuchi et al., 2014). The Tyr42, Arg5 and Gly17 residues of the RhoA protein have been identified as key mutation hotspots (Cancer Genome Atlas Research, 2014; Kakiuchi et al., 2014). The Tyr42 mutation attenuates the activation of protein kinase N but does not affect mDia activation (Cancer Genome Atlas Research, 2014). The TCGA project detected missense mutations at Tyr42 and Asp59 of RhoA and mapped them onto the structures of RHOA and ROCK1 (Cancer Genome Atlas Research, 2014). Four alterations in the effector domain (at the Tyr34, Phe39, Glu40 and Tyr42 sites) have been shown to impair the binding of RhoA to its effector proteins. In addition to RhoA mutations that drive GS tumors by disrupting Rho signaling, the TCGA project further found RHOA-COL7A1 and COL27A1-ZNF618 fusions in GS tumors (Cancer Genome Atlas Research, 2014; Ushiku et al., 2015; Wang et al., 2014). Gain-of-function and structural variants of RhoA dysregulate Rho signaling and trigger an invasive phenotype in diffuse gastric cancer.
Agents targeting RhoA itself or its effectors in the RhoA oncogenic pathways, e.g., ROCK I and ROCK II, have shown therapeutic benefits in cardiovascular disease, urogenital disorders and other types of cancer (Gur et al., 2011; Nunes et al., 2010), suggesting a therapeutic potential for GS tumors (Sadok et al., 2015; Shang et al., 2012). The selective Rho-kinase inhibitors Y-27632 and fasudil have been tested in pre-clinical and clinical studies (Molli et al., 2012; Olson, 2008). Y-27632 exhibits an anti-tumor effect against Ehrlich's ascites carcinoma in mice through binding to the Rho-kinase ATP-binding pocket in an ATP-competitive manner (Olson, 2008). In different cell lines, including cells isolated from a chronic myeloid leukemia patient (Molli et al., 2012), myeloid cells bearing oncogenic forms of KIT, FLT3 and BCR-ABL (Mali et al., 2011) and human colon cancer cells (Attoub et al., 2002), the Clostridium botulinum C3 exoenzyme can inhibit Rho through ADP-ribosylation, triggering cell apoptosis and decreasing cell proliferation. A small molecular compound, Rhosin, a RhoA inhibitor, was discovered through virtual screening (Shang et al., 2012). Rhosin exhibits dose-dependent inhibitory activity toward RhoA by targeting the GEF-interactive site of RhoA; it also suppresses the invasion of mammary epithelial cells and mammary sphere formation (Shang et al., 2012). The compound Y16 works synergistically with Rhosin in the inhibition of the LARG-RhoA interaction, RhoA activation and RhoA-mediated signaling by targeting G-protein-coupled Rho guanine nucleotide exchange factors; the combination of Y16 and Rhosin effectively inhibits the growth, migration and invasion of breast cancer cells (Shang et al., 2013). To date, no inhibitors of RhoA signaling pathways have progressed into standard clinical therapy. However, the optimization of such inhibitors could be useful for future gastric cancer treatments.

TP53 dysregulation
TP53 mutations have been observed in approximately 40% of gastric cancers (Iwamatsu et al., 2001; Oki et al., 2009), making them one of the most prevalent genetic alterations in gastric cancer. In addition to mutations at six discrete hotspot codons within the DNA-binding domain of TP53, loss of heterozygosity (LOH) of the TP53 gene is a main cause of the loss of function of p53 (Kobayashi et al., 1996; Smith et al., 2006; Tahara, 2004). Although the prognostic impact of TP53 abnormalities in gastric cancer remains controversial (Gamboa-Dominguez et al., 2007; Lee et al., 2014; Liu et al., 2012; Sumiyoshi et al., 2006; Wei et al., 2015), the correlation between TP53 abnormalities and the occurrence of aneuploidy is becoming increasingly clear (Cesar et al., 2004; Gobbo Cesar et al., 2006; Lu et al., 2009a). This finding is reasonable because the maintenance of genome stability is one of the key roles of p53 (Belyi et al., 2010), often called "the guardian of the genome" (Suzuki and Matsubara, 2011). Data from recently published second-generation sequencing studies (Cancer Genome Atlas Research, 2014; Wang et al., 2014) also support this notion, in that a cluster of TP53 mutations is observed in one specific subtype of gastric cancer, namely gastric tumors with CIN. Because TP53 alterations are closely associated with gastric cancer tumorigenesis, it is better to discuss gastric cancer treatments with respect to p53 status. One feasible therapeutic approach to target p53-deficient tumors is to restore the function of p53.
To achieve this goal, recombinant adenoviruses encoding p53 have been developed (Jekimovs et al., 2014). The results from clinical trials of two adenovirus-mediated p53 (Ad-p53) cancer gene therapies, advexin (Senzer and Nemunaitis, 2009; Wolf et al., 2004) and SCH-58500 (Atencio et al., 2006; Buller et al., 2002), demonstrated the safety and feasibility of their administration. However, the anti-tumor efficacy of these therapies has been limited in some cancer patients. This insufficiency may be attributed to the low transduction of p53 into cancer cells via these Ad-p53 vectors. To overcome these transduction defects, conditionally replication-competent oncolytic adenovirus (CRAd-p53) vectors have been developed. The CRAd-p53 vectors exploit the promoters of cancer-related genes to maintain stable, high virus expression. Initial in vitro and in vivo studies focusing on AdDelta24-p53 (van Beusechem et al., 2002), SG600-p53 (Wang et al., 2008) and OBP-700 (Yamasaki et al., 2012) demonstrated equivalent safety and improved anti-tumor effects compared with those of their Ad-p53 counterparts. The efficacy of recombinant Ad-p53 in gastric cancer cell lines was recently proven, either as a monotherapy or in combination with oxaliplatin (Chen et al., 2011); however, its in vivo anti-tumor activity in the treatment of gastric cancer awaits further investigation. Although tumor suppressor genes such as TP53 are rarely tractable, low-molecular-weight compounds capable of selectively targeting p53-deficient tumors were recently identified. One of the most effective p53 reactivators, PRIMA-1 (Bykov et al., 2002), is able to restore the specific DNA binding and transcriptional transactivation functions of mutant p53 by stabilizing the p53 core domain and promoting wild-type folding. The administration of PRIMA-1 leads to p53-dependent apoptosis in a range of tumors with p53 deficiency. APR-246 (PRIMA-1MET), an analog of PRIMA-1, was tested in a Phase I/II clinical trial with promising results (Lehmann et al., 2012). Another recently reported promising compound that targets p53-deficient cancers is an FDA-approved drug for the treatment of type 1 and type 2 diabetes (Venkatanarayan et al., 2015). Pramlintide, a synthetic analogue of amylin, has been demonstrated to trigger rapid tumor regression in p53-deficient thymic lymphomas; this anti-tumor effect has been attributed to the ability of pramlintide and amylin to inhibit glycolysis and induce reactive oxygen species and apoptosis. These sophisticated strategies to target p53-mutant cancers will provide novel insights into the treatment of gastric cancer.

Receptor tyrosine kinases
As components of one of the most frequently dysregulated pathways, receptor tyrosine kinases (RTKs) are activated by either copy number gains or hotspot mutations that maintain an active conformation of the protein kinase domain (Carrera et al., 2014). In gastric cancer, the majority of dysregulated RTKs belong to the epidermal growth factor receptor family (ERBB). Activation of ERBB2 (HER2) is found in approximately 17% of gastric cancer samples, with gene amplification being the main cause (Gravalos and Jimeno, 2008; Yano et al., 2006). This finding prompted the initiation of clinical trials in gastric cancer designed to explore the efficacy of trastuzumab, a monoclonal antibody against the extracellular domain of the HER2 protein (Gravalos et al., 2011).
The median overall survival was favorable for the trastuzumab-plus-chemotherapy arm, with a 26% reduction in the death rate. These promising results prompted the approval of trastuzumab for the treatment of gastric cancer. Apart from trastuzumab, ramucirumab, a fully human IgG1 anti-VEGFR-2 monoclonal antibody, was approved for patients with gastric cancer or GEJ (gastro-esophageal junction) adenocarcinoma who show progression following fluoropyrimidine- or platinum-containing chemotherapy. Although the clinical application of trastuzumab and ramucirumab improves the overall survival of gastric cancer patients, treatment resistance is inevitable. Primary resistance, as well as acquired resistance, is a major reason for the treatment failure observed in patients. Because PIK3CA is one of the main downstream effectors of RTK signaling, its hotspot mutations, which lead to constitutive PI3K pathway signaling even in the absence of growth factors, usually induce primary resistance to RTK inhibition (Velho et al., 2005). Because nearly 80% of EBV-positive gastric cancer tumors harbor altered PIK3CA (Cancer Genome Atlas Research, 2014), it is necessary to conduct a pre-selection of patients with respect to the status of PIK3CA and its suppressor gene, PTEN, prior to the onset of trastuzumab treatment. In HER2-overexpressing tumors, the predominant mechanism of resistance is compensatory signaling by other cell-surface receptors, including the reprogramming of IGF1R, MET, GDF15 and other members of the ERBB family (Nahta, 2012). Because the PI3K pathway is commonly shared by different RTKs, its inhibitors may be a useful option for overcoming trastuzumab resistance in the future. Other mechanisms underlying treatment resistance include the expression of a selectively truncated version of HER2 (Molina et al., 2001; Nagy et al., 2005), alterations of the focal adhesion kinases FAK and Src (Gong et al., 2004) and STAT3 activation (Korkaya et al., 2012). It has now become increasingly clear that cancer cells have many redundant mechanisms that confer resistance to targeted therapies and that rational drug combinations are needed to achieve enhanced efficacy.

Stem cell pathways
Aberrations in stem cell pathways result in fibrosis, degenerative diseases and cancer. Transforming growth factor β (TGF-β), Wnt and hedgehog signaling are pivotal pathways that influence cell division, invasion, migration and ulcer repair processes (Lagasse, 2008; Zhao, 2014). Disorders in these processes usually lead to gastric adenocarcinoma and other tissue-specific gastrointestinal cancers (Stojnev et al., 2014). Genome sequencing and comprehensive molecular profiling have identified novel driver mutations involved in stem cell pathways in gastric cancer. Genes in the TGF-β pathway have been predicted to be key drivers in both MSI and other types of gastric cancers (Wang et al., 2014). TGFBR2, ACVR2A, SMAD4, SMAD2 and ELF3 mutations have been observed in MSI tumors and microsatellite-stable (MSS) tumors (Cancer Genome Atlas Research, 2014; Wang et al., 2014). ELF, the TGF-β adaptor protein, and the common mediator Smad4 are important for the maintenance of cell structure and the conferment of cell polarity (Katuri et al., 2006; Levy and Hill, 2005). Inactivating mutations of ELF3 in gastric cancer may in particular lead to the silencing of TGF-β signaling through reduced TGFBR2 expression (Park et al., 2001).
Additionally, inactivating mutations affecting TGF-β signaling also occur in pancreatic carcinomas and colon cancers (Katuri et al., 2006). Loss of expression of Smad4 and ELF in advanced colorectal cancer is indicative of poor prognosis. In such cases, tumor cells may enhance their own proliferative, invasive and metastatic behavior through other aspects of TGF-β signaling (Ikushima and Miyazono, 2010); thus, TGF-β signaling can switch from a tumor-suppressing to a tumor-promoting function. TMEM16A, a membrane protein associated with calcium-dependent chloride channel activity (Caputo et al., 2008), has been reported to be significantly upregulated and amplified in gastric cancer tissues and to contribute to tumor invasion and poor prognosis in gastric cancer through the TGF-β pathway. Recurrent mutations in genes encoding Wnt pathway molecules, such as APC, MACF1 and CTNNB1, have frequently been found in both DGCs and IGCs (Anastas and Moon, 2013; Kakiuchi et al., 2014). CTNNA2, which encodes one component of cell-adhesion complexes, is mutated in 6.4% of MSS tumors, and this mutation has been identified as a novel driver mutation in gastric cancer. RNF43, which encodes an E3 ubiquitin ligase involved in the deregulation of the Wnt pathway, is also inactivated by mutation in MSS tumors (Koo et al., 2012). Several targeted agents against Wnt signaling are being developed, including β-catenin-TCF antagonists and other mechanism-based inhibitors that principally target enzymes (Kahn, 2014). All of these agents are still in their infancy and need to be evaluated for their clinical efficacy and safety in gastric cancer patients. Nevertheless, some nonspecific modulators affect the Wnt pathway, such as non-steroidal anti-inflammatory drugs (NSAIDs) (Sandler et al., 2003), COX2 inhibitors (Grosch et al., 2001; Xie et al., 2012), vitamins (Klampfer, 2014; Larriba et al., 2011; So and Suh, 2015), polyphenols (Oh et al., 2014) and other FDA-approved drugs. In addition, the combination treatment of the ERK1/2 pathway inhibitor PD98058 and DAPT, a potent γ-secretase inhibitor, has been shown to markedly sensitize gastric cancer cells to apoptosis by suppressing β-catenin signaling (Yao et al., 2013). Furthermore, GLI3 and ZIC4, which are involved in hedgehog signaling, are also important driver genes affecting a portion of MSS tumors. The hedgehog pathway inhibitor vismodegib, which was approved in 2012 by the FDA for the treatment of locally advanced and metastatic basal cell carcinoma (BCC) (Chang et al., 2014; Sandhiya et al., 2013; Wilkes, 2012), has also been employed in the treatment of advanced gastric and gastroesophageal junction cancer. Patients administered FOLFOX and vismodegib simultaneously achieved better median overall survival (vismodegib + FOLFOX, 14.9 months, compared with FOLFOX alone, 11.5 months) (NCT00982592) (Cohen et al., 2013). BMS-833923, another SMO inhibitor, has also been combined with cisplatin and capecitabine for the treatment of inoperable metastatic gastric or gastroesophageal cancer patients (NCT00909402). Further clinical studies on hedgehog pathway inhibitors, such as vismodegib, BMS-833923, saridegib and LY2940680, are needed in gastric cancer (Justilien and Fields, 2015; Sandhiya et al., 2013). Amplifications of the genes that encode the stem cell markers CD44 and CD24, along with other stem cell signaling biomarkers, constitute additional aberrations that have been reported in gastric cancer (Zhang et al., 2011b).
These novel driver gene mutations of stem cell pathways may be exploited as biomarkers for the unveiling of potential therapeutic targets.

Aberrant DNA methylation

Cancer cells typically present aberrant DNA methylation, including peculiar gene promoter CpG island hypermethylation and global genomic DNA hypomethylation (He et al., 2015). These changes usually occur at the 5′ position of the cytosine ring within CpG dinucleotides, resulting in the silencing of genes and non-coding genomic regions (Cheng et al., 2013). Tumor-suppressor gene methylation is one of the most well-defined epigenetic alterations involved in gastric carcinogenesis (Qu et al., 2013). Approximately 400 genes are actively expressed in normal gastric epithelial cells, but these genes can be inactivated in gastric cancers through hypermethylation of their gene promoter CpG islands (Kang, 2012). Aberrant gene promoter CpG island hypermethylation occurs early in multi-stage gastric carcinogenesis and tends to increase in a step-wise manner in the progression toward malignancy (Cheng et al., 2013). Because hypermethylation of tumor-suppressor gene promoters is a common characteristic of gastric cancer cells, inhibition of DNA methyltransferases has emerged as an effective strategy against gastric cancer (Egger et al., 2004). Aberrant DNA methylation might be an important mechanism in EBV-related and MSI gastric carcinogenesis (Matsusaka et al., 2011). EBV-associated gastric cancer accounts for almost 10% of all gastric cancers and has distinctive clinicopathological characteristics, such as a male predilection and a preferential location in the cardia and middle part of the stomach (Cheng et al., 2015). EBV-positive tumors show the highest degree of genome-wide hypermethylation with minimal demethylation, whereas MSI cancers accumulate a large number of hypermethylated promoters together with demethylation outside promoter regions. Additionally, all EBV-positive tumors display CDKN2A (p16INK4A) promoter hypermethylation, while the MSI subtype exhibits MLH1 hypermethylation. The extreme CpG island methylator phenotype in EBV-positive tumors is different from that in the MSI subtype, which mirrors differences between the two groups in their spectra of mutations and gene expression. EBV-positive tumors have a substantially higher frequency of promoter DNA hypermethylation and fewer gene mutations than the MSI subtype (Cancer Genome Atlas Research Network, 2014; Wang et al., 2014). Methylation of the p16 gene promoter occurs frequently in various human cancers, including colon, lung, breast, bladder and gastric cancers (Alves et al., 2011; Celebiler Cavusoglu et al., 2010; Jablonowski et al., 2011; Veganzones-de-Castro et al., 2012; Zhang et al., 2011a). p16, a member of the cyclin-dependent kinase inhibitor (CKI) family, blocks cells in the G1 phase and induces apoptosis through the activation of caspase-3 (Merlo et al., 1995). In gastric cancer, p16 shows a high frequency and density of promoter methylation accompanied by loss of expression (Na and Woo, 2014). The demethylating agent 5-aza-dC markedly upregulates the expression of the p16 gene and promotes cell apoptosis, suggesting that hypermethylation of the p16 promoter might be involved in EBV-associated gastric carcinogenesis and that demethylation therapy may be a novel therapeutic strategy for EBV-associated gastric cancer (He et al., 2015). Several DNA-demethylating agents have been tested in clinical trials and have already been applied in clinical therapy, e.g., 5-azacytidine and 5-aza-2′-deoxycytidine.
Both of these agents are used to treat all subtypes of myelodysplastic syndrome and acute myelogenous leukemia (Kaminskas et al., 2005; Prakash et al., 2001; Yoo and Jones, 2006). DNA-demethylating agents used either alone or in combination with chemotherapeutic drugs and histone deacetylase inhibitors have been shown to be effective in the treatment of cancers. For instance, 5-aza-2′-deoxycytidine is used to treat ovarian cancer in combination with carboplatin (Appleton et al., 2007; Pohlmann et al., 2002). Promoter hypermethylation of tumor-suppressor genes, e.g., p16, CDH1 and MLH1, occurs at a high frequency in gastric cancer cells. Several studies have shown that DNA demethylation is an effective strategy for the treatment of hypermethylation-associated gastric cancer (He et al., 2015; Na and Woo, 2014). There is no doubt that DNA methyltransferase inhibition is a potential remedy for gastric cancer.

CONCLUSIONS

A comprehensive understanding of the mechanisms underlying gastric cancer tumorigenesis is indispensable for therapeutic development. Although gastric cancer lags behind many other tumor types with respect to genetic sequencing and specific targeted therapies, recently published data employing second-generation sequencing have broadened our horizons regarding the genomic landscape of gastric cancer. A new molecular classification has been described and found to correlate with the distinct salient genomic features of the gastric cancer subtypes. The identification of these subtypes will offer a roadmap for patient stratification and for trials of targeted therapies. Another notable achievement of the sequencing studies is the discovery of gain-of-function mutations in RhoA that are associated with gastric cancer tumorigenesis. Further investigation of RhoA-related therapies in gastric cancer may be beneficial for the improvement of patient survival. Despite the continual emergence of genomic alterations in gastric cancer, few of these qualify as potential therapeutic targets, largely because loss-of-function mutations were thought to be intractable and hard to target. The synthetic lethality theory provides new insights into the tightly correlated functions of distinct genes. Many synthetic lethal genes have been identified in tumor cells in which tumor-suppressor genes were somehow silenced. These genes, on which the tumor has become dependent, may be validated as potential targets if selective toxicity toward cells carrying the corresponding genetic alterations can be achieved. However, further studies are needed to evaluate both the feasibility and the safety of these potential targets in gastric cancer treatment. As the first validated functional target, HER2 is overexpressed in approximately 17% of gastric cancer patients. The anti-HER2 antibody trastuzumab has been demonstrated to be beneficial for the improvement of overall survival. Despite the success of trastuzumab in gastric cancer treatment, acquired resistance is inevitable. A better understanding of the molecular mechanisms underlying resistance is necessary to overcome this clinical problem. Moreover, the importance of combinatorial strategies in the battle against resistance should never be underestimated if a sustainable response is desired.
Perioperative blood loss in open retropubic radical prostatectomy - is it safe to get operated at an educational hospital?
Introduction: Blood loss during radical prostatectomy has been a long-term issue. The aim of this study was to investigate the influence of the training level of the first assistant on blood loss in open retropubic radical prostatectomy at an educational hospital. Material and methods: 364 patients underwent radical prostatectomy from 11/2006 to 10/2007 at one institution, all operated on by one surgeon. In 319 patients all predefined parameters were obtained. Training level was determined by year of residency (1-5 yrs) or consultant status. Perioperative blood loss was calculated using three parameters: hemoglobin level before and after surgery, postoperative sucker volume, and weight of compresses. Furthermore, the influence of prostatic size and BMI was analyzed. Results: The median Hb decrease 24 h postoperatively was 2.4 g/dl (range: -0.4 to 7.6 g/dl); the median sucker volume was 250 ml (range: 10-1500 ml); the median weight of compresses and swabs was 412 g (range: 0-972 g). One patient needed a transfusion of two erythrocyte concentrates one day after surgery. There was no significant correlation between the years of residency of the assisting physician and Hb decrease (p = 0.86) or sucker volume plus weight of compresses (p = 0.59). The number of previously assisted operations (fewer versus more than 20) also had no significant influence on calculated blood loss (p = 0.38). Conclusions: For an experienced surgeon, the impact of the assistant on blood loss seems negligible. The training level of the assistant was not significantly correlated with a rise or decrease in perioperative blood loss. In our data, radical prostatectomy could be safely performed at an educational hospital independent of the training level of the first assistant.

INTRODUCTION

Blood loss during radical prostatectomy has been a long-term issue. New techniques and growing experience have made radical prostatectomy a safe procedure with a well-defined risk for patients. Different factors, like body mass index, prostatic volume, and pelvic size, have been described as potentially influencing blood loss during radical prostatectomy. [13; 14] To estimate a possible impact of the first assistant on perioperative blood loss, three parameters were analysed in this study: hemoglobin level before surgery and 24 h after surgery, sucker volume after surgery, and weight of all used compresses and swabs after surgery. To avoid possible bias, all operations were performed by the same surgeon within less than one year of observation.

MATERIAL AND METHODS

We prospectively analysed the data of 364 prostate cancer patients who were consecutively admitted to our hospital for open radical prostatectomy. All patients were operated on between November 2006 and October 2007 by one surgeon. Of the 364 patients, 319 could be evaluated according to all three predefined parameters. Every operation was performed in a one-assistant setting. The training level of the assisting physician ranged from first-year residency up to the status of a well-experienced consultant. In most cases bilateral pelvic lymphadenectomy was performed. The assistant performed the lymphadenectomy unilaterally under the guidance of the surgeon. Lymphadenectomy was performed mainly as the so-called standard variant, including lymph nodes in the obturator fossa and along the external iliac artery. Perioperative blood loss was calculated using three predefined parameters.
Hemoglobin level (Hb, g/dl): Routine blood parameters, including haemoglobin (g/dl), were obtained on the day of the patients' admittance to the hospital. The normal range of the Hb level for men was 14 to 18 g/dl according to our laboratory. All routine blood parameters were determined a second time 24 h after surgery. Based on these results, the postoperative Hb decrease was calculated. The Hb level was determined a third time (5-8 days after surgery) in 244 of 319 cases (76%). Sucker volume (ml): The sucker volume was determined after the operation (10 ml scale). Weight of used compresses and swabs (g): All used compresses and swabs were weighed before and after surgery. The predetermined dry weight was subtracted from the final weight after surgery. From the total weight, the amount of irrigation fluid used (100 ml = 100 g), the calculated urine production during the time of an opened urethra (average of 20 patients, 50 ml = 50 g) and the urinary catheter balloon block volume (15 ml = 15 g) were subtracted. All procedures were performed as open retropubic radical prostatectomy. Assisting doctors were in their first, second, third, fourth or fifth year of residency or held consultant status (3, 2, 3, 3, 2 and 10 in number, respectively). Changes in the year of residency during the observation time were respected in the final evaluation. Body mass index and prostatic size (determined by transrectal ultrasound) were assessed at the time of the patients' admittance to the hospital. The first aim of this study was to determine a possible correlation between the training level of the assisting doctor and the calculated perioperative blood loss. Furthermore, the influence of prostatic size and BMI on the estimated perioperative blood loss was analysed.
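To make the compress-weight bookkeeping concrete, the following short sketch (our own illustration in Python; function names and the example inputs, other than the stated corrections, are hypothetical) reproduces the arithmetic described above:

# Hedged sketch of the blood-loss bookkeeping described above; names and
# example inputs are illustrative, not taken from the study data.
def blood_in_compresses_g(weight_after_g, dry_weight_g,
                          irrigation_g=100.0,  # 100 ml irrigation = 100 g
                          urine_g=50.0,        # mean urine while urethra open
                          balloon_g=15.0):     # catheter balloon block volume
    """Weight gain of compresses/swabs attributable to blood (1 g ~ 1 ml)."""
    return (weight_after_g - dry_weight_g) - irrigation_g - urine_g - balloon_g

def combined_blood_loss_ml(sucker_volume_ml, compress_blood_g):
    """Combined parameter analysed in the study: sucker volume plus
    blood-attributable compress weight, with grams taken as millilitres."""
    return sucker_volume_ml + compress_blood_g

# Example: compresses weighing 677 g after surgery with a dry weight of 100 g
# yield 677 - 100 - 100 - 50 - 15 = 412 g, the reported median value.
compress_blood = blood_in_compresses_g(677.0, 100.0)
print(combined_blood_loss_ml(250.0, compress_blood))  # 662.0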
RESULTS

On the first postoperative day, the median decrease in the haemoglobin level was 2.4 g/dl for all cases (range: -0.4 to 7.6 g/dl). The median sucker volume was 250 ml (range: 10-1500 ml). The median weight of compresses and swabs was 412 g (range: 0-972 g), calculated as described above. Sucker volume and the weight parameter were combined for analysis. No significant correlation was detected between the level of Hb decrease and the training level of the assistant (p = 0.86, see Fig. 1). No significant difference between the Hb value 24 h after surgery and that 5-8 days after surgery was detected either. There was also no significant correlation between sucker volume plus weight of compresses and the training level of the assistant (p = 0.59). The parameters for each year of residency are presented in Table 1. There was no significant difference between 1-3 years vs. > 3 years of residency with regard to sucker volume plus weight of compresses (p = 0.59). No significant difference was detected in Hb decrease (p = 0.22) or sucker volume plus weight of compresses (p = 0.38) between physicians with more versus fewer than 20 assisted radical prostatectomies. No statistically significant correlation was detected between perioperative blood loss and body mass index (p = 0.32) or prostatic size (p = 0.26) (see Table 2). One patient needed a transfusion of two erythrocyte concentrates one day after surgery. For statistical analysis, the Mann-Whitney U test and Kruskal-Wallis analysis were performed.

DISCUSSION

Blood loss during radical prostatectomy has been a long-term issue. This observation is supported by the current literature, which includes a series of studies comparing different surgical techniques with regard to complications and blood loss. [1-5; 7-12; 15] Different factors have been studied so far as causes of increased perioperative blood loss, including patients' body mass index [14; 16; 17], prostate size [14; 18] and pelvic size [13]. In our study we investigated for the first time the influence of the training level of the first assistant on perioperative blood loss in open radical prostatectomies. The hypothesis for this study was that, especially in a demanding operation like open radical prostatectomy, assistance by a relatively inexperienced physician could be related to a higher level of perioperative blood loss. This is a common concern of patients who are operated on at an educational hospital. Inexperience of the assistant in surgery in general, or misinterpretation of a critical situation, could cause initial bleeding or prolong the actual bleeding time during surgery. This question seems especially important at an educational hospital, where a large number of physicians start their surgical training. This study was initiated to verify optimal surgical care and to provide data for patient information on this issue. To analyze the influence of the assisting physician only, all evaluated operations were performed by one surgeon within less than one year of observation time, avoiding bias from different surgeons or a possible change of technique over time. The obtained data show that perioperative blood loss in open radical prostatectomy, calculated by the use of three different parameters, is not statistically significantly correlated with the training level of the assistant. The study was designed to estimate perioperative blood loss as accurately as possible. Concerning the Hb-decrease parameter, the question was raised whether a single postoperative Hb measurement was sufficient to base further conclusions on. Therefore the Hb level was determined one more time five to eight days after surgery in 76% of cases and correlated with the 24 h value. No significant difference between the two levels was detected, so the 24 h value was used for all calculations. To determine perioperative blood loss as accurately as possible, details like perioperative urine production, the amount of irrigation fluid and the catheter balloon block volume were obtained and accounted for in the final calculation. Analyzing the dataset, there were no significant differences in the readings as the experience of the assisting doctor varied. Perioperative blood loss therefore seems to depend mainly on the surgeon himself or herself, as she/he is the person who prevents or controls major bleeding. The assistant presumably influences perioperative blood loss only by making a substantial mistake, such as injuring a major blood vessel during lymphadenectomy, or by slowing down the treatment of acute bleeding through inappropriate reactions. Neither condition was observed, or at least neither had statistical significance, in the presented dataset. Interestingly, the presented data also demonstrate that even a high training level of the assisting physician is not correlated with a decrease in perioperative blood loss. This further supports the theory that it is mainly the surgeon her/himself who is responsible for perioperative blood loss.
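The rank-based comparisons named in the Results section can be reproduced in outline as follows (a sketch with synthetic numbers, not the study's measurements; only the scipy test functions are real):

# Hedged sketch of the rank-based group comparisons: Mann-Whitney U for two
# experience groups, Kruskal-Wallis across several training levels.
# All numbers below are synthetic and purely illustrative.
from scipy.stats import mannwhitneyu, kruskal

hb_drop_junior = [2.1, 2.6, 2.4, 3.0, 1.9]   # e.g., residency years 1-3
hb_drop_senior = [2.3, 2.5, 2.2, 2.8, 2.0]   # e.g., years 4-5 and consultants
u_stat, p_two_groups = mannwhitneyu(hb_drop_junior, hb_drop_senior,
                                    alternative="two-sided")

# One sample of Hb decreases per training level (six levels in the study)
by_level = [[2.2, 2.5, 2.9], [2.4, 2.1, 2.6], [2.6, 2.3, 2.0],
            [2.0, 2.7, 2.4], [2.4, 2.2, 2.8], [2.3, 2.6, 2.1]]
h_stat, p_levels = kruskal(*by_level)

print(f"Mann-Whitney U p = {p_two_groups:.2f}; Kruskal-Wallis p = {p_levels:.2f}")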
In the literature there is a current debate about the influence of BMI on perioperative complications, including blood loss, in retropubic prostatectomy [14; 16; 17]. Chang et al. demonstrated in their trial that BMI was a significant predictor of estimated blood loss on multivariate analysis. [16] On the other hand, Singh et al. could not find a significant impact of BMI on operative or postoperative morbidity, including blood loss. [14] In our dataset, no impact of BMI on perioperative blood loss was found (p = 0.32). However, it has to be mentioned that the number of obese patients (BMI greater than 29) was rather low in our cohort (mean BMI 26.6). Prostatic volume is also reported to be one of the factors that may negatively influence perioperative blood loss. Singh et al. could show in their study that a prostate volume higher than 50 cm3 was associated with higher blood loss, although this did not reach statistical significance [14]. The size of the prostate seems to play a more important role in laparoscopic or robot-assisted operations, as shown by Bozco et al. [19]. In our dataset there was no statistically significant correlation between prostatic size and perioperative blood loss (p = 0.26). A limitation of this study is that only perioperative blood loss was calculated, and no drainage volume from the postoperative phase was recorded. It would be of interest whether these observations are transferable to other centres using different techniques like laparoscopy or robot-assisted surgery. All in all, the presented data show that the impact of the assistant on blood loss seems negligible. This important information can be used to better inform patients undergoing radical prostatectomy, especially regarding the question of perioperative safety at an educational hospital.

CONCLUSION

For an experienced surgeon using modern surgical techniques, the impact of the assistant on blood loss seems negligible. The training level of the assistant does not appear to be correlated with a rise or decrease in perioperative blood loss. In our data, radical retropubic prostatectomy could be safely performed at an educational hospital independent of the training level of the first assistant.
COVID-19 in 7 multiple sclerosis patients in treatment with ANTI-CD20 therapies
Highlights
• Monoclonal anti-CD20+ antibodies appear to be safe drugs in patients with multiple sclerosis infected with SARS-CoV-2.
• Monoclonal anti-CD20+ antibodies may have some protective effect on the clinical course of patients infected with SARS-CoV-2.
• The presence of intact humoral immunity may not be essential to ensure a good clinical course following SARS-CoV-2 infection.

Introduction

In December 2019, the first cases of SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) infection were detected in Wuhan. This is the third coronavirus zoonosis to affect humans in 20 years, and this time it has led to a rapidly spreading pandemic (Perlman, 2020). The COVID-19 (Coronavirus disease 2019) pandemic has forced neurologists to make quick and important decisions regarding multiple sclerosis (MS) patients using immunosuppressive treatment. Ocrelizumab and rituximab are anti-CD20 monoclonal antibody (mAb) treatments used in MS. Ocrelizumab is a humanized monoclonal antibody against CD20+ cells and an approved treatment for relapsing and progressive MS (RMS and PMS). Rituximab is a chimeric monoclonal antibody against CD20+ cells, initially approved for CD20+ non-Hodgkin lymphoma and later for CD20+ chronic lymphocytic leukemia and rheumatoid arthritis, and used off-label in neuromyelitis optica and MS. Both anti-CD20 mAbs bind to the surface of B cells, causing their depletion (Moreno Torres and García-Merino, 2017). Here, we describe our experience with seven patients treated with these drugs who suffered from COVID-19. The main clinical characteristics and treatments of the cases detailed below are summarized in Table 1.

Case reports

Case 1: 60-year-old male, diagnosed with RMS in 2010, started on treatment with glatiramer acetate and switched to natalizumab in 2013. In 2017, treatment was changed to rituximab due to persistent radiological activity and clinical progression (he was diagnosed at that time as being secondary progressive). The patient had an Expanded Disability Status Scale (EDSS) value of 8. In December 2019, CD19+ cells were absent from the peripheral blood. The patient presented on March 17 due to a four-day course of fever, cough and dyspnea. The main laboratory findings were: lymphopenia (0.60 × 10^3/mm^3), a slight decrease in immunoglobulin M (IgM: 58.6 mg/dl, range: 80-250), and a positive SARS-CoV-2 RT-PCR in a nasopharyngeal swab. Chest x-ray showed infiltrates in the left hemithorax. The patient showed good clinical and radiological evolution with specific SARS-CoV-2 treatment (hydroxychloroquine 200 mg/12 h for 10 days) and was discharged home five days after admission without sequelae and with a negative PCR swab in May 2020. Case 2: 49-year-old male, smoker, diagnosed with RMS in 2014, started treatment with glatiramer acetate in 2015. Due to suboptimal response, he was switched to ocrelizumab in 2017. The last infusion was in January 2020, when CD19+ cells were absent from the peripheral blood. EDSS was 3. He attended the emergency department on March 11 with a five-day history of cough, associated with dyspnea and fever. The main laboratory findings were lymphopenia (0.51 × 10^3/mm^3) and C-reactive protein (CRP) 4.95 mg/dl.
Chest x-ray findings were normal and he tested positive for SARS-CoV-2 RT-PCR in a nasopharyngeal swab. Specific SARS-CoV-2 treatment was started (lopinavir/ritonavir 200/50 mg, 2 tablets/12 h) and he was discharged on the fifth day of admission because of good clinical evolution. A control RT-PCR for SARS-CoV-2 in a nasopharyngeal swab on April 22 was negative. Serology showed positive IgG and negative IgM. Case 3: 45-year-old male, diagnosed with RMS in 2015, started treatment with teriflunomide, replaced by ocrelizumab in March 2017 due to lack of efficacy. The last infusion was in January 2020, when CD19+ cells were absent from the peripheral blood. He showed no lymphopenia, mild hypogammaglobulinemia (IgG: 696 mg/dl, range: 800-1600; IgM: 48.7 mg/dl, range: 80-250; normal IgA) and an EDSS of 2. The patient attended the emergency department on April 3 due to a ten-day history of cough, dyspnea, myalgia and low-grade fever. Laboratory data showed a slight increase in CRP (0.83 mg/dl). He presented a chest x-ray with bilateral infiltrates and a positive SARS-CoV-2 RT-PCR in a nasopharyngeal swab. SARS-CoV-2 treatment was started (hydroxychloroquine 200 mg/12 h for 10 days, lopinavir/ritonavir 200/50 mg 2 tablets/12 h, azithromycin 250 mg/24 h for 4 days), and the patient presented a very favorable evolution and was discharged after four days. The patient deteriorated five days after discharge; dyspnea required readmission for one week of monitoring, although no further treatment was required. He was discharged without sequelae. Serology testing conducted on May 20 was positive for both IgG and IgM. PCR in a nasopharyngeal swab was negative. Case 4: 25-year-old female diagnosed with RMS in 2012, started on treatment with natalizumab, discontinued due to poor tolerability. From 2014 to 2018 she was on treatment with fingolimod, replaced by rituximab in December 2018 because of persistent radiological activity. Treatment was switched to ocrelizumab in March 2020 (first dose of 300 mg on March 4, 2020). The EDSS value was 1. On April 15, prior to the scheduled administration of the second dose of ocrelizumab, she presented a positive SARS-CoV-2 RT-PCR in a nasopharyngeal swab (performed as screening prior to immunosuppressive treatment). Blood count and immunoglobulins were normal and CD19+ cells were absent from the peripheral blood. Treatment was postponed. A control RT-PCR on April 22 was negative. Complete COVID-19 serology (IgG + IgM) was also negative. Seven days later, ELISA serology for IgG and IgM again showed negative results. The patient has remained asymptomatic throughout this time, and the second ocrelizumab infusion (300 mg) was performed in May 2020 without incident following two negative PCR tests. Case 5: 36-year-old woman diagnosed with RMS, with a first spinal cord relapse in 2009. The patient began treatment with glatiramer acetate. It was decided to change treatment due to lack of efficacy, as she had two relapses. She was started on treatment with rituximab in March 2019, because ocrelizumab had not yet been approved, and switched to ocrelizumab in September 2019 when it became available for her. Her EDSS was 2 and her clinical course was good, with no new relapses. Laboratory tests in December 2019 showed CD19+ cell depletion with no other changes. On April 1, she presented symptoms of fever and headache, reporting contact with a relative who had died of COVID-19 the week before. There was no PCR confirmation. The patient recovered completely in one week without sequelae.
Serology testing for IgG and IgM against SARS-CoV-2 in May 2020 was negative. Case 6: 60-year-old female who first presented with progressive paraparesis and ataxia in 2004. After a complete examination, including cranial and spinal MRI, she was diagnosed with PMS. The patient accumulated disability over the years until starting treatment with ocrelizumab as part of a clinical trial in 2011. The patient maintained an impressively slow progression (from an EDSS of 6.5 to 7.5 in nine years, with good upper limb function). On March 12, 2020, she presented a three-day picture of fatigue, headache, cough, 38 °C fever and hyposmia that subsided spontaneously and without treatment. This was not confirmed by PCR or serology. Case 7: 52-year-old male diagnosed with RMS in 2009. He was treated with glatiramer acetate at the time of diagnosis, since he presented with a relapse. The patient showed progression after the first relapse, leading to a clinical impression of a 'SAP' (Single Attack with later Progression) form of MS. The patient was administered ocrelizumab beginning in February 2019 due to MRI activity and continued progression. Serology (IgG and IgM) and PCR for SARS-CoV-2 were performed before administering his treatment dose in May 2020. Both were positive, and the patient was asymptomatic.

Discussion

Anti-CD20+ monoclonal antibodies are used in MS treatment. The incidence of severe infections with ocrelizumab in clinical trials was very low (1.3% for relapsing MS and 6.2% for primary progressive MS) (Hauser et al., 2017). Ocrelizumab is associated with decreased levels of IgM (and, to a lesser degree, of IgA and IgG); serious infections occurred, but their incidence was low in clinical trials and extension phases (Derfuss et al., 2020). Clinical trials reported a similar incidence of infections between rituximab and placebo (69.6% and 68.2% vs 65.3% and 71.4%, respectively) (Moreno Torres and García-Merino, 2017). The incidence of infections in open-label prospective studies varies widely, ranging from 8% to 61.5%; however, infections are generally mild to moderate (Midaglia et al., 2018). Rituximab decreases immunoglobulins, especially IgM levels, without a clear association with serious infection risk (Moreno Torres and García-Merino, 2017). In this work, we report our experience with MS patients on anti-CD20+ antibodies who presented SARS-CoV-2 infection. Even with differing clinical pictures, all presented favorable evolution, for which there are several hypotheses:
1. Patients treated with anti-CD20 may be capable of mounting a primary immune response in the initial phase of infection. Ocrelizumab and rituximab induce depletion of circulating CD20+ cells but not of the B cells in secondary lymphoid organs, favoring an adequate immune response against primary infection (G. Novi et al., 2020; Baker et al., 2018).
2. B cells and immunoglobulins may not be absolutely necessary for viral elimination. Perhaps in some, especially milder, cases, innate immunity and anti-viral T cells may be sufficient for recovery (Soresina et al., 2020).
3. Several publications have suggested that selective immunosuppression prior to SARS-CoV-2 infection could benefit and even protect patients from its hyperinflammation phase, which is accompanied by a release of proinflammatory cytokines that can ultimately be fatal. It is hypothesized that the decrease in IL-6-releasing peripheral B cells could confer this protection on patients in the hyperinflammation phase (Giovanni Novi et al., 2020; Giovannoni, 2020).
In our series, all patients presented a favorable evolution, but it is worth mentioning patients 1, 6 and 7, who were older and had a higher EDSS. A worse course of infection might therefore have been expected, yet they presented adequate resolution of the clinical picture. Patients 6 and 7 in particular presented a very mild clinical picture and an asymptomatic picture, respectively. It should also be noted that patients 4 and 7 were asymptomatic carriers. Serology testing could not detect an immune response to the virus in patients 4 and 5, but it did in patients 2, 3 and 7. This does not seem to be associated either with the severity of the picture presented or with being an asymptomatic carrier, as patient 7, for example, did not present a clinical picture yet developed antibodies. The absence of CD19+ B cells cannot fully explain this either, since this occurs in all the patients we have reported on. It could be explained by the fact that the patients with negative serology (4 and 5) had been on rituximab before ocrelizumab, and perhaps the sequential use of both therapies was detrimental to antibody formation. In the VELOCE study, humoral responses to vaccines were attenuated in B-cell-depleted patients who had received ocrelizumab; patients were nonetheless able to mount humoral responses, although cellular immune responses were not assessed (Stokmaier et al., 2018). With the addition of prior rituximab, it is possible that this humoral response is further reduced. Another option to consider is the possibility of false negatives in the test results. As mentioned previously, COVID-19 resolution may not always require B cells. It is theorized that innate immunity or T-cell-mediated immunity might be sufficient in some patients to resolve the picture, given the favorable evolution of infection in patients without B lymphocytes, as in X-linked agammaglobulinemia (Soresina et al., 2020).

Conclusion

Our experience with the evolution of patients treated with anti-CD20 drugs has been positive. We can hypothesize a 'protective' role of selective immunosuppression in the COVID-19 hyperinflammation phase, in addition to the preserved ability of patients treated with anti-CD20 to mount an adequate primary immune response. This may help us make decisions on treatment dosing during the current pandemic (Giovannoni, 2020). We have found antibodies against SARS-CoV-2 in patients treated with ocrelizumab, but in patients who previously used rituximab this immunity is not achieved, or we are not able to detect it. Regardless of the presence or absence of antibodies, progression has been favorable in all cases, and so resolution of the condition could be considered independent of humoral immunity. Greater experience through patient registries is required in order to draw firm conclusions.
Effects of substrate on shrimp growth, water quality and bacterial community in the biofloc system nursing
This study aimed to investigate the effects of substrate on water quality, shrimp growth and bacterial community in the biofloc system at a salinity of 5‰. Two treatments, a biofloc system with (sB) or without (nB) addition of elastic solid packing filler (nylon) as substrate, were set up. Penaeus vannamei postlarvae (PL, ~stage 15) were stocked at a density of 4000 PL m-3 in each treatment, in triplicate, for a 28-day culture experiment, taking glucose as the carbon source (C:N 15:1). Results showed that the survival rate (96.3±3.6%), FCR (0.76±0.06) and productivity (1.54±0.12 kg m-3) in the sB treatment were significantly better than those in the nB treatment (81.0±7.1%, 0.98±0.08 and 1.14±0.09 kg m-3, P < 0.05). All water parameters were within the recommended ranges. Substrate showed significant effects on TAN, TSS, turbidity, biofloc volume, pH and carbonate alkalinity (P < 0.05). Actinobacteria (4.0-22.7%), Bacteroidetes (10.4-33.5%), Firmicutes (0.2-11.2%), Planctomycetes (4.0-14.9%) and Proteobacteria (29.4-59.0%) were the most dominant phyla in both treatments. However, the bacterial community in the sB treatment was significantly different from that in the nB treatment (Jaccard distance 0.94±0.01, P = 0.001). Substrate showed significant effects on the Shannon, Heip, Pielou and Simpson indexes, as well as on the relative abundances of Actinobacteria, Bacteroidetes and Planctomycetes (P < 0.05). The results suggested that the addition of substrate affected shrimp growth, water quality and the bacterial community in the biofloc system nursing P. vannamei PL at a salinity of 5‰. Biofloc technology can control the problems that usually appear in the traditional pre-nursery system, such as biosecurity and toxic ammonia and nitrite (Samocha, 2010), due to the advantages of this technology for nitrogen assimilation in situ and pathogen control under minimal or zero water exchange. Thirty PL were selected randomly and individually weighed to the nearest 0.1 mg with an electric balance (AUX220, Shimadzu, Japan) each week. Survival rate, weekly increment of body weight (wiW), specific growth rate (SGR), feed conversion rate (FCR) and productivity were calculated according to the following formulas:
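The equations themselves are not reproduced here; assuming the standard aquaculture definitions (a reconstruction consistent with the reported units, not necessarily the authors' exact expressions), with $W_i$ and $W_f$ the initial and final mean body weights (g), $N_i$ and $N_f$ the initial and final numbers of shrimp, $F$ the total feed input (g), $t$ the culture period (days), and $V$ the culture volume (m$^3$):

\begin{align*}
\text{Survival rate (\%)} &= 100 \times N_f / N_i \\
\text{wiW (g week}^{-1}\text{)} &= 7\,(W_f - W_i)/t \\
\text{SGR (\% day}^{-1}\text{)} &= 100\,(\ln W_f - \ln W_i)/t \\
\text{FCR} &= F / (N_f W_f - N_i W_i) \\
\text{Productivity (kg m}^{-3}\text{)} &= N_f W_f / (1000\,V)
\end{align*}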
The raw data produced from the high-throughput sequencing have been deposited at NCBI under accession number PRJNA646765. Data processing for the high-throughput sequencing data was carried out under the QIIME 2 (Quantitative Insights Into Microbial Ecology, version 2019.10) framework (Bolyen et al., 2019). In brief, ambiguous nucleotides, adapter sequences and primers contained in reads, and short reads with a length of less than 30 bp, were removed with the cutadapt plugin (Martin, 2011). After that, bases at the two ends of reads with a quality score lower than 25 were trimmed. Thereafter, chimeras were filtered, and pair-ended reads were joined and dereplicated to obtain high-quality reads, which were clustered into operational taxonomic units (OTUs) at an identity of 0.97 using the Vsearch tool (Rognes et al., 2016). The counts of OTUs were then normalized by 16S rRNA gene copy number based on the rrnDB database (version 5.6) with a QIIME 2 plugin. Data were statistically analyzed with the SPSS platform for Windows.

Water quality

The levels of dissolved oxygen and temperature in the present study were above 5.0 mg L-1 and 26.0 °C, respectively (Table 1). The effects of time on both parameters were significant (P < 0.05, Table 1), whereas substrate showed no significant effect on the levels of dissolved oxygen and temperature (P > 0.05, Table 1). The carbonate alkalinity in the sB treatment was maintained at a high level of 306.5±51.7 mg L-1 CaCO3, although it was lower than that in the nB treatment (373.7±12.4 mg L-1 CaCO3, Table 1). The pH values in the nB treatment and sB treatment were 7.20±0.14 and 7.11±0.03, respectively (Table 1). Both parameters were significantly affected by substrate and time, as well as by their interaction (P < 0.05, Table 1). The mean concentrations of the three inorganic nitrogen compounds were at low levels in the present study (< 1.0 mg L-1, Table 1). The biofloc volume (BFV) levels in the nB treatment and sB treatment were similar, with values of 8.8±2.3 and 8.3±2.3 mL L-1, respectively (Table 1). However, the total suspended solids (TSS) and turbidity in the sB treatment (491.3±150.2 mg L-1 and 299.5±92.0 nephelometric turbidity units, NTU) were higher than those in the nB treatment (148.5±31.3 mg L-1 and 111.9±56.3 NTU, Table 1). Substrate showed significant main effects on those three parameters (P < 0.05, Table 1). The effects of time on BFV and turbidity were also significant, as was the interaction of substrate and time on BFV (P < 0.05, Table 1).

Growth performance

During the 28-day culture experiment, the average body weights of shrimp in the sB treatment were higher than those in the nB treatment (Fig. 1). At the end, although the final body weight in the sB treatment (0.40±0.03 g) was not significantly different from that of the nB treatment (0.36±0.04 g, P = 0.596, Table 2), the survival rate (96.3±3.6%) and productivity (1.54±0.12 kg m-3) of the former treatment were significantly higher than those of the latter (81.0±7.1% and 1.14±0.09 kg m-3, P < 0.05, Table 2). The feed conversion rate (FCR) in the sB treatment (0.76±0.06) was significantly lower than that in the nB treatment (0.98±0.08, P = 0.044, Table 2). No significant difference was observed between the two treatments in the weekly increment of body weight (wiW) and specific growth rate (SGR) of shrimp during the culture experiment (P > 0.05, Table 2).

Bacterial diversity

The Shannon index for the bacterial community in the sB treatment (6.77±0.18) was higher than that in the nB treatment (6.14±0.07, Table 3). Substrate, time and their interaction showed significant effects on this index (P < 0.05, Table 3). The Shannon index in the sB treatment reached its highest value at 7 d and then slightly decreased, whereas that in the nB treatment peaked at 21 d (Fig. 2a). Higher OTU counts (6238.3±353.8) and a higher Margalef index (387.7±28.0) were observed in the sB treatment, compared to those in the nB treatment (5713.5±368.2 and 343.9±22.0, Table 3). Both indexes were significantly affected by time (P < 0.05), but not by substrate (P > 0.05, Table 3). These two indexes showed changing trends similar to that of the Shannon index in each treatment (Fig. 2b and c).
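The alpha-diversity indices reported here follow standard definitions and can be computed from OTU count data as in the following sketch (our own illustration, not the authors' analysis code; the logarithm base for the Shannon index is an assumption, since QIIME 2 reports Shannon in base 2 by default):

import math

def alpha_diversity(counts):
    """Standard alpha-diversity indices from a list of OTU counts."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)                       # total reads
    s = len(counts)                       # observed OTU richness
    p = [c / n for c in counts]           # relative abundances
    shannon = -sum(pi * math.log(pi) for pi in p)   # natural log assumed here
    return {
        "otus": s,
        "shannon": shannon,
        "margalef": (s - 1) / math.log(n),          # richness
        "pielou": shannon / math.log(s),            # evenness, H / ln S
        "heip": (math.exp(shannon) - 1) / (s - 1),  # evenness
        "simpson": 1 - sum(pi ** 2 for pi in p),    # 1 - dominance
        "berger_parker": max(p),                    # dominance
    }

# Toy example with five OTUs:
print(alpha_diversity([120, 80, 40, 10, 5]))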
The evenness indexes, the Heip index and the Pielou index, in the sB treatment (0.018±0.001 and 0.538±0.003) and the nB treatment (0.013±0.001 and 0.493±0.01, Table 3) peaked at 14 d and 21 d, respectively (Fig. 2d and e). The main effects and interactions of substrate and time on both indexes were significant (P < 0.05), with the exception of the interaction on the Heip index (P = 0.544, Table 3). The Simpson index in the sB treatment (0.955±0.003) was higher than that in the nB treatment (0.927±0.010, Table 3). This index in the sB treatment displayed a peak value at 14 d, while that in the nB treatment showed a decreasing trend during the experiment (Fig. 2f). Substrate and time significantly affected this index (P < 0.05, Table 3). In contrast, the Berger-Parker index in the sB treatment (0.152±0.014) was lower than that in the nB treatment (0.193±0.020, Table 3). Substrate did not significantly affect this index (P > 0.05, Table 3). The trend of the Berger-Parker index in the present study was contrary to that of the Simpson index in each treatment (Fig. 2g). The bacterial community distances within the nB treatment and the sB treatment were 0.76±0.36 and 0.77±0.37, respectively (Table 4). PCA also showed that water samples collected from the same treatment were closer to each other than to those from the other treatment, based on OTU composition (Fig. 3). In addition, the bacterial beta diversities of the two treatments were significantly different, with a Jaccard distance of 0.94±0.01 (P = 0.001, PERMANOVA, Table 4). The main effects and interactions of substrate and time on the proportions of the seven dominant phyla were significant (P < 0.05), except for the substrate effects on Firmicutes and Proteobacteria and the interaction of substrate and time on Planctomycetes (P > 0.05, Table 5). The bacterial composition profiles at the class, order, family and genus levels are shown in Fig. 1S-Fig. 4S, respectively. The LEfSe analysis showed that an unassigned genus and the phylum Verrucomicrobia were the most significant biomarkers for nB and sB, respectively.

Discussion

In the present study, it was found that the turbidity levels in both treatments increased continuously. This might be attributed to the biofloc removal operation carried out during days 14-28 of the experiment. The turbidity in the present study was determined on water samples from which large bioflocs had been settled out using an Imhoff cone for 15 min, indicating that this parameter was correlated with the content of small-size bioflocs. Previous studies have shown that the bacterial groups differ between small- and big-size bioflocs (Chen et al., 2019; Huang et al., 2020). Therefore, the removal operation in this study might remove settleable solids (big-size bioflocs), as well as the bacterial groups attached to them, from the water column. This would improve the growth of bacteria adhering to the small-size bioflocs retained in the water body by reducing competition from bacteria associated with big-size bioflocs, in turn promoting the formation of small-size suspended bioflocs and increasing the turbidity. Substrate showed a significant effect on turbidity in the present study (P = 0.002).
It has been suggested that the addition of artificial substrates may influence the water circulation in the tanks, leading to smaller turbulence in the water and in turn facilitating sedimentation or particle aggregation and the formation of large flocs, which reduces the suspended solids and thus the turbidity in the water column (Ferreira et al., 2016; Fleckenstein et al., 2020). However, in the present study, the turbidity level in the sB treatment was found to be higher than that in the nB treatment. It is speculated that substrate might play a positive role in the turbidity increase caused by the removal operation for big-size bioflocs discussed above. Substrate also showed a significant effect on carbonate alkalinity in the current study (P < 0.001), and the carbonate alkalinity in the sB treatment was lower than that in the nB treatment, indicating that more alkalinity was consumed in the biofloc system with substrate.
Primum Non Nocere: Before working with Indigenous data, the ACL must confront ongoing colonialism
In this paper, we challenge the ACL community to reckon with historical and ongoing colonialism by adopting a set of ethical obligations and best practices drawn from the Indigenous studies literature. While the vast majority of NLP research focuses on a very small number of very high resource languages (English, Chinese, etc.), some work has begun to engage with Indigenous languages. No research involving Indigenous language data can be considered ethical without first acknowledging that Indigenous languages are not merely very low resource languages. The toxic legacy of colonialism permeates every aspect of interaction between Indigenous communities and outside researchers. To this end, we propose that the ACL draft and adopt an ethical framework for NLP researchers and computational linguists wishing to engage in research involving Indigenous languages.

Introduction

Beginning with our community's first academic conference in 1952 (see Reifler, 1954) and continuing with the establishment of the Association for Computational Linguistics (ACL) in 1962 (MT Journal, 1962), the members of our research community have examined a wide range of topics, from linguistic and computational linguistic models and theories to engineering-focused problems in natural language processing. While great progress has been made in recent years across many NLP tasks, the overwhelming majority of NLP and CL research focuses on a very small number of languages. Over the 70 years from 1952 to 2022, the vast majority of CL and NLP research has focused on a small number of widely-spoken languages, nearly all of which represent politically- and economically-dominant nation-states and the languages of those nation-states' historical and current adversaries: English, the Germanic and Romance languages of western Europe, Russian and the Slavic languages of eastern Europe, Hebrew, Arabic, Chinese, Japanese, and Korean. Bender (2009) surveyed papers from ACL 2008 and found that English dominated (63% of papers), with 20 other languages distributed along a Zipfian tail (Chinese and German shared the number 2 slot at just under 4% of papers each); across all ACL 2008 long papers, only three languages (Hindi, Turkish, and Wambaya) were represented outside of the language families listed previously. This lack of diversity directly impacts both the quality and ethical status of our research, as nearly every successful NLP technique in widespread current use was designed around the linguistic characteristics of English. A special theme designed to address this shortcoming has been selected for the 60th Annual Meeting of the ACL in 2022: "Language Diversity: from Low Resource to Endangered Languages." This theme is to be commended as a step towards a more linguistically diverse research agenda. Yet as we expand our research to a broader and more inclusive set of languages, we must take great care to do so ethically. The endangered Indigenous languages of the world are not merely very low resource languages. The toxic legacy of colonialism permeates every aspect of interaction between Indigenous communities and outside researchers (Smith, 2012). Ethical research must actively challenge this colonial legacy by acknowledging and opposing its continuing presence, and by explicitly acknowledging and centering Indigenous community goals and Indigenous ways of knowing.
To this end, we propose an ethical framework for NLP researchers and computational linguists wishing to engage in research involving Indigenous languages. We begin in §2 by examining the abstracts of papers published in the proceedings of the top-tier conferences (ACL, NAACL, EMNLP, EACL, AACL) and journals (Computational Linguistics, TACL) of the Association for Computational Linguistics from the past several years (hereafter referred to as *ACL papers/abstracts), replicating the results of Bender (2009) and confirming that recent *ACL papers still lack significant language diversity. In §3 we address research practices and ongoing colonialism in Indigenous communities. Finally, we examine decolonial practices appropriate for a draft framework of ethical obligations (§4) for the ACL research community.

Recent *ACL papers lack significant language diversity

We begin by examining the abstracts of *ACL papers from the past several years to confirm the results of Bender (2009), namely that recent *ACL papers still lack significant language diversity. We collect a corpus of 9602 recent *ACL abstracts from the ACL Anthology; more than 80% fail to mention any language (see Table 1). Essentially all such papers that fail the #BenderRule assume English as the language of study (Bender, 2019). The most frequently mentioned language families are Indo-European (dominated by English), Sino-Tibetan (dominated by Mandarin Chinese), Japonic (essentially all Japanese), and Afro-Asiatic (dominated by Arabic and Hebrew). Indo-European languages are assumed (English) or explicitly mentioned in 97% of abstracts. The next three most mentioned language families account for another 1% of abstracts. Combined, only 165 out of 9602 abstracts (1.7%) mention any language from any other language family. These findings are also consistent with those of Joshi et al. (2020), who scrape and examine a corpus of approximately 44,000 papers, including both *ACL papers and papers from LREC, COLING, and ACL-affiliated workshops. Joshi et al. present a 6-point taxonomy for classifying languages according to the quantity of labelled and unlabelled corpora and models available for each language, and find that *ACL papers are low in terms of language diversity and are dominated by the highest-resource languages. Unfortunately, we were unable to apply our language family-level analysis to their dataset, as it was not publicly available for download. While Joshi et al. (2020) find that language diversity is somewhat higher at LREC and ACL-affiliated workshops, the larger issue of language homogeneity in top-tier *ACL venues is extremely problematic. In a research community that calls itself the Association for Computational Linguistics, it is completely unacceptable that fewer than 20% of top-tier *ACL abstracts mention the name of any language (see Table 1), and those that do are dominated by one language (English) and its language family (Indo-European).
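A minimal sketch of this kind of abstract survey follows (our own illustration; the toy language lexicon, input layout, and matching rules are assumptions, not the authors' exact procedure):

import re
from collections import Counter

# Toy language-name lexicon; a real survey would need a full inventory of
# language names mapped to families (e.g., drawn from Glottolog).
LANGUAGES = {
    "English": "Indo-European", "German": "Indo-European",
    "Mandarin": "Sino-Tibetan", "Chinese": "Sino-Tibetan",
    "Japanese": "Japonic", "Arabic": "Afro-Asiatic", "Hebrew": "Afro-Asiatic",
}

def survey(abstracts):
    """Count abstracts mentioning each language family; abstracts mentioning
    no language are assumed to be about English (#BenderRule default)."""
    family_hits = Counter()
    no_language = 0
    for text in abstracts:
        mentioned = {fam for lang, fam in LANGUAGES.items()
                     if re.search(rf"\b{lang}\b", text)}
        if not mentioned:
            no_language += 1
            mentioned = {"Indo-European"}  # English assumed by default
        family_hits.update(mentioned)
    return no_language, family_hits

abstracts = ["We parse English and German treebanks.",
             "A new attention mechanism for sequence labeling."]
print(survey(abstracts))  # (1, Counter({'Indo-European': 2}))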
Research and Ongoing Colonialism in Indigenous Communities

The linguistic homogeneity in *ACL papers can be viewed as a symptom of a much larger problem, namely that our research paradigms are deeply rooted in a Western scientific tradition that is inextricably intertwined with colonialism. Smith (2012, p. 50) notes that in this tradition, there are implicit and explicit rules of framing and practice that express power. In *ACL research, the act of not explicitly stating any language, of assuming English as the default, is one such practice. Research scientists rarely consider the philosophy of science (Popper, 1959) on which our research is predicated; as Wilson (2001) notes, this is defined by an ontology, epistemology, methodologies, and axiology that are seldom acknowledged. In our field, these often surface as unacknowledged positivist (Comte, 1853) assumptions that science is value-neutral and that empirical observations and logical reasoning fully and completely define the nature of science and reality (Egan, 1997). The first step in enacting decolonial ethical practices is acknowledging that we hold these assumptions and recognizing that there are other, Indigenous philosophies of science that are equally valid and are rooted in fundamentally distinct worldviews that center relationality (see Wilson, 2008). By failing to acknowledge and critically examine the philosophical foundations of our science, we implicitly and unconsciously elevate our ideas of research and language work above those of Indigenous communities (Leonard, 2017). Given the distinct value systems and distinct views of reality of outside research scientists and Indigenous communities, it is not surprising that even good-faith efforts of well-meaning outside researchers are often viewed by Indigenous communities as irrelevant at best and exploitative at worst. Outside perceptions of Indigenous peoples are inextricably linked to corresponding histories of colonization, and are typically accompanied by (usually outdated and incorrect) assumptions about the "proper" roles of Indigenous peoples today that correspond with neither reality nor Indigenous people's views of themselves (Deloria, 2004; Leonard, 2011). When a linguist (or a computer scientist) begins the process of interacting with an Indigenous community and working with that community's Indigenous language, the starting "lens through which others view [the linguist's] professional activities will at least partly reflect what 'linguist' has come to mean, and that this in some cases will occur regardless of whether [the linguist] personally exhibit[s] a trait that has come to be associated with this named position" (Leonard, 2021). Endangered Indigenous languages are not merely very low-resource languages. Each Indigenous community represents a sovereign political entity. Each Indigenous language represents a crucial component of the shared cultural heritage of its people. The rate of intergenerational transmission of Indigenous languages from parent to child in many Indigenous communities has declined and is continuing to decline (Norris, 2006), resulting in a deep sense of loss felt by older generations who grew up speaking the Indigenous language, as well as by younger generations who do not speak the language and who experience a diminished sense of cultural inclusion (Tulloch, 2008). Language is an integral part of culture, and declines in robust Indigenous language usage have been correlated with serious negative health and wellness outcomes (Chandler and Lalonde, 2008; Reid et al., 2019). At the same time, Indigenous individuals and Indigenous communities have suffered greatly from colonial practices that separated children from communities, actively suppressed Indigenous language and culture, misappropriated land and natural resources, and treated Indigenous people, cultures, and languages as dehumanized data to study (Whitt, 2009; NTRC, 2015; Leonard, 2018; Bull, 2019; Dei, 2019; Guematcha, 2019; Bahnke et al., 2020; Kawerak, 2020).
As Smith (2012) notes, "research is probably one of the dirtiest words in the indigenous world's vocabulary;" it is "implicated in the worst excesses of colonialism" and "told [Indigenous people] things already known, suggested things that would not work, and made careers for people who already had jobs." It is, then, hardly surprising that "After generations of exploitation, Indigenous people often respond negatively to the idea that their languages are data ready for the taking" (Bird, 2020). Indigenous communities are rightly taking up the slogan "Nothing about us without us" (see, for example, Pearson, 2015). Even when we consider the "lived experiences and issues that underlie [the] needs" of Indigenous communities, these community priorities are far too often treated as subordinate to research questions deemed valuable by members of academe (Leonard, 2018; Wilson, 2008; Simonds and Christopher, 2013). Credulous evangelical claims of technology as savior only exacerbate these tensions (Irani et al., 2010; Toyama, 2015); consider, for example, "The number of endangered languages is so large that their comprehensive documentation by the community of documentary linguists will only be possible if supported by NLP technology" (Vetter et al., 2016), and "Languages that miss the opportunity to adopt Language Technologies will be less and less used, while languages that benefit from cross-lingual technologies such as Machine Translation will be more and more used" (ELRA, 2019).

Prerequisite Obligations for Ethical Research involving Indigenous Languages and Indigenous Peoples

When CL and NLP researchers begin to work with Indigenous language data without first critically examining the toxic legacy of colonialism and the self-identified priority needs and epistemology of the Indigenous community, the risk of unwittingly perpetuating dehumanizing colonial practices is extremely high. It is therefore critically urgent that the ACL, perhaps through the recently-formed Special Interest Group on Endangered Languages (SIGEL), should go beyond the ACL's 2020 adoption of the ACM Code of Ethics (https://www.aclweb.org/portal/content/acl-code-ethics) and begin a process of drafting and adopting a formal ethics policy specifically with respect to research involving Indigenous communities, Indigenous languages, and Indigenous data. In so doing, the ACL can provide specific and foundational ethical guidance for our members that goes far beyond the general ethical guidance provided by institutional review boards (only some of which are intimately familiar with the ethical pitfalls particular to work with Indigenous communities). We should draw upon the recent Linguistic Society of America (2019) ethics statement, the foundational principles of medical ethics (autonomy, nonmaleficence, beneficence, and justice; Beauchamp and Childress, 2001), the recommendations of Bird (2020), and the wisdom of Indigenous scholars such as Deloria, Wilson, Smith, and Leonard. As a beginning, we have identified four key ethical obligations that should at a minimum be included in such an ethics policy: cognizance, beneficence, accountability, and non-maleficence.

Obligation of cognizance

The colonial political and racial ideas and behaviors that support and enable colonization and oppression are intentionally invented historical creations (Allen, 2012; Kendi, 2017). Before we engage with Indigenous peoples, let alone work with Indigenous data, we must intentionally make ourselves cognizant of this history.
As outside researchers, we stand in a privileged position, and as such have an urgent obligation to educate ourselves about this history and about current practices that perpetuate these systems of oppression in the present day (Kendi, 2019;Smith, 2012). 10 Before we are capable of ethically engaging with Indigenous data, we must learn the ways in which Indigenous communities approach reality and science, and accept that these are fully formed and fully valid worldviews with which we have an obligation to fully engage. Our research is premised on a particular philosophy of science which is nearly always left unstated. We must make ourselves cognizant of our own ontology, epistemology, methodology, and axiology, and of the fact that there are alternative philosophies of science that are equally valid. We must educate ourselves about Indigenous ontologies, epistemologies, methodologies, and axiologies that are centered around relationality (Wilson, 2008). The obligation of cognizance therefore mandates that we as researchers intentionally and thoroughly educate ourselves about colonization of Indigenous communities; about the role that academic researchers have had and continue to play in the exploitation of Indigenous communities, Indigenous languages, Indigenous culture, and Indigenous data; and about Indigenous expectations and ways of being centered on relationality that differ from those we typically encounter in our research. In practical terms, this cognizance and the education requisite in this obligation should typically be provided by a senior researcher (one already very familiar with the relevant issues) whenever a new student or junior researcher first expresses an interest in beginning research involving Indigenous data. At an institutional level, the leadership of multilingual NLP shared tasks such as the SIGMORPHON shared tasks should take the lead in educating their respective sub-communities in this regard as such shared tasks consider expansion to include Indigenous language data.

Obligation of beneficence

Indigenous communities are sovereign political entities with inherent political and human rights. Many of these rights are enumerated in the Declaration on the Rights of Indigenous Peoples (United Nations, 2007). This includes the right of each Indigenous community to protect and develop its culture (Article 11), the right to dignity (Article 15), the right to develop and elect its own decision-making institutions (Article 18), and the right to "maintain, control, protect, and develop [the community's] intellectual property over [its] cultural heritage, traditional knowledge, and traditional cultural expressions" (Article 31). The obligation of beneficence therefore mandates that we as researchers ensure that our work benefits the Indigenous communities with which we work in ways that those communities recognize as beneficial. In practical terms, this means that any outside researcher who wants to work with Indigenous data must seek to engage with the relevant Indigenous communities in order to learn about and to meaningfully support priority areas identified by Indigenous governing bodies and decision-making institutions that fall within our respective scopes of expertise. Put another way, ethical research involving Indigenous data must include concrete deliverables requested by the respective Indigenous community or communities.
Obligation of accountability

As outside researchers seeking to work with Indigenous data, we have a responsibility to seek out respectful and meaningful relationships with the Indigenous communities whose data we seek to use. We have a responsibility to develop these relationships in ways that are appropriate and meaningful to the Indigenous communities with which we seek to work. We must intentionally acknowledge and accept the rightful authority of Indigenous communities' governing and decision-making bodies over those communities' own respective languages, cultures, and data. The obligation of accountability therefore mandates that we as researchers develop meaningful relations with the sovereign governing bodies of the Indigenous communities with which we seek to engage, and that we be meaningfully accountable to such bodies in our work involving their data. This relationship-building should take place before the research project begins. This relationship between researcher and sovereign Indigenous institutions can be thought of as highly analogous to the relationship between the researcher and governmental granting agencies such as the U.S. National Science Foundation. In practical terms, once this relationship has been built and research has begun, the researcher should regularly report to and agree to be held accountable by the Indigenous community's governing and decision-making institutions with respect to the agreed-upon community goals.

Obligation of non-maleficence

Colonization and colonial practices have inflicted substantial and often genocide-scale harm on Indigenous communities over the past five centuries (Smith, 2017), harm that is ongoing and is often perpetuated by modern research practices. We must intentionally adopt the ethical prime directive of the medical community, often stated in the Latin aphorism Primum Non Nocere, "Above all, do no harm" (Smith, 2005). There are many good and laudable reasons why we should choose to engage in research with Indigenous communities, but none of these reasons is powerful enough to justify harm caused by our research. The obligation of non-maleficence therefore mandates that above all else, we do no harm to Indigenous people and Indigenous communities. In practical terms, this means that researchers seeking to engage with Indigenous data must critically examine the harmful ramifications of proposed work well before it is conducted. If we can do good through our research without doing harm, that is well, but it is better to not engage than to cause harm.

Acknowledgments

Great thanks are due to the elders and community members of the Alaska Native communities of Gambell and Elim who welcomed me and my family in the 1980s, and from whom I have learned much. To the Yupik and Yup'ik instructors who honored me with a Yupik name and who graciously introduced me to these amazing languages: Igamsiqanaghhalek! Special thanks are due to the leadership and elected councils and boards of the Native Village of Gambell, Sivuqaq Inc, Gambell Schools, the Bering Strait School District, the Alaska Native Language Archive, and the City of Gambell. This work was supported by the U.S. National Science Foundation Award #1761680: NNA: Collaborative Research: Integrating Language Documentation and Computational Tools for Yupik, an Alaska Native Language.
Impact of a Large Fire and Subsequent Pollution Control Failure at a Coke Works on Acute Asthma Exacerbations in Nearby Adult Residents
Impact of a Large Fire and Subsequent Pollution Control Failure at a Coke Works on Acute Asthma Exacerbations in Nearby Adult Residents

Clairton, Pennsylvania, is home to the largest coke works facility in the United States (US). On 24 December 2018, a large fire occurred at this facility and damaged pollution control equipment. Although repairs were not completed for several months, production continued at pre-fire capacity and daily emissions increased by 24 to 35 times, with multiple exceedances of monitored levels of outdoor air pollution (OAP). The aim of this study was to objectively evaluate the impact of this industrial incident and the resultant OAP exceedances on asthma morbidity. We assessed pre-fire and post-fire rate ratios (RR) of outpatient and emergency department (ED) visits for asthma exacerbations among nearby adult residents. Pre-fire versus post-fire RRs increased for both visit types: RR = 1.82 (95% CI: 1.30, 2.53; p < 0.001) and 1.84 (95% CI: 1.05, 3.22; p = 0.032) for outpatient and ED visits, respectively. Additionally, total visit rates increased on days with OAP exceedances: RR = 2.47 (95% CI: 1.52, 4.01; p < 0.0001), 1.58 (95% CI: 1.00, 2.48; p = 0.048) and 1.79 (95% CI: 1.27, 2.54; p = 0.001) for PM2.5, SO2, and H2S exceedance days, respectively. These results show a near doubling of acute visits for asthma exacerbations in nearby adult residents during this industrial incident and underscore the need for prompt remediation and public notification of OAP exceedances to prevent adverse health impacts.

Introduction

Clairton, Pennsylvania (PA), is home to the largest coke works facility in the United States (US). On 24 December 2018, a large fire occurred at this facility, which resulted in damage to its desulfurization pollution control equipment. Although repairs to this equipment were not completed for several months, production continued at pre-fire capacity and multiple exceedances of sulfur dioxide (SO2), particulate matter less than 2.5 microns in diameter (PM2.5), and hydrogen sulfide (H2S) occurred during this time. It was estimated that 4685 tons of SO2 were released into the environment during this period [1], which is nearly as much SO2 as is emitted annually by all sources in Allegheny County, PA, US [2]. Production was not curtailed since the facility attempted to mitigate outdoor air pollution (OAP) emissions by diluting coke oven gases with natural gas and diverting gases to its other facilities. Initially, it was determined that such efforts might be effective in preventing OAP exceedances; however, approximately five weeks after the fire, OAP exceedances occurred on three consecutive days. This prompted a review of the emission data and the subsequent issuance of an air quality enforcement order on 28 February 2019 [1]. This enforcement order required the facility to comply with emission reductions within 30 days. Increased healthcare use for respiratory disease, including asthma, was also reported during this same OAP incident [16,17]. Increased respiratory symptoms were reported by nearby residents during and for several weeks after a large industrial fire in Texas, US, which burned and released PM2.5 and black carbon into the atmosphere for several days [18]. A recent meta-analysis confirmed reports of an increased incidence of asthma in first responders involved in rescue and recovery following the attack and fire on the World Trade Center in the US [19].
Other studies have reported increased respiratory morbidity due to other biomass combustion sources, including wildfires and prescribed fires [20][21][22]. In addition to regulating air quality standards to protect public health, recent efforts have focused on providing prompt notification of acute deteriorations in air quality to impacted residents [23,24]. Interestingly, a Canadian asthma monitoring system incorporated simultaneous modeling of health impacts with OAP data [25]. Numerous studies have shown that air quality alerts lead to increased individual behavior changes aimed at reducing OAP exposure [26][27][28][29]. Recent efforts have also focused on building emergency response systems for both natural disasters and unnatural incidents [30][31][32][33]. Many of these efforts incorporate proactive communication, monitoring, and management of environmental risks to protect the health of impacted residents.

Acute Asthma Exacerbations

This protocol was reviewed by the Institutional Review Board at Allegheny Health Network (AHN) and qualified for exempt status with waivers of consent and Health Insurance Portability and Accountability Act (HIPAA) authorization. AHN's electronic patient visit database was queried to obtain a limited data set of daily acute outpatient visits and ED visits for adults aged 18-64 years residing in Clairton zip code 15025 with discharge diagnoses of asthma exacerbations (J45.901, J45.21, J45.31, J45.41 and J45.51) and status asthmaticus (J45.902, J45.22, J45.32, J45.42, and J45.52) during the study periods. AHN is a regional health system consisting of 14 hospitals, 8 urgent care centers, and over 200 outpatient locations. AHN's Jefferson Hospital is the only site providing ED care in zip code 15025 and has over 25,000 ED visits annually. Data were obtained for two time periods: the active study period ranged from 24 December 2018 to 28 February 2019 and is referred to as the post-fire period; the comparative study period ranged from 24 December 2017 to 28 February 2018 and is referred to as the pre-fire period. Rate calculations assumed a population size of 9616 adults 18-64 years of age in zip code 15025 based on the most recent American Community Survey (ACS) data from 2019 [34].

Other Data

Pollution emission data were obtained from a publication of the local county health department and included SO2 and H2S emissions across three US Steel facilities in the Mon Valley, including Clairton Coke Works, Edgar Thompson Works, and Irvin Works (see Figure 1) [1]. PM2.5 emissions from these facilities were not reported. All three facilities were included because coke oven gas was diverted away from the failed desulfurization equipment at Clairton Coke Works and flowed toward the Edgar Thompson and Irvin facilities to be released from flaring stacks into the ambient air. Emission data were expressed as average daily H2S grains per hundred dry standard cubic feet (grains/100 dscf) and average daily pounds (lbs/day) of SO2. US Steel's installation permits for all three facilities imposed a site-wide limit on sulfur compound emissions of no more than 35 grains/100 dscf [1]. Ambient air pollution monitoring data were obtained from the Environmental Protection Agency (EPA) website and included PM2.5 concentrations obtained from a reference monitor in Liberty and SO2 obtained from reference monitors in Liberty and North Braddock (see Figure 1) [35].
These monitoring sites are part of the EPA Air Quality System (AQS) that is used to monitor compliance with the Clean Air Act and were specifically selected since they measured the pollutants of interest, provided temporal data, and were located near and in the wind path of the US Steel facilities. The National Ambient Air Quality Standards (NAAQS), developed by the EPA, are 35 micrograms per cubic meter (µg/m3) averaged over a 24-h period for PM2.5, and 75 parts per billion (ppb) maximum, hourly, over a 24-h period for SO2 [36]. H2S data were obtained from the local county health department website and were collected from the reference monitor in Liberty. The NAAQS do not include a standard for H2S; however, Pennsylvania has set a standard for H2S of 5 ppb averaged over a 24-h period [37].

The National Weather Service (NWS) office in Pittsburgh routinely launches weather balloons (frequently referred to as upper-air soundings) as part of its upper-air observations program. Weather balloons are launched twice each day, at 7 a.m. and 7 p.m. Eastern Standard Time (EST), and collect temperature, humidity, and wind data as they ascend through the atmosphere. Analysis of data collected by weather balloons is used by the NWS to identify the presence and the strength of inversions. The American Meteorological Society defines inversions as layers in which temperature increases with altitude instead of following the normal pattern of decreasing air temperature with increasing altitude. Inversions can result in stagnant air masses and the accumulation of OAP. On an annual basis, the local county health department summarizes and publishes these data [38]. This study examined all data gathered from the upper-air soundings released by the NWS office in Pittsburgh during the study periods, including average daily temperatures, wind direction and speed, percentage of days with inversions, and strength, depth, and duration of inversions.

The US influenza surveillance system is a collaborative effort between the Centers for Disease Control and Prevention and many partners in state, local, and territorial health departments, public health and clinical laboratories, vital statistics offices, healthcare providers, clinics, and emergency departments [39]. Information is collected from multiple data sources to identify when and where influenza activity is occurring, determine what influenza viruses are circulating, detect changes in influenza viruses, and measure the impact influenza is having on outpatient illness, hospitalizations, and deaths. Influenza data for the 2017-2018 and the 2018-2019 seasons were obtained from the local county health department website [40]. Data included numbers of cases, hospitalizations, and deaths, as well as the peak week of cases each season. A case is defined as testing positive via an antigen, culture, or polymerase chain reaction (PCR) test. Information regarding influenza deaths was received from the Pennsylvania National Electronic Disease Surveillance System (PA-NEDSS) and death certificates. Rate calculations assumed a population size of 753,948 adults 18-64 years of age in the local county based on ACS data reported for 2019 [41].

Data Analysis

Demographic data for Clairton zip code 15025 and Allegheny County were described by distributions for age (<18, 18-64, >65 years), gender (male, female), race (African American, white, other), percentage of persons living below the federal poverty level, and median household income.
Rates of asthma exacerbation visits by type (acute outpatient or ED) per 1000 residents 18-64 years of age in zip code 15025 were compared in the pre-fire versus post-fire periods using generalized linear model (GLM) analyses with specification of a Poisson distribution [42]. This procedure was replicated to assess the significance of differences in the daily rate of total acute exacerbations on OAP non-exceedance versus exceedance days for PM2.5, H2S, and SO2. Boxplots (median, interquartile range [IQR]) were generated to display OAP exposure distributions across all 67 days in each period. The distribution for each OAP metric in the pre-fire versus post-fire periods was assessed for significance of difference using the Mann-Whitney U-test due to positive skewing of the data. The chi-square test assessed significance in the percentage of days in exceedance for SO2 (>75 ppb maximum hourly over a 24-h period), H2S (>5 ppb averaged over a 24-h period), and PM2.5 (>35 µg/m3 averaged over a 24-h period) and in inversion days that occurred pre-fire versus post-fire. Temperature, wind direction and speed, and strength, depth, and duration of inversions were described by mean (±SD) in each period and compared using the independent t-test. Rates of influenza cases, hospitalizations, and deaths per 1000 residents 18-64 years of age in Allegheny County were compared pre-fire versus post-fire using GLM with Poisson distribution specified, as done for asthma exacerbations. Analyses were conducted using SPSS V18.0 and R version 3.6.1.

Demographics

Due to the limited nature of the data set obtained from AHN, specific demographic information, including race, gender, and age, was not available for the study population. However, the demographic composition of residents in Clairton zip code 15025 was similar to that of the entire local county, as shown in Table 1. Approximately 52% of residents were female, 11% lived below the federal poverty level, and race distribution showed 78-80% were white, 13-16% African American, and 4-9% other. Approximately 60% of residents were 18-64 years of age.

Figure 2 shows a near doubling of the number of acute outpatient and ED visits for asthma exacerbations in the period after (as compared to the period before) the Clairton Coke Works fire. In the timeframe before the fire, there were 54 acute outpatient visits; and after the fire, there were 98 acute outpatient visits. This translates to a pre-fire versus post-fire increase in the outpatient rate from 5.6 to 10.2 per 1000 residents, respectively (RR = 1.82; 95% CI: 1.30, 2.53; p < 0.001). Additionally, in the timeframe before the fire, there were 19 ED visits; and after the fire, there were 35 ED visits. This translates to a pre-fire versus post-fire increase in the ED visit rate from 2.0 to 3.6 per 1000 residents, respectively (RR = 1.84; 95% CI: 1.05, 3.22; p = 0.032). Overall, there were 73 and 133 total visits for asthma exacerbations before and after the fire, respectively. The 82% increase from 7.6 to 13.8 total acute asthma visits per 1000 residents pre-fire and post-fire, respectively, was significant (RR = 1.82; 95% CI: 1.37, 2.42; p < 0.001).

Figure 2. Rates of acute asthma visits by type (outpatient, ED, and total) per 1000 residents (population = 9616 in zip code 15025) before and after the Clairton Coke Works fire. Data were compared using GLM analyses with specification of Poisson distribution. RR = rate ratio.
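For readers who want to reproduce the style of rate-ratio analysis described above, the following is a minimal sketch in Python using statsmodels. The authors report using SPSS and R; this translation, and the simulated daily counts, are our own illustrative assumptions that merely match the reported 67-day totals.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated daily visit counts for the two 67-day windows (hypothetical values
# whose expected totals match the reported 73 pre-fire and 133 post-fire visits).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "visits": np.concatenate([rng.poisson(73 / 67, 67),     # pre-fire days
                              rng.poisson(133 / 67, 67)]),  # post-fire days
    "post_fire": np.repeat([0, 1], 67),
})
# A log-population offset expresses the model on the per-1000-resident scale
# (population 9616 in zip code 15025).
df["log_pop"] = np.log(9616 / 1000.0)

res = smf.glm("visits ~ post_fire", data=df,
              family=sm.families.Poisson(), offset=df["log_pop"]).fit()
rr = np.exp(res.params["post_fire"])            # rate ratio, post-fire vs pre-fire
lo, hi = np.exp(res.conf_int().loc["post_fire"])
print(f"RR = {rr:.2f} (95% CI: {lo:.2f}, {hi:.2f})")

Exponentiating the Poisson coefficient yields the rate ratio; the same model with an exceedance-day indicator in place of post_fire would reproduce the exceedance-day comparisons reported below.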
Table 2 summarizes the total daily average H2S and SO2 emissions that were released across the three US Steel Mon Valley facilities in the four days prior to and the 36 days following the 24 December 2018 fire at the Clairton Coke Works. After the fire, the total daily average emissions for H2S and SO2 increased 24 and 35 times, respectively.

Figure 3 displays the distribution of daily average PM2.5 and H2S levels and daily maximum hourly SO2 levels that were recorded at monitors in the pre-fire and post-fire periods. Post-fire, the median SO2 level more than doubled that observed in the pre-fire period. Pre-fire and post-fire median (IQR) SO2 (ppb) distributions were 8.00 (2.00, 27.00) and 18.50 (6.75, 37.25), respectively (p = 0.014). Distributions for PM2.5 and H2S did not significantly differ pre-fire versus post-fire (p > 0.05). Pre-fire and post-fire median (IQR) PM2.5 (µg/m3) distributions were 11.13 (7.08, 18.96) and 11.27 (8.18, 15.48), respectively (p = 0.863). Pre-fire and post-fire median (IQR) H2S (ppb) distributions were 0.50 (0.10, 3.42) and 1.08 (0.32, 2.34), respectively (p = 0.148).

Table 3 summarizes the dates of OAP exceedances that were recorded in the post-fire period at the Liberty and North Braddock reference monitors. During this period, there were four NAAQS PM2.5 exceedances (>35 µg/m3 averaged over 24 h) at the Liberty monitor, five NAAQS SO2 exceedances (>75 ppb maximum hourly) at the Liberty monitor and two NAAQS SO2 exceedances at the North Braddock monitor, and six state H2S exceedances (>5 ppb averaged over 24 h) at the Liberty monitor. There were more days with PM2.5 and SO2 exceedances post-fire as compared to pre-fire. For PM2.5, the percentages of days with exceedances were 1.5% pre-fire versus 6.2% post-fire (p = 0.161). For SO2, the percentages of days with exceedances were 3.0% pre-fire versus 10.4% post-fire (p = 0.084). The drastic increase in SO2 to 145 ppb measured at the Liberty monitor on 28 December 2018 was attributed to high emissions from the Clairton Coke Works facility in the presence of a weak weather inversion and light winds from the south-southwest direction [38]. There were not more days with H2S exceedances post-fire as compared to pre-fire; the percentages of days with H2S exceedances were 14.9% pre-fire versus 9.1% post-fire (p = 0.301).

Figure 3. Distributions of daily OAP levels in the pre-fire and post-fire periods. Dashed lines indicate the exceedance level for the respective OAP exposure: PM2.5 >35 µg/m3, H2S >5 ppb, SO2 >75 ppb. Distributional differences were tested for significance using the Mann-Whitney U test due to positive skewing of the data.

Figure 4 shows the rate of total outpatient and ED asthma visits (per 1000 residents) on days with (as compared to days without) OAP exceedances during both study periods. The rate of total asthma visits was 0.15 on PM2.5 non-exceedance days (<35 µg/m3) and more than doubled to 0.37 on PM2.5 exceedance days (>35 µg/m3). On days with PM2.5 >35 versus <35 µg/m3, the RR was 2.47 (95% CI: 1.52, 4.01; p < 0.001). The rate of total asthma visits was 0.15 on SO2 non-exceedance days (<75 ppb) and increased to 0.24 on exceedance days (>75 ppb). On days with SO2 >75 versus <75 ppb, the RR was 1.58 (95% CI: 1.00, 2.48; p = 0.048). The rate of total asthma visits was 0.14 on H2S non-exceedance days (<5 ppb) and increased to 0.26 on H2S exceedance days (>5 ppb). On days with H2S >5 versus <5 ppb, the RR was 1.79 (95% CI: 1.27, 2.54; p = 0.001). Table 4 summarizes the weather inversion data during each of the time periods of study.
There was no evidence of increased weather inversion events before as compared to after the fire. The number of days with inversions was 24 (35.8%) and 17 (25.3%) pre-fire and post-fire, respectively (p = 0.32). The average daily strength, depth, and duration of inversions did not significantly differ between the time periods. The average daily temperature and wind direction and speed did not differ significantly during the time periods (data not shown). The average (±SD) daily temperature (°F) was 30.3 ± 16.0 and 31.3 ± 10.9 (p = 0.72) pre-fire and post-fire, respectively. The average (±SD) daily wind direction (degrees, with north = 0, east = 90, south = 180 and west = 270) was 244.3 ± 64.3 and 227.5 ± 66.4 (p = 0.14) pre-fire and post-fire, respectively. The average (±SD) daily wind speed (miles per hour) was 8.2 ± 3.2 and 8.9 ± 4.0 (p = 0.23) pre-fire and post-fire, respectively.

Table 5 summarizes the influenza season data for the local county during each of the time periods of study. There was no evidence that the severity or peak of the influenza season contributed to the post-fire findings of increased asthma visits. The influenza season was milder post-fire as compared to pre-fire, and it peaked outside of the post-fire study period. In fact, in the pre-fire as compared to the post-fire period there were 30% more influenza cases (RR = 1.30; 95% CI: 1.26, 1.33; p < 0.05) and 2.8 times more hospitalizations due to influenza (RR = 2.79; 95% CI: 2.44, 3.19; p < 0.001). p-values comparing pre-fire vs. post-fire rates are based on GLM Poisson regression analyses.

Discussion

This study objectively assessed the impact of a large industrial fire and resultant damage to pollution desulfurization equipment on asthma morbidity in nearby adult residents. Repairs to this equipment were not completed for several months and facility production continued at pre-fire levels during this time period. Pollution emissions from the facility were significantly increased and multiple OAP exceedances were recorded at local monitors during this period. Figure 5 summarizes the events related to this industrial incident and Table 6 summarizes the study results. The results document a near doubling of the rate of outpatient and ED visits for asthma exacerbations in the months following this incident when damaged pollution control equipment was offline. They also show increased rates of total outpatient and ED visits for asthma exacerbations on days with OAP exceedances. Additionally, the results show that these acute visits were unrelated to confounding factors including weather inversions and seasonal influenza activity. In fact, the influenza season was significantly more severe pre-fire as compared to post-fire. Similarly, weather inversions trended toward being more severe pre-fire as compared to post-fire. The results of this study contribute to the identification and understanding of the effect of this incident on health outcomes and should guide the development of relevant public policies to protect the health of impacted residents during such events.

Study Results
• Near doubling of rates of visits for asthma exacerbations in the post-fire period.
• Increased rates of visits for asthma exacerbations on days with OAP exceedances.
• These increased visit rates were unrelated to weather inversions and seasonal influenza activity.
The results of the current study are consistent with those of a recent retrospective investigation that documented increased self-reported asthma symptoms and rescue medication use in adults with asthma residing near this facility following the Clairton Coke Works fire [6]. In addition to confirming the impact of this incident on asthma morbidity, the current study expands upon these previous findings in several ways. First, the current study used a medically documented discharge diagnosis of asthma exacerbation as an objective measure of asthma morbidity, as compared to the subjective outcomes of self-reported symptoms and rescue medication use in the prior investigation. Second, the prior investigation reported exclusively on the impact of SO2 emissions and monitor exceedances, whereas the current study included assessments of additional relevant OAP constituents, including PM2.5 and H2S. Third, the current study included an assessment of potential confounding factors including weather inversions and respiratory infections. Finally, the current study analyzed comparative time periods (pre-fire and post-fire) among the same study population, while the previous investigation analyzed only post-fire outcomes in a nearby impacted population versus a distal non-impacted population. Collectively, the results of these two independent studies show the impact of this industrial incident on multiple indicators of asthma morbidity, including asthma symptoms, rescue medication use, acute outpatient visits, and ED visits.

The results of the current study are also consistent with prior reports documenting associations between exposure to elevated levels of OAP and asthma morbidity. Epidemiologic studies have consistently reported an association between short-term SO2 exposures and asthma morbidity, including acute outpatient visits, ED visits, and hospital admissions [9][10][11]. Similarly, other studies have shown an association between short-term PM2.5 exposure and these same asthma outcomes [12][13][14]. Fewer studies have examined the association between short-term H2S exposure and asthma morbidity, and those have reported conflicting results, with some showing an association between H2S exposure and increased asthma morbidity, while others show no or protective effects of H2S on asthma morbidity [43][44][45][46]. Other investigations documented the acute impacts of industrial incidents and associated OAP exposures on respiratory outcomes in nearby residents. A recent study documented the impact of a prolonged coal fire and subsequent PM2.5 elevations on asthma morbidity [15]. In that report, the relative risks for daily counts of asthma-related ED visits and hospital admissions were 2.32 (95% CI: 1.71, 3.14) and 1.83 (95% CI: 1.14, 2.94), respectively, during the fire period as compared to the non-fire period. Similarly, an earlier study documented the impact of the closure and reopening of a US steel mill on PM10 levels and hospital admissions for acute respiratory illnesses including asthma [47]. In that study, pediatric respiratory admissions were two to three times higher when the mill was open as compared to when it was closed. The outcome rates reported in both of those investigations are similar to the increased outcome rates reported in the current study. The current study did not examine the impact of chronic OAP exposure on asthma outcomes in the study population.
However, our group recently documented both high rates of asthma and high OAP exposures in children residing near point sources of OAP, including the US Steel Clairton Coke Works and Edgar Thompson Works facilities [48]. In that study, we found that 70% of participants had exposure to PM2.5 greater than the World Health Organization (WHO) standard of 10 µg/m3. Overall prevalence of asthma was 22.5% (as compared to the national rate of 8.3%), with PM2.5 and sulfur exposures significantly related to increased odds of asthma. Another recent retrospective study reported decreased historical lung function in adults with asthma residing near the Clairton Coke Works facility who experienced increased symptoms and rescue medication use following the fire [6]. Epidemiologic studies have previously documented the strong association between long-term PM2.5 exposure and poor asthma outcomes, including decreased lung function in both children and adults, decreased lung function growth rate in children, and asthma prevalence [49][50][51]. The current study did not examine the impact of the acute fire and subsequent OAP exceedances on long-term respiratory effects in the exposed population. However, a recent study reported increased lower respiratory symptoms and asthma in nearby residents six years after exposure to high SO2 levels following a large sulfur stockpile fire in Africa [52]. Several other studies documented persistence of adverse respiratory effects, including bronchial hyper-responsiveness and asthma, for 3 months to 14 years after the initial acute exposure to high levels of SO2 [53][54][55]. Future studies are needed to assess the long-term respiratory effects of the Clairton Coke Works fire and subsequent OAP exceedances in nearby residents.

The results of the current study are important because they objectively document the adverse impact of an industrial incident and subsequent OAP exceedances on health outcomes. Additionally, they emphasize the need for additional public health policies to protect vulnerable residents from future events. Recently, the local health agency has proposed that industries develop strategies to reduce emissions during exceedances of OAP levels. Additional efforts need to focus on developing a rapid alert system to immediately notify impacted residents so they can implement health-protective measures during such events. Several countries have already implemented such systems, and numerous studies confirm that susceptible populations do modify their behavior to protect their health in response to such alerts [23][24][25][26][27][28][29]. Moreover, a comprehensive rapid response system should be put in place to quickly assess immediate health effects and provide necessary preventative and emergent medical care. Recent literature confirms the need for and success of such systems [30][31][32][33]. Finally, a health registry should be developed in vulnerable communities to track short-term and long-term outcomes related to both chronic and acute OAP exposure. This need is underscored by the recent report of worsening asthma symptoms and increased rescue medication use among patients in an existing asthma registry during the Clairton Coke Works fire and subsequent emission and OAP exceedances [6].

The current study has several strengths. First, it examined the impact of a large coke works fire and subsequent OAP exceedances on acute asthma morbidity in the nearby exposed population.
These data were obtained by objective reporting of the discharge diagnosis at the time of the visit and did not rely on subject recall or self-report of symptoms and rescue medication use. We controlled for seasonality by using the comparative time period for the prior year. We also showed that this increase in asthma visits was not due to air stagnation related to weather inversions. This was particularly important given the region's topography of a river valley surrounded by hills and the results of other studies demonstrating increased asthma morbidity during weather inversions [56]. We also showed that the increase in asthma visits was not due to a more severe influenza season. This is important because respiratory viral infections, including influenza, are recognized as triggers of asthma exacerbations. Finally, we used PM2.5 and SO2 data obtained from relevant, nearby US EPA AQS reference monitors located in the wind path of the industrial sites, and current regulatory thresholds established by the US EPA, to assess the impact of these two pollutants on asthma visits. The latter is important because the most recent integrated health assessments by the US EPA concluded the existence of "causal" and "likely to be causal" relationships between short-term SO2 and PM2.5 exposures, respectively, and respiratory morbidity, particularly in individuals with asthma [57,58].

The main limitation of the current study is that it was conducted using a limited data set, and specific demographic data on the asthma patients were not available. We did report that the demographic profile of the Clairton zip code was similar to that of Allegheny County; however, that does not rule out more localized demographic differences. Indeed, the Clairton Coke Works facility is directly adjacent to census tracts that are recognized as environmental justice communities [59]. This is consistent with recent reports, which documented that minorities and those with lower socioeconomic status are much more likely to reside near OAP sources [60][61][62]. The second limitation is that it did not assess the impact of other OAP sources on asthma morbidity. However, at the time of this incident, there were no other reports of increased OAP from relevant sources, including both industrial sites and mobile sources such as traffic. As such, it can be concluded that the OAP exceedances that occurred at the relevant monitors after the industrial incident were attributable to emissions released due to the fire and the resulting breakdown of the desulfurization pollution control equipment. The third limitation is that we did not have information on individual OAP exposures but instead used OAP data collected from centralized monitoring stations. Of note, both the nearest monitoring station and the furthest residence in the study geography were located approximately two linear miles from the site of the Clairton Coke Works. As such, it is possible that individual exposures to OAP were underestimated for most residents living closer than two linear miles from the facility. In support of this possibility, a Canadian study documented clusters of ED visits for asthma among children residing in the same census tracts where two industrial OAP sources were located [63]. Another limitation relates to the need for caution when interpreting our finding that asthma visits were increased on days with H2S exceedances >5 ppb averaged over 24 h.
We selected this threshold because it is mandated by the state of PA; however, it was established to protect the environment and not public health. As summarized above, the few studies of the effect of H2S exposure on asthma outcomes reported conflicting results [43][44][45][46]. Consequently, there are no current international or US health-based standards for regulating H2S levels. The WHO has an air quality guideline of 150 µg/m3 (approximately 108 ppb) H2S, averaged over a 24-h period. This guideline is based on the avoidance of eye irritation. Moreover, the WHO recommends that H2S concentrations not exceed 0.005 ppm (5 ppb; 7 µg/m3) over a 30-min period, to avoid substantial complaints about odor [64]. As such, the results of this study should not be interpreted as demonstrating a causal association between H2S exceedances >5 ppb averaged over 24 h and increased asthma visits; however, the results do not rule out the possibility that the observed H2S exceedances are a marker for the presence of another pollutant that contributed to the observed outcomes.

The final limitation relates to the exclusive focus on asthma morbidity as the study outcome. Although we did not examine the impact of the fire and subsequent OAP exceedances on health care costs, it has been reported that the average cost for an ED visit for asthma is approximately USD 1500 [65]. As such, the total costs related to increased ED visits after the fire are approximately USD 24,000 (16 additional ED visits × USD 1500). Additionally, we did not examine the impact of the fire on asthma mortality due to the relatively small population of the study geography and the resulting possibility of statistical error. However, other studies have reported increased asthma and respiratory mortality with short-term OAP exposure [66,67]. Finally, this study focused exclusively on asthma outcomes. Prior studies have documented an impact of short-term OAP on other respiratory outcomes, including COPD-related ED visits [68,69]. It is possible that COPD and other respiratory-related ED visits were also increased after the fire; however, examination of this outcome was beyond the scope of this study.

Conclusions

In summary, the results of this study document a near doubling in the rate of outpatient and ED visits for asthma exacerbations in the months following the Clairton Coke Works fire, when damaged pollution control equipment was offline. They also show increased rates of total outpatient and ED visits for asthma exacerbations on days with OAP exceedances. Additionally, the results show that these acute visits were unrelated to confounding factors including weather inversions and seasonal influenza activity. These results contribute to the identification and understanding of the effect of this incident on health outcomes, and will be disseminated to community residents, leaders, and officials to motivate the development of relevant public health policies to protect impacted residents during such events. As discussed above, such policies should include regulations that industries curtail emissions during exceedances of OAP levels. A rapid alert system should be established to promptly notify impacted residents so they can implement protective health strategies during such events. For example, residents with asthma should receive targeted messages to limit OAP exposure, implement self-management plans, and start or increase controller medications.
Additionally, more vulnerable residents, such as those with asthma or other pre-existing respiratory conditions, should be advised and/or assisted to relocate immediately. A rapid response system should be developed to quickly assess immediate health impacts and provide both preventative and emergent medical care. Finally, a health registry should be developed in vulnerable communities to track short-term and long-term outcomes related to OAP exposure.

Informed Consent Statement: Patient consent was waived due to the exclusive use of de-identified data.

Data Availability Statement: Data are available upon reasonable request by contacting the corresponding author at deborahgentile092465@gmail.com.
Handwritten Arabic Character Recognition for Children Writing Using Convolutional Neural Network and Stroke Identification
Handwritten Arabic Character Recognition for Children Writing Using Convolutional Neural Network and Stroke Identification

Automatic Arabic handwritten recognition is one of the recently studied problems in the field of Machine Learning. Unlike Latin languages, Arabic is a Semitic language that poses a harder challenge, especially given the variability of patterns caused by factors such as writer age. Most studies have focused on adults, with only one recent study on children. Moreover, much of the recent Machine Learning work has focused on Convolutional Neural Networks, a powerful class of neural networks that can extract complex features from images. In this paper we propose a convolutional neural network (CNN) model that recognizes children's handwriting with an accuracy of 91% on the Hijja dataset, a recent dataset built by collecting images of Arabic characters written by children, and 97% on the Arabic Handwritten Character Dataset. The results show a good improvement over the model proposed by the Hijja dataset authors, yet they reveal that children's Arabic handwritten character recognition remains a substantial challenge. Moreover, we propose a new approach that uses multiple models instead of a single model, based on the number of strokes in a character, and merge Hijja with AHCD; this approach reached an averaged prediction accuracy of 96%.

Introduction

Handwriting recognition is one of the computer vision problems. It is the process of automating the identification of handwritten script by a computer, transforming text from a source such as documents or touch screens into a form that is understandable by the machine. The input is offline if it comes from a piece of paper or a photograph, and online if the source is digital, such as a touch screen [1]. The handwritten text of each language has many different patterns and styles from writer to writer. Many factors, such as age, background, native language, and mental state, affect the patterns in any piece of handwritten text [2]. Automatic handwritten recognition is well investigated in the literature using many machine learning methods, such as K-nearest Neighbors (KNNs) [3], Support Vector Machines (SVMs), and transfer learning [4] [5]. Some studies have used deep learning techniques such as Neural Networks (NNs) [6]. Recently, most studies have used Convolutional Neural Networks (CNNs) [5] [7][8] [9]. Latin languages have been intensively studied in the literature and have achieved state-of-the-art results [10][3] [11]. Nevertheless, the Arabic language still needs more investigation. Arabic is a Semitic language and the fourth most spoken language in the world [12]. Arabic has its own features that make it different from other languages, including spelling, grammar, and pronunciation. Arabic writing is semi-cursive, and it is written from right to left. The Arabic alphabet contains 28 characters, and every character has many shapes depending on its position in the word. These aspects make automatic handwritten recognition of Arabic script harder than that of other languages. Many recent studies have targeted Arabic handwritten recognition [13], [14], [15]. However, all of them focused on recognizing adult script except for [7], who created the Hijja dataset, collected from 591 children in Arabic schools. Additionally, they proposed a CNN model to evaluate their dataset. Their achieved prediction accuracy was 87%.
In this research, we aim to improve the prediction accuracy over the Hijja dataset using a CNN, to obtain a more robust model that can recognize children's Arabic script. Our experiments answer the following research questions: (1) Using our newly proposed CNN architecture, can we improve the accuracy of children's Arabic handwritten character recognition? (2) Using character strokes as a filter, can we improve the accuracy of children's Arabic handwritten character recognition?

The rest of the paper is organized as follows: in Section 2, we give background on the Arabic language and Arabic script, optical character recognition, and Convolutional Neural Networks. Section 3 provides an overview of the related work, our methodology including the proposed solution, the used datasets, and the experimental setup. Results from our experimentation are presented in Section 4. Section 5 discusses and analyzes the results; Section 6 concludes the paper with some future work.

Background

In this section, we present the background information needed to explain the underlying concepts of this research, including the Arabic language and Arabic script, optical character recognition, and Convolutional Neural Networks.

Arabic Language and Arabic Script

Arabic is a Semitic language and the language of the Holy Qur'an. Almost 500 million people around the globe speak Arabic, and it is the official language of many Arab countries, with different dialects; the formal written Arabic is Modern Standard Arabic (MSA). MSA is one form of classical Arabic, which is the language that was used in the Qur'an, and it currently has a larger and modernized vocabulary. Because it is understood by almost everyone in the Arab world, it is used as the formal language in media and education. Arabic has its own features that make it different from other languages, including spelling, grammar, and pronunciation [2]. The calligraphic nature of Arabic script differs from other languages in many ways. Arabic writing is semi-cursive, and it is written from right to left. The Arabic alphabet has 28 characters, whose shapes change depending on the position in the word. Sixteen of the 28 characters contain dots (one, two, or three); these dots appear either above or below the character. Some characters may have the same body but a different number and/or position of dots, as shown in Figure 1.

Figure 1: The Arabic characters; Hamza is colored red as it is not one of the 28 alphabet characters.

Arabic characters have different shapes depending on their position in a word: initial, medial, final, or standalone. The initial and medial shapes are typically similar, and so are the final and standalone shapes; see Table 1. These aspects make recognizing Arabic script more challenging than recognizing Latin script. Because of that, there are fewer resources created for this task and thus the state of the art is less advanced. Nevertheless, there have been some efforts to recognize Arabic handwriting in the last few years, which are covered in Section 3.

Optical character recognition

Optical Character Recognition (OCR) is a pattern recognition problem that takes printed or handwritten text as input and creates an editable, machine-understandable format of the text extracted from the scanned image.
OCR can be used in many applications such as advanced document scanning, business applications, electronic data searching, data entry, systems for visually challenged persons, document verification, document automation, data mining, biometrics, text storage optimization, etc. [16]. OCR is divided into offline and online recognition, depending on the type of data used as input. In offline recognition, the input is only an image of the handwritten text, with less information. In online recognition, a special input device, e.g., an electronic pen, tracks the movement of the pen during the writing process. Offline recognition is usually more difficult and challenging than online recognition [1].

Convolutional Neural Network

A Convolutional Neural Network is a special type of neural network that is widely used in deep learning to extract features from visual data. CNNs have produced state-of-the-art results in many image classification problems, and for many computer vision problems they are the strongest candidates for reaching considerably higher accuracies than other machine learning (ML) algorithms. A CNN takes an image as input, in the form of a 3D matrix with width, height, and channels, then applies several filters to this matrix; those filters are called kernels, and different kernels extract different features at each convolutional layer. There are several layers in a CNN besides convolutions, which can differ from network to network, but the deeper a CNN, the more weights it has, so pooling layers are used to reduce the size of the convolved feature maps, by taking either the maximum pixel value in a window or the average of all values; this is important to decrease the computational resources needed by CNNs. Other layers include the activation layer, which could be ReLU or any other activation, and normalization [4].
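To make the pooling operation concrete, the following is a minimal NumPy sketch of 2x2 max pooling (purely illustrative; the function name and the even-dimension assumption are ours):

import numpy as np

def max_pool_2x2(feature_map):
    # Downsample a 2-D feature map by taking the max of each
    # non-overlapping 2x2 window (height and width assumed even).
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(x))  # [[ 5.  7.] [13. 15.]] -- a 4x4 map becomes 2x2

Halving each spatial dimension quarters the number of activations that later layers must process, which is exactly the computational saving described above.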
Literature Review

In this section, we present some of the existing datasets that have been used for Arabic handwritten recognition in the literature, as well as some of the carried-out literature that used CNNs for Arabic handwriting recognition.

Arabic Handwritten Recognition Datasets

There are many datasets created for Arabic handwritten recognition. One of them is introduced in [17]; it contains 5,600 images written by 50 adult writers and includes a variety of shapes for each character. Then the DBAHCL dataset was introduced in [18]; it includes 9900 ligatures and 5500 characters written by 50 writers. Another work was conducted to collect Arabic handwritten diacritics (DBAHD) [19]. Another dataset, called AHCD, was introduced in [20]; it includes 16,800 characters written by 60 adult writers. The character images are in isolated form only, and the dataset has been used in many studies [20] [21] [7]. Lastly, the Hijja dataset was introduced and experimented with in [7]; it includes 47,434 characters written by 591 children. The characters are written in isolated and connected forms, and it is the biggest existing dataset. We chose the last two datasets (Hijja and AHCD) for this research.

Table 1: Isolated, initial, medial, and final shapes of the Arabic characters.

CNN in Arabic Handwritten Recognition

In this section we present some of the carried-out literature that used CNNs for Arabic handwriting recognition. In [22], Elleuch et al. investigated two types of neural networks for Arabic handwritten recognition: Deep Belief Networks (DBN) and Convolutional Neural Networks (CNN). The two networks used a greedy layer-wise unsupervised learning algorithm for processing. The experiments were done on the HACDB dataset, and the CNN obtained the best results, with a 14.71% classification error rate on the test set. Similarly, in [9], El-Sawy et al. designed and optimized a CNN classifier by working on the learning rate and activation function (ReLU). Their experiments were done on the AHCD dataset, and they achieved an accuracy of 94.9% on the test data. Additionally, in [20], El-Sawy et al. aimed to recognize Arabic digits using a CNN based on LeNet-5. Their experiments were conducted on the MADBase database (Arabic handwritten digit images). Their model achieved high accuracy, with a 1% training misclassification error rate and a 12% testing misclassification error rate. In [23], Amrouch et al. used a CNN as an automatic feature extractor in the preprocessing stage and Hidden Markov Models (HMM) as the recognizer. This made the feature extraction process easier, faster, and more accurate than manual feature extraction. They achieved 89.23% accuracy. In [15], Ashiquzzaman et al. developed a method for handwritten Arabic numerals using a CNN classifier. They achieved 99.4% accuracy with dropout and data augmentation. Their experiments were done on the CMATERDB dataset, and they improved the accuracy by inverting the image colors such that the digit is white on a black background, which previous studies observed makes it easier to detect edges. In [5], Soumia et al. compared two approaches to Arabic handwritten character recognition: conventional machine learning using an SVM classifier, and transfer learning with ResNet, Inception V3, and VGG16 models. They also proposed and tested a new CNN architecture. The best accuracy results were achieved with their CNN model: 94.7%, 98.3%, and 95.2% on the three databases OIHACDB-28, OIHACDB-40, and AIA9K, respectively. However, all of the previous studies focused on recognizing adults' Arabic handwriting. In [24], Ahmed et al. presented a context-based CNN architecture to recognize Arabic letters, words, and digits. They experimented with the MADBase, CMATERDB, HACDB and SUST-ALT datasets, aiming to reach the highest possible testing accuracy, and achieved 99% for digits, 99% for letters, and 99% for words. However, their proposed model was designed for offline recognition, while we aim to develop a robust and lightweight model that can be used for online recognition of Arabic children's handwriting in real-life scenarios. In [25], Balaha et al. proposed two different approaches. The first used 14 different CNN architectures on the HMBD dataset, and the best acquired testing accuracy was 91.96%. The second, a transfer learning (TL) and genetic algorithm (GA) approach named "HMB-AHCR-DLGA", was proposed to optimize the training parameters and hyperparameters in the recognition phase; the pre-trained CNN models (VGG16, VGG19, and MobileNetV2) were used in this second approach. The highest acquired testing accuracy was 92.88%. One study focused on children's Arabic handwriting: in [7], Altwaijry et al. described a newly collected dataset of Arabic letters written by children aged 7-12. The dataset is called Hijja, and it includes 47,434 characters written by 591 participants.
Also, they proposed a CNN model for Arabic handwritten recognition, which was trained and tested on the Hijja and Arabic Handwritten Character Dataset (AHCD) datasets. Their model achieved accuracies of 97% and 88% on the AHCD dataset and the Hijja dataset, respectively. All these studies show that CNNs are the most suitable approach for Arabic handwritten recognition due to their power in automatic feature extraction, which makes them able to learn the difficult patterns in handwritten text. Therefore, we decided to use a CNN in this research, with a newly proposed architecture, to improve the prediction accuracy on the Hijja dataset and recognize children's handwriting. We also decided to invert the images before feeding them to the model, since this had positive effects in [15].

Methodology

To build a model that can recognize handwritten characters, we decided to go with a deep learning approach using Convolutional Neural Networks (CNN). CNNs have proved to be strong in automatic feature extraction and have reached the state of the art in many image classification problems. In this section, we present a CNN architecture and two training approaches, with the goal of further enriching the results carried out in the literature. In the first approach, we train a single model on multiple datasets. The second approach is based on the number of strokes in each character: we divided the characters into 4 groups, as shown in Table 2. Each group was used to train a different model, and the number of strokes is then used as a filtration step before choosing which model makes the prediction. We refer to these as the single-model approach and the multi-model approach, respectively. Table 2 shows which characters belong to each group; some characters belong to two groups (see Figure 2). Columns represent groups, and rows represent handwriting styles that can cause the number of strokes to differ. A stroke in our work starts when the writer's stylus, pen, or other writing instrument touches the surface to form one part of the character (either the main body of the character or a dot) and ends when the writer lifts their hand off the surface.

Figure 2: (a) Sheen belongs to Groups 4 and 2: the dots can be written separately, which makes it 3 strokes, or merged into a curve, which makes it one stroke. (b) Yaa belongs to Groups 3 and 2: the same situation is caused by its dots, which are either separated or merged into a dash.
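The stroke count above is defined by how the character is written (pen-down to pen-up). For readers working with offline images only, one rough proxy, which is our own illustrative assumption rather than the authors' method, is to count connected ink components (the character body plus any separate dots):

import numpy as np
from scipy import ndimage

def count_strokes_proxy(img, threshold=0.5):
    # img: 2-D array with white ink on a black background, scaled to [0, 1].
    # Each connected ink component (body or separate dot) counts once.
    binary = img > threshold
    _, n_components = ndimage.label(binary)
    return n_components

def route_to_group(img):
    # Map the proxy stroke count onto the four stroke groups used for
    # the multi-model approach (counts of 4 or more fall in the last group).
    return min(count_strokes_proxy(img), 4)

Note that this proxy under-counts when dots merge into the body (the Sheen and Yaa cases in Figure 2), which is precisely why some characters appear in two groups.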
The third dataset was constructed by merging Hijja with AHCD; both have grayscale images at 32 by 32 resolution, which made the merging straightforward. As both datasets were available in CSV format, we used pandas to merge them into a new dataset, also in CSV format, which we call Hijja-AHCD. The concatenation was not redone programmatically before each experiment; instead we chose to form a new dataset so that carrying out different experiments would be easier.

Figure 2: (a) Sheen belongs to Groups 4 and 2; the dots can be written separately, giving 3 strokes, or merged into a curve, giving one stroke. (b) Yaa belongs to Groups 3 and 2, for the same dot-related reason: the dots are either separated or merged into a dash.

Data Preprocessing

When we initially loaded AHCD and visualized a subset of it, the images were incorrect: they were rotated 90 degrees to the left and flipped vertically, as shown in Figure 3. We used NumPy to transpose each image and then saved the data back to CSV, so that we would not have to repeat this step for each experiment. Note that Hijja-AHCD was constructed with the transposed version of AHCD.

Figure 3: (a) a subset from AHCD without any modification; (b) the same images after the transpose.

The last step before training was to invert the colors of all images, so that they have a black background and a white foreground. This step was inspired by the study in [20], where applying this technique gave better results in Arabic digit recognition.

Model Architecture

In this section we propose a CNN architecture that uses several convolution layers and ends with a classification feedforward network; Figure 4 shows the full architecture.

Figure 4: The architecture of the proposed CNN model.

We start with an input image of size 32 by 32 with 1 grayscale channel. The model consists of 4 convolutional blocks. The first two blocks have 2 convolution layers each, with 64 and 128 filters respectively, followed by activation, max pooling of size 2, and finally a batch normalization step. The last two blocks have 3 convolution layers each, with 256 and 384 filters respectively, followed by activation, max pooling, and batch normalization. A flattening step is then applied to prepare for the classifier. Instead of a single fully connected layer for classification, we use a feedforward network of 3 layers, regularized with dropout at 0.3 probability to avoid overfitting. The HeUniform weight initializer was used instead of the default random initialization for all convolution and fully connected (FC) layers. The feedforward network starts with a flattening step that turns the output of the feature extractor into a one-dimensional vector of size 4 by 4 by 384, which is fed into the first FC layer with 256 neurons, then 128, and lastly 64. The final FC layer is a classifier with a softmax function that produces the class probabilities for the given labels: 28 for AHCD, and 29 for Hijja and Hijja-AHCD. The activation function used in all layers is LeakyReLU with a slope of 0.3. LeakyReLU is a variation of ReLU proposed to overcome the vanishing gradient problem that arises in very deep networks that use ReLU; it keeps some of the negative values instead of assigning zero to all of them, hence the name "leaky".
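To make the architecture above concrete, here is a minimal Keras sketch. The kernel size (3 by 3) and the "same" padding are our assumptions, and pooling is applied in only three of the four blocks so that the flattened feature map matches the 4 by 4 by 384 size stated above; the exact layer ordering in the original implementation may differ slightly.

```python
# Minimal sketch of the proposed CNN, assuming 3x3 kernels and "same" padding.
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters, n_convs, pool=True):
    # n_convs convolution layers, each followed by LeakyReLU(0.3),
    # then optional 2x2 max pooling and batch normalization.
    for _ in range(n_convs):
        x = layers.Conv2D(filters, 3, padding="same",
                          kernel_initializer="he_uniform")(x)
        x = layers.LeakyReLU(0.3)(x)
    if pool:
        x = layers.MaxPooling2D(2)(x)
    x = layers.BatchNormalization()(x)
    return x

def build_model(num_classes=29):
    inputs = keras.Input(shape=(32, 32, 1))          # 32x32 grayscale input
    x = conv_block(inputs, 64, n_convs=2)            # 32 -> 16
    x = conv_block(x, 128, n_convs=2)                # 16 -> 8
    x = conv_block(x, 256, n_convs=3)                # 8  -> 4
    x = conv_block(x, 384, n_convs=3, pool=False)    # assumption: no pool here
    x = layers.Flatten()(x)                          # 4 * 4 * 384 = 6144
    for units in (256, 128, 64):                     # 3-layer classifier head
        x = layers.Dense(units, kernel_initializer="he_uniform")(x)
        x = layers.LeakyReLU(0.3)(x)
        x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

Calling build_model(29) gives a model for Hijja or Hijja-AHCD, and build_model(28) one for AHCD.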
Our experiments, as will be seen later, showed that LeakyReLU made a slight improvement in the results compared to the baseline. To optimize the network, we used categorical cross-entropy as the loss function, since we have multiple classes and the labels were provided one-hot encoded. Lastly, the Adam optimizer was used to update the weights, with an initial learning rate of 0.001.

Experimental Setup

This section explains the experiments carried out to recognize Arabic handwritten characters and goes over the training setup. We used the Python language and the Keras framework, with several programming libraries such as pandas, NumPy, and scikit-learn. All the work was done on Google Colab with GPU enabled. First, we followed several approaches for preprocessing. Then, we built the base CNN classification model as explained in section 3.2, which is used in all experiments, with fixed hyperparameters that we found to work best after many experiments. Evaluating the results is therefore based solely on the difference between the preprocessing and training approaches.

Training Setup

To start training, we need a validation set to help detect when the model starts overfitting on the training set. Cross-validation was used to obtain the best validation set, instead of splitting manually at a fixed rate as in [4]. We used the StratifiedKFold function from the scikit-learn library to split the training set into training and validation sets, with the number of splits set to 5 and shuffling enabled; the training loop thus runs 5 times, each time with different training and validation sets, so that we can pick the model with the highest validation accuracy. The batch size is set to 128, and we trained for 30 epochs in all five runs; none of the models showed any improvement after epoch 30. During training we used a learning rate scheduler: Adam starts with an initial learning rate of 0.001 and adapts it during training, but we found by experiment that adding a callback scheduler, applied to the learning rate after Adam's update on each epoch, improved the overall model accuracy. We set up this callback as a function that takes a learning rate as input and multiplies it by the exponential of -0.01.

Experiments

In (A), the single-model approach, we trained and tested three models on each of the three datasets: the baseline model [7], the model of Ahmed et al. [24], and finally our proposed model. In (B), the multi-model approach, we used our proposed model to train and test on the Hijja-AHCD dataset. We started by filtering the characters based on the number of strokes and split the data correspondingly. After grouping the characters into 4 different groups based on their strokes, each group was used as a standalone dataset to train a model, so we say that model is for group X. We refer to this approach as multi-model. Lastly, in (C) we experimented with transfer learning using the EfficientNetV0 pre-trained model on Hijja, AHCD, and Hijja-AHCD. Transfer learning helps in many problems by transferring knowledge learned in one related or unrelated domain to another, which in many cases gives faster convergence and better results. There are many models trained on the ImageNet dataset, a large dataset with 1000 classes of natural images, such as MobileNet, VGG19, EfficientNet and its variants, and many others. We picked EfficientNetV0 as it has recently been a popular choice for vision problems due to its efficiency and relatively small size compared to the others.
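Before detailing the transfer-learning experiment, the following sketch shows the cross-validation loop and the learning-rate callback from the Training Setup above. The arrays X and y (one-hot labels), the build_model function, and the random seed are assumptions for illustration.

```python
# Sketch of 5-fold stratified training with the exponential LR callback.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras

def lr_decay(epoch, lr):
    # After each epoch, multiply the current learning rate by exp(-0.01).
    return lr * float(np.exp(-0.01))

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
best_model, best_acc = None, 0.0
for train_idx, val_idx in skf.split(X, y.argmax(axis=1)):
    model = build_model(num_classes=y.shape[1])
    hist = model.fit(X[train_idx], y[train_idx],
                     validation_data=(X[val_idx], y[val_idx]),
                     batch_size=128, epochs=30,
                     callbacks=[keras.callbacks.LearningRateScheduler(lr_decay)],
                     verbose=2)
    acc = max(hist.history["val_accuracy"])
    if acc > best_acc:                 # keep the fold with best validation accuracy
        best_acc, best_model = acc, model
```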
The experiment started by substituting the top layer to match our input size (32, 32, 3); we noticed that EfficientNetV0 expects a 3-channel image, so we needed a preprocessing step to convert the grayscale images to RGB. We also added a final classification layer outputting 29 classes to match our data. Training starts from the ImageNet weights and fine-tunes all EfficientNet layers. The rest of the setup is the same as for our CNN model.

Baseline and Evaluation Measures

To evaluate how well our model performed in each experiment, we calculated the prediction accuracy, recall, precision, and F1 measures on the test set of each corresponding dataset. To evaluate (B), we calculated the average prediction accuracy over the four models, one for each stroke-based character group, and likewise for the precision, recall, and F1 score measures. We refer to Altwaijry et al. as the baseline, since their model was the first to be trained and tested on the Hijja dataset.

Precision, Recall and F1 Score

Precision is the proportion of correctly classified characters among all characters predicted as class X, while recall is the proportion of correctly classified characters among all characters that actually belong to class X. The F1 score is the harmonic mean of precision and recall.

Results and Discussions

In this section, we present and discuss the results of our experiments. Table 3 shows the prediction accuracy for all of them; refer to Appendix A for the full classification reports. The first row reports results on the Hijja dataset for the baseline [7], Ahmed et al. [24], our model, and transfer learning with EfficientNetV0. In the second row, we used the AHCD dataset. Our model obtained better results than both Altwaijry et al. [7] and Ahmed et al. [24], and similar results to EfficientNetV0. Worth noting here is the large difference in accuracy between all single-model experiments on Hijja versus AHCD. We assume this large difference in prediction accuracy comes from the challenges the Hijja dataset introduces. First, it contains different forms of each character, both isolated and connected, introducing more features and a higher level of similarity between characters. Second, the characters were written by children, whereas AHCD was written by adults and contains only isolated forms. The third row reports the experiments on the merged Hijja-AHCD dataset. Compared with the results on the Hijja dataset alone in the first row, merging Hijja with AHCD improved the prediction accuracy of our model, of Ahmed et al. [24], and of EfficientNetV0. These results reflect the challenges arising from the age difference between the participants in each dataset. In Hijja they were all children, and children's handwriting can be very challenging to read even for the human eye; so, to capture more features and help the model learn more possible ways of writing each character, merging Hijja with AHCD experimentally helped the model learn better. We assumed that showing the model more variations of handwriting would be more effective and realistic for our problem than enlarging the Hijja dataset with the usual augmentation techniques such as zooming or rotating existing images; hence we decided to merge with another similar dataset from a different age segment instead of generating new images from the same dataset. Lastly, we introduced a new approach based on a prior filtration step using character strokes.
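A hypothetical sketch of how this stroke-based filtration could route predictions is given below. The models dict (group id to trained model), the group_classes mapping, and the rule mapping a stroke count to a group are illustrative assumptions; Table 2 defines the actual grouping.

```python
# Hypothetical dispatch step for the multi-model approach: the stroke count
# (available in online-writing settings) selects which group model predicts.
import numpy as np

def predict_with_strokes(image, stroke_count, models, group_classes):
    """models:        dict mapping group id -> trained Keras model
       group_classes: dict mapping group id -> ordered list of class labels"""
    # Illustrative assumption: the stroke count itself selects the group,
    # clamped to the largest group id when it exceeds the defined groups.
    group = stroke_count if stroke_count in models else max(models)
    probs = models[group].predict(image[np.newaxis, ...], verbose=0)[0]
    return group_classes[group][int(np.argmax(probs))]
```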
We calculated the averaged prediction accuracy over all 4 character groups, as shown previously in Table 2; it is reported in the last column of Table 3 as multi-model. The averaged prediction accuracy outperformed all the single-model experiments and transfer learning on the Hijja-AHCD dataset. The classification reports for the model of each character group are shown in Tables A4 to A7 in the appendix. To make use of this method in a real-life application, there should be a layer that counts the number of strokes in an image prior to prediction; based on that number, the model used for prediction is chosen. This is feasible in online writing applications, where strokes can be counted as the number of times the user touches the screen with their fingers or a stylus. Experimentally, we split the test set as we did with the training set for evaluation.

During the evaluation, we noticed several factors that can be seen as challenges facing CNN models with handwritten Arabic characters. First, as can be seen in Figure 6, the model misclassified the character Qaaf/ق as Faa'/ف; we provide a clear image of both characters in Figure 6(b) for comparison. Faa' has 1 dot and Qaaf has 2 dots, while the main body of Qaaf is more curved than that of Faa'; but many handwriting styles can make the main part of both look very similar, leaving the only difference in the dots above the curve. In the image we showed the model, the 2 dots are very unclear due to scanning and data preprocessing. This can be seen as a problem from two sides: first, with the traditional data collection method of scanning characters written on paper, some quality is lost during cleaning and preprocessing; second, Arabic characters are a challenging problem since some characters look very similar, and they become even more challenging given the diversity of handwriting styles.

Figure 6: (a) input image of Qaaf in its final shape, misclassified as Faa'; (b) printed Qaaf and Faa'.

Furthermore, the model made similar misclassifications on the characters shown in Figure 7. All of them were part of the Hijja dataset, and they are written in the medial shape, that is, as if they were in the middle of a word. The model misclassified them as similar characters written in exactly the same way, also in the middle position. It is challenging even for us to classify these characters; although the model reaches considerably good prediction accuracy in most cases with isolated characters, it is still unable to overcome some of the challenges posed by other positions.

Figure 7: more misclassifications made by the model on the Hijja dataset due to similarity between isolated and connected characters.

Our proposed multi-model stroke-based solution opens the door to a new way of approaching classification problems in Arabic handwriting, using a prior filtration process to overcome the shortcomings mentioned above.

Conclusion and Future Work

The Arabic language introduces extra challenges in the field of deep learning; more studies are being carried out, yet not enough to match the advanced results achieved for Latin scripts. In this research, we discuss approaches to achieving higher accuracy in Arabic handwritten character recognition using deep learning on 2 recent datasets, one of which, the Hijja dataset, was written by children and introduces even more complex patterns.
We used augmentation with another dataset, AHCD, and the strokes approach on the same model. Our model reached higher accuracy on both Hijja and AHCD than the baseline; moreover, augmenting Hijja with AHCD raised the accuracy further than on Hijja alone, which suggests that the Hijja dataset by itself makes learning the handwritten character patterns harder for the model. We compared our model with transfer learning using EfficientNetV0 and with one of the recently proposed models that achieved high results on Arabic handwritten characters in the literature; our model outperformed both their results and transfer learning. Furthermore, we compared the multi-model approach with both the single-model results in (A) and transfer learning (C) on Hijja-AHCD, and it outperformed both. As future work, we plan to test our model in a real application as a proof of concept, which could further reveal whether these experiments are applicable under real conditions outside the testing environment. In particular, we plan to use our model in an online writing application, where we present the model with a different type of input; we anticipate it will perform well, since images extracted from online handwriting are clearer than offline images, which could help the model recognize the features more easily. In addition, we believe that creating a new dataset based on online writing could further help improve the accuracy of our model in such applications. As we focused on the complex patterns of children's handwriting, the application of this model in educational apps that teach dictation to children is a good use case for this research.
Effect of low protein intake on acute exacerbations in mild to moderate chronic obstructive pulmonary disease: data from the 2007–2012 KNHANES
Effect of low protein intake on acute exacerbations in mild to moderate chronic obstructive pulmonary disease: data from the 2007–2012 KNHANES Background Several researchers have reported that the amount of protein intake is associated with lung function and airflow obstruction. However, few studies have investigated the effect of low protein intake on acute exacerbations of chronic obstructive pulmonary disease. This study aimed to investigate the effect of low protein intake on exacerbations in mild to moderate chronic obstructive pulmonary disease. Methods We used data obtained from the Korean National Health and Nutrition Examination Survey (KNHANES) between 2007 and 2012, linked to the National Health Insurance claims data. The clinical outcomes and the rate of exacerbation were retrospectively compared between the low protein intake group and the non-low protein intake group, which were stratified by quartile categories of protein intake, in 2,069 patients with mild to moderate chronic obstructive pulmonary disease. Results The low protein intake group was significantly associated with older age, women, never smoker, low household income, and low education level, compared with the non-low protein intake group. The low protein intake group was significantly associated with increased hospitalization (18.0% vs. 10.5%, P<0.001) and emergency department utilization (1.6±1.0 vs. 1.1±0.4, P=0.033) compared with the non-low protein intake group. In multivariate analysis, the low protein intake group was associated with hospitalization (odds ratio 1.46; 95% CI, 1.09–1.96; P=0.012). The multiple linear regression analysis revealed that the amount of protein intake was associated with FVC % predicted (β=0.048, P<0.001) and FEV1% predicted (β=0.022, P=0.015). Conclusions Low protein intake was associated with an increased risk of exacerbations in mild to moderate chronic obstructive pulmonary disease. The data are available at the KNHANES website (https://knhanes.cdc.go.kr).

Introduction

Chronic obstructive pulmonary disease (COPD) is characterized by progressive airflow limitations caused by chronic inflammation and remodeling of the airways (1). Systemic disease manifestations and acute exacerbations are associated with increased mortality risk in patients with COPD. Weight loss and muscle wasting are considered signs of terminal progression of the disease process and independent predictors of survival (2,3). These changes are frequently accompanied by reduced exercise capacity and symptoms having a nutritional impact, such as anorexia and early satiety. In fact, 25% to 40% of patients with COPD are malnourished, which is well known to be associated with decreased lung function and exercise intolerance as well as an increased risk of acute exacerbations and hospitalization (2,4-7). As nutritional support in stable COPD patients has been found to be effective in improving both nutritional intake and nutritional status, the role of nutritional assessment in patient management is increasing (6-9). Several animal studies have suggested that protein deficiency induces pulmonary emphysema and impaired lung growth (10,11). In addition, there are reports that the amount of protein intake is associated with forced vital capacity (FVC), vital capacity, and airflow obstruction in COPD (12,13). However, few studies have investigated the effect of low protein intake on acute exacerbations of COPD.
In the present study, we investigated the effect of low protein intake on acute exacerbations in patients with mild to moderate COPD. We also investigated the relationship between low protein intake and lung function. We present the following article in accordance with the STROBE reporting checklist (available at https://dx.doi.org/10.21037/jtd-20-3433).

Study population

The clinical outcomes and the rate of exacerbation were retrospectively compared between the low protein intake (LPI) group and the non-LPI group, which were stratified by quartile categories of protein intake, in patients with mild to moderate COPD. We used data obtained from the Korean National Health and Nutrition Examination Survey (KNHANES) between 2007 and 2012, linked to the National Health Insurance (NHI) claims data. The NHI system includes the medical reimbursement records for the entire Korean population. The KNHANES involves a cross-sectional, multistage probability-based sample representing the total non-institutionalized Korean civilian population. This information is available at the KNHANES website (https://knhanes.cdc.go.kr). The KNHANES contains de-identified data regarding demographics, underlying diseases, smoking history, spirometry results, laboratory data, and nutritional status. Trained nutritionists conducted interviews of each subject based on 24-hour dietary recall. Nutrient intakes were calculated using Korean food composition tables (14). Unfortunately, we were unable to investigate whether the source of protein was animal or vegetable. We screened patients aged 40 years or older who underwent spirometry and the nutrition examination survey (Figure 1). Of these, the clinical outcomes of mild to moderate COPD were retrospectively analyzed by quartile categories of protein intake. Mild to moderate COPD was defined as a ratio of forced expiratory volume in one second (FEV1) to FVC of less than 0.7 and FEV1 ≥50% of the predicted value. Spirometry was performed using equipment that met the American Thoracic Society performance criteria (15). After excluding two patients who consumed extreme amounts of protein, defined as more than 300 g/day, a total of 2,069 patients were included in the present study. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). All KNHANES participants signed an informed consent form. In addition, it is open data which is available to everyone. All data were anonymously managed at all stages. Thus, ethical approval was not required, because the present study used data from those surveys retrospectively.

Study design and data collection

Patients with mild to moderate COPD were stratified by quartiles of dietary protein intake. The recommended daily protein intake for Koreans over the age of 40 is 55-60 g/day for men and 45-50 g/day for women (16). The distribution of protein intake is demonstrated in Figure 2. The lowest quartile of protein intake (<42 g/day) was classified as the low protein intake (LPI) group, and the higher three quartiles (≥42 g/day) were classified as the non-LPI group. We collected demographic information, spirometry results, and laboratory data from the KNHANES database. To minimize potential over-diagnosis of airway obstruction, we also used the lower limit of normal (LLN) criterion, which classifies the bottom 5% of the healthy population as abnormal, based on the normal distribution (17).
The LLN of the FEV1/FVC was calculated using the following prediction equations: 125.77628 − 0.36304 × age (years) − 0.17146 × height (cm) for men, and 97.36197 − 0.26015 × age (years) − 0.01861 × height (cm) for women. Based on the NHI claims data, the patients' hospitalizations, emergency department (ED) visits, intensive care unit (ICU) admissions, and prescription records were analyzed. An acute exacerbation of COPD was determined when patients were hospitalized or visited the ED with claim codes for the International Classification of Diseases, 10th edition, codes J42.X-J44.X and used inhaled short-acting bronchodilators and systemic corticosteroids.

Statistical analysis

All statistical analyses were performed using SAS ver. 9.2 (SAS Institute, Cary, NC, USA). Data are expressed as means ± standard deviations or numbers (%). Continuous variables were analyzed using either Student's t-test or Mann-Whitney tests, and categorical variables were analyzed using Pearson's chi-square test. Multiple logistic regression analysis was performed by adjusting for confounding factors to assess the effect of LPI on exacerbations leading to hospitalization. The effect of protein intake on lung function was assessed using multiple linear regression analysis after adjusting for confounding factors. All tests for significance were two-sided, and all variables with P<0.05 were considered significant.

Patient characteristics

A total of 2,069 patients were included in the present study (lowest quartile of protein intake, <42 g/day, n=523; higher three quartiles, ≥42 g/day, n=1,546). The baseline characteristics of the patients are presented in Table 1. The mean age of the patients was 65.4±9.8 years; 1,468 (71.0%) were men. There were several demographic differences between the two groups: the LPI group was associated with older age, women, low body mass index (BMI), never smoker, a higher rate of Medical Aid, low household income, low levels of education, and not being married. In addition, the Charlson comorbidity index was significantly higher in the LPI group compared with the non-LPI group (0.5±1.4 vs. 0.2±0.9, P<0.001). More patients in the LPI group had coronary artery disease and congestive heart failure.

Nutrient intake status

The LPI group was found to have not only low protein intake but also a low intake of other kinds of nutrients, such as carbohydrates, fat, and vitamins, compared with the non-LPI group (Table 2). Interestingly, significantly more patients in the non-LPI group reported that they had hyperlipidemia (12.2% vs. 8.6%, P=0.024); however, the total cholesterol level was actually significantly higher in the LPI group (193.7±37.7 vs. 188.5±36.0 mg/dL, P=0.006).

Pulmonary function

Spirometry results revealed that patients in the LPI group had a lower mean FEV1/FVC and a higher prevalence of airflow obstruction by the LLN criterion compared with those in the non-LPI group (Table 3). However, there were no significant differences in FVC and FEV1 % predicted between the two groups.

Exacerbations

The LPI group was associated with increased hospitalization rates (18.0% vs. 10.5%, P<0.001) and frequent ED visits (1.6±1.0 vs. 1.1±0.4, P=0.033, Table 4). In addition, they were significantly associated with frequent outpatient clinic visits (14.2±22.1 vs. 8.3±15.2, P=0.011) and increased medical expenses (2,831±4,891 vs. 1,804±3,281 US dollars; 1 USD = 1,000 won; P=0.015).
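For concreteness, here is a small Python sketch of the LLN equations and the protein-intake stratification described in the Methods above. The coefficients are copied from the text; the function names and example values are illustrative only.

```python
# Sketch of the LLN rule and LPI-group assignment; values are illustrative.
def lln_fev1_fvc(sex, age_years, height_cm):
    """Lower limit of normal for FEV1/FVC (%), per the equations above."""
    if sex == "male":
        return 125.77628 - 0.36304 * age_years - 0.17146 * height_cm
    return 97.36197 - 0.26015 * age_years - 0.01861 * height_cm

def is_low_protein(protein_g_per_day, cutoff=42.0):
    """Lowest quartile (<42 g/day) defines the LPI group."""
    return protein_g_per_day < cutoff

# Example: a 65-year-old man, 170 cm tall, eating 38 g of protein per day
print(lln_fev1_fvc("male", 65, 170))  # ~73.0; FEV1/FVC below this is abnormal
print(is_low_protein(38))             # True -> LPI group
```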
However, there were no differences between the groups in the prevalence of ICU admission or lengths of hospital stay. To adjust for confounding factors that may affect exacerbations of COPD, we applied various models in multiple logistic regression analyses (Figure 3). The LPI group was independently associated with an increased risk of hospitalization in all models. In model 4, which adjusted for FEV1% predicted, weight, smoking pack-years, and household income, the LPI group remained independently associated with hospitalization (Table S1). We also evaluated patients stratified by quartiles of protein intake by sex. The cut-off value of the lowest quartile of protein intake was 46 g/day in men and 32 g/day in women. As a result, the LPI group was independently associated with an increased risk of hospitalization in both sex subgroups (Table S2). We applied the same adjusted models in multiple linear regression analysis to investigate the relationship between protein intake and lung function (Table 5). In model 4, the amount of daily protein intake was associated with FEV1% predicted (β=0.022, P=0.015), FVC % predicted (β=0.048, P<0.001), and FEV1/FVC (β=0.0001, P=0.008).

Figure 3: The odds ratio of exacerbation leading to hospitalization of the LPI group compared with the non-LPI group. Model 1: FEV1% predicted-adjusted. Model 2: FEV1% predicted and weight-adjusted. Model 3: FEV1% predicted, weight, and smoking pack-years-adjusted. Model 4: FEV1% predicted, weight, smoking pack-years, and household income-adjusted.

Discussion

The present study demonstrated that low protein intake was associated with an increased risk of hospitalization and ED visits due to exacerbations in patients with mild to moderate COPD. To our knowledge, this is the first study to investigate the effect of low protein intake on exacerbations of COPD using large-scale data on Korean COPD patients. Involuntary weight loss and muscle wasting in COPD are a consequence of the increased work of breathing, persistent inflammatory processes, and poor dietary intake resulting from anorexia and early satiety. Both metabolic and mechanical inefficiency contribute to the elevated energy expenditure during physical activity, while systemic inflammation increases resting energy expenditure (18-20). A previous study reported that protein synthesis and breakdown were elevated in weight-stable COPD patients (21). This is consistent with another study that demonstrated that COPD patients showed decreased total body protein and lean body mass compared with healthy controls, even when there were no differences in body weight and BMI (22). This elevated protein turnover is thought to be associated with low-grade inflammation. A negative nitrogen balance causes muscle wasting, which is known to be associated with reduced respiratory muscle strength and muscle mass. In the present study, we found that the amount of protein intake was associated with FEV1, FVC, and FEV1/FVC. This is consistent with previous reports that demonstrated that protein intake was associated with lung function, such as FVC, vital capacity, and airway obstruction (12,13). In the present study, patients in the LPI group consumed not only less protein but also fewer total calories, which might act as a confounding factor. However, Yazdanpanah et al. reported that there was no significant correlation between energy intake and lung function (12,13). In addition, several animal experiments have shown that the lungs of protein-deficient rats appeared to be less compliant than those of normal rats, and rats fed a protein-deficient diet from the neonatal period showed abnormal lung development (11,23,24).
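We fitted these models in SAS ver. 9.2; the following is a hedged Python sketch of an equivalent model 4 logistic regression using statsmodels. All column names are assumptions that would have to be mapped to the actual KNHANES/NHI variables.

```python
# Sketch of a model-4-style adjusted logistic regression (Figure 3).
import numpy as np
import statsmodels.formula.api as smf

# df: pandas DataFrame (assumed) with columns hospitalized (0/1), lpi (0/1),
# fev1_pct, weight_kg, pack_years, household_income
model4 = smf.logit(
    "hospitalized ~ lpi + fev1_pct + weight_kg + pack_years + household_income",
    data=df,
).fit()

# Odds ratio and 95% CI for the LPI group
print(np.exp(model4.params["lpi"]))
print(np.exp(model4.conf_int().loc["lpi"]).tolist())
```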
Although there are no reports about whether protein supplements improve lung function, it is well known that adequate nutritional supplementation, including protein, improves body weight, muscle mass, muscle strength, and lung function (9,25). A previous prospective study demonstrated that patients with a lower BMI, or those who had a weight reduction during the previous year, had an increased likelihood of having an exacerbation compared with patients whose weight was either unchanged or increased (26). In addition, there are reports that hypoalbuminemia and hypoproteinemia are independent risk factors for exacerbation (27,28). We could not find any data on the levels of protein, albumin, prealbumin, and retinol-binding protein, which have been considered markers of visceral protein stores. However, multivariate analysis showed that low protein intake was an independent risk factor for exacerbation of COPD after adjusting for well-known risk factors of exacerbation, such as FEV1% predicted and household income (29). Snider et al. demonstrated that nutritional supplementation was associated with reductions in readmission and length of hospital stay, possibly by reversing the disturbed energy balance in the acute phase of the illness (30). Therefore, a well-designed prospective study is needed to identify whether adequate protein supplementation has clinical benefits on the severity and frequency of exacerbations in COPD.

There are many differences in the baseline characteristics. Among these, older age, male sex, current smoking, lower income, and comorbidities are known to be associated with an increased risk of COPD exacerbation. Interestingly, the non-LPI group included significantly more men and current smokers, as well as patients who were younger and had higher incomes. To adjust for these confounding factors, we corrected for them in the regression. On the other hand, these characteristics reflect the real world, because the data were obtained from the KNHANES, which involves a multistage probability-based sample representing the total non-institutionalized Korean civilian population. A well-verified COPD exacerbation prediction tool needs to be applied to evaluate the quantitative risk conferred by factors other than the amount of protein intake.

The present study has several limitations. First, the amount of protein intake was extrapolated from 24-hour dietary recall. This might not reflect the long-term dietary pattern of protein consumption. If data had been available on the levels of markers of visceral protein stores, they would more correctly reflect any protein deficiency. However, the validity of 24-hour dietary recall has been well established, and the use of large-scale data might compensate for any errors (31). Second, other risk factors
of exacerbation, such as the exacerbation history before the study period, comorbidities, infection, and compliance with medication, were not fully considered. In addition, low protein intake might be associated with low intake of other nutrients or a reduction in muscle mass, which are risk factors for COPD exacerbation. These factors might affect the severity and frequency of exacerbations, as well as appetite and the amount of protein intake. In particular, patients with both congestive heart failure and COPD might have a more brittle clinical course, because they have an increased risk of developing severe ventricular failure and pulmonary congestion mimicking many signs and symptoms of COPD exacerbation, and have more limited pulmonary reserve.

In conclusion, low protein intake was associated with an increased risk of acute exacerbations leading to hospitalizations and ED visits in mild to moderate COPD patients. These findings suggest that encouraging patients to consume adequate protein or to use protein supplements may be important in their management. Further research is needed to clarify the implications of our results for COPD treatment.

Acknowledgments

Funding: This work was supported by the National Research Foundation of Korea (NRF-2020R1A5A2019210). The funding source had no role in the design of the study, in data collection, analysis, or interpretation, and no role in writing the report or in the decision to submit the paper for publication.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). All KNHANES participants signed an informed consent form. In addition, it is open data which is available to everyone. All data were anonymously managed at all stages. Thus, ethical approval was not required, because the present study used data from those surveys retrospectively.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
Comments on compositeness in the SU(2) linear sigma model
Comments on compositeness in the SU(2) linear sigma model First we summarize the quark-level linear $\sigma$ model compositeness conditions and verify that indeed $m_\sigma = 2 m_q$ when $m_\pi = 0$ and $N_c=3$, rather than in the $N_c\to\infty$ limit, as is sometimes suggested. Later we show that this compositeness picture also predicts a chiral symmetry restoration temperature $T_c = 2f_\pi$, where $f_\pi$ is the pion decay constant. We contrast this self-consistent Z=0 compositeness analysis with prior studies of the compositeness problem.

Now that the scalar σ meson has been reinstated in the 1996 particle data group tables [1], it is appropriate to take seriously the various theoretical implications of a quark-level linear σ model (LσM) field theory. The original spontaneously broken LσM theory [2] was recently dynamically generated [3] at the quark level in the spirit of Nambu-Jona-Lasinio [4]. In this note we summarize the color number N_c and compositeness properties of the above SU(2) quark-level LσM and comment on the recent LσM analysis of compositeness given by Lurie and Tupper [5].

It is well known that for π⁰ → 2γ decay, the N_f = 2 quark triangle empirically suggests N_c = 3 (also a LσM result). Moreover, eq. (4) also follows from "anomaly matching" [8,9]. However, we shall not invoke here the stronger (but consistent) constraints due to dynamically generating the (quark-level) LσM, as they follow from comparing quadratically and logarithmically divergent integrals using (compatible) regularization schemes [3]. Thus the condition (4) depends on the NJL relation (3) being also true in the LσM. The latter assertion follows when one dynamically generates [3] the entire LσM lagrangian (1) starting from a simpler chiral quark model (CQM) lagrangian, as well as dynamically generating the two additional equations of (5). For N_c = 3, the latter pion-quark coupling in (5), g = 2π/√N_c, is near the anticipated value found from the πNN coupling g_πNN ≈ 13.4, so that g ≈ g_πNN/3g_A ≈ 3.5. Then the nonstrange constituent quark mass is m_q = 2πf_π/√3 ≈ 326 MeV, near M_N/3 as expected. But rather than repeating ref. [3] in detail, we offer an easier derivation of m_σ = 2m_q following only from the quark loops induced by the CQM lagrangian. This naturally leads to the notion of "compositeness".

To this end, we invoke the log-divergent gap equation of fig. 2,

1 = −4iN_c g² ∫ đ⁴p (p² − m_q²)⁻²,   (6)

where đ⁴p = (2π)⁻⁴ d⁴p. Equation (6) is the chiral-limiting one-loop nonperturbative expression of the pion decay constant f_π = m_q/g, with the quark mass m_q cancelling out. This LσM log-divergent gap equation (6) also holds in the context of the four-quark NJL model [10]. Then the one-loop-order g_σππ coupling depicted in fig. 3 is

g_σππ = 2g m_q [−4iN_c g² ∫ đ⁴p (p² − m_q²)⁻²] = 2g m_q,   (7)

by virtue of (6). The one-loop g_σππ in (7) "shrinks" to the tree-order meson-meson coupling in (1b), g′ = m_σ²/2f_π, only if m_σ = 2m_q is valid along with the GTR f_π g = m_q. This is a Z = 0 compositeness condition [11], stating that the loosely bound σ meson can be treated either as a q̄q bound state (as in the NJL picture) or as an elementary particle as in the LσM framework of fig. 3. But in either case m_σ = 2m_q must hold, and therefore the additional LσM Lee condition of eqs. (2) follows as well. It is also possible to appreciate the one-loop order Z = 0 compositeness condition in the context of the LσM [3] in a different manner.
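As a quick numerical check of the relations just quoted (a sketch only; f_π ≈ 90 MeV in the chiral limit is the value used later in this note):

```latex
\begin{align*}
g &= \frac{2\pi}{\sqrt{N_c}} = \frac{2\pi}{\sqrt{3}} \approx 3.63,
& m_q &= f_\pi\, g \approx (90\ \mathrm{MeV})(3.63) \approx 326\ \mathrm{MeV},\\
m_\sigma &= 2\,m_q \approx 653\ \mathrm{MeV},
& T_c &= 2\,f_\pi \approx 180\ \mathrm{MeV}.
\end{align*}
```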
Our version of the Z = 0 compositeness condition is that the log-divergent gap equation (6) can be expressed in terms of a four-dimensional UV cutoff Λ as

ln(1 + Λ²/m_q²) − Λ²/(Λ² + m_q²) = 1,   (8)

where we have substituted only g = 2π/√N_c and N_f = 2 into (6) in order to deduce (8). The numerical solution of (8) is the dimensionless ratio Λ/m_q ≈ 2.3, which is slightly larger than the NJL ratio in (3) or in (5), m_σ/m_q = 2. However, the renormalization constant Z₄ in ref. [5], using (8), then becomes their eq. (10). Ignoring for the moment the second term in (10), proportional to 3λ, we note that the log-divergent gap equation (6) requires the ππ → ππ quark box (dynamically generated by the CQM lagrangian) to "shrink" (as in eq. (7) and in fig. 3) to a point contact term λ, provided that [3] λ = 2g², as in (11). The reason why one must neglect the second meson loop term proportional to 3λ in (10) is that, e.g., π^α π^β → π^γ π^δ scattering has tree-level (or one-loop) graphs which must vanish in the strict zero-momentum chiral limit. This fact was emphasized on pp. 324-327 of the text by de Alfaro et al. [DFFR] in ref. [2]. Specifically, the quartic LσM contact term −λ is cancelled by the cubic σ pole term 2g′²/m_σ² → λ by virtue of the Gell-Mann-Lévy LσM meson chiral couplings in (1b). After the (tree-level) lead term cancellation between the contact term λ and the s, t, u σ meson poles in the LσM, DFFR obtain the amplitude of eq. (14). DFFR in [2] then note that (14) is just the Weinberg ππ amplitude [12] when m_π² = 0, found instead via model-independent current algebra and PCAC rather than from the linear σ model (LσM). Also note that (14) indeed vanishes in the strict zero-momentum chiral limit.

A similar chiral cancellation of the 3λ term in (10) also holds at one-loop order. When computing the one-loop order renormalization constant Z₄ as done by ref. [5], leading to eq. (10) above, one must be careful to (a) account for the DFFR cancellation due to the soft chiral symmetry relation 2g′²/m_σ² → λ, and (b) reorganize the perturbation theory using the log-divergent gap equation (6) to shrink quark loops to a contact meson term λ, with λ = 2g² as found in (11). Then even at one-loop order one must recover the Weinberg form for ππ scattering, eq. (14), in a model-independent fashion. This means that the meson loop graph with quartic couplings proportional to 3λ², contributing to λZ₄ as 3λ²/4π² in (10), will be cancelled by fermion box graphs, which are of higher loop order. Although our nonperturbative approach mixes perturbation theory loops of different order, both DFFR's and our use of the Gell-Mann-Lévy chiral symmetry meson relation 2g′²/m_σ² → λ has the bonus of our nonperturbative approach retaining the consistent chiral symmetry compositeness condition Z₃ = Z₄ = 0 from (13).

Keeping instead the middle term in (10) proportional to 3λ, ref. [5] concludes that the resulting Z₄ = 0 (then different) compositeness condition requires that the NJL limit m_σ → 2m_q be recovered only when N_c → ∞. References [13] reach the same conclusion, although they are not working with SU(2) chiral mesons (σ, π). In our opinion, however, the chiral SU(2) LσM (1) already has N_c = 3, and not N_c → ∞, built in via the Lee condition in eqs. (2), but only when m_σ = 2m_q in the chiral limit. We obtain these satisfying results only by cancelling the middle 3λ meson term in (10) against higher quark loop graphs. Ref. [5] does not account for the above DFFR cancellation. Finally, we extend the above zero-temperature (T = 0) chiral symmetry absence of quartic meson loops in eqs.
(10), (12), (14) to finite temperature. Again following ref. [5], we write the tadpole equation (15) in the mean field approximation at high temperatures for the quark-level SU(2) LσM, for flavor N_f = 2 and v = v(T), with v(0) = f_π ≈ 90 MeV in the chiral limit. The first two terms in (15) represent quartic σ and π loops, while the third term involving N_c is the u and d quark bubble loop. The temperature factors of T²/12 in (15) were originally obtained from finite-temperature field theory Feynman rules [14]. Now in fact there should be no quartic meson loop contributions surviving in (15), due to the above DFFR-type argument or the resulting Weinberg ππ amplitude in (14), even at finite temperature. So the nontrivial solution of (15) at the chiral symmetry restoration temperature T_c (where v(T_c) = 0) is, for N_f = 2, N_c = 3 and λ = 2g², with the first two meson loop terms in (15) proportional to (3 + N_f² − 1)λ consequently omitted,

T_c = 2f_π ≈ 180 MeV.   (16)

While this predicted temperature scale in (16) had been obtained earlier [15,16], ref. [5] also noted (16) above but rejected it because of the meson loop contributions in (15). We in turn claim that the first two σ and π loop terms in (15) (and the middle term in (10) proportional to 3λ) are all zero due to chiral cancellations as in DFFR [2]. Then (15) reduces to the nontrivial solution N_c g² T_c²/6 = λf_π² (leading to T_c = 2f_π), or to a quark box loop shrinking to a meson-meson quartic point [3] due to the log-divergent gap equation (6), itself a version of the Z = 0 compositeness condition.

Although we concur with ref. [5]'s choice of the finite-temperature quark bubble sign in eq. (15) (as opposed to the studies in ref. [15]), there is an easier way to deduce T_c = 2f_π: by studying the single fermion loop propagator dynamically generating the quark mass [3]. Then, with no sign ambiguity arising at finite temperature, one finds [17] the expression of eq. (17), in which the −m_σ² factor indicates the σ meson tadpole propagator generating the quark mass. When T = T_c the quark mass "melts", m_q(T_c) = 0, and (17) reduces to T_c = 2f_π, provided that N_c = 3 and m_σ = 2m_q = 2f_π g.

Rather than starting at T = 0, an alternative approach to generating a realistic low-energy chiral field theory begins at the chiral restoration temperature (with m_q(T_c) = 0), involving the bosons π and σ alone [20], and later adds in the fundamental meson-quark interaction in (1). Only then does one deduce the quark-level linear σ model (LσM) field theory [21]. While issues of N_c = 3 and compositeness are then postponed, the resulting LσM theory in ref. [21], starting at T = T_c ∼ 200 MeV with λ ∼ 20, appears quite similar to the T = 0 LσM field theory of refs. [2,3], with λ ≈ 26 from (11) and T_c ≈ 180 MeV from (16). In effect, what goes around comes around.

Acknowledgements: This research was partially supported by the Australian Research Council.
Diagnostic value of bronchoalveolar lavage fluid metagenomic next-generation sequencing in pediatric pneumonia
Diagnostic value of bronchoalveolar lavage fluid metagenomic next-generation sequencing in pediatric pneumonia Objectives The aim of this study was to evaluate the diagnostic value of bronchoalveolar lavage fluid (BALF) metagenomic next-generation sequencing (mNGS) versus conventional microbiological tests (CMTs) for pediatric pneumonia. Methods This retrospective observational study enrolled 103 children who were diagnosed with pneumonia and hospitalized at Hubei Maternity and Child Health Care Hospital between 15 October 2020 and 15 February 2022. The pneumonia diagnosis was based on clinical manifestations, lung imaging, and microbiological tests. Pathogens in the lower respiratory tract were detected using CMTs and BALF mNGS (of DNA and RNA). The diagnostic performance of BALF mNGS was compared with that of CMTs. Results In 96 patients, pathogens were identified by microbiological tests. The overall pathogen detection rate of mNGS was significantly higher than that of CMTs (91.3% vs. 59.2%, p = 0.000). The diagnostic performance of mNGS varied for different pathogens; however, its sensitivity and accuracy for diagnosing bacterial and viral infections were both higher than those of CMTs (p = 0.000). For the diagnosis of fungi, the sensitivity of mNGS (87.5%) was higher than that of CMTs (25%); however, its specificity and accuracy were lower than those of CMTs (p < 0.01). For the diagnosis of Mycoplasma pneumoniae, the specificity (98.8%) and accuracy (88.3%) of mNGS were high; however, its sensitivity (42.1%) was significantly lower than that of CMTs (100%) (p = 0.001). In 96 patients with definite pathogens, 52 cases (50.5%) were infected with a single pathogen, while 44 cases (42.7%) had polymicrobial infections. Virus–bacteria and virus–virus co-infections were the most common. Staphylococcus aureus, Haemophilus influenzae, rhinovirus, cytomegalovirus, parainfluenza virus, and fungi were more likely to be associated with polymicrobial infections. Conclusions BALF mNGS improved the detection rate of pediatric pneumonia, especially in mixed infections. The diagnostic performance of BALF mNGS varies according to pathogen type. mNGS can be used to supplement CMTs. A combination of mNGS and CMTs may be the best diagnostic strategy.

Introduction

The World Health Organization reports that pneumonia is the leading cause worldwide of mortality among children younger than 5 years old (GBD 2015 LRI Collaborators, 2017). In clinical practice, identifying pathogens in infectious diseases is a difficult problem. Conventional microbiological tests (CMTs) are limited in their scope for pathogen detection; they are time-consuming, have low detection rates, and usually detect only single pathogens. Although polymerase chain reaction (PCR) tests and serological detection have expanded the detection range of CMTs and increased detection rates, clinicians must first identify the type of pathogen. It is important to diagnose pathogens quickly and accurately in order to shorten the hospital stay and reduce complications and mortality. Metagenomic next-generation sequencing (mNGS) is an unbiased detection technology that can detect multiple pathogens across a wide range. It is relatively time-saving, with a turnaround time of 24-48 h. mNGS has been shown in recent years to be advantageous and viable for the identification of respiratory tract infection pathogens (Leo et al., 2017). However, sequencing DNA and RNA at the same time using mNGS has rarely been reported.
In the present study, we compared the diagnostic value of CMTs and mNGS (DNA and RNA) for detecting pneumonia pathogens in children.

Study design and patient selection

This retrospective observational study enrolled children who were diagnosed with pneumonia and hospitalized at the Maternity and Child Health Care Hospital of Hubei Province between 15 October 2020 and 15 February 2022. The inclusion criteria were as follows: (1) the child presented with typical clinical signs of pulmonary infection, such as fever, cough, sputum, and dyspnea; and (2) the diagnosis of pulmonary infection was supported by radiological evidence (e.g., chest computed tomography scan). We excluded patients who were not tested using bronchoalveolar lavage fluid (BALF) mNGS (DNA and RNA). A total of 103 children were enrolled in this study. The recruitment process is illustrated in Figure 1. Patient age, sex, symptoms, laboratory findings, lung imaging, bronchoscopic findings, and medical history were recorded. All included patients underwent bronchoscopy to obtain BALF samples for use in CMTs and mNGS. Bronchoscopies were performed by experienced bronchoscopy physicians according to standard safety protocols. No serious adverse events were associated with the bronchoscopy procedures. This study was approved by the Institutional Ethics Committee of the Maternal and Child Health Hospital of Hubei Province [2022] IEC (018).

Conventional microbiological tests

Routine samples were collected, including BALF, sputum, and blood. CMTs were performed within 2 days of admission, including sputum and BALF culture and smear (acid-fast staining for Mycobacterium tuberculosis; India ink staining for Cryptococcus), nasopharyngeal (NP) swab multiplex PCR (13 respiratory pathogens), BALF PCR (for Mycoplasma pneumoniae), serum antibody test (for M. pneumoniae), antigen tests (for influenza virus A/B and 1,3-β-D-glucan antigen), and serum and BALF galactomannan tests (Aspergillus spp.). The detection methods are specified in Supplementary Tables 1 and 2.

Clinical comprehensive analysis was regarded as the reference standard

Based on the clinical diagnosis, two experienced clinicians analyzed all patients' CMT and mNGS results, along with their medical records. First, each clinician determined whether the patient had pneumonia, based on the Chinese guidelines for the diagnosis of pneumonia in children (National Health Commission of the People's Republic of China, State Administration of Traditional Chinese Medicine, 2019), according to clinical symptoms, pulmonary imaging, and clinical laboratory examination results. Second, etiology was determined by a comprehensive analysis of the patient's clinical manifestations, laboratory findings, lung imaging, microbiological examination, and treatment response. If there was disagreement between clinicians, another senior clinician was consulted and a consensus was reached.

Nucleic acid extraction, library preparation, and sequencing

Bronchoscopy was performed according to standard procedures using a flexible fiberoptic bronchoscope. A special collector was used to collect 3-5 ml of BALF, which was stored at 4°C. The BALF was sent for mNGS analysis (DNA and RNA). DNA was extracted using a QIAamp® UCP Pathogen DNA Kit (Qiagen), following the manufacturer's instructions. Human DNA was removed using benzonase (Qiagen) and Tween20 (Sigma). Total RNA was extracted using a QIAamp® Viral RNA Kit (Qiagen). Ribosomal RNA was removed using a Ribo-Zero rRNA Removal Kit (Illumina).
Complementary DNA (cDNA) was generated using reverse transcriptase and deoxynucleoside triphosphates (Thermo Fisher Scientific). Libraries were constructed for DNA and cDNA samples using the Nextera XT DNA Library Prep Kit (Illumina). Library quality was assessed using the Qubit dsDNA HS Assay Kit, followed by a high-sensitivity DNA Kit (Agilent) on an Agilent 2100 bioanalyzer. Library pools were then loaded onto an Illumina NextSeq CN500 sequencer for 75 cycles of single-end sequencing, generating approximately 20 million reads per library. For negative controls, we prepared peripheral blood mononuclear cell samples (10^5 cells/ml) from healthy donors, in parallel with each batch, using the same protocol. Sterile deionized water was extracted alongside the specimens to serve as a non-template control.

Bioinformatic analyses

Trimmomatic was used to remove low-quality reads, adapter contamination, duplicate reads, and reads shorter than 50 bp. Low-complexity reads were removed using K-complexity with default parameters. Human sequence data were identified and excluded by mapping to a human reference genome (hg38) using Burrows-Wheeler Aligner software. We designed a set of criteria, similar to the criteria of the National Center for Biotechnology Information (NCBI), for selecting representative assemblies of microorganisms (bacteria, viruses, fungi, protozoa, and other multicellular eukaryotic pathogens) from the NCBI Nucleotide and Genome databases (National Center for Biotechnology Information). These were selected according to three references: (1) the Johns Hopkins ABX Guide (https://www.hopkinsguides.com/hopkins/index/Johns_Hopkins_ABX_Guide/Pathogens); (2) the Manual of Clinical Microbiology; and (3) case reports and research articles published in current peer-reviewed journals (Fiorini et al., 2017). The final database consisted of approximately 13,000 genomes. Microbial reads were aligned to the database using SNAP v1.0 beta 18 (Zaharia et al., 2021). Virus-positive detection results (DNA or RNA viruses) were defined by coverage of three or more non-overlapping regions in the genome. A positive detection was reported for a given species or genus when RPM (reads per million) was ≥5 or when RPM-r was ≥5. RPM-r was defined as the RPM corresponding to a given species or genus in the clinical sample divided by the RPM in the negative control (Miller et al., 2019). To minimize cross-species misalignments among closely related microorganisms, we discounted the RPM of a species or genus that appeared in non-template controls and shared a genus or family designation; a penalty of 5% was used for species (Zaharia et al., 2021).

Statistical analysis

SPSS 19 (IBM Corporation) was used to perform all analyses. Clinical composite diagnosis and determination of microbiological etiology were regarded as reference standards. At the pathogen level, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated using standard formulas for proportions. Wilson's method was used to determine 95% confidence intervals for these proportions. McNemar's test was used to compare diagnostic performance between CMTs and mNGS. All tests were two-tailed. A p-value of <0.05 was considered statistically significant. Note that some children with multiple microbial infections had multiple class labels for this study (bacteria, viruses, fungi, and atypical pathogens).
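A small sketch of the read-count positivity rule described above; the thresholds mirror the text, while the function names and example numbers are illustrative only.

```python
# Sketch of the RPM / RPM-r positivity criteria for a taxon.
def rpm(taxon_reads, total_reads):
    """Reads per million (RPM) mapped to a given species or genus."""
    return taxon_reads / total_reads * 1e6

def is_positive(sample_rpm, control_rpm, rpm_cut=5.0, ratio_cut=5.0):
    """Positive when sample RPM >= 5 or RPM-r (sample/control) >= 5."""
    if control_rpm == 0:
        return sample_rpm >= rpm_cut
    return sample_rpm >= rpm_cut or (sample_rpm / control_rpm) >= ratio_cut

# Example: 120 reads for a taxon out of 20 million sequenced reads
s = rpm(120, 20_000_000)      # 6.0 RPM
print(is_positive(s, 0.5))    # RPM-r = 12 -> positive
```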
We report sensitivity, specificity, accuracy, and positive predictive value as performance measurements to permit direct comparisons between mNGS and CMTs. As shown in Table 2, there were 52 cases (50.5%) with monomicrobial infection and 44 cases (45.8%) with polymicrobial infection (30 cases were two-microbial infections, 13 cases were three-microbial infections, and 1 case was a four-microbial infection). There were seven cases with unidentified etiology (one patient was positive for Circovirus in blood mNGS but was not clinically considered to be infected, three cases were clinically considered to be viral pneumonia, and three cases were clinically considered to be bacterial pneumonia). Among the 52 patients with monomicrobial infection, 27 cases (51.9%, 27/52) were detected using CMTs, while 48 cases (92.3%, 48/52) were detected using mNGS. Among the 44 polymicrobial infections, 6 cases (13.6%, 6/44) were detected using CMTs, while 29 cases (65.9%, 29/44) were detected using mNGS. For both single and mixed microbial infections, the detection rate of mNGS was higher than that of CMTs (p = 0.000). The most common mixed infections were bacterial and viral. Staphylococcus aureus, H. influenzae, rhinovirus, cytomegalovirus, parainfluenza virus, and fungi were more likely to be associated with polymicrobial infections.

Discussion

Pneumonia is one of the most common causes of hospitalization for infection in children, and one of the most important causes of their morbidity and mortality (Liu et al., 2015). With the extensive use of antibiotics, the continuous expansion of the pathogen spectrum, and increasing numbers of hard-to-diagnose infections, it is increasingly difficult to identify the etiology of pneumonia. The relevant literature shows that comprehensive conventional methods do not find pathogens in up to 60% of cases (Schlaberg et al., 2017). For patients with severe pneumonia, a long clinical course, empirical use of antibiotics, and low immunity, CMTs are far from meeting the clinical need for etiological diagnosis; this may lead to the failure of therapy and the overuse of antibiotics. The bronchoalveolar lavage technique can be used to obtain cells and solutions from the lower respiratory tract, and it is performed more easily and safely as the technique matures. In clinical practice, for patients with severe illness or suspected mixed infection, clinicians may test for several pathogens at the same time; however, they must choose which pathogens to test for based on their own experience. By contrast, mNGS can detect all possible pathogens for the clinician's judgment, which can save patients' time and money. In recent years, BALF mNGS has become a breakthrough application for the diagnosis and treatment of infectious lung diseases. mNGS has potential advantages in terms of speed and sensitivity for detecting lung diseases (Langelier et al., 2018; Li et al., 2018; Miao et al., 2018). Our study showed that, compared with CMTs, mNGS had a significant advantage in its detection rate of pathogens (91% vs. 59%, p = 0.000), even though all patients had used antibiotics. These results are consistent with the conclusions of Miao et al. (2018). mNGS was also superior to CMTs in diagnosing monomicrobial infections (92% vs. 52%, p = 0.000) and polymicrobial infections (66% vs. 14%, p = 0.000). Bacteria and viruses are pathogens commonly found in clinical settings. Our results showed that S. aureus, H.
Our results showed that S. aureus, H. influenzae, rhinovirus, cytomegalovirus, parainfluenza virus, and fungi are more likely to be associated with polymicrobial infections, which suggests the advantages of mNGS in the diagnosis of mixed infections (Fang et al., 2020). Because mNGS can detect almost all microbes in BALF, the technique strongly supports the identification of such mixed infections. A retrospective cohort study (Quah et al., 2018) found that the proportion of respiratory viruses in the pathogen spectrum of severe pneumonia has increased. In our study, 67 cases (70%) had viral infections, of which 28 cases were single infections and 39 were coinfections. Common viruses with high detection rates were respiratory syncytial virus, cytomegalovirus, rhinovirus, influenza virus, parainfluenza virus, and bocavirus. The sensitivity and accuracy of mNGS were higher than those of CMTs for the diagnosis of viral infections (Table 3). Owing to the difficulty of viral culture and the high rate of false positives in nucleic acid detection, it can be difficult to determine the etiology of the viruses identified (Ren et al., 2018). The relative abundance and read ratios of mNGS samples, relative to the negative control, may provide some clues for the determination of viral infections. However, in clinical practice, DNA testing alone may miss some RNA viruses, resulting in a decreased detection rate. Messenger RNA of DNA viruses, detected in mNGS RNA testing, may provide clues regarding active transcription (Graf et al., 2016). Thus, performing both mNGS DNA and RNA testing is valuable in diagnosing the etiology of pneumonia. The unbiased nature of mNGS is useful for the detection of new and variant viruses, evolutionary tracing (Lu et al., 2020), and strain identification (Qian et al., 2021), as well as for guiding epidemiological investigations, public health research, and epidemic prevention and control during infectious disease outbreaks (Deurenberg et al., 2017). mNGS played a key role in the rapid identification of pathogens in the outbreak of the novel coronavirus pneumonia in late 2019 (Ren et al., 2020). Our study revealed that the diagnostic performance of mNGS varied for different pathogens (Table 3). For detecting bacterial infections, the overall sensitivity (88.6% vs. 25.7%), accuracy (87.4% vs. 70.9%), PPV, and NPV of mNGS were higher than those of CMTs. However, there was no significant difference between mNGS and CMTs for the diagnosis of S. pneumoniae, S. aureus, and H. influenzae (p > 0.05). This differs slightly from previous studies (Xie et al., 2019) and may be related to the fact that these bacteria are clinically common in pediatric pneumonia, where empiric therapy is effective. mNGS has the advantage of being able to detect more pathogens. The sensitivity and accuracy of mNGS were higher than those of CMTs for the diagnosis of viral infection (p < 0.01); however, the PPV varied among different viruses. In particular, our study revealed that the parallel detection of DNA and RNA can determine the activity of DNA viruses and detect RNA viruses. (In the corresponding detection table, the "other" categories comprised the following organisms: among bacteria, Pseudomonas aeruginosa, Haemophilus parainfluenzae, Bacteroides fragilis, Escherichia coli, Bordetella parapertussis, Acinetobacter baumannii, Moraxella catarrhalis, Klebsiella pneumoniae, and the Enterobacter cloacae complex; among viruses, Enterovirus D68, human metapneumovirus, and bocavirus; among fungi, Candida albicans; and among atypical pathogens, Chlamydia trachomatis and Ureaplasma urealyticum.)
For fungal infections, the overall sensitivity of mNGS was higher than that of CMTs; however, the specificity and accuracy were lower than those of CMTs. The total NPV of mNGS was 98.7% (96.3%-100%). Positive results for P. jirovecii, such as those from stained microscopic examination and PCR, are important diagnostic criteria; however, the detection rates are low. P. jirovecii was detected in 12 patients using mNGS. On comprehensive clinical analysis, only four cases were considered to be pneumocystis pneumonia. These cases were all infants, in whom the course of disease was >2 weeks and the response to conventional treatment was poor. This may be related to the fact that the fungal infection was secondary to lowered immunity following the primary infection. The remaining eight patients recovered without antifungal therapy; P. jirovecii had probably colonized the lower respiratory tract. Unfortunately, our findings were not further validated using Gomori methenamine silver staining, which may have influenced the comparison of the two testing methods. Recent studies (Wang et al., 2019b; Lin et al., 2022) have shown that mNGS has good diagnostic performance in the detection of pneumocystis. The identification of Aspergillus spp. by mNGS remains a challenge because of the difficulty of extracting DNA from its thick polysaccharide cell walls (Bittinger et al., 2014). Three cases of severe pneumonia with Aspergillus spp. etiology were reported by He et al. (2019); mNGS results indicated Aspergillus spp., the patients' regimens were adjusted to antifungal treatment, and their conditions improved, supporting the accuracy of mNGS for the detection of Aspergillus spp. In contrast with these results, our study did not show an advantage of mNGS for the diagnosis of Aspergillus infections. However, the small number of cases of fungal pneumonia in this study likely introduced biases into the calculation of diagnostic performance. Although the specificity and accuracy of mNGS were high for M. pneumoniae diagnosis, the sensitivity was significantly lower than that of CMTs. In our study, mNGS did not show an advantage for the diagnosis of M. pneumoniae infections. The diagnosis of M. pneumoniae was confirmed by serology in the early stage of the disease (before admission to our hospital); the detection rate may have decreased after treatment. For the detection of M. pneumoniae, it has been reported that combined detection methods can improve the specificity and sensitivity of diagnosis and reduce false-negative and false-positive rates; M. pneumoniae cannot be reliably diagnosed using only a single test (Loens and Ieven, 2016; Tang et al., 2021). Overall, mNGS can improve the detection rate of pathogens and mixed infections in pediatric pneumonia. The diagnostic utility of mNGS differs for different pathogens. For fungi and M. pneumoniae, CMTs may need to be used in combination with mNGS to improve diagnostic performance. mNGS is valuable as a complement to CMTs, especially when the clinician does not have a presumed pathogen or the local laboratory lacks a complete CMT repertoire. This study has some limitations. First, the sample size was small, especially for fungal pneumonia. Second, P. jirovecii findings lacked further validation by Gomori methenamine silver staining, and PCR was only partially available, so the diagnostic value of CMTs and mNGS for this organism could not be fully evaluated.
Third, at the time of this study, our hospital offered real-time PCR assays for only some pathogens, including 13 respiratory pathogens; there were no commercial assays based on real-time multiplex PCR for the detection of community- or hospital-acquired pathogens. Therefore, we did not compare the performances of multiplex PCR and mNGS for the detection of different pathogens. However, the use of real-time multiplex PCR assays presupposes the clinician's belief that a patient is infected with one or more of these pathogens; it ignores rare and unknown pathogens. Finally, the interpretation of mNGS results, to a certain extent, depended on the subjective judgment of the clinician, which may have led to bias. Data availability statement The data presented in the study are deposited in the NCBI SRA repository, accession numbers SRR21425639 to SRR21425741. Author contributions JL: Designed the study and revised and approved the final version. WD: Drafted the initial manuscript, retrieved pediatric literature, and edited the table and reference list. YW: Participated in formal analysis. HX: Participated in data analysis. All authors contributed to the article and approved the submitted version.
Capacitated Human Migration Networks and Subsidization
Capacitated Human Migration Networks and Subsidization Large-scale migration flows are posing immense challenges for governments around the globe, with drivers ranging from climate change and disasters to wars, violence, and poverty. In this paper, we introduce multiclass human migration models under user-optimizing and system-optimizing behavior in which the locations associated with migration are subject to capacities. We construct alternative variational inequality formulations of the governing equilibrium/optimality conditions that utilize Lagrange multipliers and then derive formulae for subsidies that, when applied, guarantee that migrants will locate themselves, acting independently and selfishly, in a manner that is also optimal from a societal perspective. An algorithm is proposed, implemented, and utilized to compute solutions to numerical examples. Our framework can be applied by governmental authorities to manage migration flows and population distributions for enhanced societal welfare. Such disaster drivers may be sudden-onset (tsunamis, landslides, etc.) or slow-onset (malnutrition and hunger, drought, disease epidemics, insect infestations, etc.). Migrants from time immemorial have sought a better quality of life for themselves, moving to locations to better their situations. It has been reported [53] that 70.8 million humans have fled their homes worldwide, the highest level of displacement ever recorded. Since the new millennium, the number of refugees and asylum seekers has increased from 16 to 26 million, comprising about 10% of the total number of international migrants [54]. There have also been significant migration and displacement events during the last two years, with such events resulting in hardship, trauma, and loss of life [18]. Many recent crises associated with migration have brought enhanced emphasis by both practitioners and academics on how to better address the associated challenges of migratory flows and the ultimate location of the migrants. Examples of epicenters of only a few of the migratory crises include Venezuela [23], Central America [2], Libya [50], and Syria [55], with countries such as Mexico [30], Italy [20], Greece [25], and Cyprus [52] serving as transit points for many refugees and asylum seekers in the dynamically evolving migration landscape (see also [47]). In particular, in many reports and studies, the capacity of nations to handle migrants, and we emphasize here that there are multiple classes of migrants (cf. [22]), has risen to the fore as a critical characteristic. Examples of such studies have included even the United States in terms of migrants from Central America [43], Colombia and other countries (Costa Rica and Ecuador) because of the issues in Venezuela and Nicaragua [10], as well as multiple countries in Europe as possible destination locations of migrants [48,17]. In this paper, we develop user-optimized (U-O) and system-optimized (S-O) multiclass models of human migration under capacities associated with the migrant classes and locations. Our work builds on that of [37], but with the generalization of the inclusion of capacities. Such a generalization is especially timely, as noted above. Moreover, to date, the majority of research on human migration networks, from an operations research and mathematical modeling perspective, has focused on the modeling of migration flows assuming user-optimizing behavior, originating with the work of [31].
In other words, it has been assumed that the migrants act selfishly and independently; see also [32,39,40,45,46,19,21,9,36,38,8] for a spectrum of U-O migration models. Davis et al. [15], in turn, utilize a complex network approach for human migration and draw on an international dataset for their quantitative analysis. System optimization in multiclass human migration networks is also important, since governments may wish to maximize societal welfare and hope that migrants locate accordingly. However, the latter may be extremely challenging unless proper policies/incentives are put into place. Indeed, the authors of [1] have argued for an effective, cost-efficient mechanism for the distribution of refugees in the European Union, for example. Clearly, that would require some form of central control and cooperation/coordination. Note that there are analogues to U-O and S-O network models, with a long history, in the transportation science literature (cf. [56,3,14,6]). Such concepts were made explicit, for the first time, in human migration networks by [37]. We emphasize that in the transportation science literature, the concern is total cost minimization in the case of system optimization and individual cost minimization in the case of user optimization, along with route selection, subject to the conservation of flow equations. In the human migration network context, in contrast, we are concerned with total utility maximization in the case of S-O behavior and individual utility maximization in the case of U-O behavior, along with the selection of locations. In addition, in this paper, we provide a quantitative mechanism, in the form of subsidies, that, when applied, guarantees that the system-optimized solution of our multiclass capacitated human migration network problem is also user-optimized. This is very important, since it enables governments and policy-making bodies to achieve optimal societal welfare in terms of the location of the migrants in the network economy, while the migrants locate independently in a U-O manner! Our work extends that of [37] to the capacitated network economy domain. Furthermore, we provide alternative variational inequality formulations of both the new U-O and S-O models, which include Lagrange multipliers associated with the location capacity constraints as explicit variables. Their values at the equilibrium/optimal solutions provide valuable economic information for decision-makers. This paper is organized as follows. In Sect. 2, we present the capacitated multiclass human migration network models, under S-O and under U-O behaviors. Associated with each location, as perceived by a class, is an individual utility function that, when multiplied by the population of that class at that location, yields the total utility function for that location and class. As in our earlier work (cf. [31], to start), the utility associated with a location and class can, in general, depend upon the vector of populations of all the classes at all the locations in the network economy. We assume a fixed population of each class in the network economy and are interested in determining the distributions of the populations among the locations under S-O and U-O behaviors. For each model, we provide alternative variational inequality formulations. We also illuminate the role that is played by the Lagrange multipliers associated with the class capacities on the locations in the network economy.
In Sect. 3, we outline the procedure for the calculation of the multiclass subsidies in order to guarantee, even in the capacitated case, that the system-optimized solution is, simultaneously, also user-optimized. Hence, once the subsidies are applied, the migrants will locate themselves individually in the network economy in a manner that is optimal from a societal perspective. As argued in [37], there are analogues of our subsidies to tolls in transportation science. In the case of congested transportation networks, the imposition of tolls (see [14,13,12,29]) results in system-optimized flows also being user-optimized. In other words, once the tolls are imposed, travelers, acting independently, select routes of travel which result in a system optimum that minimizes the total cost to society. In this paper, we construct policies for human migration networks that maximize societal welfare, now in the presence of capacities. In Sect. 4, we outline the computational algorithm, which we then apply to compute solutions to numerical examples that illustrate the theoretical results in this paper in a practical format. We summarize our results and present our conclusions in Sect. 5. The Capacitated Multiclass Human Migration Network Models In this section, we construct the capacitated multiclass network models of human migration. We first present the system-optimized model and then the user-optimized one. The notation follows that in [37], where, as mentioned in the Introduction, no capacities on the populations at the locations were imposed. We assume that the human migrants have no movement costs associated with migrating from location to location, since we are concerned with the long-term population distribution behaviors under both principles of system optimization and user optimization. The network representation of the models is given in Fig. 1. There are $J$ classes of migrants, with a typical class denoted by $k$, and $n$ locations corresponding to locations that the multiclass populations can migrate to, with a typical location denoted by $i$. There are assumed to be no births and no deaths in the network economy. In the network representation, locations are associated with links. A link can correspond to a country or a region within a country, and the network economy can capture multiple countries. If a government is interested in within-country migration, exclusively, then the network economy (network) would correspond to that country. Table 1 contains the notation for the models. All vectors here are assumed to be column vectors. According to Table 1, there is a utility function $U^k_i$ associated with each class $k$, $k = 1, \dots, J$, and location $i$, $i = 1, \dots, n$, which captures how attractive location $i$ is for that class $k$. Observe that (see Table 1) the utility and, hence, the total utility $\hat{U}^k_i$ associated with location $i$ and class $k$ may, in general, depend upon the population distribution of all the classes at all the locations. It has been recognized [44], for example, that different locations may be more or less attractive to distinct classes of migrants. The notation of Table 1 is as follows. $p^k_i$: the population of class $k$ at location $i$; we group the $\{p^k_i\}$ elements into the vector $p^k \in R^n_+$ and then further group the vectors $p^k$, $k = 1, \dots, J$, into the vector $p \in R^{Jn}_+$. $\mathrm{cap}^k_i$: the nonnegative capacity at location $i$ for class $k$; $k = 1, \dots, J$; $i = 1, \dots, n$. $\beta^k_i$: the Lagrange multiplier associated with the capacity constraint for $k$ at $i$; $k = 1, \dots, J$; $i = 1, \dots, n$.
We group all these Lagrange multipliers into the vector $\beta \in R^{Jn}_+$. $P^k$: the fixed population of class $k$ in the network economy. $U^k_i$: the utility of individuals of class $k$ at location $i$; $i = 1, \dots, n$; we group the utility functions for each $k$ into the vector $U^k \in R^n$ and then group all such vectors for all $k$ into the vector $U \in R^{Jn}$. $\hat{U}^k_i(p)$: the total utility of class $k$ at location $i$; $i = 1, \dots, n$; the total utility is given by $\hat{U}^k_i(p) = p^k_i U^k_i(p)$. We now present the constraints. The population distribution of each class among the various locations must sum up to the population of that class in the network economy, that is, for each class $k$; $k = 1, \dots, J$:

$$\sum_{i=1}^{n} p^k_i = P^k. \qquad (1)$$

Furthermore, the population of each class at each location must be nonnegative, that is,

$$p^k_i \ge 0, \quad k = 1, \dots, J; \; i = 1, \dots, n, \qquad (2)$$

and not exceed the capacity:

$$p^k_i \le \mathrm{cap}^k_i, \quad k = 1, \dots, J; \; i = 1, \dots, n. \qquad (3)$$

The feasible set is $K^1 \equiv \{p \mid (1), (2), \text{ and } (3) \text{ hold}\}$. We assume here that:

$$\sum_{i=1}^{n} \mathrm{cap}^k_i \ge P^k \qquad (4)$$

for all classes $k$. In other words, we assume that the network economy has sufficient capacity to accommodate the population of each class. Hence, the feasible set $K^1$ is nonempty. Moreover, it is compact. The Capacitated System-Optimized (S-O) Problem The government (or governments), in the case of system optimization, wishes to maximize the total utility in the network economy, which reflects the societal welfare, subject to the constraints. The capacitated system-optimized (S-O) problem is:

$$\text{Maximize} \quad \hat{U}(p) \equiv \sum_{k=1}^{J} \sum_{i=1}^{n} \hat{U}^k_i(p), \qquad (5)$$

subject to constraints (1) through (3). We assume that the total utility functions for all the classes at all the locations are concave and continuously differentiable. Then, from classical results (cf. [24] and [33]), we know that the optimal solution, denoted by $p^{**}$, satisfies the variational inequality (VI) problem: determine $p^{**} \in K^1$, such that:

$$-\sum_{k=1}^{J} \sum_{i=1}^{n} \frac{\partial \hat{U}(p^{**})}{\partial p^k_i} \times (p^k_i - p^{k**}_i) \ge 0, \quad \forall p \in K^1. \qquad (6)$$

A solution $p^{**}$ to VI (6) is guaranteed to exist under our imposed assumptions, since the feasible set $K^1$ is compact and the total utility functions are continuously differentiable. Uniqueness of the solution $p^{**}$ then follows under the assumption that all the utility functions are strictly concave. We now present an alternative variational inequality to the one in (6), which we utilize to compute the S-O solution in numerical examples. Furthermore, the solution of the alternative VI allows us to determine the optimal Lagrange multipliers associated with the location class capacities in the S-O context. The Lagrange multipliers at the optimal solution provide valuable economic information. We define the feasible set $K^2 \equiv \{(p, \beta) \mid (1), (2) \text{ hold and } \beta \in R^{Jn}_+\}$. Alternative Variational Inequality Formulation of the Capacitated S-O Problem A solution to the S-O problem also satisfies the VI: determine $(p^{**}, \beta^{**}) \in K^2$ such that:

$$\sum_{k=1}^{J}\sum_{i=1}^{n}\left[-\frac{\partial \hat{U}(p^{**})}{\partial p^k_i} + \beta^{k**}_i\right] \times (p^k_i - p^{k**}_i) + \sum_{k=1}^{J}\sum_{i=1}^{n}\left[\mathrm{cap}^k_i - p^{k**}_i\right] \times (\beta^k_i - \beta^{k**}_i) \ge 0, \quad \forall (p, \beta) \in K^2. \qquad (7)$$

The above result follows from [4], page 287. Capacities have also been applied to links in various supply chain system-optimized problems and variational inequality formulations constructed; see, for example, [35] and [41]. The Capacitated User-Optimized (U-O) Problem We now introduce the capacitated user-optimized version of the above S-O model. The new model extends the classical one introduced in [31] to include capacities. The Capacitated Equilibrium Conditions Mathematically, a multiclass population vector $p^* \in K^1$ is said to be U-O or, equivalently, a capacitated equilibrium, if for each class $k$, $k = 1, \dots, J$, and all locations $i$, $i = 1, \dots, n$:
$$U^k_i(p^*) \begin{cases} \ge \lambda^k, & \text{if } p^{k*}_i = \mathrm{cap}^k_i, \\ = \lambda^k, & \text{if } 0 < p^{k*}_i < \mathrm{cap}^k_i, \\ \le \lambda^k, & \text{if } p^{k*}_i = 0. \end{cases} \qquad (8)$$

From (8), one can see that locations with no population of a class are those with the lowest utilities; those locations with a positive population of a class, with the population not at the capacity for the location and class, will have equalized utility for that class, higher than at the unpopulated locations of that class. Moreover, the equalized utility will be equal to an indicator $\lambda^k$. The indicator $\lambda^k$ is, actually, the Lagrange multiplier associated with constraint (1) for $k$, with its value taken at the equilibrium. Those locations with a class $k$ at its capacity have a utility greater than or equal to $\lambda^k$. A capacitated U-O solution $p^*$ satisfies the VI: determine $p^* \in K^1$ such that:

$$-\sum_{k=1}^{J}\sum_{i=1}^{n} U^k_i(p^*) \times (p^k_i - p^{k*}_i) \ge 0, \quad \forall p \in K^1. \qquad (9)$$

We now prove the equivalence of the solution to the capacitated equilibrium conditions (8) and the VI (9). Indeed, it is easy to see that, according to (8), for a fixed $k$ and $i$, the equilibrium conditions imply that:

$$-U^k_i(p^*) \times (p^k_i - p^{k*}_i) \ge -\lambda^k \times (p^k_i - p^{k*}_i). \qquad (10)$$

Observe that, if $p^{k*}_i = 0$, (10) holds true; if $p^{k*}_i = \mathrm{cap}^k_i$, then (10) also holds; and (10) also holds if $0 < p^{k*}_i < \mathrm{cap}^k_i$. Summing now (10) over all $k$ and all $i$ yields:

$$-\sum_{k=1}^{J}\sum_{i=1}^{n} U^k_i(p^*) \times (p^k_i - p^{k*}_i) \ge -\sum_{k=1}^{J} \lambda^k \sum_{i=1}^{n} (p^k_i - p^{k*}_i). \qquad (11)$$

But, because of (1), (11) simplifies to precisely (9). Furthermore, we now show that if $p^*$ satisfies VI (9), then $p^*$ also satisfies the capacitated equilibrium conditions (8). In (9), we set $p^l_i = p^{l*}_i$ for all $l \ne k$, which yields:

$$-\sum_{i=1}^{n} U^k_i(p^*) \times (p^k_i - p^{k*}_i) \ge 0. \qquad (12)$$

If there are two locations, say, $r$ and $s$, with positive populations not at their capacities, set, for a sufficiently small $\epsilon > 0$: $p^k_r = p^{k*}_r + \epsilon$, $p^k_s = p^{k*}_s - \epsilon$, and all other $p^k_i$s equal to $p^{k*}_i$. Clearly, such a population distribution is also feasible. Substitution into (12) yields, after algebraic simplification:

$$U^k_s(p^*) \ge U^k_r(p^*). \qquad (13)$$

Similarly, by constructing another feasible population pattern: $p^k_r = p^{k*}_r - \epsilon$, $p^k_s = p^{k*}_s + \epsilon$, with all other $p^k_i = p^{k*}_i$, substitution into (12) yields:

$$U^k_r(p^*) \ge U^k_s(p^*). \qquad (14)$$

Equations (13) and (14) can only hold true if $U^k_r(p^*) = U^k_s(p^*)$, which we call $\lambda^k$. Hence, the second condition in (8) has been established. On the other hand, suppose that $p^{k*}_i \ge 0$ for all $i$, but $p^{k*}_r > 0$ and $p^{k*}_s = 0$. For a sufficiently small $\epsilon > 0$, construct $p^k_r = p^{k*}_r - \epsilon$ and $p^k_s = p^{k*}_s + \epsilon$, with all other $p^k_i$s equal to $p^{k*}_i$, and substitute these values into (12). After algebraic simplification, we obtain $U^k_s(p^*) \le U^k_r(p^*)$, and the third condition in (8) is verified. Now, in order to verify that a solution to VI (9) also satisfies the top condition in (8), if for some location $r$, $p^{k*}_r = \mathrm{cap}^k_r$, then we construct a feasible distribution pattern such that $p^k_r = p^{k*}_r - \epsilon$ and $p^k_s = p^{k*}_s + \epsilon$, with $\epsilon > 0$ sufficiently small and all other $p^k_i = p^{k*}_i$. Substitution into (12), after algebraic simplification, yields $U^k_r(p^*) \ge U^k_s(p^*)$, and the conclusion follows. With the above arguments, we have shown that a capacitated equilibrium $p^*$ is equivalent to a solution of the VI (9). We now provide an alternative VI formulation of the capacitated equilibrium conditions. This result is immediate by making note of [31], demonstrating that the U-O human migration model (without capacities) is isomorphic to a traffic network equilibrium problem (cf. [14,11]) and, hence, in the case of capacities, also isomorphic to a traffic network equilibrium problem with side constraints (see [28]) and with special structure. Alternative Variational Inequality Formulation of the U-O Problem The U-O solution satisfies the variational inequality problem: determine $(p^*, \beta^*) \in K^2$ such that:

$$\sum_{k=1}^{J}\sum_{i=1}^{n}\left[-U^k_i(p^*) + \beta^{k*}_i\right] \times (p^k_i - p^{k*}_i) + \sum_{k=1}^{J}\sum_{i=1}^{n}\left[\mathrm{cap}^k_i - p^{k*}_i\right] \times (\beta^k_i - \beta^{k*}_i) \ge 0, \quad \forall (p, \beta) \in K^2. \qquad (15)$$

Illustrative Examples We first present an uncapacitated example for which we provide U-O and S-O solutions. We then add capacities to the locations and report the new U-O and S-O solutions. There is a single class in the network economy and three locations.
The total population is $P^1 = 120$, with given utility functions at the three locations. The user-optimized solution is: $p^{1*}_1 = 30.00$, $p^{1*}_2 = 40.00$, $p^{1*}_3 = 50.00$, yielding $\lambda^1 = 160$, since the utilities at the three populated locations are then equalized at 160. The S-O solution, on the other hand, differs. We now impose capacities at the three locations. Remark We now show how the optimal Lagrange multipliers can be utilized. For example, if one modifies the utility functions by reducing each of them by the value of the optimal Lagrange multiplier associated with the location and the class, then the same user-optimizing solution is obtained as the one for the problem with the corresponding capacities. Indeed, proceeding as above, we modify the utility functions as: $\tilde{U}^k_i(p) = U^k_i(p) - \beta^{k*}_i$. Similarly, one can modify the utility functions in the same manner, but by using the optimal Lagrange multipliers for the S-O problem, to obtain the same S-O solution as for the problem with the capacities. Hence, government decision-makers, in order to limit the population of certain (or all) classes at certain (or all) locations, can accomplish this through regulations corresponding to the capacities or by modifying the utility functions accordingly to yield the same result. Now, we describe how subsidies (which may be viewed as a positive intervention) can, once imposed, make the capacitated S-O solution also a capacitated U-O one. Subsidies to Guarantee the Capacitated S-O Solution Is Also a Capacitated U-O Solution In [37], a procedure was introduced for the calculation of subsidies that, once applied to the locations with a positive population of a class under S-O, guaranteed that migrants operating under the U-O behavioral principle would select locations that were also optimal from a societal standpoint; that is, they were system-optimized. Here we show that the same general construct is also applicable to capacitated problems of human migration. The procedure is as follows. We first solve for the capacitated system-optimized solution $p^{**}$ satisfying VI (7) or, equivalently, VI (6). For each class $k$, we denote those locations with a positive population by $k_1, \dots, k_{n_k}$, where $n_k$ is the number of locations in the network economy with a positive population of class $k$. We also introduce notation for the subsidies associated with the different locations for each class $k$, denoted by $(\text{subsidy})^k_{k_1}, (\text{subsidy})^k_{k_2}, \dots, (\text{subsidy})^k_{k_{n_k}}$. We then enumerate those locations in a list as follows:

$$U^k_{k_1}(p^{**}) + (\text{subsidy})^k_{k_1} = \mu^k,$$

and so on until:

$$U^k_{k_{n_k}}(p^{**}) + (\text{subsidy})^k_{k_{n_k}} = \mu^k. \qquad (16)$$

Note that $\mu^k$ is the incurred utility for class $k$ after the subsidies are distributed for the class at the locations with positive populations of that class. Also, we can number those locations for that class with zero populations of that class (if there are any) as follows:

$$U^k_{k_{n_k+1}}(p^{**}) + (\text{subsidy})^k_{k_{n_k+1}} \le \mu^k,$$

and so on until:

$$U^k_{k_n}(p^{**}) + (\text{subsidy})^k_{k_n} \le \mu^k. \qquad (17)$$

Expressions (16) and (17) reveal that the appropriate governmental authority chooses the $\mu^k$ for each class $k$, and then the subsidy for each location for that class is determined by straightforward subtraction. In order to select an appropriate $\mu^k$, as noted in [37] for the uncapacitated case, the $\mu^k$s can be set as $\max_{k_l;\, l=1,\dots,n_k} U^k_{k_l}(p^{**})$. All subsidies thus calculated are nonnegative and, furthermore, all migrants enjoy the maximal utility for each class at all the populated locations. Also, for the subsidies associated with locations with no populations of a class $k$ (see (17)), we set those subsidies to zero.
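As a concrete illustration of this subtraction, the sketch below computes the subsidies for one class from a given S-O population vector, with $\mu^k$ chosen as the maximal utility over the populated locations, as suggested above. A single class is assumed for brevity, and the linear utility forms and all numbers are placeholders, since the examples' actual utility functions are not reproduced here.

```python
# Minimal sketch of the subsidy rule: for populated locations,
# subsidy = mu - U_i(p**), with mu the maximal utility over populated
# locations at the S-O solution; unpopulated locations get zero.
# Single class; utilities and data below are hypothetical.

def subsidies(p_so, utilities, tol=1e-9):
    """p_so: S-O populations at the n locations (one class).
    utilities: list of callables U_i(p_so) -> individual utility."""
    u = [U(p_so) for U in utilities]
    populated = [i for i, pop in enumerate(p_so) if pop > tol]
    mu = max(u[i] for i in populated)   # incurred utility after subsidies
    return [mu - u[i] if i in populated else 0.0 for i in range(len(p_so))]

# Hypothetical linear utilities U_i(p) = a_i - b_i * p_i
a, b = [200.0, 180.0, 170.0], [1.0, 1.0, 1.0]
utils = [lambda p, i=i: a[i] - b[i] * p[i] for i in range(3)]
print(subsidies([50.0, 40.0, 30.0], utils))   # -> [0.0, 10.0, 10.0]
```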
Returning to the above simple example, we note that $\mu^1 = 180.00$, and the above subsidy formulae simplify to $(\text{subsidy})^1_i = 180.00 - U^1_i(p^{**})$ at the populated locations. Observe that an application of the above subsidies modifies the utility functions as follows: $\tilde{U}^1_i(p) = U^1_i(p) + (\text{subsidy})^1_i$. The above subsidies may be viewed as investments by government(s). As for the budgets, if an individual government experiences a budgetary shortfall, additional financing may be provided by a supra authority such as the World Bank, the United Nations, or, if in Europe, the European Union. Altemeyer-Bartscher et al. [1] have argued for closer cooperation among countries regarding migration crises and have also advocated for an economic approach to the distribution of the migrants. Here, we provide a quantitative approach with explicit formulae for implementation. As noted earlier, climate change as well as disasters may act as drivers of human migrations. Robinson et al. [49], for example, provide a machine learning approach to migration in the United States under sea level rise but emphasize that their approach is not yet ready for policy-making. They, like the authors of [5], consider sea level rise due to climate change and migration within a country, the United States. The latter authors observe that offering a subsidy (e.g., a partial buyout) can be effective if the government has a significantly lower discount rate than residents. However, they assume homogeneous residents, whereas we consider multiclass ones, and we also allow for multiple countries and not just regions within a country. For edited volumes on dynamics of disasters, see [26,27]. Once a disaster or disasters strike, one would modify the fixed populations of the various classes in the economy, as need be, along with the utility functions, and rerun the model(s), along with the subsidies. In the case of disasters, we can expect that populations will decrease, and so would the utility functions associated with locations that have been negatively impacted. The Algorithm and Numerical Examples We apply the Euler method of [16] for the solution of the capacitated network models of human migration. As discussed therein (see also [42]), the Euler method is induced by a general iterative scheme and was inspired by the theory of projected dynamical systems, whose set of stationary points coincides with the set of solutions to an appropriate variational inequality problem. The Euler method, in fact, can be viewed as a time discretization of the underlying continuous-time trajectories of the projected dynamical system until a solution is achieved. It has been applied to numerous network problems, including supply chain ones (see [34]). The Algorithm For purposes of standardizing the mechanism, we utilize notation similar to that in [37] and put variational inequality (7) into standard form [33]: determine $X^{**} \in \mathcal{K} \subset R^N$ such that:

$$\langle F(X^{**}), X - X^{**} \rangle \ge 0, \quad \forall X \in \mathcal{K}, \qquad (18)$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product in $N$-dimensional Euclidean space, $F(X)$ is a given continuous function with $F: \mathcal{K} \to R^N$, and $\mathcal{K}$ is a closed and convex set. We define the vector $X \equiv (p, \beta)$ and the vector $F(X)$ with elements $-\frac{\partial \hat{U}(p)}{\partial p^k_i} + \beta^k_i$ and $\mathrm{cap}^k_i - p^k_i$; $k = 1, \dots, J$; $i = 1, \dots, n$. We define the feasible set as $\mathcal{K} \equiv K^2$ and $N = 2Jn$. Thus, VI (7) can be put into the standard form (18) with $X^{**} = (p^{**}, \beta^{**})$. Similarly, VI (15) can also be put into standard form with $X$ and $\mathcal{K}$ as above and with the components of its $F(X)$ given by $-U^k_i(p) + \beta^k_i$ and $\mathrm{cap}^k_i - p^k_i$; $\forall k, \forall i$, and with $X^{**} = (p^*, \beta^*)$.
At iteration $\tau$, the statement of the Euler method is:

$$X^{\tau+1} = P_{\mathcal{K}}\left(X^{\tau} - a_{\tau} F(X^{\tau})\right), \qquad (19)$$

where $P_{\mathcal{K}}$ is the projection on the feasible set $\mathcal{K}$ and $F$ is the function that enters the variational inequality problem (18). Dupuis and Nagurney [16] proved that, for convergence of the general iterative scheme, which induces the Euler method, the sequence $\{a_\tau\}$ must satisfy $\sum_{\tau=0}^{\infty} a_\tau = \infty$, $a_\tau > 0$, and $a_\tau \to 0$ as $\tau \to \infty$. Specific conditions for convergence of the Euler method within many network-based models can be found in [42] and in [34] and the references therein. The Euler method nicely exploits the special network structure of the models, as depicted in Fig. 1, and allows for closed-form expressions at each iteration for the computation of the Lagrange multipliers associated with the capacity constraints. We solve the network subproblems of special structure, which are separable quadratic programming problems, using the exact equilibration algorithm (cf. [14,33]). This algorithm yields the exact solution at each iteration for the populations. Numerical Examples The algorithm was implemented in FORTRAN, and a Unix system at the University of Massachusetts Amherst was used for the computations. The series $\{a_\tau\}$ in the algorithm was set to $\{1, \frac{1}{2}, \frac{1}{2}, \frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \dots\}$, with the convergence tolerance equal to $10^{-5}$. In other words, the algorithm was considered to have converged when the absolute value of each of the computed population values for each class at two successive iterations was less than or equal to .00001. For continuity and cross-comparison purposes, the data for the uncapacitated examples were taken from [37], and to these we added capacities. For completeness, we report both the uncapacitated versions (solved in the paper above) and the capacitated versions, reported for the first time here. In our numerical examples, the network economy consists of two classes of migrants and five locations. Utility Function and Fixed Population Data The fixed populations in the network economy of the two classes are, respectively: $P^1 = 1{,}000.00$ and $P^2 = 2{,}000.00$. The utility functions and the total utility functions for the two classes are as given in [37]. We first recall the uncapacitated U-O and S-O solutions obtained in [37] and then report the capacitated solutions based on the new models constructed here. We also report the calculated subsidies in the more general capacitated case introduced in this paper. We provide two sets of examples. Numerical Example Set 1 The uncapacitated U-O solution for the numerical example with the above data was computed. One can see, from this example, that at all the locations with populations of a class at the capacity, there is an associated positive Lagrange multiplier. Also, it is clear that the capacitated U-O solution is quite distinct from the uncapacitated one. For example, all the locations have a positive population of class 2 under the capacitated solution. Moreover, in the uncapacitated case, location 5 is most attractive for class 1, whereas location 3 is most attractive for class 2. In contrast, in the capacitated case, location 3 is now most popular for class 1, whereas locations 1 and 4 are most popular (and at the capacities) for class 2. Under the uncapacitated S-O, location 5 is most attractive for class 1 and location 3 is for class 2. However, in the capacitated case, location 4 is best for class 1 and location 1 for class 2, with locations 3 through 5 also quite competitive.
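To make the Euler iteration (19) concrete, the following sketch applies it to a single-class capacitated U-O problem with a harmonic step sequence satisfying the stated conditions. Since the examples' utility functions are not reproduced here, linear utilities of the form $U_i(p) = a_i - b_i p_i$ are assumed, and a capped-simplex projection by bisection stands in for the exact equilibration algorithm mentioned above; all numbers are illustrative.

```python
# Sketch of the Euler iteration X^{tau+1} = P_K(X^tau - a_tau F(X^tau))
# for one class with F(X) = -U(p) and K = {p >= 0, p <= cap, sum p = P}.
# Utilities are assumed linear; all data are hypothetical.

def project_capped_simplex(x, cap, P, iters=100):
    """Euclidean projection onto {0 <= p_i <= cap_i, sum p_i = P}."""
    lo, hi = min(xi - ci for xi, ci in zip(x, cap)), max(x)
    for _ in range(iters):                       # bisect on the shift t
        t = 0.5 * (lo + hi)
        s = sum(min(max(xi - t, 0.0), ci) for xi, ci in zip(x, cap))
        lo, hi = (lo, t) if s < P else (t, hi)   # s is nonincreasing in t
    return [min(max(xi - t, 0.0), ci) for xi, ci in zip(x, cap)]

def euler_uo(a, b, cap, P, steps=2000):
    """U-O populations for assumed utilities U_i(p) = a_i - b_i * p_i."""
    n = len(a)
    p = project_capped_simplex([P / n] * n, cap, P)
    for tau in range(1, steps + 1):
        step = 1.0 / tau                  # a_tau > 0, a_tau -> 0, sum diverges
        utility = [a[i] - b[i] * p[i] for i in range(n)]   # equals -F(X)
        p = project_capped_simplex(
            [p[i] + step * utility[i] for i in range(n)], cap, P)
    return p

# Three locations, total population 120; at the computed equilibrium the
# utilities of the populated, non-capacitated locations are equalized.
print(euler_uo(a=[200.0, 180.0, 170.0], b=[1.0, 1.0, 1.0],
               cap=[60.0, 60.0, 60.0], P=120.0))  # ~[56.67, 36.67, 26.67]
```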
We now report the calculated subsidies, which are obtained using the procedure described in Sect. 3. We note that $\mu^1 = 3{,}474.21$ and $\mu^2 = 4{,}551.50$; these values represent the highest utility of each class at a location evaluated at the S-O solution, obtained for class 1 at location 5 and for class 2 at location 3. The calculated subsidies then follow by the subtraction formulae. Numerical Example Set 2 The data were as in the first numerical example set, except that now we considered a sizable decrease in the populations of each of the two classes due to a disaster. As argued in [37], this could occur in the form of a pandemic, that is, a healthcare disaster hitting the network economy. We note that the novel coronavirus outbreak that originated in Wuhan, China [51], was officially declared a pandemic by the World Health Organization on March 11, 2020 (cf. [7]). This coronavirus causes the disease known as Covid-19. The data in this example were as in Numerical Example Set 1, except that now we assumed that 50% of the population of each class had perished, that is, $P^1 = 500.00$ and $P^2 = 1{,}000.00$. The uncapacitated U-O solution for the numerical example with the above data is: Class 1 Uncapacitated U-O Population Distribution: $p^{1*}_1 = 0.00$, $p^{1*}_2 = 0.00$, $p^{1*}_3 = 0.00$, $p^{1*}_4 = 0.00$, $p^{1*}_5 = 500.00$. As noted in [37], in the S-O solution, one sees a greater "spreading out" of the classes among the locations than in the U-O solution. Class 2 Uncapacitated U-O Population Distribution: We kept the same capacities as in the first set, and the Euler method then yielded the capacitated U-O solution for the numerical example with the above data. We now report the subsidies that, when imposed, guarantee that the capacitated S-O solution obtained above for the second numerical example is also U-O. Here we had that $\mu^1 = 3{,}599.99$ and $\mu^2 = 4{,}575.18$. Summary and Conclusions Problems of human migration are issues of global concern and are presenting immense challenges to governments around the world. Multiple countries are dealing with different classes of migratory flows and the ensuing difficulties when faced with capacities at locations under their jurisdictions. Rigorous, appropriate policies may help to better reallocate migrants across suitable locations. Historically, many of the mathematical models of human migration have utilized a network formalism and have assumed user-optimizing behavior, that is, that migrants select locations that are best for themselves, as revealed through utility functions that depend on the population distributions among the locations of the different classes of migrants. However, such behavior may lead to costs to society and even reduced societal welfare. Hence, in this paper, we build upon the recent work of [37], who proposed both system-optimized and user-optimized multiclass migration network models and demonstrated how incentives, in the form of subsidies, when applied, guarantee that the system-optimized solution, which maximizes the total utility in the network economy, becomes, at the same time, user-optimizing. Migrants, thus, under such subsidies, and acting selfishly and independently, would select locations to migrate to and locate at that are optimal from the system perspective. In this paper, we propose a novel extension of that work, in the form of capacities at different locations associated with the classes of migrants. This brings a greater realism in capturing challenges faced by various governments who are dealing with refugees, asylum seekers, etc.
For each U-O and S-O model, we provide alternative variational inequality formulations of the governing equilibrium/optimality conditions. We then utilize the variational inequality formulations with Lagrange multipliers associated with the multiclass capacity constraints to gain deeper insights into appropriate policies. We show that the Lagrange multipliers can be utilized to modify the utility functions so that the capacities are made implicit. Moreover, we show how, through the use of appropriately constructed formulae for subsidies, once applied, the system-optimized solution becomes, at the same time, user-optimized. This provides a more positive approach to the redistribution of human migrants and enhances societal welfare. In addition, in this paper, we provide an effective computational procedure, which exploits the underlying special network structure of our models. The algorithm is implemented, and the solutions to a series of numerical examples are computed. We report the user-optimized and the system-optimized solutions, both uncapacitated and capacitated, along with the subsidies for the latter. Our theoretical framework can be applied in practice under different scenarios, along with sensitivity analysis, as, for example, in the case of disasters, when there are population changes and/or modifications to utility functions because of impacted infrastructure.
Quantitative NMR Interpretation without Reference
Quantitative NMR Interpretation without Reference As has been documented numerous times over the years, nuclear magnetic resonance (NMR) experiments are intrinsically quantitative. Still, quantitative NMR methods have not been widely adopted or largely introduced into pharmacopoeias. Here, we describe the quantitative interpretation of the 1D proton NMR experiment using only absolute signal intensities with the variation of common experimental parameters and their application. Introduction Since its inception, NMR has always been considered inherently quantitative [1][2][3][4][5][6] and it has been used in teaching [7]. As opposed to all other spectroscopic methods, the intensity of an NMR signal is directly proportional to the abundance of the nuclei causing it [6][7][8], which could even be in multiple molecules [9,10]. In the case of simple mixtures, NMR allows for simultaneous quantification of the constituents based on one sole reference standard. The standard does not have to share its identity with any of the analytes of interest. This key feature makes quantitative NMR an extremely versatile technique, and numerous applications for the quantitative analysis of pharmaceutical compounds have been proposed over the decades [6,[11][12][13][14][15][16][17][18][19][20][21][22][23]. The majority of the described experiments are 1D liquid state, but 2D and CPMAS experiments have also been proposed. Also, most of the proposed quantitative methods are based on proton NMR experiments, but other nuclei, such as 31P, have been used since the beginning. Whichever method is chosen, the quantification by NMR is always based on the comparison of the signal intensity of a reference material with the signal intensity of the analyte(s), as the intensities are proportional to the molar concentrations and the number of protons contributing to the signal. The reference signal can be provided by a reference material mixed with the analyte in one solution, internal referencing (IR) [18,25,29,30,32,[40][41][42][43][44], or by a separate solution, external reference (ER). Two methods for ER have been described; most commonly, two identical experiments are carried out, one time with the analyte and the other time with the reference material [6,45,46]. Alternatively, a solution with the reference is sealed into a capillary that is then added to the solution of the analyte [47]. Hybrid methods like ERETIC [48][49][50] and PULCON [46,[51][52][53][54][55][56] have also been implemented, which combine ER and IR by an intermediate step. All these methods work with the best reliability when the reference used has a molar concentration that is close to the analyte's concentration, thus requiring some previous knowledge about the analyte. The analysis of mixtures can also be restricted, as the quantification reliability might vary with the concentrations involved. Several experimental parameters shown in Table 1 have influence on the NMR spectrum, and some of them are flexible depending on the chosen method. Here, we demonstrate a new hybrid method, flexible absolute intensity-based quantification by NMR (FAINT-NMR), which can be applied to the quantification of compounds, even with largely varying concentrations, without previous knowledge. The work presented demonstrates that the restrictions described for external referencing methods [46] are not necessary.
The normalization of the absolute signal intensity by the receiver gain and the number of scans results in an Intensity Gain (IG) factor, based on which the quantification of every sample becomes possible, independent of the experimental parameters. As amplifiers are notoriously nonlinear, a manual linearization of the receiver gain values was performed, in order to verify whether this would improve the quantification quality further. Experimental The usability of FAINT-NMR was verified on a Bruker AVANCE III 400 MHz instrument equipped with a 5 mm BBO Prodigy probe and a sample changer, which was used with as much automation as possible for experiment acquisition, followed by partially automated interpretation. As the methods target small molecules, protons were chosen as the observed nucleus due to higher sensitivity and sufficient signal separation. Samples were weighed on a calibrated Mettler Toledo AG245 balance and diluted with 0.6 ml of DMSO-d6 into 5 mm NMR tubes. After determining values for the fixed parameter (D1), the influence of the flexible parameters (NS, RG) was determined. The longest T1 of the reference material was determined as 2.06 s by our own measurements in DMSO-d6, as 1.86 s in CDCl3 [57], and as 2.7 s in D2O [58]; thus, the inter-scan delay D1 was fixed at 16 s for all experiments. Simple proton experiments with a 90° pulse and 16 k observe points were obtained at 25°C, varying the number of scans and the receiver gain. Experiments with 2 to 64 scans (NS) and receiver gain (RG) from 25.4 to the highest RG value determined by the automatic receiver gain (RGA) function for the sample were carried out in duplicate. The RG values available on the equipment usually reach values above 4 k, which we could not observe for our samples in proton experiments. For proton experiments, we observe that the maximum RG value for low analyte concentrations is actually defined by the solvent "concentration" in the sample. When the analyte is in high concentration, it can decrease the RG value, as seen. Thus, the experiments use only a small slice of the possible RG values. All experiments were processed automatically (Fourier transformation and phase correction), followed by integration using intervals defined on one reference experiment. With this data set, a constant IG (Intensity Gain, I × NS^−1 × RG^−1 × [mmol]^−1) factor was determined that allows the calculation of the concentration directly from the absolute intensity of a signal. Finally, a linearization of the RG values was carried out, and the improvement of the back-calculated values was verified. Results FAINT-NMR was applied to a series of quinine samples (Figure 1), which was chosen due to its high molecular weight. Figure 2 shows the proton NMR spectrum of quinine in DMSO-d6 and the signals that were used for the quantification. Further signals were not used because they overlap or have complex multiplet patterns. In total, five samples diluted in 0.6 ml of DMSO-d6 were used, as shown in Table 2. In Figure 3, the absolute signal intensities of 13 signals of quinine were averaged, normalized against their concentration, number of protons, and number of scans, and scatter-plotted against their respective RG. These signals were chosen because of their lack of overlap and the small number of observed couplings. The signal-to-noise ratio of all signals was always above 200:1. The results in Figure 3 show that the per-scan signal intensity increment scatters around an average of 1650.
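To illustrate how such an IG factor translates an absolute integral into an analyte amount, a minimal back-calculation sketch is given below. The IG value is the one reported in the text, but the function, its arguments, and the example numbers are hypothetical, and the optional linearized RG argument merely stands in for the manual linearization discussed next.

```python
# Back-calculation sketch for the absolute-intensity approach: the
# analyte amount follows from a signal's absolute integral once it is
# normalized by scans, receiver gain, and proton count. The IG value is
# taken from the text; all sample numbers are illustrative.

IG = 1650.0  # intensity gain factor, I * NS^-1 * RG^-1 * [mmol]^-1 per proton

def amount_mmol(intensity, ns, rg, n_protons, rg_linearized=None):
    """Estimate the analyte amount (mmol) from one signal.

    If rg_linearized is given, it replaces the nominal RG to compensate
    for amplifier nonlinearity (manual linearization).
    """
    effective_rg = rg_linearized if rg_linearized is not None else rg
    return intensity / (IG * ns * effective_rg * n_protons)

# Hypothetical one-proton signal: absolute integral 4.2e4, 16 scans,
# nominal RG 101 -> roughly 0.016 mmol, i.e. ~26 mM in 0.6 ml of solvent
print(amount_mmol(4.2e4, ns=16, rg=101.0, n_protons=1))
```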
Based on this, an IG factor of 1650 was defined for all intensity-based quantifications shown here. Furthermore, these results were used to carry out a manual linearization of the RG values, to further improve the results. The original and linearized RG values are shown in Table 3. In Table 4, the five actual sample concentrations are compared to the back-calculated values (BC) and the values back-calculated using a linearized RG (BC-l). Figure 4 shows the linear regression graph of the values in Table 4. The linear regression equations in Table 5 clearly show that the linearization of the RG improves the results, as the slope of the equation is very close to the optimum value of 1.0. Conclusions So far, a large-scale application of qNMR has been restricted by experimental conditions. In the case of internal reference methods, difficulties might arise because of signal overlap or interaction of the reference with the sample. In the case of external referencing, the fixed experimental conditions usually restrict the working range of the method. The results presented here show that some experimental parameters, like RG and NS, can be varied largely without affecting the quality of the quantification result. The linearization of the RG values further improves the accuracy of the method. By lifting these restrictions, FAINT-NMR can facilitate quantification by NMR in general, including of trace amounts in samples, as long as well-isolated signals are observed. One possibility to achieve these isolated signals would be to combine Bayesian data analysis with FAINT-NMR, which would provide isolated signals and render integration limits unnecessary. Data Availability All data (NMR data as raw, processed, and integrated; spreadsheet with data interpretation) are available at https://doi.org/10.5281/zenodo.7221753. Conflicts of Interest The authors declare that there are no conflicts of interest.
Immunometabolism of Immune Cells in Mucosal Environment Drives Effector Responses against Mycobacterium tuberculosis
Immunometabolism of Immune Cells in Mucosal Environment Drives Effector Responses against Mycobacterium tuberculosis Tuberculosis remains a major threat to global public health, with more than 1.5 million deaths recorded in 2020. Improved interventions against tuberculosis are urgently needed, but there are still gaps in our knowledge of the host-pathogen interaction that need to be filled, especially at the site of infection. With a long history of infection in humans, Mycobacterium tuberculosis (Mtb) has evolved to be able to exploit the microenvironment of the infection site to survive and grow. Immune cells are not only reliant on immune signalling to mount an effective response to Mtb invasion but can also be orchestrated by their metabolic state. Cellular metabolism was often overlooked in the past, but growing evidence of its importance in the functions of immune cells suggests that it can no longer be ignored. This review aims to gain a better understanding of the mucosal immunometabolism of resident effector cells, such as alveolar macrophages and mucosal-associated invariant T cells (MAIT cells), in response to Mtb infection and how Mtb manipulates them for its survival and growth, which could address our knowledge gaps while opening up new questions, and potentially be applied for future vaccination and therapeutic strategies. Introduction Mortality associated with tuberculosis (TB), caused by Mycobacterium tuberculosis (Mtb), recorded a slight increase from a total of 1.4 million TB deaths in 2019 to 1.5 million in 2020 [1]. The rise in mortality may be due to the impact of the global COVID-19 pandemic, which reduced access to TB diagnosis and treatment [1,2]. This has further dented our progress in the WHO End TB Strategy towards the elimination of the global TB epidemic, which had already been shown to be lagging even before the COVID-19 pandemic [1]. These impacts are predicted to be worse in the coming years, as COVID-19 co-infection is highly likely, with the estimation that a quarter of the global population is latently infected with Mtb [3]. COVID-19 has been shown to impair the function of immune cells [4,5], which could have a similar effect that favours TB disease progression, as seen in other conditions such as human immunodeficiency virus (HIV) co-infection, initiation of anti-tumour necrosis factor therapy, and malnutrition [6,7]. Thus, improved efforts are urgently needed to significantly reduce the global TB burden. Even with great advances in the field of immunology in the past 30 years, there are still many gaps in our understanding of the entire mechanism of Mtb infection, which contributes to the current inefficiencies in controlling TB. In recent years, immunologists have started to integrate a metabolomic approach into their research, creating an extended field termed immunometabolism [8]. Research in TB immunometabolism is gathering pace, as an increasing number of studies in this field are published each year. In this review, we will discuss both early and chronic phases of the immune response to Mtb at the site of infection, and how metabolism influences those responses. With emerging evidence of a possible association of the gut-lung axis with pulmonary TB, factors from another site of the mucosal network will also be considered. Thus, this review will also provide a glimpse into the potential role of the gut-lung axis in exploring the mechanism of host immunity to TB.
Understanding the cellular interplay between immune responses and metabolism during Mtb infection will generate ideas for improved TB vaccination strategies and new targets for therapy. Immunometabolism during Early Phase of Mtb Infection Innate immune cells that will be discussed in this review include alveolar epithelial cells, macrophages, dendritic cells, neutrophils and mucosal-associated invariant T (MAIT) cells. An overview of the immunometabolism of these cells in response to Mtb infection is illustrated in Figure 1. Figure 1: Mtb is taken up by resident alveolar macrophages (M2) and interstitial dendritic cells (DC), which predominantly rely on oxidative phosphorylation (OXPHOS) metabolism at the basal state, without anti-microbial activities. Upon stimulation, the macrophage is activated, switching to aerobic glycolysis metabolism and a pro-inflammatory M1 phenotype, with the release of cytokines (e.g., IL-12) to activate more immune cells. Resident MAIT cells may also get activated upon Mtb infection, with enhanced glycolytic metabolism and an anti-microbial phenotype. Activated DCs migrate to draining lymph nodes to activate T cells. (Illustration created with BioRender.com, accessed on 19 June 2022.)
Alveolar Epithelial Cells Inhaled Mtb must first breach the alveolar barrier to infect its host. The alveolar lumen is lined mostly (90-95%) by type 1 alveolar epithelial cells (AECs), which primarily facilitate gas exchange in the lungs, and the remaining 5-10% is occupied by type 2 AECs that are responsible for secreting surfactants and restoring damaged type 1 AECs [9,10]. Recent findings suggest that Mtb can invade and replicate in type 2 AECs, which provide a favourable hiding place for Mtb from immune cells, as type 2 AECs are non-professional phagocytes [11]. This is reflected by the transcriptome of Mtb in type 2 AECs, which showed upregulation of genes for replication, cell wall synthesis, aerobic respiration, and virulence [12]. In addition, Mtb genes involved in alternative electron transfer and non-aerobic respiration, and genes encoding Universal Stress Protein and other hypoxia-induced genes, are downregulated, indicating a favourable condition for intracellular Mtb growth in AECs [12]. The direct impact of Mtb invasion on AEC metabolism is relatively unknown. Type 2 AECs are known to be the main producers of pulmonary surfactants, and they need to produce lipids even under metabolically unfavourable conditions [13]. This could be another reason why type 2 AECs are a good hiding place for Mtb. Altered surfactant functions could lead to rapid Mtb growth in both AECs and macrophages, as demonstrated recently [14]. Compromised lung health, as may be the case in smoking, may facilitate direct infection of these pulmonary surfactant producers by Mtb, although this remains to be proven. Macrophages Macrophages reside in all tissues in the human body, operating as the primary phagocytes [15]. They are termed according to their location, such as alveolar macrophages (AMs) in the alveolar compartment of the lungs and interstitial macrophages (IMs) in the periphery [16,17]. AMs are the primary immune cells facing Mtb infection at the entry point. The nature of the alveolar microenvironment, which is constantly exposed to commensals and environmental components in the inhaled air, leads to a more immunotolerant M2 phenotype of AMs at the basal state. This immunotolerant state is driven by transforming growth factor β (TGF-β) secreted by AECs and AMs, which can also induce regulatory signalling by naïve T cells [18,19], and it favours Mtb entry and survival. As long-lived and self-renewing cells, AMs are reliant on oxidative phosphorylation (OXPHOS) to support the demands of their phagocytic function. OXPHOS is mainly fuelled by fatty acid oxidation (FAO), as lipids are abundantly available in the lung compared to glucose. This lipid-rich environment is permissive to Mtb infection as the mycobacteria also utilise lipid as their main carbon source [20]. Xenophagy is a host-directed form of autophagy that controls intracellular pathogens, including Mtb; it is mediated by the binding of host ubiquitin to Mtb surface proteins and subsequent recognition by autophagy receptors [21]. Inhibition of FAO can also promote xenophagy, as demonstrated by Chandra et al., who showed that Mtb is unable to grow in macrophages when host FAO is blocked, either chemically by trimetazidine (a compound in clinical use) or genetically by deletion of the mitochondrial fatty acid transporter carnitine palmitoyltransferase 2 (CPT2) [22]. FAO blockade leads to the accumulation of mitochondrial reactive oxygen species, which promotes NADPH oxidase recruitment to the phagosomal membrane, resulting in the induction of xenophagy [22].
In response to Mtb infection, AMs shift their metabolism to mount the antimicrobial and pro-inflammatory responses (M1 phenotype). IMs, which predominantly exhibit the M1 phenotype, are also recruited to the site of infection. M1 macrophages show metabolic characteristics similar to those seen in cancer cells, termed the Warburg effect, such as enhanced aerobic glycolysis with the formation of lactate and decreased OXPHOS [23]. Aerobic glycolysis accelerates ATP production, though less efficiently, and provides biosynthetic capacity for activation of immune cells [23]. Gleeson et al. demonstrated that inhibition of the AM metabolic shift from OXPHOS to aerobic glycolysis leads to reduced pro-inflammatory IL-1β and transcription of prostaglandin-endoperoxide synthase 2 (PTGS2), and increased levels of anti-inflammatory IL-10, resulting in increased Mtb survival. They further showed that control of intracellular Mtb replication through induction of AM aerobic glycolysis is dependent on IL-1β signalling [24]. Mtb has been shown to evade host immunometabolic responses by limiting metabolic reprogramming of macrophages through induction of anti-inflammatory microRNA-21 (miR-21), which suppresses glycolysis and limits pro-inflammatory mediators such as IL-1β by targeting the phosphofructokinase muscle (PFK-M) isoform [25]. miR-21 in turn is targeted by interferon gamma (IFN-γ), produced by activated immune cells, to induce macrophage glycolysis to support pro-inflammatory activities [25]. In another study, Rahman et al. showed that Mtb infection in mouse peritoneal macrophages induced host hydrogen sulphide (H₂S) production, which leads to suppression of glycolysis. Cystathionine γ-lyase (CSE) is one of the enzymes responsible for H₂S synthesis, and Mtb infection in CSE-deficient peritoneal macrophages resulted in a two- to three-fold increase in glycolytic intermediates with a lower bacterial burden as compared to wild-type macrophages [26]. Dendritic Cells Dendritic cells (DCs) are the vital link between the innate and adaptive immune systems of the host in response to TB. DCs take up live Mtb or the bacterial antigen at the site of infection and travel to lymph nodes to activate adaptive immune cells. In addition to their task as antigen-presenting cells, activated DCs also produce a significant amount of pro-inflammatory cytokines such as IL-6, TNF-α, and IL-1β [27]. Neutrophils Neutrophils are the most abundant type of white blood cells and among the earliest cells to arrive at the site of Mtb infection, but their specific role remains less well understood [27]. Neutrophils are efficient phagocytes and recruit other inflammatory monocytes from circulation through the secretion of cytokines and chemokines after phagocytosis of mycobacteria [29,30]. Neutrophils can also contribute to the exacerbation of inflammation and disease progression, as demonstrated by increased secretion of neutrophil matrix metalloproteinases (MMP)-8 and MMP-9 and neutrophil elastase [31]. These events are mediated by a hypoxic environment induced by Mtb infection and dependent on the neutrophil HIF-1α pathway [31]. The role of these proteases in protection against TB is unknown, but elastase has been demonstrated to have anti-mycobacterial activity [32]. The role of neutrophils in response to Mtb infection needs further clarification to modulate their function and reduce disease severity.
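The rate-versus-yield trade-off behind the Warburg-type switch described above for M1 macrophages can be made concrete with a back-of-the-envelope calculation. The numbers used here are textbook approximations (roughly 2 ATP per glucose from glycolysis alone versus about 30 when glucose is fully oxidised), not values from the studies cited in this review:

```python
# Rough comparison of ATP yield per glucose: aerobic glycolysis vs. full
# oxidation (OXPHOS). Yields are textbook approximations.
GLYCOLYSIS_ATP = 2    # net ATP per glucose from glycolysis alone
OXPHOS_ATP = 30       # ~30-32 ATP per glucose when fully oxidised

# How much faster must glucose flux run for glycolysis to match the
# ATP output of full oxidation?
flux_ratio = OXPHOS_ATP / GLYCOLYSIS_ATP
print(f"glycolysis: {GLYCOLYSIS_ATP} ATP/glucose, OXPHOS: ~{OXPHOS_ATP} ATP/glucose")
print(f"-> glycolytic flux must be ~{flux_ratio:.0f}x higher to match ATP output")
# Activated M1 macrophages accept this inefficiency because glycolysis is
# fast and feeds biosynthetic intermediates for effector functions.
```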
MAIT Cells Mucosal-associated invariant T (MAIT) cells are the most abundant innate-like T cell population in humans, comprising up to ~5% of the total T cell population [33]. MAIT cells are characterised by the expression of the conserved T cell receptor (TCR) α-chain Vα7.2 (TRAV1-2) in humans with oligoclonal Vβ chain usage, which responds to microbially derived non-peptide antigens, namely vitamin B metabolite intermediates presented by the MHC class I-related protein MR1 [34]. Activation of the MAIT cell through its TCR leads to either the expression of pro-inflammatory cytokines such as TNF-α, IFN-γ, and IL-17, or the release of cytotoxic and pro-inflammatory perforin and granzyme B [33]. Thus, MAIT cells are capable of mounting an antimicrobial response and killing infected cells upon activation. In humans, the highest frequency of MAIT cells is found in blood and liver, but they are also present in mucosal barriers such as the lung and intestine [35]. Upon Mtb infection at the mucosal site, MAIT cells have been shown to be enriched, as demonstrated by Wong et al. in bronchoalveolar lavage samples of active TB patients [36]. In contrast, the number of circulating MAIT cells was found to be generally reduced in active TB patients, with improved functionality but not frequency of MAIT cells after 10 weeks of TB treatment [37]. These findings suggest a role played by MAIT cells in response to Mtb infection that needs to be explored. The mechanism of MAIT cell activation during Mtb infection is poorly defined. Studies by Vorkas et al. showed that priming MAIT cells with the synthetic MR1 ligand, 5-OP-RU, leads to enhanced MAIT cell activation and expansion [38]. However, these MAIT cells could not stop Mtb infection and growth, suggesting that MAIT cell priming via MR1 alone is not sufficient to control Mtb infection [38]. Thus, MAIT cells may need additional signals for their complete activation to mount an effective immune response, which needs further investigation. Metabolic properties of MAIT cells have only started to be investigated in recent years, and little is known to date, especially in TB disease. By integrating gene expression and functional data, Zinser et al. showed that MAIT cells are metabolically quiescent in the resting state, like naïve T cells and central memory T cells, and rapidly enhance their glycolytic activity after stimulation [39]. Tissue-resident MAIT cells may rely on OXPHOS as they adapt to low tissue glucose concentrations [40]. In addition to OXPHOS, IL-17-producing bronchoalveolar lavage MAIT cells from children with community-acquired pneumonia have been shown to be enriched for genes encoding glycolysis and lipid efflux [41]. It is tempting to speculate that activated MAIT cells could also play a role in dampening Mtb growth via increasing their glycolytic activities. Granuloma Formation As the infection progresses, infected immune cells migrate into the lung interstitial tissue, recruiting more innate immune cells as well as activated adaptive immune cells, such as T cells and B cells, to the infection site [20,42,43]. This focal interplay between Mtb and immune cells leads to granuloma formation. Innate immune cells as discussed above, including natural killer (NK) cells [43], are involved in granuloma formation, together with immune cells activated and differentiated in the later stages of infection (Figure 2).
In addition to M1/M2 polarization, activated macrophages could also differentiate into epithelioid cells, foam cells, or they can fuse to form multinucleated giant cells [44]. Foam cells, or foamy macrophages, and T cells are among the most studied types of immune cells regarding immunometabolism, thus only these cells will be elaborated on further in the next section. Figure 2. TB immunometabolism in the granuloma. The granuloma develops as TB disease progresses; it contains Mtb but also helps Mtb to persist in the host. The core of the granuloma is highly hypoxic and inflammatory, mainly maintained by glycolysis, to kill Mtb but causes damage to host tissues. Moving to the granuloma periphery, the environment becomes less inflammatory with increased tissue repair functions but favourable to Mtb survival. Adapted from "Granuloma", by BioRender.com (2022). Retrieved from https://app.biorender.com/biorender-templates, accessed on 19 June 2022. Foamy Macrophages Alteration of the metabolic profile of Mtb-infected macrophages leads to their transformation into foamy macrophages, which contain high levels of lipid droplets [45,46].
As discussed earlier, macrophages switch to an M1 phenotype upon Mtb infection, with enhanced glycolysis that supports pro-inflammatory responses. However, Mtb can manipulate excessive glycolysis using bacterial factors such as ESAT-6 to elevate lipid accumulation in the macrophage to favour mycobacterial growth [47]. The exact mechanism used by Mtb to enhance the macrophages' uptake of lipid droplets is currently unclear, but a role for host peroxisome proliferator-activated receptor gamma (PPARγ) and testicular receptor 4 (TR4) has been demonstrated [48]. The PPARγ pathway is upregulated by Mtb infection, and both PPARγ and TR4 induce the level of the macrophage's scavenger receptor CD36, which takes up exogenous lipid [48]. In turn, the host responds to this by altering lipid metabolism to produce lipids in forms that are protective against Mtb, such as the host bioactive lipids prostaglandin E2 and leukotriene B4. This metabolic reprogramming is driven by IFN-γ through HIF-1α [49]. T Cells T cell-mediated immunity is widely regarded as the most essential adaptive immune response against Mtb infection, but less is known about how T cells modulate their metabolism during the response to TB. Increased aerobic glycolysis is associated with T cell activation by T cell receptor ligation and binding of co-stimulatory molecules, which induce an anabolic program that increases biomass for proliferation [50]. Subsequently, distinct metabolic programs differentiate T cells into lineages that determine their function. For example, Th1, Th2 and Th17 cells rely heavily on glycolysis to support their functions, expressing a high surface level of the glucose transporter Glut1 [51]. In contrast, regulatory T (Treg) cells express low levels of Glut1 and show increased lipid oxidation through the AMP-activated protein kinase pathway [51]. CD8 T cells also play an important role in response to Mtb infection by direct killing of infected host cells. However, findings from human and animal studies have shown a lack of memory CD8 T cells even after successful treatment, indicating that the development of antigen-experienced CD8 T cells is disrupted during Mtb infection [52]. As Mtb infection persists, CD8 T cells develop bioenergetic deficiencies with a significant reduction in mitochondrial function and increased expression of inhibitory receptors such as CTLA-4 [52]. Modulation of Granuloma for Mtb Dormancy As illustrated in Figure 2, the granuloma infected by Mtb can be divided into two distinct compartments: the central core area within the granuloma is a hypoxic and pro-inflammatory environment with anti-microbial activity and reactive oxygen species to eliminate Mtb, whereas the peripheral area is associated with an anti-inflammatory response [53]. HIF-1α is known to directly promote the inflammatory mediator IL-1β, which is involved in activating glycolytic enzymes [54]. Contrasting levels of HIF-1α expression were observed when both areas were compared, high at the core and low at the periphery [55], which indicates that the immune cells rely on glycolysis for their pro-inflammatory activities in the core area and utilise OXPHOS for their activities in the peripheral area of the granuloma [55]. However, reduced Warburg effect-associated gene expression was observed at the core of the granuloma, which partly explains Mtb survival and persistence [55].
With this observation, it is speculated that Mtb can modulate the host defence mechanism to dampen the Warburg effect, resulting in a less efficient anti-mycobacterial response [23]. In addition, it is known that Mtb can modulate its metabolism to become dormant in the host cell, with a slowed or complete shutdown of replication, and in this state Mtb is more resistant to anti-mycobacterial agents [56]. Active immune control of Mtb, which may be detectable by the tuberculin skin test (TST) and IFN-γ release assay (IGRA), and persistence of Mtb lead to latent TB infection (LTBI) [6]. LTBI poses a great challenge in our efforts to eliminate TB due to the under-detection of LTBI cases and the unwillingness of people with LTBI to adhere to a lengthy treatment. LTBI treatment has recently improved from a 9-month isoniazid regimen to shorter and less frequent regimens, such as a 3-month regimen of weekly isoniazid in combination with rifapentine [57,58]. However, concerns regarding the side effects of these drug regimens remain to be addressed, especially hepatotoxicity and other adverse events in patients with co-existing medical conditions such as those requiring hemodialysis [59,60]. Thus, new strategies for LTBI treatment and prevention of active TB disease are urgently needed. With great advancements in recent 'omics' technologies, our understanding of Mtb dormancy and its subsequent resuscitation has improved significantly [61], which can help us in designing novel TB vaccines and therapies. Modulation of Immunometabolism as Host-Directed Therapy for TB Patients and Vaccination Strategy Modulation of immunometabolism to favour antimicrobial activity by immune cells, especially during the early phases of Mtb infection, can potentially halt TB progression. The following are a few examples of potential agents to modulate metabolism in favour of immunity against TB. Some iron chelators have been shown to modulate cellular metabolism through the regulation of HIF-1α [62]. The iron chelator deferoxamine (DFX) can promote the expression of key glycolytic enzymes in Mtb-infected primary human monocyte-derived macrophages and human AMs and enhance innate immune function by inducing IL-1β in human macrophages during early infection with Mtb and upon stimulation with lipopolysaccharide (LPS) [62]. Suberanilohydroxamic acid (SAHA), an approved histone deacetylase inhibitor (HDACi), can enhance the pro-inflammatory function of human macrophages by promoting an early metabolic switch to glycolysis [63]. SAHA-treated Mtb-infected macrophages have been shown to enhance T helper cell responses but have no effect on cytotoxic T cells [63]. Interestingly, metformin-treated type 2 diabetes patients have a lower risk of Mtb infection, progression from infection to TB disease, TB mortality, and TB recurrence [64]. Expansion of a population of memory-like, antigen-inexperienced CD8+ CXCR3+ T cells was observed after metformin treatment, with increased (i) mitochondrial mass, OXPHOS, and FAO; (ii) survival capacity; and (iii) anti-mycobacterial properties [64]. CD8 T cell dysfunction associated with chronic Mtb infection has also been shown to be reversed after treatment with metformin. Metformin treatment also enhances immunogenicity and protective efficacy against Mtb challenge in BCG-vaccinated mice and guinea pigs [64]. These findings support metformin as a candidate for TB host-directed therapy and as a TB vaccine adjunct.
A Possible Gut-Lung Axis in TB Protection Looking beyond the pulmonary area as the site of contact with Mtb, there are reports that gut-associated infections and dysbiosis could influence the immune responses in the lungs during Mtb infection. Although there are currently no reports on immunometabolism, specifically on pathogenesis and dysbiosis in relation to TB, the possibility of gut immunity affecting Mtb infection in the lungs suggests that this field should be explored. Here we will discuss helminth co-infection and dysbiosis of the gut microbiota as evidence of a possible role of the gut-lung axis in defence against TB. The global burden of parasitic helminth infection is thought to exceed that of Mtb infection, with an estimated two billion humans infected, in areas that largely overlap with TB endemic countries [65]. Helminth co-infection could enhance Mtb infection and disease progression, as both infections drive contrasting immune responses in the host. Helminth infection skews the host immune system towards a Th2 response and impairs the Th1 response needed to overcome Mtb infection [66]. Anti-helminthic treatment in patients with latent TB has been shown to reverse this condition with a restored Th1 response [67], supporting the influence of helminth co-infection in TB patients. In another study, anti-helminthic-treated asymptomatic helminth carriers showed superior immunogenicity to BCG vaccination compared to untreated carriers [68]. This suggests that intestinal infection could also influence the efficacy of TB vaccines. The modulation of macrophage immunometabolism upon helminth infection has been comprehensively reviewed recently, with consistent findings of M2 macrophage polarization and a metabolic shift predominantly to OXPHOS, lipid oxidation, and amino acid metabolism [69], which may also influence the immune response in the lung during co-infection with Mtb. Growing evidence of the influence of gut microbiota on the outcome of Mtb infection further highlights the importance of the gut-lung axis in protection against TB. In a mouse model, dysbiosis of the gut microbiota by broad-spectrum antibiotics has been shown to enhance Mtb colonization in the lung [70]. Reduction of MAIT cell populations with reduced production of IL-17A was observed in the lungs of antibiotic-treated mice after one week of Mtb infection, which likely contributed to the enhanced Mtb colonization [70]. In other words, a gut microbiota-dependent depletion of MAIT cells may weaken the host's early immune response to Mtb infection, suggesting a gut-lung axis that may influence the outcome of Mtb growth. It would be interesting to explore whether mucosal vaccination against Mtb would provide a better outcome of protection, not only because of the delivery through the portal of entry of Mtb per se, but also because of the ability to harness the gut's innate and adaptive immunity to directly influence immunity in the lungs. In line with this idea, our research group is exploring an oral live attenuated Vibrio cholerae strain as a candidate vaccine against cholera, with the ability to serve as a delivery vector for a DNA vaccine carrying heterologous TB antigens. Vibrio cholerae can colonize the surface of the epithelial cells of the small intestine, despite competition with the normal microbiota, due to its ability to adjust its gene expression in response to stress, involving the Type VI secretion system, quorum sensing, reactive oxygen species/pH responses, and bioactive metabolites [71]. Leung et al.
reported that in the acute phase of cholera, circulating MAIT cells were activated, reflecting their involvement in the innate immune response to cholera. The proportion of activated MAIT cells was also found to increase as the disease progressed, which was positively correlated with an increase in anti-LPS IgA and IgG, but not IgM, suggesting a role played by MAIT cells in LPS antibody responses, potentially with a specific role in antibody class switching [72]. The induction of Mtb-specific immune cells, including MAIT cells in the gut, may have exciting implications for the control of TB that deserve to be explored. Summary The interaction between Mtb and humans has persisted for more than 70,000 years [73], which reflects the complex interplay between them. With recent advancements in technology, our ability to fill the knowledge gaps has been greatly enhanced. We have started to appreciate that the host immune response to Mtb infection in the lung can be influenced by the immune cells' metabolic activity, as observed during the acute and chronic phases of TB infection. Anti-inflammatory immune cells and cells in the basal state are predominantly maintained by OXPHOS. Once the cells are activated and differentiated into a pro-inflammatory phenotype upon infection by Mtb, their metabolic activity switches to aerobic glycolysis, which produces energy more rapidly, albeit less efficiently, and generates components for their effector functions. Yet Mtb can remain dormant within the host with the potential of reactivation when the opportunity arises. The complex nature of immunometabolic communication at the site of infection, as discussed above, suggests that mucosal delivery of future TB therapeutics and vaccines should be explored more extensively. Growing evidence of cross-talk between lung and gastrointestinal immunity strengthens the idea of developing an oral vaccine candidate for TB, a widely preferred route of vaccine delivery for mass vaccination.
Analysis Of The Method Of Predictive Control Applicable To Active Magnetic Suspension Systems Of Aircraft Engines
Abstract Conventional controllers are usually synthesized on the basis of already known parameters associated with the model developed for the object to be controlled. However, sometimes it proves extremely difficult or even infeasible to find out these parameters, in particular when they are subject to change during the service life. If so, much more sophisticated control methods have to be applied, e.g. the method of predictive control. Thus, the paper deals with the application of the predictive control approach to follow-up tracking in an active magnetic suspension; the mathematical and simulation models for such a control system are disclosed, together with preliminary results from simulation investigations of the control system in question. Introduction Active magnetic suspension (AZM) systems benefit from the phenomenon of magnetic levitation, when the force F_m of magnetic attraction between an electromagnet (solenoid) and a ferromagnetic core compensates the force of gravity F_g (Fig. 1). That balance serves as the operating principle of active magnetic bearings, i.e. devices that use attractive and repulsive forces to enable stable levitation of the rotor. Use of such devices eliminates friction between mating kinematic pairs and enables continuous monitoring and diagnostics of the technical condition demonstrated by such systems through measurements of vibrations and forces. Aircraft engines use magnetic bearings as components of support systems for engine shafts. However, magnetic suspension systems are inherently unstable due to their structure; their reliable operation therefore requires an appropriate control system to ensure stability and to achieve the required control quality. Conventional controllers for such systems are usually synthesized on the basis of already known parameters associated with the model developed for the object to be controlled. The more accurately these parameters are known, the more dependable the control provided for the object. However, sometimes it proves extremely difficult or even infeasible to find out these parameters, in particular with regard to real objects, since their parameters are subject to change during the equipment lifetime. Fig. 1. Arrangement of the system designed to control the position of a rotor within an active magnetic suspension system [3] The fast development of digital technologies has enabled the implementation of advanced control methods, such as adaptive (follow-up), predictive, sliding-mode [8], or robust control. These algorithms grow out of time-domain analysis and synthesis of controllers using state variables, which makes it possible to obtain optimized controllers. These algorithms take explicit account of measurement uncertainties and remove the need to know accurately all properties of the controlled object. Adaptive control allows the controller parameters to be changed according to embedded rules and control laws, based on continuous identification of the controlled object's parameters. On the other hand, the robust control method enables the design of controllers that take models of uncertainty into account [8]. A model of active magnetic suspensions Operation of an active magnetic suspension assumes that the rotor is disposed within an air gap at the working point, i.e.
at an equal distance from both poles. That distance is one of the most important parameters of the suspension and determines other parameters of the system, such as the current stiffness k_i and the displacement stiffness k_s. These parameters define the attracting force F_m of an electromagnet (solenoid). Displacement of a rotor supported on an active homopolar electromagnetic bearing within an air gap is described by the following differential equation [1,2]: m·ẍ(t) = k_s·x(t) + k_i·i(t) + F_z(t), (1) where: m - mass of the rotor; i - control current; x - displacement of the rotor from the working position; F_z - external disturbance force. Application of the Laplace transform to equation (1) leads to the transfer function defined by relationship (2): X(s)·(m·s² − k_s) = k_i·I(s) + F_z(s). (2) This equation makes it possible to derive operator transfer functions that define the behaviour of the system for input signals such as the control current, G(s) = X(s)/I(s) = k_i/(m·s² − k_s), (3) or external forces. In the case of the control system analysed in this paper, the input signal is understood as variations of the control current I(s), whilst the rotor displacement X(s) in the air gap serves as the output of the controlled unit. That unit was then reproduced in the Matlab-Simulink software environment according to the model shown in Fig. 2. It is this control loop that shall be supervised by a predictive control unit synthesized for that purpose. Analysis of predictive control solutions Methods of predictive control can be applied to objects with structures that are inherently unstable, non-linear and non-stationary. The method is intended to find a sequence of future values of the control signal pursuant to a reference model. The developed algorithm enables stable and undisturbed operation of even non-linear and unstable objects, with no necessity to consider that property of the controlled object in the synthesis of the control system [5,7]. There are plenty of control methods based on predictive algorithms. The most popular, which have found the broadest application, are algorithms such as Extended Horizon Adaptive Control (EHAC), Extended Prediction Self-Adaptive Control (EPSAC) and Generalized Predictive Control (GPC). The second group of algorithms comprises solutions such as Model Algorithmic Control (MAC), which assumes simple adaptive control with a model of the step response; Model Predictive Control (MPC), a model-based approach with a model of the impulse response; and Dynamic Matrix Control (DMC), an algorithm for predictive control with a model of the step response. The methodology for the application of predictive algorithms is illustrated in Fig. 3. The algorithms are used to predict future values of the output signal ŷ(i + j), j = 1, …, H − 1, over the prediction horizon (H), as well as values of the input control signal u(i + j), j = 1, …, L − 1, over the control horizon (L), at every moment of time i, so as to fulfil the control objective (e.g. minimization of the control deviation). Fig. 3 shows that increments of the control signal are assumed to be zero beyond the control horizon, i.e. Δu(i + j) = 0 for j ≥ L. For correct operation of a predictive control unit the assumption H ≥ L must be fulfilled. Fig. 3. Predictive control with a displaceable (receding) horizon; the control horizon L and the prediction horizon H are marked.
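The open-loop instability noted above can be checked with a minimal numerical sketch of the linearised plant in equation (1). The parameter values below are illustrative assumptions, not data from the investigated suspension:

```python
import numpy as np

# Linearised rigid-rotor AMB model around the working point:
#   x_dot = v,   m*v_dot = ks*x + ki*i + Fz   (cf. equation (1)).
# All numeric values are illustrative placeholders.
m = 1.0       # rotor mass [kg]
ks = 2.0e5    # displacement stiffness [N/m]
ki = 40.0     # current stiffness [N/A]

A = np.array([[0.0, 1.0],
              [ks / m, 0.0]])   # state matrix for state [x, v]
B = np.array([[0.0],
              [ki / m]])        # input matrix (control current)

eigvals = np.linalg.eigvals(A)
print("open-loop poles:", eigvals)
# One pole sits at +sqrt(ks/m) in the right half-plane, confirming that
# the suspension is open-loop unstable and must be stabilised by feedback.
```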
The class of algorithms that are suitable for predictive control and use models of objects in the form of step and impulse responses comprises the Model Predictive Control (MPC) algorithm with the model of the impulse response. The algorithm assumes the model of the control system in the form of equation (5), in which the coefficients of the impulse-response polynomial V, i.e. the subsequent values v_0, v_1, …, v_n of the impulse response of the B/A component (A and B being the polynomials that describe the controlled object), are computed by multiplying the discrete transfer function of the object by the Z transform of a Dirac impulse [4,7]. The goal of the algorithm is to minimize the discrepancy between the anticipated waveform of the output signal and the reference waveform according to criterion (6), with consideration of the weight factor imposed on the deviation between the control output and the u(i − 1) value. The predictive algorithm that employs the model of an impulse response can be expressed in a general notation by means of the equations for RST controllers [7]. The K_m term that occurs in the equation for the T polynomial stands for a reference trajectory, defined by equation (11). The parameters q_j that appear in equations (8)-(10) are components of the q vector defined by relationship (12), whilst the coefficients of the polynomial representing the impulse response are arranged in accordance with equation (13). The terms of the Q matrix denoted h_i stand for the parameters of the step response of the object. The synthesis process for a predictive control unit is based on a model of the controlled object (4) and assumes determination of polynomials for both the impulse response and the step response of that object. Then the parameters of the V polynomial are used to construct the Q matrix, whose last column is made up of the parameters h_i of the step response. The MPC controller can be synthesized on the basis of the coefficients of the V polynomial as well as the terms of the q vector derived from the Q matrix. A simulation model developed in the Matlab-Simulink software environment for the predictive control algorithm The Matlab-Simulink software environment with the Model Predictive Control Toolbox [9] package was used to carry out simulation studies of the predictive control algorithm for the active magnetic suspension system. Fig. 4 shows a simulation model of such a control loop. The model of the magnetic suspension was developed on the grounds of the equation for the rotor displacement in the air gap (2) of a homopolar magnetic bearing with permanent magnets. The simulation process made it possible to evaluate how the input parameters of the intended predictive controller, namely the control horizon and the prediction horizon, affect the step characteristics of the system. Fig. 4. Simulation model of the control loop: MPC controller and model of the active magnetic suspension. Fig. 5 presents characteristic curves obtained for a step-function input signal applied to the closed-loop control system. The input signal applied to the system was a discrete displacement of the suspension rotor within the air gap by a distance of 10⁻⁴ m. The controller was synthesized for selected values of the control horizon parameter (L = 1, L = 2, L = 5) and for a fixed value of the prediction horizon parameter (H = 10).
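A minimal numerical sketch of how such a receding-horizon controller can be assembled from step-response samples is given below. It uses a generic unconstrained DMC-style law and a toy first-order plant; the plant, horizons, and weight are our assumptions for illustration, and the sketch does not reproduce the paper's exact RST/impulse-response equations (5)-(13):

```python
import numpy as np

# Generic step-response-based predictive law (DMC-style sketch).
a, b = 0.9, 0.1                  # toy plant: y(k+1) = a*y(k) + b*u(k)
H, L, lam = 10, 2, 0.01          # prediction horizon, control horizon, weight

# Step-response coefficients h_t (output at time t after a unit input step).
h = np.array([b * (1 - a**t) / (1 - a) for t in range(1, H + 1)])

# Dynamic matrix G relating future control increments to predicted outputs.
G = np.zeros((H, L))
for i in range(H):
    for j in range(min(i + 1, L)):
        G[i, j] = h[i - j]

# Least-squares minimisation of ||w - y_pred||^2 + lam*||du||^2
# gives du = K (w - y_free); only du[0] is applied (receding horizon).
K = np.linalg.solve(G.T @ G + lam * np.eye(L), G.T)

y, u, w = 0.0, 0.0, 1.0e-4       # set-point: 1e-4 m step, as in the tests above
for k in range(50):
    # Free response: plant evolution over H steps if u were held constant.
    y_free = np.array([a**t * y + h[t - 1] * u for t in range(1, H + 1)])
    du = K @ (w * np.ones(H) - y_free)
    u += du[0]
    y = a * y + b * u            # plant update
print(f"output after 50 steps: {y:.3e} (set-point {w:.1e})")
```

Sweeping L (with H fixed) or H (with L fixed) in this sketch reproduces qualitatively the horizon trade-offs reported below.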
The highest value of control overshoot was revealed for the system with the largest control horizon (L = 5). Both for that system and for the system with a control horizon of L = 2, the system response is of an oscillating nature, with overshoots of 6% and 1%, respectively. Under the foregoing conditions for the closed-loop control systems (L = 1, L = 2, L = 5), the settling time of the system equals 0.045 s, 0.012 s and 0.015 s, respectively. For each closed-loop control system subjected to investigation, the steady-state control deviation equalled zero [6]. Fig. 5. Characteristics of the closed-loop control system for the active magnetic suspension (AZM) system with a predictive controller, plotted as a response to a step-function input signal for various values of the control horizon L. Fig. 6 depicts the waveforms of control signals produced by the closed-loop control system for the active magnetic suspension (AZM) system with a predictive controller. The highest value of the control signal corresponds to the control loop with the longest control horizon (L = 5). Fig. 6. Graphs of control signals produced by the closed-loop control system for the active magnetic suspension (AZM) system with a predictive controller, plotted for various control horizons. Fig. 7 shows time waveforms for the closed-loop control system for the active magnetic suspension (AZM) system with a predictive controller. The input signal for the system was produced as a discrete (step-function) displacement of the rotor inside the air gap by a distance of 10⁻⁴ m. The controller was synthesized for selected values of the prediction horizon parameter (H = 5, H = 10, H = 15) and for a fixed value of the control horizon parameter (L = 1). The shortest settling time, equal to 0.005 s, combined with the highest overshoot of 5%, was found for the control loop with the shortest prediction horizon (H = 5); its response to the step function is of an oscillating nature. The step responses recorded for the two other systems demonstrate an inertial behaviour with zero overshoot and settling times of 0.04 s and 0.18 s for prediction horizons of H = 10 and H = 15, respectively. The steady-state control deviation was zero for each closed-loop control system subjected to investigation. Fig. 8 depicts the waveforms of control signals produced by the closed-loop control system for the active magnetic suspension (AZM) system with a predictive controller, plotted for various values of the prediction horizon H. The highest value of the control signal corresponds to the control loop with the shortest prediction horizon (H = 5). Fig. 8. Characteristics of control signals produced by the closed-loop control system for the active magnetic suspension (AZM) system with a predictive controller, plotted for various prediction horizon parameters. Recapitulations and conclusions Algorithms of predictive control are rated among the most sophisticated control methods that benefit from identification of a parameterized or non-parameterized model of the controlled object. The first group of predictive algorithms comprises methods that use the transfer function of the object, such as the Extended Horizon Adaptive Control (EHAC), Extended Prediction Self-Adaptive Control (EPSAC) and Generalized Predictive Control (GPC) algorithms.
The second group of algorithms is made up of methods that employ the characteristic of the object's response to a step-function input or to a series of Dirac impulses, such as Model Predictive Control (MPC); Model Algorithmic Control (MAC), which assumes simple predictive control with a model of the step-function response; and Dynamic Matrix Control (DMC), an algorithm for predictive control with a model of the step response. The dynamic properties of each predictive controller depend on the predefined control horizon and prediction horizon parameters, the weighting functions for all signals, and the constraints imposed on both the control signals and the output signals. Simulation investigations of the algorithms developed by the authors for predictive control made it possible to find out how the control horizon and prediction horizon affect the parameters that define the quality of control systems. Extension of the control horizon for the active magnetic suspension (AZM) system reduces the settling time of the system but increases the overshoot. On the other hand, extension of the prediction horizon entails prolongation of the settling time but mitigates the oscillatory behaviour. All control algorithms that were implemented for the closed-loop control system with a predictive controller and then subjected to investigation showed that the output deviation under steady conditions was zero. The foregoing results of the completed analyses and simulations make up the preliminary step towards experimental verification. Awareness of the physical phenomena that occur in an active magnetic suspension (AZM) system, as well as familiarity with methods for synthesizing the parameters of a predictive controller for unstable structures, enables determination of typical working parameters for the system in question. The subsequent phase of studies assumes that monitoring of the variations demonstrated by control signals within the established control horizon and prediction horizon shall enable judgments to be made on the technical condition of an active magnetic suspension (AZM) system. The achieved results shall be adaptable to more sophisticated technical units, such as magnetic bearing systems for shafts of turbojet engines.
Sequestration and Destruction of Rinderpest Virus–Containing Material 10 Years after Eradication
In 2021, the world marked 10 years free from rinderpest. The United Nations Food and Agriculture Organization and World Organisation for Animal Health have since made great strides in consolidating, sequencing, and destroying stocks of rinderpest virus–containing material, currently kept by only 14 known institutions. This progress must continue. Summaries of studies, with laboratory location and reference:
- This study used a replication-defective vesicular stomatitis virus–based pseudotyping system to measure neutralizing antibodies against RPV and PPR. This system does not require the use of live infectious viral materials and thus mitigates the risk of accidental exposure. Analysis revealed that individuals vaccinated for RPV are also protected against PPR infection. Individuals that were vaccinated against PPR had lower antibody titers than those who were naturally infected, and in individuals infected with either PPR or RPV neutralizing responses were highest against the homologous virus. This indicates that retrospective analysis of serologic samples can be used to determine the pathogen to which an infected individual was exposed.
- Because of the request to destroy all RPV samples following eradication, a new diagnostic method must be developed that does not rely on RPV as a positive-control material. Newcastle disease virus with small RNA inserts based on RPV or PPR was used as a positive control for extraction, reverse transcription, and amplification. (Reference 7.)
Enzyme activity:
- The V proteins of RPV, measles virus, PPR, and canine distemper virus were compared to determine which had the ability to block type 1 and type 2 interferon action. Analysis revealed that the V proteins of each morbillivirus could block type 1 interferon action, but they had varying abilities to block type 2 interferon action, which correlated with the co-precipitation of STAT1 with the V protein. Further analysis revealed that all morbillivirus V proteins form a complex with Tyk2 and Jak2, two interferon-receptor-associated kinases. (Pirbright, UK*; reference 8.)
- The enzymatic role of the RPV V protein was investigated to determine how it blocks interferon signaling. Analysis revealed that the morbillivirus V proteins have at least three functions that inhibit interferon signaling: the binding of STAT1 (also seen with the P and W proteins), which enables the blockade of type 2 interferon signaling; the binding of STAT2, which requires the Vs domain and part of the W domain; and the association with interferon-receptor-associated kinases, which also requires the Vs domain. (Pirbright, United Kingdom*.)
- Partially purified recombinant RNA polymerase complex of RPV was used to show in vitro methylation of capped mRNA. Analysis revealed that the catalytic module for cap 0 methyltransferase activity is located in domain 3 of the L protein, whereas domain 2 stabilizes the enzyme and increases catalytic efficiency. This provides support for the modular nature of the RPV L protein. (Bangalore, India§; reference 10.)
- E. coli was used to express the RTPase domain of RPV to investigate the RTPase activity of the L protein. Analysis revealed that the L protein exhibits RTPase and NTPase activities and that it has a two-metal mechanism similar to the RTPase domain of other viruses. (Bangalore, India§; reference 11.)
- E. coli was used to express the RTPase domain of RPV to investigate its enzymatic abilities.
Analysis revealed that the L protein of RPV has RNA-dependent RNA polymerase, RTPase, guanylyltransferase (GTase), and methyltransferase activity, in addition to pyrophosphatase (PPase) and tripolyphosphatase (PPPase) activity. (Bangalore, India§; reference 12.)
Genome sequencing:
- The B and L strains of RPV were sequenced to investigate host range and virulence factors. The stock B strain is pathogenic to cattle, whereas the L strain is pathogenic to rabbits but not to cattle and buffalo. Analysis revealed that the differences in pathogenicity to cattle are caused by nucleotide/amino acid substitutions in the P/C/V genes. (Tokyo, Japan*; reference 13.)
- The LATC06 strain of RPV was sequenced and compared to other rinderpest viral strains. Analysis revealed that the functions of the LATC06 (Korea) and LA (Japan) strains of RPV are similar with regard to immunodominance in humoral immunity. (Anyang, Korea; reference 14.)
- The genomes of three strains of RPV, L72, LA77, and LA96, were sequenced and analyzed to investigate their genetic variability. Analysis revealed that genetic variability occurs within the vaccine virus strain and that amino acid sequence similarity between Fusan and other strains was lowest within the P, C, and V proteins. This indicates that the difference in pathogenicity of different strains may be due to the V protein. (Anyang, Korea; reference 15.)
- The LA-AKO strain of the RPV vaccine was sequenced. Analysis revealed that the bulk vaccine comprises mixed viral populations with minor mutations at the nucleotide level. (Ibaraki, Japan*,‡; reference 16.)
- In preparation for the destruction of all RPV samples, the full genome sequence was determined for each distinct RPV sample housed at Pirbright. Analysis revealed that the African isolates form a single disparate clade as opposed to two separate clades, and that the clade containing viruses developed in Korea was more similar to African viruses than to Asian viruses. (Reference 17.)
* Conducted in association with a current FAO-WOAH designated RHF.
† Presented research conducted before 2011.
‡ Supported by the FAO-WOAH Joint Advisory Committee for Rinderpest.
§ Rinderpest virus–containing material (RVCM) was not used in these studies.
Weyl-Titchmarsh Theory for Sturm-Liouville Operators with Distributional Potentials
We systematically develop Weyl-Titchmarsh theory for singular differential operators on arbitrary intervals $(a,b) \subseteq \mathbb{R}$ associated with rather general differential expressions of the type \[ \tau f = \frac{1}{r} \big(- \big(p[f' + s f]\big)' + s p[f' + s f] + qf\big), \] where the coefficients $p$, $q$, $r$, $s$ are real-valued and Lebesgue measurable on $(a,b)$, with $p\neq 0$, $r>0$ a.e.\ on $(a,b)$, and $p^{-1}$, $q$, $r$, $s \in L^1_{\text{loc}}((a,b); dx)$, and $f$ is supposed to satisfy \[ f \in AC_{\text{loc}}((a,b)), \; p[f' + s f] \in AC_{\text{loc}}((a,b)). \] In particular, this setup implies that $\tau$ permits a distributional potential coefficient, including potentials in $H^{-1}_{\text{loc}}((a,b))$. We study maximal and minimal Sturm-Liouville operators, all self-adjoint restrictions of the maximal operator $T_{\text{max}}$, or equivalently, all self-adjoint extensions of the minimal operator $T_{\text{min}}$, all self-adjoint boundary conditions (separated and coupled ones), and describe the resolvent of any self-adjoint extension of $T_{\text{min}}$. In addition, we characterize the principal object of this paper, the singular Weyl-Titchmarsh-Kodaira $m$-function corresponding to any self-adjoint extension with separated boundary conditions and derive the corresponding spectral transformation, including a characterization of spectral multiplicities and minimal supports of standard subsets of the spectrum. We also deal with principal solutions and characterize the Friedrichs extension of $T_{\text{min}}$. Finally, in the special case where $\tau$ is regular, we characterize the Krein-von Neumann extension of $T_{\text{min}}$ and also characterize all boundary conditions that lead to positivity preserving, equivalently, improving, resolvents (and hence semigroups). Introduction The prime motivation behind this paper is to develop Weyl-Titchmarsh theory for singular Sturm-Liouville operators on an arbitrary interval (a, b) ⊆ R associated with rather general differential expressions of the type

τf = (1/r)(−(p[f′ + sf])′ + sp[f′ + sf] + qf). (1.1)

Here the coefficients p, q, r, s are real-valued and Lebesgue measurable on (a, b), with p ≠ 0, r > 0 a.e. on (a, b), and p^{-1}, q, r, s ∈ L¹_loc((a, b); dx), and f is supposed to satisfy

f ∈ AC_loc((a, b)), p[f′ + sf] ∈ AC_loc((a, b)), (1.2)

with AC_loc((a, b)) denoting the set of locally absolutely continuous functions on (a, b). (The expression f^{[1]} = p[f′ + sf] will subsequently be called the first quasi-derivative of f.) One notes that in the general case (1.1), the differential expression is formally given by

τf = (1/r)(−(pf′)′ + (q + s²p − (sp)′)f). (1.3)

Moreover, in the special case s ≡ 0 this approach reduces to the standard one, that is, one obtains

τf = (1/r)(−(pf′)′ + qf). (1.4)

In particular, in the case p = r = 1 our approach is sufficiently general to include arbitrary distributional potential coefficients from H^{-1}_loc((a, b)) = W^{-1,2}_loc((a, b)) (as the term s² can be absorbed in q), and thus even in this special case our setup is slightly more general than the approach pioneered by Savchuk and Shkalikov [140], who defined the differential expression as

τf = −(f^{[1]})′ − sf^{[1]} − s²f, f^{[1]} = f′ − sf, f, f^{[1]} ∈ AC_loc((a, b)). (1.5)

One observes that in this case q can be absorbed in s by virtue of the transformation s → s − ∫^x q. Their approach requires the additional condition s² ∈ L¹_loc((a, b); dx).
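To make the formal expression (1.3) concrete, the following computation expands the quasi-derivative form in the sense of distributions (a verification sketch; the closing delta-interaction example uses coefficients of our own choosing and is not taken from the original):
\begin{align*}
r\,\tau f &= -\big(p[f'+sf]\big)' + s\,p[f'+sf] + qf\\
&= -(pf')' - (spf)' + spf' + s^2pf + qf\\
&= -(pf')' - (sp)'f - spf' + spf' + s^2pf + qf\\
&= -(pf')' + \big(q + s^2 p - (sp)'\big)f,
\end{align*}
so the effective potential carries the distributional term $-(sp)'$. As an illustration: taking $p = r = 1$, $s(x) = -\alpha\,\theta(x - x_0)$, and $q(x) = -\alpha^2\,\theta(x - x_0)$, with $\theta$ the Heaviside step function, yields $q + s^2 - s' = \alpha\,\delta(x - x_0)$, i.e., formally a $\delta$ interaction of strength $\alpha$ at $x_0$.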
Moreover, since there are distributions in H^{-1}_loc((a, b)) which are not measures, the operators discussed here are not a special case of Sturm-Liouville operators with measure-valued coefficients as discussed, for instance, in [41]. We emphasize that similar differential expressions have already been studied by Bennewitz and Everitt [21] in 1983 (see also [42, Sect. I.2]). While some of their discussion is more general, they restrict their considerations to compact intervals and focus on the special case of a left-definite setting. An extremely thorough and systematic investigation, including even and odd higher-order operators defined in terms of appropriate quasi-derivatives, and in the general case of matrix-valued coefficients (including distributional potential coefficients in the context of Schrödinger-type operators) was presented by Weidmann [157] in 1987. In fact, the general approach in [21] and [157] draws on earlier discussions of quasi-derivatives in Shin [148]-[150], Naimark [127, Ch. V], and Zettl [158]. Still, it appears that the distributional coefficients treated in [21] did not catch on and subsequent authors referring to this paper mostly focused on the various left and right-definite aspects developed therein. Similarly, it seems likely that the extraordinary generality exerted by Weidmann [157] in his treatment of higher-order differential operators obscured the fact that he already dealt with distributional potential coefficients back in 1987. In addition, the case of point interactions as particular distributional potential coefficients in Schrödinger operators received enormous attention, with contributions too numerous to be mentioned here in detail. Hence, we only refer to the standard monographs by Albeverio, Gesztesy, Høegh-Krohn, and Holden [2] and Albeverio and Kurasov [5], and some of the more recent developments in Albeverio, Kostenko, and Malamud [4], Kostenko and Malamud [101], [102]. We also mention the case of discontinuous Schrödinger operators originally considered by Hald [69], motivated by the inverse problem for the torsional modes of the earth. For recent developments in this direction we refer to Shahriari, Jodayree Akbarfam, and Teschl [147]. It should be mentioned that some of the attraction in connection with distributional potential coefficients in the Schrödinger operator clearly stems from the low-regularity investigations of solutions of the Korteweg-de Vries (KdV) equation. We mention, for instance, Buckmaster and Koch [24], Grudsky and Rybkin [68], Kappeler and Möhr [90], Kappeler and Topalov [93], [94], and Rybkin [137]. In contrast, Weyl-Titchmarsh theory in the presence of distributional potential coefficients, especially in connection with (1.1) (resp., (2.2)), has not yet been developed in the literature, and it is precisely the purpose of this paper to accomplish just that under the full generality of Hypothesis 2.1. Applications to inverse spectral theory will be given in [39]. It remains to briefly describe the content of this paper: Section 2 develops the basics of Sturm-Liouville equations under our general hypotheses on p, q, r, s, including the Lagrange identity and unique solvability of initial value problems. Maximal and minimal Sturm-Liouville operators are introduced in Section 3, and Weyl's alternative is described in Section 4.
Self-adjoint restrictions of the maximal operator, or equivalently, self-adjoint extensions of the minimal operator, are the principal subject of Section 5, and all self-adjoint boundary conditions (separated and coupled ones) are described in Section 6. The resolvent of all self-adjoint extensions and some of their spectral properties are discussed in Section 7. The singular Weyl-Titchmarsh-Kodaira m-function corresponding to any self-adjoint extension with separated boundary conditions is introduced and studied in Section 8, and the corresponding spectral transformation is derived in Section 9. Classical spectral multiplicity results for Schrödinger operators due to Kac [85], [86] (see also Gilbert [59] and Simon [151]) are extended to our general situation in Section 10. Section 11 deals with various applications of the abstract theory developed in this paper. More specifically, we prove a simple analogue of the classic Sturm separation theorem on the separation of zeros of two real-valued solutions to the distributional Sturm-Liouville equation (τ − λ)u = 0, λ ∈ R, and show the existence of principal solutions under certain sign-definiteness assumptions on the coefficient p near an endpoint of the basic interval (a, b). When τ − λ is non-oscillatory at an endpoint, we present a sufficient criterion on r and p for τ to be in the limit-point case at that endpoint. This condition dates back to Hartman [70] (in the special case p = r = 1, s = 0), and was subsequently studied by Rellich [133] (in the case s = 0). This section concludes with a detailed characterization of the Friedrichs extension of T_0 in terms of (non-)principal solutions, closely following a seminal paper by Kalf [88] (also in the case s = 0). In Section 12 we characterize the Krein-von Neumann self-adjoint extension of T_min by explicitly determining the boundary conditions associated to it. In our final Section 13, we derive the quadratic form associated to each self-adjoint extension of T_min, assuming τ is regular on (a, b). We then combine this with the Beurling-Deny criterion to present a characterization of all positivity preserving resolvents (and hence semigroups) associated with self-adjoint extensions of T_min in the regular case. In particular, this result confirms that the Krein-von Neumann extension does not generate a positivity preserving resolvent or semigroup. We actually go a step further and prove that the notions of positivity preserving and positivity improving are equivalent in the regular case. We also mention that an entirely different approach to Schrödinger operators (assumed to be bounded from below) with matrix-valued distributional potentials, based on supersymmetric considerations, has been developed simultaneously in [38]. Finally, we briefly summarize some of the notation used in this paper: The Hilbert spaces used in this paper are typically of the form L²((a, b); r(x)dx) with scalar product denoted by ⟨·, ·⟩_r (linear in the first factor), associated norm ‖·‖_{2,r}, and corresponding identity operator denoted by I_r. Moreover, L²_c((a, b); r(x)dx) denotes the space of square integrable functions with compact support. In addition, we use the Hilbert space L²(R; dµ) for an appropriate Borel measure µ on R, with scalar product and norm abbreviated by ⟨·, ·⟩_µ and ‖·‖_{2,µ}, respectively. Next, let T be a linear operator mapping (a subspace of) a Hilbert space into another, with dom(T), ran(T), and ker(T) denoting the domain, range, and kernel (i.e., null space) of T.
The closure of a closable operator S is denoted by S̄. The spectrum, essential spectrum, point spectrum, discrete spectrum, absolutely continuous spectrum, and resolvent set of a closed linear operator in the underlying Hilbert space will be denoted by σ(·), σ_ess(·), σ_p(·), σ_d(·), σ_ac(·), and ρ(·), respectively. The Banach spaces of linear bounded, compact, and Hilbert-Schmidt operators in a separable complex Hilbert space are denoted by B(·), B_∞(·), and B_2(·), respectively. The orthogonal complement of a subspace S of the Hilbert space H will be denoted by S^⊥. The symbol SL_2(R) will be used to denote the special linear group of order two over R, that is, the set of all 2 × 2 matrices with real entries and determinant equal to one. Finally, we will use the abbreviations "iff" for "if and only if", "a.e." for "almost everywhere", and "supp" for the support of functions throughout this paper. The Basics on Sturm-Liouville Equations. In this section we provide the basics of Sturm-Liouville equations with distributional potential coefficients. Throughout this paper we make the following set of assumptions: Hypothesis 2.1. Suppose (a, b) ⊆ R and assume that p, q, r, s are Lebesgue measurable on (a, b) with p^{-1}, q, r, s ∈ L¹_loc((a, b); dx) and real-valued a.e. on (a, b), with r > 0 and p ≠ 0 a.e. on (a, b). Theorem 2.2. For each g ∈ L¹_loc((a, b); r(x)dx), c ∈ (a, b), d₁, d₂ ∈ C, and z ∈ C, there is a unique solution f ∈ D_τ of the initial value problem (τ − z)f = g with f(c) = d₁ and f^{[1]}(c) = d₂. If, in addition, g, d₁, d₂, and z are real-valued, then the solution f is real-valued. For each f, g ∈ D_τ we define the modified Wronski determinant W(f, g)(x) = f(x)g^{[1]}(x) − f^{[1]}(x)g(x), x ∈ (a, b). (2.5) The Wronskian is locally absolutely continuous with derivative W(f, g)′ = (g(τf) − f(τg)) r a.e. on (a, b). (2.6) Indeed, this is a consequence of the following Lagrange identity, which is readily proved using integration by parts. Lemma 2.3. For each f, g ∈ D_τ and α, β ∈ (a, b) we have ∫_α^β (g(τf) − f(τg)) r dx = W(f, g)(β) − W(f, g)(α). (2.7) As a consequence, one verifies that the Wronskian W(u₁, u₂) of two solutions u₁, u₂ ∈ D_τ of (τ − z)u = 0 is constant. Furthermore, W(u₁, u₂) ≠ 0 if and only if u₁, u₂ are linearly independent. In fact, the Wronskian of two linearly dependent solutions obviously vanishes. Conversely, W(u₁, u₂) = 0 means that for c ∈ (a, b) there is a K ∈ C such that Ku₁(c) = u₂(c) and Ku₁^{[1]}(c) = u₂^{[1]}(c), (2.8) where we assume, without loss of generality, that u₁ is a nontrivial solution (i.e., not vanishing identically). Now by uniqueness of solutions this implies the linear dependence of u₁ and u₂. The next result (Lemma 2.4) is the variation-of-constants formula: if u₁, u₂ is a fundamental system of (τ − z)u = 0 with W(u₁, u₂) = 1, then every solution f of (τ − z)f = g is of the form f = u₁ (c₁ − ∫_c^x u₂ g r dt) + u₂ (c₂ + ∫_c^x u₁ g r dt) for suitable constants c₁, c₂ ∈ C. We omit the straightforward calculations underlying the proof of Lemma 2.4. Another important identity for the Wronskian is the well-known Plücker identity: for all f₁, f₂, f₃, f₄ ∈ D_τ, W(f₁, f₂)W(f₃, f₄) + W(f₁, f₃)W(f₄, f₂) + W(f₁, f₄)W(f₂, f₃) = 0. If τ is regular at an endpoint, then solutions and their quasi-derivatives have finite limits there (Theorem 2.6). Concerning the dependence on the spectral parameter, the solutions f_z of the initial value problems of Theorem 2.2 (with g and the initial data fixed) are entire functions in z; more precisely, they are of order 1/2 and satisfy |f_z(x)| + |f_z^{[1]}(x)| ≤ C e^{B|z|^{1/2}}, z ∈ C, locally uniformly in x, for some constants C, B ∈ R (Theorem 2.7). Proof. The analyticity part follows from the corresponding result for the equivalent system. For the remaining part, first note that because of Lemma 2.4 it suffices to consider the case when g vanishes identically. Setting up, for each z ∈ C, a suitable comparison function v_z, an integration by parts together with an elementary estimate yields an upper bound for v_z in terms of ω = |p^{-1}| + |q| + |r| + |s|. Now an application of the Gronwall lemma yields the asserted bound. If, in addition to the assumptions of Theorem 2.7, τ is regular at a and g is integrable near a, then the limits f_z(a) and f_z^{[1]}(a) are entire functions of order 1/2 and the bound in Theorem 2.7 holds for all x ∈ [a, β]. Indeed, this follows since the entire functions f_z(x) and f_z^{[1]}(x), x ∈ (a, c), are locally bounded, uniformly in x ∈ (a, c). Moreover, in this case the assertions of Theorem 2.7 are valid even if we take c = a and/or α = a.
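For orientation, the following minimal sketch records the differential expression governed by these hypotheses, on our reading of (1.1) and (2.2); the precise normalization of the quasi-derivative is an assumption on our part, chosen to be consistent with the notation f^{[1]} used throughout:

\[
f^{[1]} := p\,\bigl(f' + s f\bigr), \qquad
\tau f := \frac{1}{r}\Bigl(-\bigl(f^{[1]}\bigr)' + s\,f^{[1]} + q f\Bigr),
\qquad f \in \mathfrak{D}_{\tau} := \bigl\{\, g \in AC_{\mathrm{loc}}((a,b)) : g^{[1]} \in AC_{\mathrm{loc}}((a,b)) \,\bigr\}.
\]

With this convention a distributional potential never appears undifferentiated: it is absorbed into the quasi-derivative through the coefficient s, which is why only the local integrability assumptions of Hypothesis 2.1 are needed. A direct computation with this τ reproduces the derivative formula (2.6) and hence the Lagrange identity (2.7).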
Sturm-Liouville Operators. In this section, we will introduce operators associated with our differential expression τ in the Hilbert space L²((a, b); r(x)dx) with scalar product ⟨f, g⟩_r = ∫_a^b f(x) g(x)* r(x) dx. The maximal operator T_max associated with τ is given by T_max f = τf, f ∈ dom(T_max) = {g ∈ L²((a, b); r(x)dx) : g ∈ D_τ, τg ∈ L²((a, b); r(x)dx)}. In order to obtain a symmetric operator, we restrict the maximal operator T_max to functions with compact support by T_0 f = τf, f ∈ dom(T_0) = {g ∈ dom(T_max) : g has compact support in (a, b)}. (3.3) Since τ is a real differential expression, the operators T_0 and T_max are real with respect to the natural conjugation in L²((a, b); r(x)dx). We say some measurable function f lies in L²((a, b); r(x)dx) near a (resp., near b) if f lies in L²((a, c); r(x)dx) (resp., in L²((c, b); r(x)dx)) for each c ∈ (a, b). Furthermore, we say some f ∈ D_τ lies in dom(T_max) near a (resp., near b) if f and τf both lie in L²((a, b); r(x)dx) near a (resp., near b). One readily verifies that some f ∈ D_τ lies in dom(T_max) near a (resp., b) if and only if f* lies in dom(T_max) near a (resp., b). Lemma 3.1. Suppose τ is regular at a. If f lies in dom(T_max) near a, then the limits f(a) = lim_{x↓a} f(x) and f^{[1]}(a) = lim_{x↓a} f^{[1]}(x) exist and are finite. A similar result holds at b. Proof. Under the assumptions of the lemma, τf lies in L²((a, b); r(x)dx) near a, and since r(x)dx is a finite measure near a we have τf ∈ L¹((a, c); r(x)dx) for each c ∈ (a, b). Hence, the claim follows from Theorem 2.6. The following lemma is a consequence of the Lagrange identity. Lemma 3.2. If f and g lie in dom(T_max) near a, then the limit W(f, g*)(a) = lim_{α↓a} W(f, g*)(α) exists and is finite. A similar result holds at the endpoint b. If f, g ∈ dom(T_max), then ⟨T_max f, g⟩_r − ⟨f, T_max g⟩_r = W(f, g*)(b) − W(f, g*)(a). Proof. If f and g lie in dom(T_max) near a, the limit α ↓ a of the left-hand side in equation (2.7) exists. Hence, the limit in the claim exists as well. Now the remaining part follows by taking the limits α ↓ a and β ↑ b. If τ is regular at a and f and g lie in dom(T_max) near a, then we clearly have W(f, g*)(a) = f(a)g^{[1]}(a)* − f^{[1]}(a)g(a)*. In order to determine the adjoint of T_0 we will rely on the following lemma (see, e.g., [153, Lemma 9.3]). Lemma 3.3. Let V be a vector space and let ℓ, ℓ₁, ℓ₂ be linear functionals on V with ker(ℓ₁) ∩ ker(ℓ₂) ⊆ ker(ℓ). Then there are constants c₁, c₂ ∈ C such that ℓ = c₁ℓ₁ + c₂ℓ₂. Theorem 3.4. The operator T_0 is densely defined and T_0* = T_max. Proof. Regarding the adjoint T_0* as a linear relation, from Lemma 3.2 one immediately sees that the graph of T_max is contained in T_0*. Indeed, for each f ∈ dom(T_max) and g ∈ dom(T_0) we infer ⟨τf, g⟩_r = ⟨f, τg⟩_r, since W(f, g*) has compact support. Conversely, let f₁, f₂ ∈ L²((a, b); r(x)dx) such that ⟨f₁, T_0 g⟩_r = ⟨f₂, g⟩_r for each g ∈ dom(T_0), and let f be a solution of τf = f₂. In order to prove that f₁ − f is a solution of τu = 0, we will invoke Lemma 3.3. Therefore, consider the linear functionals ℓ(g) = ∫_a^b (f₁ − f)* g r dx and ℓ_j(g) = ∫_a^b u_j g r dx, j = 1, 2, on L²_c((a, b); r(x)dx), where u_j are two solutions of τu = 0 with W(u₁, u₂) = 1 and L²_c((a, b); r(x)dx) is the space of square integrable functions with compact support. For these functionals we have ker(ℓ₁) ∩ ker(ℓ₂) ⊆ ker(ℓ). Indeed, let g ∈ ker(ℓ₁) ∩ ker(ℓ₂); then the function u = u₂ ∫_a^x u₁ g r dt − u₁ ∫_a^x u₂ g r dt is a solution of τu = g by Lemma 2.4 and has compact support since g lies in the kernels of ℓ₁ and ℓ₂; hence u ∈ dom(T_0). Then the Lagrange identity and the assumed property of the pair (f₁, f₂) yield ℓ(g) = ⟨f₁ − f, τu⟩_r − ⟨τ(f₁ − f), u⟩_r = 0, (3.14) hence g = τu ∈ ker(ℓ). Now applying Lemma 3.3, there are c₁, c₂ ∈ C such that f₁ − f is a linear combination of u₁ and u₂; hence f₁ ∈ D_τ with τf₁ = f₂, proving that T_0* is contained in the graph of T_max. Finally, if T_0 were not densely defined, there would exist a nontrivial h orthogonal to dom(T_0), so that the pair (0, h) would lie in T_0*, contradicting the fact that T_0* is the graph of an operator. The operator T_0 is symmetric by the preceding theorem. The closure T_min of T_0 is called the minimal operator, T_min = T̄_0 = T_0** = T_max*. In order to determine T_min we need the following lemma on functions in dom(T_max). Lemma 3.5. If f_a lies in dom(T_max) near a and f_b lies in dom(T_max) near b, then there exists an f ∈ dom(T_max) such that f = f_a in a vicinity of a and f = f_b in a vicinity of b. Proof. Let u₁, u₂ be a fundamental system of τu = 0 with W(u₁, u₂) = 1 and let α, β ∈ (a, b), α < β, be such that the functionals F_j(g) = ∫_α^β u_j g r dx, j = 1, 2, are linearly independent. First we will show that there is some u ∈ D_τ such that u(α) = f_a(α), u^{[1]}(α) = f_a^{[1]}(α), u(β) = f_b(β), and u^{[1]}(β) = f_b^{[1]}(β). Indeed, let g ∈ L²((a, b); r(x)dx) and consider the solution u of τu = g with initial conditions u(α) = f_a(α) and u^{[1]}(α) = f_a^{[1]}(α). (3.19)
With Lemma 2.4 one sees that u has the desired properties provided the pair (F₁(g), F₂(g)) attains one prescribed value determined by f_a, f_b, and the constants c₁, c₂ ∈ C appearing in Lemma 2.4. (3.20) But since the functionals F₁, F₂ are linearly independent, we may choose g ∈ L²((a, b); r(x)dx) such that this equation is valid. Now the function f defined by f = f_a on (a, α], f = u on (α, β), and f = f_b on [β, b) has the claimed properties. Theorem 3.6. The minimal operator T_min is given by T_min f = τf, f ∈ dom(T_min) = {g ∈ dom(T_max) : W(g, h)(a) = W(g, h)(b) = 0 for all h ∈ dom(T_max)}. (3.23) Proof. Given some g ∈ dom(T_max), by Lemma 3.5 there is a g_a ∈ dom(T_max) such that g_a = g in a vicinity of a and g_a = 0 in a vicinity of b. Therefore, the boundary terms at a and b in Lemma 3.2 can be separated, and the claimed characterization follows from T_min = T_max* together with Lemma 3.2. For regular τ on (a, b) we may characterize the minimal operator by the boundary values of the functions f ∈ dom(T_max) as follows: if τ is regular at a, then f ∈ dom(T_max) satisfies W(f, g)(a) = 0 for all g ∈ dom(T_max) if and only if f(a) = f^{[1]}(a) = 0. A similar result holds at b. Proof. The claim follows from W(f, g)(a) = f(a)g^{[1]}(a) − f^{[1]}(a)g(a) and the fact that one finds g ∈ dom(T_max) with prescribed initial values at a. Indeed, one can take g to coincide with some solution of τu = 0 near a. Next we will show that T_min always has self-adjoint extensions. Theorem 3.8. The deficiency indices n(T_min) of the minimal operator T_min are equal and at most two, that is, n(T_min) = dim ker(T_max − i) = dim ker(T_max + i) ≤ 2. Proof. The fact that the dimensions are at most two follows from ran(T_min ± i)^⊥ = ker(T_max ∓ i), (3.27) because there are at most two linearly independent solutions of (τ ∓ i)u = 0. Moreover, equality is due to the fact that T_min is real with respect to the natural conjugation in L²((a, b); r(x)dx). Weyl's Alternative. We say τ is in the limit-circle (l.c.) case at a if for each z ∈ C all solutions of (τ − z)u = 0 lie in L²((a, b); r(x)dx) near a. Furthermore, we say τ is in the limit-point (l.p.) case at a if for each z ∈ C there is some solution of (τ − z)u = 0 which does not lie in L²((a, b); r(x)dx) near a. Similarly, one defines the l.c. and l.p. cases at the endpoint b. Clearly, τ is in at most one of these cases at each boundary point. The next lemma shows that τ indeed is in one of these cases at each endpoint, which is known as Weyl's alternative. Lemma 4.1. If there is a z₀ ∈ C such that all solutions of (τ − z₀)u = 0 lie in L²((a, b); r(x)dx) near a, then τ is in the l.c. case at a. A similar result holds at the endpoint b. Proof. Let z ∈ C and u be a solution of (τ − z)u = 0. If u₁, u₂ are a fundamental system of (τ − z₀)u = 0 with W(u₁, u₂) = 1, then u₁ and u₂ lie in L²((a, b); r(x)dx) near a by assumption. Regarding u as a solution of (τ − z₀)v = (z − z₀)u, Lemma 2.4 yields constants c₁, c₂ ∈ C such that, with v = |u₁| + |u₂| and C = max(|c₁|, |c₂|), there is some c ∈ (a, b) with |u(x)| ≤ C v(x) + |z − z₀| v(x) ∫_x^c v |u| r dt, x ∈ (a, c), and furthermore, using Cauchy-Schwarz, |u(x)|² ≤ 2C² v(x)² + 2|z − z₀|² v(x)² ∫_a^c v² r dt ∫_x^c |u|² r dt. Now an integration yields for each s ∈ (a, c) ∫_s^c |u|² r dx ≤ 2C² ∫_a^c v² r dx + 2|z − z₀|² (∫_a^c v² r dx)² ∫_s^c |u|² r dx, and therefore, choosing c so close to a that 2|z − z₀|² (∫_a^c v² r dx)² ≤ 1/2, we obtain ∫_s^c |u|² r dx ≤ 4C² ∫_a^c v² r dx. Since s ∈ (a, c) was arbitrary, this yields the claim. In particular, if τ is regular at an endpoint, then τ is in the l.c. case there since each solution of (τ − z)u = 0 has a continuous extension to this endpoint. With r(T_min) we denote the set of all points of regular type of T_min, that is, all z ∈ C such that (T_min − z)^{-1} is a bounded operator (not necessarily everywhere defined). Recall that dim ran(T_min − z)^⊥ is constant on every connected component of r(T_min) ([156, Theorem 8.1]) and thus dim ran(T_min − z)^⊥ = dim ker(T_max − z̄) = n(T_min) for every z ∈ r(T_min); since T_min is real, also dim ker(T_max − z) = n(T_min). Lemma 4.2. For each z ∈ r(T_min) there is a nontrivial solution of (τ − z)u = 0 which lies in L²((a, b); r(x)dx) near a. A similar result holds at the endpoint b. Proof. First assume that τ is regular at b.
If there were no solution of (τ − z)u = 0 which lies in L²((a, b); r(x)dx) near a, we would have ker(T_max − z) = {0} and hence n(T_min) = 0, that is, T_min = T_max. But since there is an f ∈ dom(T_max) with f(b) = 1 and f^{[1]}(b) = 0, (4.7) this is a contradiction to Theorem 3.6. For the general case, pick some c ∈ (a, b) and consider the minimal operator T_c in L²((a, c); r(x)dx) induced by τ|_{(a,c)}. Then z is a point of regular type of T_c. Indeed, we can extend each f_c ∈ dom(T_c) by zero and obtain a function f ∈ dom(T_min). For these functions and some positive constant C, ‖(T_c − z)f_c‖_{L²((a,c); r dx)} = ‖(T_min − z)f‖_{2,r} ≥ C ‖f‖_{2,r} = C ‖f_c‖_{L²((a,c); r dx)}. (4.8) Now since the solutions of (τ|_{(a,c)} − z)u = 0 are exactly the solutions of (τ − z)u = 0 restricted to (a, c), and τ|_{(a,c)} is regular at c, the claim follows from what we already proved. Corollary 4.3. If z ∈ r(T_min) and τ is in the l.p. case at a, then there is a unique nontrivial solution of (τ − z)u = 0 (up to scalar multiples) which lies in L²((a, b); r(x)dx) near a. A similar result holds at the endpoint b. Proof. If there were two linearly independent solutions in L²((a, b); r(x)dx) near a, τ would be l.c. at a. Lemma 4.4. τ is in the l.c. case at a if and only if there is an f ∈ dom(T_max) such that W(f, f*)(a) = 0 and W(f, g)(a) ≠ 0 for some g ∈ dom(T_max). If τ is in the l.p. case at a, then W(f, g)(a) = 0 for all f, g ∈ dom(T_max). Similar results hold at the endpoint b. Proof. Let τ be in the l.c. case at a and u₁, u₂ be a real fundamental system of τu = 0 with W(u₁, u₂) = 1. Both u₁ and u₂ lie in dom(T_max) near a. Hence, there are f, g ∈ dom(T_max) with f = u₁ and g = u₂ near a and f = g = 0 near b. Consequently, we obtain W(f, g)(a) = W(u₁, u₂)(a) = 1 and W(f, f*)(a) = W(u₁, u₁*)(a) = 0, (4.11) since u₁ is real. Now assume τ is in the l.p. case at a and regular at b. Then dom(T_max) is a two-dimensional extension of dom(T_min), since dim ker(T_max − i) = 1 by Corollary 4.3. Let v, w ∈ dom(T_max) with v = w = 0 in a vicinity of a and v(b) = w^{[1]}(b) = 1 and v^{[1]}(b) = w(b) = 0. (4.12) Then dom(T_max) = dom(T_min) + span{v, w}, (4.13) since v and w are linearly independent modulo dom(T_min) and they do not lie in dom(T_min). Hence, for each f, g ∈ dom(T_max) there are f₀, g₀ ∈ dom(T_min) such that f = f₀ and g = g₀ in a vicinity of a, and therefore W(f, g)(a) = W(f₀, g₀)(a) = 0 by Theorem 3.6. Now if τ is not regular at b, we pick some c ∈ (a, b). Then for each f ∈ dom(T_max), f|_{(a,c)} lies in the domain of the maximal operator induced by τ|_{(a,c)}, and the claim follows from what we already proved. Lemma 4.5. Let τ be in the l.p. case at both endpoints and z ∈ C\R. Then there is no nontrivial solution of (τ − z)u = 0 in L²((a, b); r(x)dx). Proof. If u ∈ L²((a, b); r(x)dx) is a solution of (τ − z)u = 0, then u* is a solution of (τ − z̄)w = 0, and both u and u* lie in dom(T_max). Now the Lagrange identity yields ∫_α^β (u*(τu) − u(τu*)) r dx = W(u, u*)(β) − W(u, u*)(α). If α → a and β → b, the right-hand side converges to zero by Lemma 4.4, while the left-hand side converges to 2i Im(z)‖u‖²_{2,r}; hence ‖u‖_{2,r} = 0. Theorem 4.6. The deficiency index of T_min equals the number of boundary points at which τ is in the l.c. case; that is, n(T_min) = 2 if τ is in the l.c. case at both endpoints, n(T_min) = 1 if τ is in the l.c. case at exactly one endpoint, and n(T_min) = 0 if τ is in the l.p. case at both endpoints. Proof. If τ is in the l.c. case at both endpoints, all solutions of (τ − i)u = 0 lie in L²((a, b); r(x)dx) and hence in dom(T_max). Therefore, n(T_min) = dim ker(T_max − i) = 2. In the case when τ is in the l.c. case at exactly one endpoint, there is (up to scalar multiples) exactly one nontrivial solution of (τ − i)u = 0 in L²((a, b); r(x)dx), by Corollary 4.3. Now if τ is in the l.p. case at both endpoints, we have ker(T_max − i) = {0} by Lemma 4.5 and hence n(T_min) = 0. Self-Adjoint Realizations. We are interested in the self-adjoint restrictions of T_max (or equivalently, the self-adjoint extensions of T_min). To this end, we introduce the convenient short-hand notation W_a^b(f, g) = W(f, g)(b) − W(f, g)(a). (5.1) Theorem 5.1. Some operator S is a self-adjoint restriction of T_max if and only if S f = τf, f ∈ dom(S) = {g ∈ dom(T_max) : W_a^b(g, h*) = 0 for all h ∈ dom(S)}. (5.2) Proof.
We denote the right-hand side of (5.2) by dom (S 0 ). First assume S is a self-adjoint restriction of T max . If f ∈ dom (S) then for each g ∈ dom (S), hence f ∈ dom (S * ) = dom (S). Conversely, assume dom (S) = dom (S 0 ). Then S is symmetric since τ f, g r = f, τ g r for each f , g ∈ dom (S). Now let f ∈ dom (S * ) ⊆ dom (T * min ) = dom (T max ), then for each g ∈ dom (S). Hence, f ∈ dom (S 0 ) = dom (S), and it follows that S is self-adjoint. The aim of this section is to determine all self-adjoint restrictions of T max . If both endpoints are in the l.p. case this is an immediate consequence of Theorem 4.6. Theorem 5.2. If τ is in the l.p. case at both endpoints then T min = T max is a self-adjoint operator. Next we turn to the case when one endpoint is in the l.c. case and the other one is in the l.p. case. But before we do this, we need some more properties of the Wronskian. and Similar results hold at the endpoint b. Proof. Choosing A similar result holds if τ is in the l.c. case at b and in the l.p. case at a. Proof. Since n(T min ) = 1, the self-adjoint extensions of T min are precisely the one-dimensional, symmetric extensions of T min . Hence some operator S is a selfadjoint extension of T min if and only if there is a v ∈ dom (T max ) \dom (T min ) with W (v, v)(a) = 0 such that Hence, we have to prove that The subspace on the left-hand side is included in the right one because of Theorem 3.6 and W (v, v)(a) = 0. On the other hand, if the subspace on the right-hand side were larger, then it would coincide with dom (T max ) and, hence, would imply v ∈ dom (T min ). Two self-adjoint restrictions are distinct if and only if the corresponding functions v are linearly independent modulo T min . Furthermore, v can always be chosen such that v is equal to some real solution of (τ − z)u = 0 with z ∈ R in some vicinity of a. It remains to consider the case when both endpoints are in the l.c. case. Theorem 5.5. Suppose τ is in the l.c. case at both endpoints. Then some operator S is a self-adjoint restriction of T max if and only if there are v, w ∈ dom (T max ), linearly independent modulo dom (T min ), with Proof. Since n(T min ) = 2 the self-adjoint restrictions of T max are precisely the twodimensional, symmetric extensions of T min . Hence, an operator S is a self-adjoint restriction of T max if and only if there are v, w ∈ dom (T max ), linearly independent modulo dom (T min ), with (5.11) such that Therefore, we have to prove that (5.14) Indeed, the subspace on the left-hand side is contained in D by Theorem 3.6 and (5.11). In order to prove that it is also not larger, consider the linear functionals The intersection of the kernels of these functionals is precisely D. Furthermore, these functionals are linearly independent. Indeed, assume c 1 , c 2 ∈ C and However, by Lemma 3.5 this yields for all f ∈ dom (T max ) and consequently c 1 v + c 2 w ∈ dom (T min ). Now since v, w are linearly independent modulo dom (T min ) we infer that c 1 = c 2 = 0 and Lemma 3.3 implies that Both f v and f w do not lie in D and are linearly independent; hence, D is at most a two-dimensional extension of dom (T min ). In the case when τ is in the l.c. case at both endpoints, we may divide the selfadjoint restrictions of T max into two classes. Indeed, we say some operator S is a self-adjoint restriction of T max with separated boundary conditions if it is of the form Conversely, each operator of this form is a self-adjoint restriction of T max by Theorem 5.5 and Lemma 3.5. 
The remaining selfadjoint restrictions are called self-adjoint restrictions of T max with coupled boundary conditions. Boundary Conditions In this section, let w 1 , w 2 ∈ dom (T max ) with W (w 1 , w 2 )(a) = 1 and W (w 1 , w 1 )(a) = W (w 2 , w 2 )(a) = 0, (6.1) if τ is in the l.c. case at a and if τ is in the l.c. case at b. We will describe the self-adjoint restrictions of T max in terms of the linear functionals BC 1 a , BC 2 a , BC 1 b and BC 2 b on dom (T max ), defined by 3) if τ is in the l.c. case at a and If τ is in the l.c. case at some endpoint, functions with (6.1) (resp., with (6.2)) always exist. Indeed, one may take them to coincide near the endpoint with some real solutions of (τ − z)u = 0 with W (u 1 , u 2 ) = 1 for some z ∈ R and use Lemma 3.5. In the regular case these functionals may take the form of point evaluations of the function and its quasi-derivative at the boundary point. Lemma 6.1. Suppose τ is regular at a. Then there are w 1 , w 2 ∈ dom (T max ) with (6.1) such that the corresponding linear functionals BC 1 a and BC 2 a satisfy BC 1 a (f ) = f (a) and BC 2 a (f ) = f [1] (a) for f ∈ dom (T max ) . (6.5) The analogous result holds at the endpoint b. Proof. Take w 1 , w 2 ∈ dom (T max ) to coincide near a with the real solutions u 1 , u 2 of τ u = 0 with u 1 (a) = u Using the Plücker identity one easily obtains the equality Conversely, if some ϕ a ∈ [0, π) is given, then there exists a v ∈ dom (T max ), not belonging to dom (T min ), with W (v, v)(a) = 0 and W (h, v)(a) = 0 for some h ∈ dom (T max ) such that Using this, Theorem 5.4 immediately yields the following characterization of the self-adjoint restrictions of T max in terms of the boundary functionals. Theorem 6.2. Suppose τ is in the l.c. case at a and in the l.p. case at b. Then some operator S is a self-adjoint restriction of T max if and only if there is some ϕ a ∈ [0, π) such that A similar result holds if τ is in the l.c. case at b and in the l.p. case at a. Next we will give a characterization of the self-adjoint restrictions of T max if τ is in the l.c. case at both endpoints. Theorem 6.3. Suppose τ is in the l.c. case at both endpoints. Then some operator S is a self-adjoint restriction of T max if and only if there are matrices B a , . Then a simple computation shows that In order to prove rank (B a |B b ) = 2, let c 1 , c 2 ∈ C and Hence, the function c 1 v + c 2 w lies in the kernel of BC 1 a , BC 2 a , BC 1 b and BC 2 b , and therefore, W (c 1 v + c 2 w, f )(a) = 0 and W (c 1 v + c 2 w, f )(b) = 0 for each f ∈ dom (T max ). This means that c 1 v + c 2 w ∈ dom (T min ) and hence c 1 = c 2 = 0, since v, w are linearly independent modulo dom (T min ). This proves that (B a |B b ) has rank two. Furthermore, a calculation yields that for f ∈ dom (T max ) 18) which proves that S is given as in the claim. Conversely, let B a , B b ∈ C 2×2 with the claimed properties be given. Then there are v, w ∈ dom (T max ) such that . In order to prove that v and w are linearly independent modulo dom (T min ), let c 1 , Now the rows of (B a |B b ) are linearly independent, hence c 1 = c 2 = 0. Since again the functions v, w satisfy the assumptions of Theorem 5.5. As above, one infers once again that for f ∈ dom (T max ), Hence, S is a self-adjoint restriction of T max by Theorem 5.5. As in the preceding section, if τ is in the l.c. case at both endpoints, we may divide the self-adjoint restrictions of T max into two classes. Theorem 6.4. Suppose τ is in the l.c. case at both endpoints. 
Then some operator S is a self-adjoint restriction of T max with separated boundary conditions if and only if there are ϕ a , ϕ b ∈ [0, π) such that . (6.24) Proof. Using (6.8) and (6.9) one easily sees that the self-adjoint restrictions of T max with separated boundary conditions are precisely the ones given in (6.23). Hence, we only have to prove the second claim. Let S be a self-adjoint restriction of T max with coupled boundary conditions and B a , B b ∈ C 2×2 matrices as in Theorem 6.3. Then by (6.11) either both of them have rank one or both have rank two. In the first case we have Since the vectors w a and w b are linearly independent (recall that rank(B a |B b ) = 2) one infers that In particular, (6.28) However, this shows that S is a self-adjoint restriction with separated boundary conditions. Hence, both matrices, B a and B b , have rank two. If we set B = B −1 b B a , then B = J(B −1 ) * J * and therefore, | det(B)| = 1; hence, det(B) = e 2iφ for some φ ∈ [0, π). If we set R = e −iφ B, one infers from the identities S has the claimed representation. Conversely, if S is of the form (6.24), then Theorem 6.3 shows that it is a selfadjoint restriction of T max . Now if S were a self-adjoint restriction with separated boundary conditions, there would exist an f ∈ dom (S) \dom (T min ), vanishing in some vicinity of a. By the boundary condition we would also have . Hence, S cannot be a self-adjoint restriction with separated boundary conditions. We note that the separated self-adjoint extensions described in (6.23) are always real (that is, commute with the antiunitary operator of complex conjugation, resp., the natural conjugation in L 2 ((a, b); r(x)dx)). The coupled boundary conditions in (6.24) are real if and only if φ = 0 (see also [160,Sect. 4.2]). The Spectrum and the Resolvent In this section we will compute the resolvent R z = (S − zI r ) −1 of a self-adjoint restriction S of T max . First we deal with the case when both endpoints are in the l.c. case. Theorem 7.1. Suppose τ is in the l.c. case at both endpoints and S is a self-adjoint restriction of T max . Then for each z ∈ ρ(S), the resolvent R z is an integral operator . For any two given linearly independent solutions u 1 , , 2}, such that the kernel is given by for x ∈ (a, b). Furthermore, since R z g satisfies the boundary conditions, we obtain for some suitable matrices B a , B b ∈ C 2×2 as in Theorem 6.3. Now since g has compact support, we infer that as well as Consequently, and the function d 1 u 1 + d 2 u 2 would be a solution of (τ − z)u = 0 satisfying the boundary conditions of S, and consequently would be an eigenvector with eigenvalue z. However, this would contradict z ∈ ρ(S), and it follows that B a M α −B b M β must be invertible. Since the constants c 1 and c 2 may be written as linear combinations of where the coefficients are independent of g. Using equation (7.3) one verifies that R z g has an integral-representation with a function G z as claimed. The function G z is square-integrable, since the solutions u 1 and u 2 lie in L 2 ((a, b); r(x)dx) by assumption. Finally, since the operator K z defined on L 2 ((a, b); r(x)dx), and the resolvent R z are bounded, the claim follows since they coincide on a dense subspace. Since the resolvent R z is compact, in fact, Hilbert-Schmidt, this implies discreteness of the spectrum. Corollary 7.2. Suppose τ is in the l.c. case at both endpoints and S is a self-adjoint restriction of T max . Then S has purely discrete spectrum, that is, σ(S) = σ d (S). 
Moreover, ∑_{λ ∈ σ(S)} (1 + λ²)^{-1} < ∞. If S is a self-adjoint restriction of T_max with separated boundary conditions, or if (at least) one endpoint is in the l.c. case, then the resolvent has a simpler form. Theorem 7.3. Suppose S is a self-adjoint restriction of T_max (with separated boundary conditions if τ is in the l.c. case at both endpoints) and z ∈ ρ(S). Furthermore, let u_a and u_b be nontrivial solutions of (τ − z)u = 0, such that u_a satisfies the boundary condition at a if τ is in the l.c. case at a, and lies in L²((a, b); r(x)dx) near a if τ is in the l.p. case at a, (7.13) and u_b satisfies the boundary condition at b if τ is in the l.c. case at b, and lies in L²((a, b); r(x)dx) near b if τ is in the l.p. case at b. (7.14) Then the resolvent R_z is given by R_z g(x) = ∫_a^b G_z(x, y) g(y) r(y) dy, where G_z(x, y) = W(u_b, u_a)^{-1} u_a(min(x, y)) u_b(max(x, y)), x, y ∈ (a, b). (7.15) Proof. The functions u_a, u_b are linearly independent; otherwise, they would be eigenvectors of S with eigenvalue z. Hence, they form a fundamental system of (τ − z)u = 0. Now for each g ∈ L²((a, b); r(x)dx) we define a function f_g by the right-hand side of (7.15); by Lemma 2.4, f_g is a solution of (τ − z)f = g. Moreover, f_g is a scalar multiple of u_a near a and a scalar multiple of u_b near b. Hence, the function f_g satisfies the boundary conditions of S, and therefore R_z g = f_g. If τ is in the l.p. case at some endpoint, then Corollary 4.3 shows that there is always a nontrivial solution of (τ − z)u = 0, unique up to scalar multiples, lying in L²((a, b); r(x)dx) near this endpoint. Also, if τ is in the l.c. case at some endpoint, there exists a nontrivial solution of (τ − z)u = 0, unique up to scalar multiples, satisfying the boundary condition at this endpoint. Hence, functions u_a and u_b as in Theorem 7.3 always exist. Corollary 7.4. If S is a self-adjoint restriction of T_max (with separated boundary conditions if τ is in the l.c. case at both endpoints), then all eigenvalues of S are simple. Proof. Suppose λ ∈ R is an eigenvalue and u_i ∈ dom(S) with τu_i = λu_i for i = 1, 2, that is, they are solutions of (τ − λ)u = 0. If τ is in the l.p. case at some endpoint, then clearly the Wronskian W(u₁, u₂) vanishes. Otherwise, since both functions satisfy the same boundary conditions, this follows using the Plücker identity. Since the deficiency index of T_min is finite, the essential spectrum of self-adjoint realizations is independent of the boundary conditions, that is, all self-adjoint restrictions of T_max have the same essential spectrum (cf., e.g., [156, Theorem 8.18]). We conclude this section by proving that the essential spectrum of the self-adjoint restrictions of T_max is determined by the behavior of the coefficients in some arbitrarily small neighborhood of the endpoints. In order to state this result we need some notation. Fix some c ∈ (a, b) and denote by τ|_{(a,c)} (resp., by τ|_{(c,b)}) the differential expression on (a, c) (resp., on (c, b)) corresponding to our coefficients restricted to (a, c) (resp., to (c, b)). Furthermore, let S_{(a,c)} (resp., S_{(c,b)}) be some self-adjoint extension of τ|_{(a,c)} (resp., of τ|_{(c,b)}). Then one has σ_ess(S) = σ_ess(S_{(a,c)}) ∪ σ_ess(S_{(c,b)}). Proof. If one identifies L²((a, b); r(x)dx) with the orthogonal sum L²((a, c); r(x)dx) ⊕ L²((c, b); r(x)dx), then the operator S_c = S_{(a,c)} ⊕ S_{(c,b)} satisfies σ_ess(S_c) = σ_ess(S_{(a,c)}) ∪ σ_ess(S_{(c,b)}). Now the claim follows, since S and S_c are both finite-dimensional extensions of the symmetric operator given by the direct sum of the minimal operators corresponding to τ|_{(a,c)} and τ|_{(c,b)}. An immediate corollary is that the essential spectrum only depends on the behavior of the coefficients in some neighborhood of the endpoints, recovering Weyl's splitting method.
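As a consistency check (our illustration, not part of the source), consider the classical choice p = r = 1, s = q = 0 on (0, 1) with Dirichlet boundary conditions. Then u_a(x) = sin(√z x) and u_b(x) = sin(√z (1 − x)) satisfy the boundary conditions at 0 and 1, respectively, W(u_b, u_a) = √z sin(√z), and (7.15) becomes

\[
G_z(x,y) \;=\; \frac{\sin\bigl(\sqrt{z}\,\min(x,y)\bigr)\,\sin\bigl(\sqrt{z}\,(1-\max(x,y))\bigr)}{\sqrt{z}\,\sin(\sqrt{z})}\,,
\]

whose poles at z = (nπ)², n ∈ N, recover the purely discrete Dirichlet spectrum, in accordance with Corollary 7.2.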
The Weyl-Titchmarsh-Kodaira m-Function. In this section let S be a self-adjoint restriction of T_max (with separated boundary conditions if τ is in the l.c. case at both endpoints). Our aim is to define a singular Weyl-Titchmarsh-Kodaira function as introduced recently in [41], [56], and [103]. To this end we need the following hypothesis. Hypothesis 8.1. There is a real entire fundamental system θ_z, φ_z of (τ − z)u = 0 with W(θ_z, φ_z) = 1, such that φ_z lies in dom(S) near a, that is, φ_z lies in L²((a, b); r(x)dx) near a and satisfies the boundary condition at a if τ is in the l.c. case at a. Under the assumption of Hypothesis 8.1 we may define a function m : ρ(S) → C by requiring that the solutions ψ_z = θ_z + m(z) φ_z, z ∈ ρ(S), (8.2) lie in dom(S) near b; these ψ_z are referred to as the Weyl solutions. Theorem 8.2. The function m is well defined and analytic on ρ(S). Proof. Let c, d ∈ (a, b) with c < d. From Theorem 7.3 and the equation (8.2) we obtain, for each z ∈ ρ(S) and x ∈ [c, d), a representation of (R_z χ_{(c,d)})(x) in which m(z) enters linearly, and hence an expression for m(z) in terms of the resolvent and integrals of θ_z and φ_z over (c, d). The left-hand side of this equation is analytic in ρ(S) since the resolvent is. Furthermore, the integrals are analytic in ρ(S) as well, since the integrands are analytic and locally bounded by Theorem 2.7. Hence, m is analytic provided for each z₀ ∈ ρ(S) the points c, d, and x can be chosen such that φ_{z₀}(x) ∫_c^d φ_{z₀}² r dt ≠ 0. However, this holds; otherwise, φ_{z₀} would vanish almost everywhere. Moreover, equation (8.2) determines m(z) uniquely, since the function φ_z does not vanish identically. As an immediate consequence of Theorem 8.2, one infers that ψ_z(x) and ψ_z^{[1]}(x) are analytic functions in z ∈ ρ(S) for each x ∈ (a, b). The fundamental system of Hypothesis 8.1 is not unique: any other such system θ̃_z, φ̃_z is of the form θ̃_z = e^{-g(z)}(θ_z − f(z)φ_z), φ̃_z = e^{g(z)}φ_z, for some entire functions f, g with f(z) real and g(z) real modulo iπ. The corresponding singular Weyl-Titchmarsh-Kodaira functions are related via m̃(z) = e^{-2g(z)}(m(z) + f(z)). In particular, the maximal domain of holomorphy and the structure of poles and singularities do not change. We continue with the construction of a real entire fundamental system in the case when τ is in the l.c. case at a. Theorem 8.4. Suppose τ is in the l.c. case at a. Then there exists a real entire fundamental system θ_z, φ_z of (τ − z)u = 0 with W(θ_z, φ_z) = 1 such that φ_z lies in dom(S) near a. Proof. Let θ, φ be a real fundamental system of τu = 0 with W(θ, φ) = 1 such that φ lies in dom(S) near a. Now fix some c ∈ (a, b) and for each z ∈ C let u_{z,1}, u_{z,2} be the fundamental system of (τ − z)u = 0 with u_{z,1}(c) = u_{z,2}^{[1]}(c) = 1 and u_{z,1}^{[1]}(c) = u_{z,2}(c) = 0. Then by the existence and uniqueness theorem we have u_{z̄,j} = u_{z,j}*, and θ_z, φ_z are defined as combinations of u_{z,1} and u_{z,2} with coefficients given by the Wronskians of u_{z,j} with θ and φ at a. Furthermore, a direct calculation shows that θ_{z̄} = θ_z* and φ_{z̄} = φ_z*. The remaining equalities follow upon repeatedly using the Plücker identity. It remains to prove that the functions W(u_{z,1}, θ)(a), W(u_{z,2}, θ)(a), W(u_{z,1}, φ)(a), and W(u_{z,2}, φ)(a) are entire in z. Indeed, by the Lagrange identity, W(u_{z,j}, φ)(a) = W(u_{z,j}, φ)(c) − z ∫_a^c φ u_{z,j} r dt. Now the integral on the right-hand side is analytic by Theorem 2.7, and in order to prove that the limit defining the left-hand side is also analytic, we need to show that the integral is bounded as x ↓ a, locally uniformly in z. But the proof of Lemma 4.1 shows that, for each z₀ ∈ C, ∫_a^c |u_{z,j}|² r dx ≤ K for some constant K ∈ R and all z in some neighborhood of z₀. Analyticity of the other functions is proved similarly. Corollary 8.5. Suppose τ is in the l.c. case at a and θ_z, φ_z is a real entire fundamental system of (τ − z)u = 0 as in Theorem 8.4. Then the corresponding singular Weyl-Titchmarsh-Kodaira function m is a Nevanlinna-Herglotz function. Proof. In order to prove the Nevanlinna-Herglotz property, we show that W(ψ_{z₁}, ψ_{z₂})(b) = 0 for z₁, z₂ ∈ ρ(S). This also holds if τ is in the l.c. case at b, since then ψ_{z₁} and ψ_{z₂} satisfy the same boundary condition at b. Now the Lagrange identity yields m(z₁) − m(z₂) = (z₁ − z₂) ∫_a^b ψ_{z₁} ψ_{z₂} r dx, z₁, z₂ ∈ ρ(S). In particular, for z ∈ C\R, using m(z̄) = m(z)* as well as ψ_{z̄} = ψ_z*, one obtains Im(m(z)) = Im(z) ‖ψ_z‖²_{2,r}. Since ψ_z is a nontrivial solution, we furthermore have 0 < ‖ψ_z‖²_{2,r}, and the claim follows. We conclude this section with a necessary and sufficient condition for Hypothesis 8.1 to hold. To this end, for each c ∈ (a, b), let S^D_{(a,c)} be the self-adjoint operator associated with τ|_{(a,c)} with a Dirichlet boundary condition at c and the same boundary condition as S at a.
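Before turning to the spectral transformation, a standard illustration of the singular m-function (our example, not the source's): take p = r = 1, s = q = 0 on (0, ∞) with a Dirichlet condition at the regular endpoint 0, so that θ_z(x) = cos(√z x) and φ_z(x) = sin(√z x)/√z form a real entire fundamental system with W(θ_z, φ_z) = 1 and φ_z satisfying the boundary condition at 0. For z ∈ C\R the Weyl solution must lie in L²((0, ∞); dx) near ∞, which forces proportionality to e^{i√z x} (with Im(√z) > 0), and hence

\[
\psi_z = \theta_z + m(z)\,\varphi_z \;\propto\; e^{i\sqrt{z}\,x}
\quad\Longrightarrow\quad m(z) = i\sqrt{z}\,,
\]

manifestly a Nevanlinna-Herglotz function, as Corollary 8.5 predicts; for λ < 0 one has m(λ) = −√|λ| real, reflecting the absence of spectrum on the negative half-line.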
The Spectral Transformation. In this section let S be a self-adjoint restriction of T_max (with separated boundary conditions if τ is in the l.c. case at both endpoints) as in the preceding section. Furthermore, we assume that there is a real entire fundamental system θ_z, φ_z of (τ − z)u = 0 with W(θ_z, φ_z) = 1 such that φ_z lies in dom(S) near a. By m we denote the corresponding singular Weyl-Titchmarsh-Kodaira function and by ψ_z the Weyl solutions of S. Recall that by the spectral theorem, for all functions f, g ∈ L²((a, b); r(x)dx) there is a unique complex measure E_{f,g} such that ⟨R_z f, g⟩_r = ∫_R (λ − z)^{-1} dE_{f,g}(λ), z ∈ ρ(S). In order to obtain a spectral transformation we define for each f ∈ L²_c((a, b); r(x)dx) the transform of f, f̂(z) = ∫_a^b φ_z(x) f(x) r(x) dx, z ∈ C. Next, we will use this to associate a measure with m(z) by virtue of the Stieltjes-Livšić inversion formula, following literally the proof of [103, Lemma 3.3] (see also [56, Theorem 2.6]): there is a unique Borel measure µ on R such that dE_{f,g} = f̂ (ĝ)* dµ for all f, g ∈ L²_c((a, b); r(x)dx) (Lemma 9.1). In particular, ⟨f, g⟩_r = ∫_R f̂ (ĝ)* dµ. In particular, the preceding lemma shows that the mapping f ↦ f̂ is an isometry from L²_c((a, b); r(x)dx) into L²(R; dµ). Hence, we may extend this mapping uniquely to an isometric linear operator F from L²((a, b); r(x)dx) into L²(R; dµ) by (F f)(λ) = lim_{c↓a, d↑b} ∫_c^d φ_λ(x) f(x) r(x) dx, where the limit on the right-hand side is a limit in the Hilbert space L²(R; dµ). Using this linear operator F, it is quite easy to extend the result of Lemma 9.1 to functions f, g ∈ L²((a, b); r(x)dx). In fact, one gets that dE_{f,g} = (F f)(F g)* dµ, that is, ⟨F(S)f, g⟩_r = ∫_R F (F f)(F g)* dµ for every bounded Borel function F. We will see below that F is not only isometric, but also onto, that is, ran(F) = L²(R; dµ). In order to compute the inverse and the adjoint of F, we introduce for each function g ∈ L²_c(R; dµ) the transform ǧ(x) = ∫_R φ_λ(x) g(λ) dµ(λ), x ∈ (a, b). Then ǧ lies in L²((a, b); r(x)dx) with ‖ǧ‖_{2,r} ≤ ‖g‖_{2,µ}, and we may extend this mapping uniquely to a bounded linear operator G from L²(R; dµ) into L²((a, b); r(x)dx). If F is a Borel measurable function on R, then we denote by M_F the maximally defined operator of multiplication with F in L²(R; dµ). In order to prove that G is the inverse of F, it remains to show that F is surjective, that is, ran(F) = L²(R; dµ). Therefore, let f, g ∈ L²((a, b); r(x)dx) and F, G be bounded measurable functions on R. Since E_{f,g} is the spectral measure of S we get ⟨F(S)G(S)f, g⟩_r = ∫_R F G (F f)(F g)* dµ. Now if we set h = F(S)f, then we obtain from this last equation ∫_R G [(F h) − F (F f)] (F g)* dµ = 0. Since this holds for each bounded measurable function G, we infer (F h)(F g)* = F (F f)(F g)* for almost all λ ∈ R with respect to µ. Furthermore, for each λ₀ ∈ R we can find a g ∈ L²_c((a, b); r(x)dx) such that ĝ ≠ 0 in a vicinity of λ₀. Hence, we even have F h = F · F f almost everywhere with respect to µ. But this shows that ran(F) contains all characteristic functions of intervals. Indeed, let λ₀ ∈ R and choose f ∈ L²_c((a, b); r(x)dx) such that f̂ ≠ 0 in a vicinity of λ₀. Then for each interval J, the closure of which is contained in this vicinity, one may choose h = F(S)f with the bounded function F = χ_J/f̂, which yields χ_J = F h ∈ ran(F). Thus, ran(F) = L²(R; dµ) follows. Consequently, F is unitary and S is unitarily equivalent to multiplication by the identity function in L²(R; dµ), and Lemma 9.1 implies that µ is a corresponding spectral measure. Now the spectrum can be read off from the boundary behavior of the singular Weyl-Titchmarsh-Kodaira function m in the usual way (see, e.g., [58] in the classical context and the recent [103, Corollary 3.5], as well as the references therein). Lemma 9.6. For every z ∈ ρ(S) and all x ∈ (a, b) the transforms of the Green's function G_z(x, ·) and its quasi-derivative ∂_x^{[1]} G_z(x, ·) are given by (F G_z(x, ·))(λ) = φ_λ(x)/(λ − z) and (F ∂_x^{[1]} G_z(x, ·))(λ) = φ_λ^{[1]}(x)/(λ − z). Proof. First note that G_z(x, ·) and ∂_x^{[1]} G_z(x, ·) both lie in L²((a, b); r(x)dx). Then using Lemma 9.1, we get for each f ∈ L²_c((a, b); r(x)dx) and g ∈ L²_c(R; dµ) the identities needed to compute ⟨R_z f, ǧ⟩_r in two ways. Hence, (F G_z(x, ·))(λ) = φ_λ(x)/(λ − z) for almost all x ∈ (a, b). Using Theorem 7.3, one verifies the corresponding pointwise identity for almost all x ∈ (a, b). Since all three terms are absolutely continuous, this equality holds for all x ∈ (a, b), which proves the first part of the claim.
The equality for the transform of the quasi-derivative follows from Lemma 9.7. Suppose τ is in the l.c. case at a and θ z , φ z is a real entire fundamental system as in Theorem 8.4. Then for each z ∈ ρ(S) the transform of the Weyl solution ψ z is given by Proof. From Lemma 9.6 we obtain for each x ∈ (a, b) Now the claim follows by letting x ↓ a, using Theorem 8.4. Under the assumptions of Lemma 9.7, m is a Nevanlinna-Herglotz function. Hence, where the constants c 1 , c 2 are given by Corollary 9.8. If τ is in the l.c. case at a and θ z , φ z is a real entire fundamental system as in Theorem 8.4, then c 2 = 0 in (9.32). Proof. Taking imaginary parts in (9.32) yields for each z ∈ C\R, Using the last identity in conjunction with Lemma 9.7 and (8.19), we obtain The Spectral Multiplicity In the present section we consider the general case where none of the endpoints are supposed to satisfy the requirements of the previous section. Therefore, let S be a self-adjoint restriction of T max (with separated boundary conditions if τ is in the l.c. case at both endpoints). In this situation, the spectral multiplicity of S is potentially two and hence we will work with a matrix-valued spectral transformation. The results in this section extend classical spectral multiplicity results for second-order Schrödinger operators originally due to Kac [85], [86] (see also Gilbert [59] and Simon [151]) to the general situation discussed in this paper. We fix some interior point x 0 ∈ (a, b) and consider the real entire fundamental system θ z , φ z of solutions of (τ − z)u = 0 with the initial conditions θ z (x 0 ) = φ [1] z (x 0 ) = cos(ϕ a ) and θ [1] z for some fixed ϕ a ∈ [0, π). The Weyl solutions are defined by such that for all c ∈ (a, b), ψ z,− ∈ L 2 ((a, c); r(x)dx) and ψ z,+ ∈ L 2 ((c, b); r(x)dx). , z ∈ C\R, (10.5) and observes that det(M (z)) = −1/4. Moreover, a brief computation shows that the function M is a matrix-valued Nevanlinna-Herglotz function and thus has a representation where C 1 is a self-adjoint matrix, C 2 a nonnegative matrix, and Ω is a self-adjoint, matrix-valued measure which is given by the Stieltjes inversion formula Im(M (λ + iε))dλ, λ 1 , λ 2 ∈ R, λ 1 < λ 2 . (10.7) It will be shown in Corollary 10.4 that one actually has C 2 = 0 in (10.6). Furthermore, the trace Ω tr = Ω 1,1 + Ω 2,2 of Ω defines a nonnegative measure and the components of Ω are absolutely continuous with respect to Ω tr . The respective densities are denoted by R i,j , i, j ∈ {1, 2}, and are given by where the limit exists almost everywhere with respect to Ω tr . One notes that R is nonnegative and has trace equal to one. In particular, all entries of R are bounded, Furthermore, the corresponding Hilbert space L 2 (R; dΩ) is associated with the inner product where for each f ∈ L 2 c ((a, b); r(x)dx), one defines the transform,f of f , aŝ In the following lemma, we will relate the 2 × 2 matrix-valued measure Ω to the operator-valued spectral measure E of S. If F is a measurable function on R, we denote with M F the maximally defined operator of multiplication with F in the Hilbert space L 2 (R; dΩ). Proof. This follows by evaluating Stone's formula Im ( R λ+iε f, g r ) dλ, Lemma 10.1 shows that the transformation defined in (10.11) uniquely extends to an isometry F from L 2 ((a, b); r(x)dx) into L 2 (R; dΩ). Theorem 10.2. The operator F is unitary with inverse given by where the limit exists in L 2 ((a, b); r(x)dx). Moreover, one has S = F * M id F. Proof. 
Because of Lemma 10.1, it remains to show that F is onto. Since it is straightforward to verify that the integral operator on the right-hand side of (10.14) is the adjoint of F, we can equivalently show that ker(F*) = {0}. To this end, let g ∈ L²(R; dΩ), N ∈ N, and z ∈ ρ(S). Then, since interchanging integration with differentiation can be justified using Fubini's theorem, one computes the action of F* M_{(·−z)^{-1}} on the truncation of g to [−N, N]. Taking the limit N → ∞, one concludes that F* M_{(·−z)^{-1}} g = R_z F* g. By Stone-Weierstraß, one concludes in addition that F* M_F g = F(S)F* g for any continuous function F vanishing at infinity, and by a consequence of the spectral theorem (see, e.g., the last part of [153, Theorem 3.1]), one can further extend this to characteristic functions of intervals I. Hence, for g ∈ ker(F*) one infers that ∫_I (θ_λ(x), φ_λ(x)) dΩ(λ) g(λ) = 0, x ∈ (a, b), for any compact interval I. Moreover, after taking derivatives, one also obtains ∫_I (θ_λ^{[1]}(x), φ_λ^{[1]}(x)) dΩ(λ) g(λ) = 0, x ∈ (a, b), for any compact interval I. Since W(θ_λ, φ_λ) = 1, these two relations combine to give ∫_I dΩ(λ) g(λ) = 0 for every compact interval I, and thus g = 0, as required. As in Lemma 9.6, one can determine the transform of the Green's function upon employing Theorem 7.3 and equation (10.16). Lemma 10.3. For every z ∈ ρ(S) and all x ∈ (a, b) the transforms of the Green's function G_z(x, ·) and its quasi-derivative ∂_x^{[1]} G_z(x, ·) are given by (F G_z(x, ·))(λ) = (λ − z)^{-1}(θ_λ(x), φ_λ(x)) and (F ∂_x^{[1]} G_z(x, ·))(λ) = (λ − z)^{-1}(θ_λ^{[1]}(x), φ_λ^{[1]}(x)). (10.20) As a consequence, one obtains the following refinement of (10.6): Corollary 10.4. The linear term in (10.6) vanishes, that is, C₂ = 0 and M(z) = C₁ + ∫_R ((λ − z)^{-1} − λ(1 + λ²)^{-1}) dΩ(λ), z ∈ C\R. We note that the vanishing of the linear term C₂ z in (10.6) is typical in this context and refer to [8, Ch. 7] and [111] for detailed discussions. Furthermore, this allows us to investigate the spectral multiplicity of S. Lemma 10.6. If we define Σ_j = {λ ∈ R : rank(R(λ)) = j}, j = 1, 2, (10.32) then M_id = M_{id·χ_{Σ₁}} ⊕ M_{id·χ_{Σ₂}}, and the spectral multiplicity of M_{id·χ_{Σ₁}} is one and the spectral multiplicity of M_{id·χ_{Σ₂}} is two. Combining (10.5) with (10.8), one concludes an expression (10.33) for det(R(λ)) in which the first factor is bounded by 1/4. At this point Lemma 10.6 yields the following result. Theorem 10.7. The singular spectrum of S has spectral multiplicity one. The absolutely continuous spectrum of S has multiplicity two on the subset σ_ac(S₊) ∩ σ_ac(S₋) and multiplicity one on σ_ac(S)\(σ_ac(S₊) ∩ σ_ac(S₋)). Here S_± are the restrictions of S to (a, x₀) and (x₀, b), respectively. Proof. Using the fact that Σ_s is a minimal support for the singular part of S, one obtains S_s = S_pp ⊕ S_sc = E(Σ_s)S and S_ac = (1 − E(Σ_s))S. Thus, evaluating (10.33) using (10.30), one infers that the singular part has multiplicity one by Lemma 10.6. For the absolutely continuous part, one uses that the corresponding sets Σ_±^{ac} = {λ ∈ R : 0 < lim sup_{ε↓0} Im(m_±(λ + iε)) < ∞} (10.34) are minimal supports for the absolutely continuous spectra of S_±. Again, the remaining result follows from Lemma 10.6 upon evaluating (10.33). (Non-)Principal Solutions, Boundedness from Below, and the Friedrichs Extension. In this section we develop various new applications to oscillation theory, establish the connection between non-oscillatory solutions and boundedness from below of T_0, extend a limit-point criterion for T_0 to our present general assumptions, and characterize the Friedrichs extension S_F of T_0. Assuming Hypothesis 2.1, we start by investigating some (non-)oscillatory-type properties of real-valued solutions u ∈ D_τ of the distributional Sturm-Liouville equation −(u^{[1]})′ + s u^{[1]} + q u = λ u r for fixed λ ∈ R. (11.1) Throughout this section, solutions of (11.1) are always taken to be real-valued, in accordance with Theorem 2.2. In addition, we occasionally refer to p as being sign-definite on an interval I ⊆ R, by which we mean that p > 0 or p < 0 a.e. on I.
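Before entering the details, a classical instance of (11.1) that anticipates the results of this section (our example, not the source's): with p = r = 1, s = q = 0, and λ = 1, equation (11.1) reduces to −u″ = u with solutions

\[
u_1(x) = \sin(x), \qquad u_2(x) = \cos(x),
\]

whose zeros {nπ} and {(n + 1/2)π} interlace strictly, as the Sturm-type separation theorem below requires, and each zero is simple with a sign change, in line with Lemma 11.2. By contrast, for λ = −1 the solutions e^{±x} are zero-free, so τ − λ is non-oscillatory; at +∞ the principal solution is u₀(x) = e^{−x}, since e^{−x}/e^{x} → 0 as x → ∞.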
We begin with a Sturm-type separation theorem for the zeros of pairs of linearly independent real-valued solutions of (11.1). Theorem 11.1. Assume Hypothesis 2.1 and suppose that u_j, j = 1, 2, are two linearly independent real-valued solutions of (11.1) for a fixed λ ∈ R. If x_j ∈ (a, b), j = 1, 2, are two zeros of u₁ with x₁ < x₂ and p is sign-definite on (x₁, x₂), then u₂ has at least one zero in [x₁, x₂]. If, in addition, τ is regular at the endpoint a and x₁ = a, then u₂ has a zero in [a, x₂]. An analogous result holds if τ is regular at the endpoint b. Proof. Since the Wronskian of two real-valued solutions of (11.1) is a constant (cf. the discussion after Lemma 2.3), W(u₁, u₂)(x) = c, x ∈ (x₁, x₂), (11.2) for some c ∈ R. If u₂ has no zero in [x₁, x₂], then the quotient u₁/u₂ is absolutely continuous on [x₁, x₂] and (11.2) implies (u₁/u₂)′(x) = −c/(p(x)u₂(x)²) for a.e. x ∈ (x₁, x₂). (11.3) Subsequently, integrating the equation in (11.3) from x₁ to x₂ and using u₁(x_j) = 0, j = 1, 2, one obtains 0 = −c ∫_{x₁}^{x₂} dx/(p(x)u₂(x)²). (11.4) The sign-definiteness assumption on p implies the integral appearing in (11.4) is nonzero, and, consequently, one concludes c = 0. Therefore, u₁ and u₂ must be linearly dependent real-valued solutions of (11.1). The result now follows by contraposition. To prove the remaining statement, one may simply repeat the above argument, noting that regularity of τ at the endpoint a guarantees that the function appearing on the right-hand side of (11.3) is integrable on (a, x₂). Note also that all zeros are simple in the sense that (nontrivial) solutions must change sign at a zero. Lemma 11.2. Assume Hypothesis 2.1 and suppose that u is a nontrivial real-valued solution of (11.1) for a fixed λ ∈ R. If x₀ ∈ (a, b) is a zero and p is sign-definite in a neighborhood of x₀, then u must change sign at x₀. Under the assumption that τ − λ is non-oscillatory at the endpoint b, and that p is sign-definite a.e. on (c, b), the next result establishes the existence of a distinguished solution which is, in a heuristic sense, "smaller" than any other solution near b. An analogous result holds if (11.1) is non-oscillatory at a. Theorem 11.4. Assume Hypothesis 2.1 and let λ ∈ R be fixed. In addition, suppose that there exists c ∈ (a, b) such that p is sign-definite a.e. on (c, b). If τ − λ is non-oscillatory at b, there exists a real-valued solution u₀ of (11.1) satisfying the following properties (i)-(iii), in which u₁ denotes an arbitrary real-valued solution of (11.1) linearly independent of u₀. (i) u₀ and u₁ satisfy the limiting relation lim_{x↑b} u₀(x)/u₁(x) = 0. (11.6) (ii) The integrals satisfy ∫^b dt/|p(t)u₁(t)²| < ∞ and ∫^b dt/|p(t)u₀(t)²| = ∞. (11.7) (iii) Suppose x₀ ∈ (c, b) strictly exceeds the largest zero, if any, of u₀, and u₁(x₀) ≠ 0. If u₁(x₀)/u₀(x₀) > 0, then u₁ has no (resp., exactly one) zero in (x₀, b) if W(u₀, u₁) ≷ 0 (resp., W(u₀, u₁) ≶ 0), in the case p ≷ 0 a.e. on (c, b). On the other hand, if u₁(x₀)/u₀(x₀) < 0, then u₁ has no (resp., exactly one) zero in (x₀, b) if W(u₀, u₁) ≶ 0 (resp., W(u₀, u₁) ≷ 0), in the case p ≷ 0 a.e. on (c, b). Proof. Let u and v denote a pair of linearly independent real-valued solutions of (11.1). Then their Wronskian is a nonzero constant, say ω ∈ R\{0}. If x₀ ∈ (c, b) strictly exceeds the largest zero, if any, of v, then u/v ∈ AC_loc((x₀, b)), and one verifies (as in (11.3)) that (u/v)′(x) = −ω/(p(x)v(x)²) for a.e. x ∈ (x₀, b). (11.8) In particular, since p is sign-definite a.e. on (x₀, b), the right-hand side of equation (11.8) is sign-definite a.e. on the same interval; therefore, the function u/v is monotone on (x₀, b). Consequently, the limit C = lim_{x↑b} u(x)/v(x) exists, where C = ±∞ is permitted. (11.9)
By renaming u and v, if necessary, one may take C = 0. Indeed, in the case C = ±∞ in (11.9), one simply interchanges the roles of the functions u and v. If 0 < |C| < ∞, then one replaces the solution u by the linear combination u − Cv. Choosing u₀ = u, a real-valued solution u₁ of (11.1) is linearly independent of u₀ if and only if it is of the form u₁ = c₀u₀ + c₁v with c₁ ≠ 0. In this case, C = 0 implies lim_{x↑b} u₀(x)/u₁(x) = lim_{x↑b} (u(x)/v(x)) / (c₀ u(x)/v(x) + c₁) = 0, and, consequently, (11.6). This proves item (i). In order to prove item (ii), we first note a useful consequence of (11.8). To this end, suppose u and v are real-valued solutions of (11.1) and that x₀ strictly exceeds the largest zero of v, so that (11.8) holds as before. Integrating (11.8) from x₀ to x ∈ (x₀, b) and using sign-definiteness of p yields u(x)/v(x) − u(x₀)/v(x₀) = −ω ∫_{x₀}^x dt/(p(t)v(t)²), x ∈ (x₀, b). (11.11) To prove item (ii), let u₁ denote a real-valued solution linearly independent of u₀ (with u₀ the solution constructed in item (i)) and choose x₀ ∈ (c, b) strictly exceeding the largest zero of u₀ and the largest zero of u₁. Choosing u = u₀ and v = u₁ (resp., u = u₁ and v = u₀) in (11.11), taking the limit x ↑ b, and applying (11.6) establishes convergence (resp., divergence) of the first (resp., second) integral appearing in (11.7). This completes the proof of item (ii). Evidently, a result analogous to Theorem 11.4 holds if τ − λ is non-oscillatory at a. More specifically, one can establish the existence of a distinguished real-valued solution v₀ ≠ 0 of (11.1) which satisfies the following analogue of (11.6): if v₁ is any real-valued solution of (11.1) linearly independent of v₀, then lim_{x↓a} v₀(x)/v₁(x) = 0. (11.13) Analogues of items (ii) and (iii) of Theorem 11.4 subsequently hold for v₀ and any real-valued solution v₁ linearly independent of v₀. Definition 11.5. Assume Hypothesis 2.1 and suppose that λ ∈ R. If τ − λ is non-oscillatory at c ∈ {a, b}, then a nontrivial real-valued solution u₀ of (11.1) which satisfies lim_{x→c, x∈(a,b)} u₀(x)/u₁(x) = 0 (11.14) for any other linearly independent real-valued solution u₁ of (11.1) is called a principal solution of (11.1) at c. A real-valued solution of (11.1) linearly independent of a principal solution at c is called a non-principal solution of (11.1) at c. If τ − λ is non-oscillatory at c ∈ {a, b}, one verifies that a principal solution at c is unique up to constant multiples. The main ideas for the proof of Theorem 11.4 presented above are taken from [71, Theorem 11.6.4]; the notion of (non-)principal solutions dates back at least to Hartman [70] and was subsequently also used by Rellich [133]. If the differential expression τ − λ is non-oscillatory at c ∈ {a, b}, one can use any nonzero real-valued solution to construct a non-principal solution in a neighborhood of c. The procedure for doing so is the content of our next result. For simplicity, we consider only the case when τ − λ is non-oscillatory at b. An analogous technique allows one to construct (non-)principal solutions near a when τ − λ is non-oscillatory at a. Theorem 11.6. Assume Hypothesis 2.1 and suppose that τ − λ is non-oscillatory at b. In addition, suppose that there exists c ∈ (a, b) such that p is sign-definite a.e. on (c, b). Let u ≠ 0 be a real-valued solution of (11.1) and let x₀ ∈ (c, b) strictly exceed its last zero. Then u₁(x) = u(x) ∫_{x₀}^x dt/(p(t)u(t)²), x ∈ (x₀, b), (11.15) is a non-principal solution of (11.1) on (x₀, b). If, on the other hand, u is a non-principal solution of (11.1), then u₀(x) = u(x) ∫_x^b dt/(p(t)u(t)²), x ∈ (x₀, b), (11.16) is a principal solution of (11.1) on (x₀, b). Analogous results hold at a. Proof. Suppose that u ≠ 0 is a real-valued solution of (11.1) and define u₁ by (11.15).
Evidently, u₁ is real-valued and u₁ ∈ AC_loc((x₀, b)). In addition, u₁^{[1]}(x) = u^{[1]}(x) ∫_{x₀}^x dt/(p(t)u(t)²) + 1/u(x), and one verifies τu₁ = λu₁ on (x₀, b). Moreover, u₁ is linearly independent of u since W(u, u₁) = 1, and u₁ is not a principal solution on (x₀, b) because |u₁(x)/u(x)| = |∫_{x₀}^x dt/(p(t)u(t)²)| is nondecreasing and hence does not tend to zero as x ↑ b. It follows that u₁ is a non-principal solution on (x₀, b). Under the additional assumption that u is a non-principal solution, one again readily verifies that u₀ defined by (11.16) is a solution on (x₀, b), and that u₀ is linearly independent of u. Next, we write u₀ = c₀û₀ + c₁u on (x₀, b), where û₀ is a principal solution on (x₀, b) and c₀, c₁ ∈ R. Then, after dividing through by u, one computes u₀(x)/u(x) = c₀û₀(x)/u(x) + c₁, (11.19) in which the left-hand side tends to zero by (11.16) while the first term on the right tends to zero by (11.14); hence c₁ = 0, and it follows that u₀ = c₀û₀ is a principal solution on (x₀, b). The following result establishes an intimate connection between non-oscillatory behavior and the l.p. case for τ at an endpoint. More specifically, we derive a criterion for concluding that τ is in the l.p. case at an endpoint in the situation where τ − λ is non-oscillatory at the endpoint and p has fixed sign in a neighborhood of the endpoint. The proof of this result relies on the existence of principal solutions, as established in Theorem 11.4, as well as the technique for constructing non-principal solutions described in Theorem 11.6. This condition is well known within the context of traditional three-term Sturm-Liouville differential expressions of the form τ₀u = r^{-1}[−(pu′)′ + qu], where p > 0, r > 0 a.e. and p^{-1}, r, q ∈ L¹_loc((a, b)), etc. It was first derived by Hartman [70] in the particular case p = r = 1 in 1948. Three years later, Rellich [133] extended the result to the general three-term case under some additional smoothness assumptions on p, r, and q. These smoothness restrictions, however, are inessential (see also [52, Lemma C.1]). The following result extends this l.p. criterion to the general case governed by Hypothesis 2.1. Theorem 11.7. Assume Hypothesis 2.1 and suppose that there exists c ∈ (a, b) such that p is sign-definite a.e. on (c, b). In addition, suppose that τ − λ is non-oscillatory at b for some λ ∈ R. If ∫^b |r(x)/p(x)|^{1/2} dx = ∞, then τ is in the l.p. case at b. An analogous result holds at a. Proof. Since τ − λ is non-oscillatory at b, there exists a principal solution, say u₀, of (11.1) by Theorem 11.4. If x₀ strictly exceeds the largest zero of u₀ in (c, b), then by Theorem 11.6, u₁ defined by (11.15) (with u = u₀) is a non-principal solution on (x₀, b), and ∫^b dt/|p(t)u₁(t)²| < ∞ by Theorem 11.4 (ii). If τ were in the l.c. case at b, then u₁ would lie in L²((a, b); r(x)dx) near b, and the Cauchy-Schwarz inequality would yield ∫_{x₀}^b |r/p|^{1/2} dx = ∫_{x₀}^b (r u₁²)^{1/2} |p u₁²|^{-1/2} dx ≤ (∫_{x₀}^b r u₁² dx)^{1/2} (∫_{x₀}^b dt/|p u₁²|)^{1/2} < ∞, contradicting the hypothesis. Hence τ is in the l.p. case at b. Corollary 11.8. Assume Hypothesis 2.1. Suppose τ − λ_a is non-oscillatory at a for some λ_a ∈ R and that τ − λ_b is non-oscillatory at b for some λ_b ∈ R. If p is sign-definite in neighborhoods of a and b (the sign of p may be different in the two neighborhoods), and ∫_a |r(x)/p(x)|^{1/2} dx = ∞, ∫^b |r(x)/p(x)|^{1/2} dx = ∞, (11.24) then T_min = T_max is a self-adjoint operator. Proof. By Theorem 11.7, τ is in the l.p. case at a and b. The result now follows from Theorem 5.2. Theorem 11.9. Assume Hypothesis 2.1 and that p > 0 a.e. on (a, b). Suppose there exist λ_a, λ_b ∈ R such that τ − λ_a is non-oscillatory at a and τ − λ_b is non-oscillatory at b. Then T_0, and hence any self-adjoint extension S of the minimal operator T_min, is bounded from below. That is, there exists γ_S ∈ R such that ⟨u, Su⟩_r ≥ γ_S ⟨u, u⟩_r, u ∈ dom(S). (11.25) The proof proceeds by decomposing (a, b): one chooses points c, d ∈ (a, b) and c = c₁ < c₂ < ... < c_N = d such that a real-valued solution f_a of (τ − λ_a)u = 0 is nonvanishing on (a, c] and on each (c_n, c_{n+1}), and a real-valued solution f_b of (τ − λ_b)u = 0 is nonvanishing on [d, b), and introduces the restricted symmetric operators T_{0,(a,c)} f₁ = τf₁, f₁ ∈ dom(T_{0,(a,c)}) = {g|_{(a,c)} : g ∈ dom(T_max), supp(g) ⊂ (a, c)}, (11.30) T_{0,(d,b)} f₂ = τf₂, f₂ ∈ dom(T_{0,(d,b)}) = {g|_{(d,b)} : g ∈ dom(T_max), supp(g) ⊂ (d, b)}, (11.31) T_{0,(c_n,c_{n+1})} f₃ = τf₃, f₃ ∈ dom(T_{0,(c_n,c_{n+1})}) = {g|_{(c_n,c_{n+1})} : g ∈ dom(T_max), supp(g) ⊂ (c_n, c_{n+1})}, 1 ≤ n ≤ N − 1. (11.32) Obviously, T_0 defined by (3.3) is an extension of the direct sum T_{0,⊕} of the operators (11.30)-(11.32). Moreover, T_{0,⊕} ⊂ T̄_{0,⊕} ⊂ T_min, and any self-adjoint extension of T_min is a self-adjoint extension of T_{0,⊕}.
Since the deficiency indices of T min are at most 2, it suffices to show that T 0,⊕ is bounded from below. (11.34) Subsequently, by [156, Corollary 2, p. 247], (11.34) implies that any self-adjoint extension of T 0,⊕ (hence, any self-adjoint extension of T min ) is bounded from below since the deficiency indices of T 0,⊕ are finite (in fact, they are at most 2N + 2). It suffices to show that the symmetric operators (11.30)-(11.32) are separately bounded from below; a lower bound for T 0,⊕ is then taken to be the smallest of the lower bounds for (11.30)-(11.32). The proof that T 0,(a,c) and T 0,(d,b) are bounded from below relies on the nonoscillatory assumptions on τ − λ a and τ − λ b . Since (τ − λ a )f a = 0 a.e. on (a, b) and f a does not vanish on (a, c), one can recover q pointwise a.e. on (a, c) by for a.e. x ∈ (a, c). (11.35) Let u ∈ dom T 0,(a,c) be fixed. Using (11.35) in conjunction with the fact that functions in dom T 0,(a,c) vanish in neighborhoods of a and c (to freely perform integration by parts), one computes Denoting the integrand on the right-hand side of (11.36) by F u (x) a.e. in (a, c), algebraic manipulations using the definition of the quasi-derivative yield ≥ 0 for a.e. x ∈ (a, c). (11.37) Therefore, the integral appearing in the right-hand side of (11.36) is nonnegative. Since u ∈ dom T 0,(a,c) is arbitrary, one obtains the lower bound u, T 0,(a,c) u L 2 ((a,c);r(x)dx) ≥ λ a u, u L 2 ((a,c);r(x)dx) , u ∈ dom(T 0,(a,c) ). (11.38) The analogous strategy, using the solution f b , establishes the lower bound for (d,b) ). (11.39) To show that each T 0,(cn,cn+1) , 1 ≤ n ≤ N − 1, is semi-bounded from below, one closely follows the strategy used above to prove semi-boundedness of T 0,(a,c) , noting that since f a is nonvanishing on (c n , c n+1 ), q can be solved for a.e. on the interval (c n , c n+1 ) in the same manner as in (11.35). Then if u ∈ dom T 0,(cn,cn+1) , one obtains an identity which formally reads like (11.36) with the interval (a, c) everywhere replaced by (c n , c n+1 ). Factoring the integrand according to the factorization appearing on the right-hand side of the equality in (11.37) (this time a.e. on (c n , c n+1 )), one infers that u, T 0,(cn,cn+1) u L 2 ((cn,cn+1);r(x)dx) ≥ λ a u, u L 2 ((cn,cn+1);r(x)dx) , u ∈ dom T 0,(cn,cn+1) , 1 ≤ n ≤ N − 1. (11.40) Together, (11.38), (11.39), and (11.40), yield (11.34), and hence (11.25). Corollary 11.10. Assume Hypothesis 2.1 and suppose that p > 0 a. e. on (a, b). If τ is regular on (a, b), then T 0 and hence every self-adjoint extension of T min is bounded from below. Proof. We claim that the differential expression τ is non-oscillatory at a. Indeed, if τ were oscillatory at a, then τ u = 0 has a nontrivial, real-valued solution u a with zeros accumulating at a. Let v denote a nontrivial, real-valued solution of τ u = 0 linearly independent of u a . Then Theorem 11.1 implies that v also has zeros accumulating at a. By Theorem 2.6, u a , v, and their quasi-derivatives have limits at a; by continuity, which yields a contradiction since the Wronskian of u a and v equals a fixed, nonzero constant everywhere in (a, b). Similarly, one shows that τ is non-oscillatory at b. The result now follows by applying Theorem 11.9, with, say, λ a = λ b = 0. Corollary 11.10, under our present general assumptions, has originally been proved by Möller and Zettl [124] using a different approach (and for the general even-order case considered in [157] with a positive leading coefficient). Corollary 11.11. 
Assume Hypothesis 2.1 and suppose p is sign-definite a.e. in (a, b). If τ is regular on (a, b) and λ ∈ R, then any nontrivial, real-valued solution of τ u = λu has only finitely many zeros in (a, b). Proof. By absorbing λ into τ , it suffices to consider the case λ = 0. A nontrivial, real-valued function u satisfying τ u = 0 cannot have zeros accumulating at a point in [a, b]. Definition 11.12. Assume Hypothesis 2.1. The operator T 0 (defined by (3.3)) is said to be bounded from below at a if there exists a c ∈ (a, b) and a λ a ∈ R such that u, T 0 u r ≥ λ a u, u r , u ∈ dom (T 0 ) such that u ≡ 0 on (c, b). (11.43) Similarly, T 0 is said to be bounded from below at b if there exists a d ∈ (a, b) and a λ b ∈ R such that u, T 0 u r ≥ λ b u, u r , u ∈ dom (T 0 ) such that u ≡ 0 on (a, d). (11.44) Theorem 11.13. Assume Hypothesis 2.1. If T 0 is bounded from below at a and p is sign-definite a.e. near a, then there exists an α ∈ R such that for all λ < α, τ − λ is non-oscillatory at a. A similar result holds if T 0 is bounded from below at b. Proof. By assumption, there exists a c ∈ (a, b) such that each self-adjoint extension S (a,c) of τ (a,c) with separated boundary conditions in L 2 ((a, c); r(x)dx) is bounded from below by some α ∈ R. More precisely, this follows from Definition 11.12 and [156, Corollary 2 on p. 247]. Then for each λ < α, the diagonal of the corresponding Green's function G (a,c),λ (x, x), x ∈ (a, c) is nonnegative (cf. [84, Lemma on p. 195]). In fact, since G (a,c),λ is continuous on (a, c) × (a, c) one has y ∈ (a, c), ε > 0. (11.46) Indeed, if x ∈ (a, c), then by continuity along the diagonal, for any δ > 0, there exists an ε(δ) > 0 such that As a result, ε < ε(δ), δ > 0. (11.48) Therefore, one obtains ,c),λ (x, x) + δ, δ > 0, (11.49) and the analogous inequality with "lim inf" replaced by "lim sup." Subsequently taking δ ↓ 0 yields (11.45). Now let u a and u c be solutions of (τ − λ)u = 0 lying in L 2 ((a, c); r(x)dx) near a and c respectively and satisfying the boundary conditions there (if any). If u a had a zero x in (a, c), then y → G (a,c),λ (y, y) would change sign there (note that u c is nonzero in x since otherwise λ would be an eigenvalue of S (a,c) ). Hence u a cannot have a zero in (a, c) which shows that τ − λ is non-oscillatory at a. Corollary 11.14. Assume Hypothesis 2.1 and suppose p > 0 a.e. on (a, b). Then T 0 is bounded from below if and only if there exist µ ∈ R and functions g a , g b ∈ AC loc ((a, b)) such that g [1] a , g [1] b ∈ AC loc ((a, b) e. near a, q ≥ µr − s g e. near b. (11.50) Proof. We first assume in addition that (11.51) Then for the necessity part of the corollary, Theorem 11.13 permits one to choose g a and g b as principal solutions of (τ − µ)u = 0 at a and b, respectively, for µ less than a lower bound of T 0 . For the sufficiency part, one replaces λ a by µ, "=" by "≥", and f a by g a in (11.35) and (11.36). The endpoint b is handled analogously. As originally pointed out in [88,Sect. 3] in the context of traditional Sturm-Liouville operators (i.e., those without distributional potentials), one may replace condition (11.51) by the condition that one (resp., both) of the integrals appearing in (11.51) is (resp., are) convergent. Indeed, the sufficiency proof of Corollary 11.14 is carried out independent of the condition in (11.51). For necessity, Theorem 11.13 permits one to choose g a or g b as a non-principal solution, yielding equality in (11.50). 
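A quick illustration of Corollary 11.14 (our example; we specialize the condition (11.50), whose full form involves the terms −s g^{[1]}/g + (g^{[1]})′/g as in (11.60) below): in the classical case s = 0 with p > 0, choosing the constant functions g_a = g_b = 1 gives g_a^{[1]} = g_b^{[1]} = 0, and the criterion reduces to

\[
q \,\ge\, \mu\, r \quad \text{a.e. near } a \text{ and near } b
\quad\Longrightarrow\quad T_0 \text{ is bounded from below,}
\]

recovering the familiar semiboundedness criterion for potentials bounded below (relative to the weight r) near the endpoints.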
The disconjugacy property has been extensively studied for Sturm-Liouville expressions with standard L 1 loc -coefficients, and in this connection we refer to the monograph by Coppel [29]. The proof of Theorem 11.13 immediately yields the following disconjugacy result for the distributional Sturm-Liouville expressions studied throughout this manuscript. Corollary 11.16. Assume Hypothesis 2.1, and suppose p > 0 a.e. on (a, b). If T 0 is bounded from below, then there is an α ∈ R such that (τ − λ) is disconjugate for every λ < α. If τ is regular on (a, b), then there exists a α 0 ∈ R, such that for λ < α 0 , each nontrivial solution to (τ − λ)u = 0 has at most one zero in the closed interval [a, b]. Proof. Repeating the proof of Theorem 11.13 with c = b shows that there is an α ∈ R such that for each λ < α there is a solution of (τ − λ)u = 0 which has no zero in (a, b). Now the claim follows immediately from Theorem 11.1. To prove the final statement, let α denote a real number (shown to exist in the first part of the corollary) such that for every λ < α there is a solution of (τ − λ)u = 0 which has no zeros in (a, b). Now, let α 0 = min{α, inf(σ(S 0,0 ))}, where S 0,0 denotes the Dirichlet extension of T min defined by (6.23) with ϕ a = ϕ b = 0 and the functionals BC 1 a and BC 1 b chosen such that (cf. Lemma 6.1) If for some λ < λ min a solution to (τ − λ)u = 0, call it u 0 , has more than one zero, then necessarily u 0 (a) = u 0 (b) = 0, as u has no zeros in (a, b) because λ < α. Consequently, u 0 is an eigenfunction of S 0,0 with eigenvalue λ < inf σ S 0,0 , an obvious contradiction. We conclude this section with an explicit characterization of the Friedrichs extension [47] of T 0 (assuming the latter to be bounded from below). Before proceeding with this characterization, we recall the intrinsic description of the Friedrichs extension S F of a densely defined, symmetric operator S 0 in a complex, separable Hilbert space H (with scalar product denoted by (·, ·) H ), bounded from below, due to Freudenthal [46] in 1936. Assuming that S 0 ≥ γ S 0 I H , Freudenthal's characterization describes S F by Then, as is well-known, (11.56) Equations (11.55) and (11.56) are intimately related to the definition of S F via (the closure of) the sesquilinear form generated by S 0 as follows: One introduces the sesquilinear form Since S 0 ≥ γ S 0 I H , the form q S0 is closable and we denote by q S0 the closure of q S0 . Then q S0 ≥ γ S 0 is densely defined and closed. By the first and second representation theorem for forms (cf., e.g., [96,Sect. 6.2]), q S0 is uniquely associated with a self-adjoint operator in H. This operator is precisely the Friedrichs extension, S F ≥ γ S 0 I H , of S 0 , and hence, (11.58) The following result describes the Friedrichs extension of T 0 (assumed to be bounded from below) in terms of functions that mimic the behavior of principal solutions near an endpoint. The proof closely follows the treatment by Kalf [88] in the special case s = 0 a.e. on (a, b). (For more recent results on the Friedrichs extension of ordinary differential operators we also refer to [112], [124], [125], [128], [129], [136], and [159].) Theorem 11.17. Assume Hypothesis 2.1 and suppose p > 0 a.e. on (a, b). If T 0 is bounded from below by γ 0 ∈ R, T 0 ≥ γ 0 I r , which by Corollary 11.14 is equivalent to the existence of µ ∈ R and functions g a and g b satisfying g a , g b , g [1] a , g [1] b ∈ AC loc ((a, b)), g a > 0 a.e. near a, g b > 0 a.e. 
near b, (11.59) and q ≥ µr − s g [1] a g a + g [1] a g a a.e. near a, q ≥ µr − s g e. near b, (11.60) then the Friedrichs extension S F of T 0 is characterized by In particular, (11.62) Proof. Let S denote the operator defined by (11.61) and S F the Friedrichs extension of T 0 . We begin by showing S is symmetric. In order to do this, it suffices to prove S is densely defined and u, Su r ∈ R, u ∈ dom (S) . (11.63) Since functions in dom (T 0 ) are compactly supported one has dom (T 0 ) ⊂ dom (S), which guarantees that S is densely defined. Hence it remains to show (11.63). To this end, let a < c 0 < d 0 < b such that g a > 0 on (a, c 0 ], g b > 0 on [d 0 , b) and consider the self-adjoint operator S (c0,d0) on L 2 ((c 0 , d 0 ); r(x)dx) induced by τ with the boundary conditions The proof of Theorem 11.13 shows that the solutions u λ of (τ − λ)u = 0, λ ∈ R, satisfying the initial conditions u λ (c 0 ) = g a (c 0 ) and u [1] λ (c 0 ) = g [1] a (c 0 ), are positive as long as λ lies below the smallest eigenvalue λ 0 of S (c0,d0) (which is bounded from below by assumption). In particular, this guarantees that the eigenfunction u λ0 is nonnegative on [c 0 , d 0 ] and hence even positive since it would change sign at a zero. As a consequence, the function h defined by is positive on (a, b) and satisfies h ∈ AC loc ((a, b)), h [1] ∈ AC loc ((a, b)). Note that in particular h is a scalar multiple of g b near b and hence (11.59) and (11.60) hold with g b replaced by h. Now fix some f ∈ dom (S) and let a < c < d < b. In light of the following analog of Jacobi's factorization identity, in (a, b), (11.66) one computes Taking P = ph 2 and v = f /h in the subsequent Lemma 11.18, one infers that ∈ (a, b), (11.69) where the function H γ is defined as in (11.85). We note that H γ is well-defined for any γ ∈ (a, b) in light of the fact that 1/p ∈ L 1 loc ((a, b); dx) and the function h ∈ AC loc ((a, b)) is strictly positive on any compact subinterval of (a, b). Subsequently, an application of Hölder's inequality yields ∈ (a, b), (11.70) noting that square integrability of P 1/2 v near x = a is guaranteed by the condition f ∈ dom (S). Moreover, the integral Since f ∈ dom (S) was arbitrary, (11.63) follows. We now show that S coincides with S F , the Friedrichs extension of T 0 . It suffices to show S F ⊂ S; self-adjointness of S F and symmetry of S then yield S F = S. In turn, since S F is a restriction of T max (because the self-adjoint extensions of T 0 are precisely the self-adjoint extensions of T min , and the latter are self-adjoint restrictions of T max ), it suffices to verify the two integral conditions appearing in (11.61) are satisfied for elements of dom (S F ). Freudenthal's characterization of the domain of the Friedrichs extension for the present setting is (11.77) that lim j→∞ f j − f 2,r = 0 and lim j,k→∞ Let f ∈ dom (S F ) and {f j } ∞ j=1 a sequence with the properties in (11.77). Define f j,k = f j − f k , j, k ∈ N, and choose numbers c and d in the interval (a, b) such that g a and g b are positive on (a, c] and [d, b), respectively. Then using the identities On the other hand, choosing ν ∈ R such that the existence of such a ν being guaranteed by Lemma A.3 (cf., in particular, (A.34)), and taking κ = |µ| + |ν|, one obtains (11.82) Moreover, the left-hand side of (11.82) goes to zero as j, k → ∞, and as a result, there exist functions f a and f b such that on (a, c) and (d, b), respectively. Consequently, one infers that (11.84) and it follows that f ∈ dom (S). 
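In the classical case s = 0 a.e. on (a, b), the factorization identity invoked as (11.66) and the ensuing energy computation take the following standard form; this is a sketch of the special case only, assuming h > 0 with h, ph' ∈ AC_loc((a, b)) and −(ph')' + qh = μrh:

```latex
% Classical (s = 0 a.e.) form of Jacobi's factorization identity and the
% resulting quadratic-form inequality; a sketch of the special case.
-\bigl(p u'\bigr)' + q u - \mu r u
   \;=\; -\,\frac{1}{h}\,\Bigl(p\,h^{2}\,\bigl(u/h\bigr)'\Bigr)',
\qquad
\int_a^b \Bigl[\, p\,|u'|^{2} + (q - \mu r)\,|u|^{2} \Bigr]\,dx
   \;=\; \int_a^b p\,h^{2}\,\bigl|(u/h)'\bigr|^{2}\,dx \;\geq\; 0
```

for u absolutely continuous and compactly supported in (a, b). In the distributional setting, pu' is replaced by the quasi-derivative u^{[1]} and h by the positive function constructed in the proof; the same identity also underlies the computations (11.35)-(11.37) in the proof of Theorem 11.9.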
This completes the proof that S F ⊆ S and hence, S F = S. To prove (11.62), note that in light of the inequalities in (11.60), it suffices to prove that the positive part of q − h [1] /h + sh [1] /h times |f | 2 is integrable near a and b for each f ∈ dom (S F ). This follows immediately from (11.67) and (11.73). The proof of Theorem 11.17 relied on the following result: Lemma 11.18. ([89, Lemma 1], [88]) Let P > 0, 1/P ∈ L 1 loc ((a, b); dx), and In addition, suppose that v ∈ AC loc ((a, b)) satisfies The conditions on g a and g b in (11.59) are reminiscent of the integral conditions satisfied by principal solutions to the equation (τ − λ)u = 0, assuming the latter is non-oscillatory. One can just as well characterize the Friedrichs extension of T 0 in terms of functions g a and g b satisfying the assumptions of Theorem 11.17 but for which one (or both) of the integrals in (11.59) is convergent (these conditions are equivalent to T 0 being bounded from below, see the proof of Corollary 11.14). In these cases, the characterization requires a certain boundary condition as our next result shows. Theorem 11.19. Assume Hypothesis 2.1 and suppose p > 0 a. e. on (a, b). If T 0 is bounded from below by γ 0 ∈ R, T 0 ≥ γ 0 I r , which by Corollary 11.14 is equivalent to the existence of µ ∈ R and functions g a and g b satisfying g a , g b , g [1] a , g [1] b ∈ AC loc ((a, b)), g a > 0 a.e. near a, g b > 0 a.e. near b, (11.87) and q ≥ µr − s g e. near a, q ≥ µr − s g then the Friedrichs extension S F of T 0 is characterized by In particular, (11.90) We omit the obvious case where the roles of a and b are interchanged, but note that if (11.87) is replaced by one obtains Proof. Let S denote the operator defined by (11.89) and S F the Friedrichs extension of T 0 . To show that S is symmetric, one can follow line-by-line the argument for (11.63)-(11.68), so that (11.68) remains valid. One can then show that (11.73) continues to hold under the finiteness assumption in (11.87) (cf., the beginning of the proof of [88,Remark 3]). Repeating the argument (11.74)-(11.76) then shows that S is symmetric. In order to conclude S = S F , it suffices to prove S F ⊆ S. In turn, it is enough to prove dom (S F ) ⊆ dom (S). To this end, let f ∈ dom (S F ). Since (11.77)-(11.84) can be repeated without alteration, the problem reduces to proving lim x↓a |f (x)| g a (x) = 0. (11.93) One takes a sequence {f n } ∞ n=1 ⊂ dom (T 0 ) with the properties lim n→∞ f n − f 2,r = 0 and lim n,m→∞ f n − f m , T 0 (f n − f m ) r = 0, (11.94) and let {f n k } ∞ k=1 denote a subsequence converging to f pointwise a.e. in (a, b) as k → ∞. Since f n k , f are continuous on (a, b), f n k actually converge pointwise everywhere to f on (a, b) as k → ∞. Then the proof of (11.90) is exactly the same as the corresponding fact (11.62) in Theorem 11.17. Corollary 11.20. Assume Hypothesis 2.1 and suppose p > 0 a. e. on (a, b). If τ is regular on (a, b), then the Friedrichs extension S F of T 0 is of the form Proof. Let g a , g b be the solutions of τ u = 0 with the initial conditions g a (a) = g b (b) = 1 and g [1] a (a) = g and similarly for the endpoint b. Now the result follows from Theorem 11.19 and in particular (11.92). The Krein-von Neumann Extension in the Regular Case In this section, we consider the Krein-von Neumann extension S K of T 0 ≥ εI r , ε > 0. The operator S K , like the Friedrichs extension S F of T 0 , is a distinguished, in fact, extremal nonnegative extension of T 0 . 
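For orientation, we record the standard forms of the extremal properties referred to in (12.3), (12.5), and (12.8) below (Krein [107], von Neumann [155]); the precise displays in the manuscript may differ in inessential details:

```latex
% Standard statements behind (12.3), (12.5), and (12.8).
0 \,\leq\, S_K \,\leq\, S \,\leq\, S_F
\quad\Longleftrightarrow\quad
\bigl(S_F + a I_{\mathcal H}\bigr)^{-1} \leq \bigl(S + a I_{\mathcal H}\bigr)^{-1}
\leq \bigl(S_K + a I_{\mathcal H}\bigr)^{-1}, \quad a > 0,
```
```latex
% If S_0 >= eps I_H for some eps > 0, then (cf. (12.5)):
\operatorname{dom}(S_K) \;=\; \operatorname{dom}(S_0) \,\dotplus\, \ker\bigl(S_0^{*}\bigr).
```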
Temporarily returning to the abstract considerations (11.53)-(11.58) in connection with the Friedrichs extension of S 0 , an intrinsic description of the Krein-von Neumann extension S K of S 0 ≥ 0 has been given by Ando and Nishio [7] in 1970, where S K has been characterized by with lim We recall that A ≤ B for two self-adjoint operators in H if dom |A| 1/2 ⊇ dom |B| 1/2 and where U C denotes the partial isometry in H in the polar decomposition of a densely defined closed operator C in H, C = U C |C|, |C| = (C * C) 1/2 . The following is a fundamental result to be found in M. Krein's celebrated 1947 paper [107] (cf. also Theorems 2 and 5-7 in the English summary on page 492): Theorem 12.1. Assume that S 0 is a densely defined, nonnegative operator in H. Then, among all nonnegative self-adjoint extensions of S 0 , there exist two distinguished ones, S K and S F , which are, respectively, the smallest and largest (in the sense of order between self-adjoint operators, cf. (12.2)) such extensions. Furthermore, a nonnegative self-adjoint operator S is a self-adjoint extension of S 0 if and only if S satisfies 3) In particular, (12.3) determines S K and S F uniquely. In addition, if S 0 ≥ εI H for some ε > 0, one has S F ≥ εI H , and (12.6) in particular, Here the symbol represents the direct (though, not direct orthogonal) sum of subspaces, and the operator inequalities in (12.3) are understood in the sense of (12.2) and hence they can equivalently be written as for some (and hence for all ) a > 0. (12.8) In addition to Krein's fundamental paper [107], we refer to the discussions in [6], [10], [11], [65]. It should be noted that the Krein-von Neumann extension was first considered by von Neumann [155] in 1929 in the case where S 0 is strictly positive, that is, if S 0 ≥ εI H for some ε > 0. (His construction appears in the proof of Theorem 42 on pages 102-103.) However, von Neumann did not isolate the extremal property of this extension as described in (12.3) and (12.8). M. Krein [107], [108] was the first to systematically treat the general case S 0 ≥ 0 and to study all nonnegative self-adjoint extensions of S 0 , illustrating the special role of the Friedrichs extension S F and the Krein-von Neumann extension S K of S 0 as extremal cases when considering all nonnegative extensions of S 0 . For a recent exhaustive treatment of self-adjoint extensions of semibounded operators we refer to [9]- [14]. For classical references on the subject of self-adjoint extensions of semibounded operators (not necessarily restricted to the Krein-von Neumann extension) we refer to Birman [22], [23], Freudenthal [46], Friedrichs [47], Grubb [64], [66], Krein [108], Straus [152], and Visik [154] (see also the monographs by Akhiezer Throughout the remainder of this section, we assume that τ is regular on (a, b) and that the coefficient p is positive a. e. on (a, b). That is, we shall make the following assumptions: Hypothesis 12.2. Assume Hypothesis 2.1 holds with p > 0 a.e. on (a, b) and that τ is regular on (a, b). Equivalently, we suppose that p, q, r, s are Lebesgue measurable on (a, b) with p −1 , q, r, s ∈ L 1 ((a, b); dx) and real-valued a.e. on (a, b) with p, r > 0 a. e. on (a, b). Assuming Hypothesis 12.2, we now provide a characterization of the Krein-von Neumann extension, S K of T 0 (resp., T min ), in the situation where T 0 is strictly positive (in the operator sense). An elucidation along these lines for the case s = 0 a. e. on (a, b) was set forth in [26]. Theorem 12.3. 
Assume Hypothesis 12.2 and suppose that the associated minimal operator T min is strictly positive in the sense that there exists ε > 0 such that Then the Krein-von Neumann extension S K of T min is given by (cf. (6.24)) g [1] (a) , (12.10) where Here u j (·) j=1,2 are positive solutions of τ u = 0 determined by the conditions Proof. The assumption that T min is strictly positive implies that 0 is a regular point of T min (cf. the paragraph preceding Lemma 4.2), and since the deficiency indices of T min are equal to two (one notes that it is this fact that actually implies the existence of solutions u j , j = 1, 2, satisfying the properties (12.12)), it follows that dim ker T max = 2 (12.13) and a basis for ker T max is given by u j (·) j=1,2 . In this situation, the Krein-von Neumann extension S K of T min is given by (cf. (12.5)), dom S K = dom T min ker T max . (12.14) Alternatively, since S K is a self-adjoint extension of T min , its domain can also be specified by boundary conditions at the endpoint of (a, b) which we characterize next. If u ∈ dom S K , then in accordance with (12.14), for certain functions f ∈ dom T min and c 1 , one infers that u(a) = c 2 and u(b) = c 1 . (12.17) Consequently u [1] (x) = f [1] (x) + u(b)u [1] 1 (x) + u(a)u [1] 2 (x), x ∈ [a, b]. (12.18) Evaluating separately at x = a and x = b, yields the (non-separated) boundary conditions that u must satisfy; u [1] (a) = u(b)u [1] 1 (a) + u(a)u [1] 2 (a), u [1] (b) = u(b)u [1] 1 (b) + u(a)u [1] 2 (b). (12.19) Since u [1] 1 (a) = 0 (otherwise, u 1 (·) ≡ 0 on [a, b]), the boundary condition in (12.19) may be recast as 20) with R K given by (12.11). Moreover, R K ∈ SL 2 (R). To see this, first note that the entries of R K are real-valued. Additionally, the fact that − u [1] 1 (a) = W u 1 (·), u 2 (·) = u [1] 2 (b) (12.21) implies det R K = 1. As a result, we have shown S K ⊆ S R=R K ,φ=0 , where S R=R K ,φ=0 is the self-adjoint restriction of T max corresponding to non-separated boundary conditions generated by the matrix R K and angle φ = 0 (cf. (6.24)). On the other hand, since S K and S R=R K ,φ=0 are self-adjoint, one obtains the equality S K = S R=R K ,φ=0 . That is to say, the Krein-von Neumann extension of T min is the self-adjoint extension corresponding to non-separated boundary conditions generated by R = R K and φ = 0. Example 12.4. In the special case when q = 0 a.e. on (a, b), the above calculations become even more explicit. In this case, we denote the Krein-von Neumann restriction by S One computes and where τ (0) denotes the differential expression of (2.2) in the present special case q = 0 a.e. in (a, b). It follows that u (0) j (·) j=1,2 ⊂ dom T * min forms a basis for ker T * min = ker T max . In addition, the equalities in (12.12) are satisfied. With this pair of basis vectors, one infers that the matrix R = R K read: Positivity Preserving and Improving Resolvents and Semigroups in the Regular Case In our final section, we prove a criterion for a self-adjoint extension of T min to generate a positivity improving resolvent or, equivalently, semigroup. The notion of a positivity improving resolvent or semigroup proves critical in a study of the smallest eigenvalue of a self-adjoint restriction, as it guarantees that the lowest eigenvalue is non-degenerate and possesses a nonnegative eigenfunction. In fact, we will go a step further and prove that the notions of positivity preserving and positivity improving are equivalent in the regular case. 
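The construction of R_K in Theorem 12.3 lends itself to a direct numerical illustration. The endpoint normalizations (12.12) are not reproduced above; the proof suggests u₁(a) = 0, u₁(b) = 1 and u₂(a) = 1, u₂(b) = 0, which the following sketch adopts for the classical case s = 0 a.e. (so u^{[1]} = pu'). This is our own illustrative code, not the manuscript's computation, and it assumes T_min is strictly positive so that w(b) ≠ 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

# -(p u')' + q u = 0 on (a, b), written as a first-order system in
# y = (u, u^[1]) with quasi-derivative u^[1] = p u' (case s = 0).
def tau_system(p, q):
    def rhs(x, y):
        u, u1 = y
        return [u1 / p(x), q(x) * u]
    return rhs

def krein_matrix(p, q, a, b):
    rhs = tau_system(p, q)
    # Fundamental system at a: v(a)=1, v^[1](a)=0 and w(a)=0, w^[1](a)=1.
    v = solve_ivp(rhs, (a, b), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    w = solve_ivp(rhs, (a, b), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    vb, v1b = v.y[:, -1]
    wb, w1b = w.y[:, -1]
    # u1 = w / w(b):           u1(a) = 0, u1(b) = 1
    # u2 = v - (v(b)/w(b)) w:  u2(a) = 1, u2(b) = 0   (needs w(b) != 0)
    u1_a, u1_b = 1.0 / wb, w1b / wb             # quasi-derivatives of u1 at a, b
    u2_a, u2_b = -vb / wb, v1b - vb * w1b / wb  # quasi-derivatives of u2 at a, b
    # Boundary conditions (12.19) recast as (u(b), u^[1](b))^T = R_K (u(a), u^[1](a))^T.
    R11 = -u2_a / u1_a
    R12 = 1.0 / u1_a
    R21 = u2_b + u1_b * R11
    R22 = u1_b * R12
    return np.array([[R11, R12], [R21, R22]])

# Sanity check for p = 1, q = 0 on (0, 1): the sketch yields
# R_K = [[1, 1], [0, 1]] with det R_K = 1, consistent with (12.21).
R = krein_matrix(lambda x: 1.0, lambda x: 0.0, 0.0, 1.0)
print(R)
print(np.linalg.det(R))
```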
The self-adjoint restrictions of T max are characterized in terms of the functionals BC j a and BC j b , j = 1, 2, in Section 6 (cf. (6.1) and (6.2)), and assuming Hypothesis 12.2 throughout this section, the functionals BC j a and BC j b , j = 1, 2 take the form of point evaluations of functions and their quasi-derivatives at the boundary points of (a, b) as in Lemma 6.1, that is, Since under the assumption of Hypothesis 12.2, τ is in the l.c. case at both endpoints of the interval (a, b), all real self-adjoint restrictions of T max are parametrized as described in Theorem 6.4 with φ = 0. Hence, we adopt the following notational convention: S ϕa,ϕ b denote the (real) selfadjoint restrictions of T max corresponding to the separated boundary conditions (6.23) in Theorem 6.4, that is, and S R denote the real self-adjoint restrictions of T max corresponding to the coupled boundary conditions (6.24) with φ = 0 in Theorem 6.4, that is, g [1] (b) = R g(a) g [1] (a) . Definition 13.1. A bounded operator A defined on L 2 (M ; dµ) is called positivity preserving (resp., positivity improving) if 0 = f ∈ L 2 (M ; dµ), f ≥ 0 µ-a.e. implies Af ≥ 0 (resp., Af > 0) µ-a.e. (13.9) In the special case where A is a bounded integral operator in L 2 ((a, b); r(x)dx) with integral kernel denoted by A(·, ·), it is well-known that The next and principal result of this section provides a necessary and sufficient condition for a (necessarily real) self-adjoint restriction of T max (resp., extension of T min ) to generate a positivity preserving resolvent and semigroup. We recall that positivity preserving requires reality preserving and hence it suffices to consider real self-adjoint extensions of T min . In fact, we will prove more and show that the notions of positivity preserving and positivity improving are, in fact, equivalent in the regular case. (i) In the case of separated boundary conditions, all self-adjoint extensions of T min lead to positivity improving semigroups and resolvents. More precisely, for all ϕ a , ϕ b ∈ [0, π), e −tSϕ a ,ϕ b is positivity improving for all t ≥ 0, equivalently, (S ϕa,ϕ b − λI r ) −1 is positivity improving for all λ < inf(σ(S ϕa,ϕ b )). In addition, (13.12) is positivity improving, implying the inequality ). (13.13) In particular, S 0,0 )). (13.14) Here G z,ϕa,ϕ b (·, ·), z ∈ ρ(S ϕa,ϕ b ) (resp., G z,0,0 (·, ·), z ∈ ρ(S 0,0 )), denotes the Green's function (i.e., the integral kernel of the resolvent ) of S ϕa,ϕ b (resp., of S 0,0 ). In order to establish necessity of the conditions R 1,2 < 0 or R 1,2 = 0 and R 1,1 > 0, suppose that e −tS R is positivity preserving for all t ≥ 0. Then by the Beurling-Deny criterion, Theorem 13.2 (iii), condition (13.41) holds. In particular, for R 1,2 = 0, equation (13.7) and inequality (13.41 is real-valued, then one verifies that |f | [1] = sgn(f )f [1] a.e. in (a, b), where sgn(f ) equals f /|f | if f = 0 and is zero otherwise, as a special case of (13.45). Consequently, in the case where f is real-valued, the integral appearing in (13.52) vanishes, and the inequality reduces to ∈ AC([a, b]) and f 0 (a)f 0 (b) < 0, one infers that f 0 ∈ dom(Q S R ). Taking f 0 as a test function in (13.53), one concludes that R 1,2 < 0. On the other hand, if R 1,2 = 0, equation (13.8) yields that the implication and the inequality (13.41) are satisfied provided the boundary condition h(b) = R 1,1 h(a) in dom(Q S R ) holds. This necessitates the condition R 1,1 > 0. 
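As a concrete check of part (i), one may inspect the simplest special case p = r = 1, q = s = 0 a.e. on (0, 1) with Dirichlet conditions (the setting Feller treated, as noted below), where the Green's function of the Dirichlet extension at z = 0 is explicitly G(x, y) = min(x, y)(1 − max(x, y)), strictly positive in the interior, in line with the kernel criterion (13.10). A minimal numerical sketch (our own illustration, not the manuscript's computation):

```python
import numpy as np

# Green's function of -u'' with Dirichlet conditions on (0, 1) at z = 0:
# G(x, y) = min(x, y) * (1 - max(x, y)); strictly positive for x, y in (0, 1).
def G(x, y):
    return np.minimum(x, y) * (1.0 - np.maximum(x, y))

x = np.linspace(0.01, 0.99, 99)
X, Y = np.meshgrid(x, x)
assert (G(X, Y) > 0).all()  # kernel positivity <=> positivity improving, cf. (13.10)

# Resolvent applied to a nonnegative f supported near 1/4: the output is
# strictly positive everywhere, i.e., the resolvent is positivity improving.
f = np.where(np.abs(x - 0.25) < 0.05, 1.0, 0.0)
u = (G(X, Y) * f[None, :]).sum(axis=1) * (x[1] - x[0])  # quadrature for the integral of G(., y) f(y)
assert (u > 0).all()
```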
The statement concerning positivity preserving of the resolvents follows from Theorem 13.2 (iii). This completes the proof that the conditions in (13.15) are necessary and sufficient for positivity preserving of e −tS R for all t ≥ 0, or equivalently, positivity preserving of (S R − λI r ) −1 for all λ < inf(σ(S R )). We chose to rely on different strategies of proof of positivity preserving in the case of separated and coupled boundary conditions to illustrate the different possible approaches in this context. The principal observation in the proof of Theorem 13.3 in connection with separated boundary conditions is the statement in (13.21) that the corresponding Green's function is nonnegative along the diagonal, and follows from nonnegativity of the resolvent (in the operator sense) at points below the spectrum of S ϕa,ϕ b . A much more general result regarding nonnegativity along the diagonal of the (continuous) integral kernel associated with a nonnegative integral operator may be found in [84,Lemma on p. 195] in connection with Mercer's theorem [84,Theorem 8.11]. In the particular case where p = r = 1, q = s = 0 a.e. on (a, b) in Theorem 13.3, the positivity preserving result has been derived by Feller [44] (see also [48, p. 147]). In fact, he considered a more general situation involving a Radon-Nikodym derivative (i.e., he worked in the context of a measure-valued coefficient). We also mention that the sign of the Green's function associated with the periodic Hill equation has been studied in connection with the existence of so-called comparison principles in [25] (and the references therein). The fact that positivity preserving and positivity improving are equivalent notions in the regular case appears to be a new result. Appendix A. Sesquilinear Forms in the Regular Case In this appendix we discuss the underlying sesquilinear forms associated with selfadjoint extensions of T min in the regular case with separated boundary conditions, closely following the treatment in [54, Appendix A]. The standing assumption throughout this appendix will be the following: Hypothesis A.1. Assume Hypothesis 2.1 holds with p > 0 a.e. on (a, b) and that τ is regular on (a, b). Equivalently, we suppose that p, q, r, s are Lebesgue measurable on (a, b) with p −1 , q, r, s ∈ L 1 ((a, b); dx) and real-valued a.e. on (a, b) with p, r > 0 a.e. on (a, b). Our goal is to explore relative boundedness of certain sesquilinear forms in the Hilbert space L 2 ((a, b); r(x)dx) defined in connection with τ . Assuming Hypothesis A.1, one may use the function q to define a sesquilinear form in L 2 ((a, b); r(x)dx) as follows f, g ∈ dom(Q q/r ) = h ∈ L 2 ((a, b); r(x)dx) (|q|/r) 1/2 h ∈ L 2 ((a, b); r(x)dx) . Regarding item (ii), we only show A * α,β = A + α,β as the case A + α,β * = A α,β is handled analogously. Moreover, since A ∞,∞ ⊆ A α,β (this follows by definition of the operators) implies A * α,β ⊆ A * ∞,∞ , we only prove A * ∞,∞ = A + ∞,∞ , the other cases follow from an additional integration by parts. Therefore, first note that Consequently, ran A ∞,∞ is contained in the kernel of the linear functional k → k, f − g r , k ∈ L 2 ((a, b); r(x)dx). On the other hand, since υKg 0 = g 0 for all g 0 ∈ L 2 ((a, b); r(x)dx), one infers that g On the other hand, (A.17) shows that f − g is orthogonal to ran A ∞,∞ , and because of (A.18), there exists a constant c such that f = g+c(pr) −1/2 e x a s(t)dt . 
It is a simple matter to check that (pr)^{-1/2} e^{∫_a^x s(t)dt} ∈ dom(A⁺_{∞,∞}) (in fact, υ⁺ applied to (pr)^{-1/2} e^{∫_a^x s(t)dt} is zero). Therefore, by (A.18), f ∈ dom(A⁺_{∞,∞}), completing the proof of item (ii). Proof. It suffices to consider the Dirichlet case α = β = ∞, the other cases being similar. We denote by S_{∞,∞} the operator defined in (A.39) for α = β = ∞ and by Ŝ_{∞,∞} the unique operator associated with Q_{∞,∞}. Choose u ∈ dom(Q_{∞,∞}) and v ∈ dom(Ŝ_{∞,∞}). Then an integration by parts yields
Acknowledgments. We are indebted to the anonymous referee for the extraordinary efforts exerted in refereeing our manuscript, and for the numerous comments and suggestions kindly provided to us. G.T. gratefully acknowledges the stimulating atmosphere at the Isaac Newton Institute for Mathematical Sciences in Cambridge during October 2011, where parts of this paper were written as part of the international research program on Inverse Problems.
Oversimplification and Misplaced Blame will Not Solve the Complex Kidney Underutilization Problem
Oversimplification and Misplaced Blame will Not Solve the Complex Kidney Underutilization Problem Disclosures: D. Stewart reports the following: Consultancy: Hansa (consulting through UNOS Solutions); Veloxis; Research Funding: Hansa Biopharma; and Advisory or Leadership Role: SRTR Task 5 steering committee; SRTR Review Committee Ex officio member; CMS/TAQIL/ETCLC National Faculty member; CMS ESRD Treatment Choices Model Collaborative, member of National Faculty. G. Gupta reports the following: Consultancy: CareDx; Research Funding: Merck Pharmaceuticals; Honoraria: Alexion; CareDx; Mallinckrodt; Natera; Veloxis; Advisory or Leadership Role: Frontiers of Medicine; Speakers Bureau: Alexion; CareDx; Mallinckrodt; Veloxis; and Other Interests or Relationships: National Kidney Foundation Virginia, AST KPOP Executive Committee, AST Transplant Nephrology Fellowship Accreditation Committee. B. Tanriover has nothing to disclose. The Washington Post article, "70 deaths, many wasted organs are blamed on transplant system errors," 1 and Senate Hearings of August 3, 2022, 2 both gave the misleading impression that the organ discard problem is primarily attributable to transportation-related mistakes, other human or system errors, and outdated computer technology that slows down the organ allocation process. As in all medical fields, 3 transplantation is not immune to avoidable mishaps, which have indeed led directly to organs being rendered unusable. 4 And the process of allocating less-than-ideal organs can indeed be painfully slow: so slow, in fact, that viable organs sometimes go unused due to the combined risk of elevated cold ischemia time (a common offer refusal reason) 5 and whatever factors led to the organ being deemed less-than-ideal in the first place. 6,7 But the predominant drivers of a nearly 25% kidney discard rate are not mishaps or poor technology. Rather, the US transplant system suffers from an organ offer refusal problem: far too many offers of imperfect but transplant-quality kidneys are refused on behalf of (or directly by) patients, prolonging the time it takes to find a clinically suitable 'home' (patient) at a transplant center willing to take on the risk. In fact, the mean match run sequence number among accepted kidneys was recently estimated at 665, indicating it is not atypical for hundreds, even thousands, of offers to be refused prior to finally securing an acceptance. 8 Once a kidney has a firm acceptance, the discard rate is only about 5%, 8,9 indicating that approximately 80% of the discard problem is attributable to inability to find an acceptor; a minority of discards occur post-acceptance, after an unexpected incident such as a positive crossmatch, transportation delay, etc. The most common kidney discard reason reported by Organ Procurement Organizations (OPOs) is, in fact, "no recipient located - list exhausted," 8,10,11 suggesting the OPO attempted in vain to find an acceptor among all possible candidates. In an era of organ scarcity, where even the lowest quality kidneys have been shown to confer a survival (and likely quality of life) benefit over dialysis for many patients, [12][13][14] how can it be that transplant decision-making seems to reflect an era of plenty?
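The "approximately 80%" attribution can be reproduced with a back-of-the-envelope calculation, under the simplifying assumptions (ours, with rounded rates from the text) that the ~5% rate applies to firmly accepted kidneys and that recovered kidneys that never find an acceptor are all discarded:

```python
# Toy accounting for the ~25% overall kidney discard rate.
total = 100.0                # recovered kidneys (arbitrary scale)
overall_discard = 0.25       # ~25% overall discard rate
post_accept_discard = 0.05   # ~5% of firmly accepted kidneys still discarded

# If A kidneys are firmly accepted: transplants = 0.95 * A = 75, so A ~ 78.9.
accepted = total * (1 - overall_discard) / (1 - post_accept_discard)
never_accepted = total - accepted
share_no_acceptor = never_accepted / (total * overall_discard)
print(f"accepted ~ {accepted:.1f}, never accepted ~ {never_accepted:.1f}")
print(f"share of discards with no acceptor ~ {share_no_acceptor:.0%}")  # ~84%, i.e., roughly 80%
```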
Three fundamental realities Recent critics of the US transplant system seemingly fail to appreciate three fundamental realities of kidney transplantation relevant to the organ utilization challenge: (1) Not all donated kidneys are created alike (2) The kidney allocation system is still largely tethered to the "first-come, first-served" fairness principle (3) Major changes to organ allocation policy do not come easy Critical comments about the U.S. kidney discard rate seem to imply that the pool of available kidneys resembles a homogenous, fungible commodity, glossing over the fact that vast clinical differences exist among kidneys offered for transplant. Evaluating a deceased donor organ offer in many ways parallels shopping for a used car. A 20-year old Civic with 175,000 miles on it might be perfectly adequate for "point A to B" travel for a few years, but clearly isn't in the same league as a near-mint Lexus with under 10,000 miles and coming off a short-term lease. Analogously, deceased donor kidneys vary substantially along a quality spectrum that portends highly differential expected graft longevity, depending on donor age, medical history, etc. The decision to accept any particular kidney for any specific patient involves consideration of two key risks -graft failure and disease transmission 15 -along with tremendous uncertainty in how things will turn out for any given case, 16,17 and may very well be the most complex decision in all of medicine. 18 A third, critical dimension driving the complexity of the kidney acceptance decision is the very real possibility that another, "better" kidney soon will be offered to the patient. 19-21 If so, the right decision may indeed be to decline. And despite being substantially overhauled in 2014, 22 the "first-come, first-served" principle -a hallmark of fairness in the US -is still largely entrenched into the kidney allocation system. While arguably "fair," the byproduct is that the transplant candidates that tend to be the first ones offered less-than-ideal kidneys are also among the first to receive offers for much higher quality kidneys, due to having accrued substantial qualified waiting (or dialysis) time. This aspect of the system induces a disincentive to accept imperfect kidneys for the candidates at the top of the match run. These early refusals slow down the placement process, leading to a cascade of further refusals as the cold ischemia clock keeps ticking, 23 and the combination of the organ being 'too old' and 'too cold' requires a boldness that not even the most risk-tolerant transplant program is willing to take. Ways to improve the system There is certainly room for improvement in the operational parameters and DonorNet system features that govern the mechanics of the organ offering process, most notably in the manner and timing in which offers are distributed and responded to. 24 For example, the inefficient use of the 'provisional yes' response has been a long-recognized pain point in the organ placement process. 25 Encouragingly, the entire paradigm for sending and responding to offers is now being reexamined by UNOS, with guidance from the OPTN Operations & Safety Committee. 
26 And UNOS has recently implemented or begun piloting a number of sophisticated and potentially impactful DonorNet system enhancements: allowing users to see the complete donor record and respond to offers on a mobile device; 27 allowing programs to avoid receiving unwanted offers by establishing multicriteria donor filters; 28,29 and displaying novel predictive analytics (e.g., "time to next offer") to combat decision complexity. 30 Still, the impact of operational and system features designed to foster faster progression down the match list has limited potential to address the discard problem if the allocation system still results in first-offered candidates having a built-in disincentive to accept less-thanideal kidneys. One of UNOS's stated aspirational goals is that no matter where on the quality spectrum an organ lies, the first person offered the organ should be the right one to accept it, in terms of medical suitability, fairness, and the decision-calculus surrounding the risks and benefits of accepting vs. waiting for another. But the current system, built on a foundation emphasizing equity -not placement efficiency or maximizing organ utilization -is antithetical to that aspired reality. So how do we go from here -a system with built-in disincentives to accept offers -to there -a system that is still equitable but also "tuned for acceptance"? Kidney allocation policy should be modified in two key ways: (1) by changing the way waiting/dialysis time is used to prioritize patients, and (2) codifying expedited placement pathways to aid OPOs in finding homes for hard-to-place kidneys. In some European countries' implementation of the Senior Program, in which older-donor kidneys are preferentially offered to older candidates, senior patients are required to choose one list -the older (age 65+) donor kidney list, or the all other kidney list -from which to receive offers: they cannot remain on both. 31,32 The choice is clear -wait longer for a higher quality organ, or get transplanted more quickly with a shorter-longevity kidney. The Extended Criteria Donor (ECD) and high Kidney Donor Profile Index (KDPI) programs were implemented in the U.S. such that candidates who choose to receive these offers also remain on the list for ideal quality organs, weakening any incentive to accept the former given the very real possibility of receiving the latter. Reducing patients' options by segmenting the allocation system and forcing a choice between a shorter wait for a shorter longevity kidney, or vice versa, is not the only (nor necessarily best) way to tune the decision-making calculus toward acceptance. Altering how waiting time is used to prioritize patients across the donor quality (e.g., KDPI) spectrum, according to a paradigm coined as "dealing from the bottom of the deck", 33, 34 may be a more effective approach to consider as the OPTN migrates to the continuous distribution framework. 35,36 If a patient just added to the list with little or no waiting time priority for the best kidneys was given first dibs on a higher KDPI kidney, the incentive equation may change in a way that fosters securing offer acceptance earlier in the placement process. 37 Given the drastic differences in kidney utilization practices among kidney programs 38 , the OPTN is implementing a new monitoring framework designed to exert upward pressure on and reduce variability in offer acceptance rates. 
39 Since the longstanding, hyper-focus on early post-transplant success rates has contributed to risk aversion, "balancing the scorecard" in way that calls out overly selective acceptance practices may help nudge the system toward transplanting more organs. However, since significant program-to-program variation is likely to persist, codifying into kidney allocation policy a center-targeted expedited placement pathway may have even greater potential to reduce avoidable discards. 40,41 Currently, to salvage an organ at high risk of discard, OPOs are permitted to deviate from the prescribed patient order and expedite placement to centers with a track record of accepting similar organs, bypassing higher-priority candidates at other centers. However, this practice is not standardized and varies widely, and thus is likely suboptimal in terms of utilization and may be inequitable in terms of organ distribution. 42 Codifying an expedited placement system, as recommended by a National Kidney Foundation panel, 43 into KAS would include a pre-determined, evidence-driven set of triggers that identify scenarios with an unusually high probability of discard under the standard (sequential) allocation approach. 44 Determining the right parameters for an effective expedited placement allocation system may not be easy 45, 46 but could make a significant impact if well-engineered. Practical challenges to realizing change But are such ideas --prioritizing just-listed patients ahead those with years on dialysis, and bypassing more medically or ethically justified patients to expedite placement to patients at another center -fair, equitable, and legally permissible? Though these strategies would only be applied to a subset of donated organs, would such bold changes be perceived by patients and the broader transplant community as "unfair," potentially risking the foundation of trust that holds up the entire system? The OPTN Final Rule requires allocation policies be equitable. But a viable kidney that is discarded benefits no one. The transplant community may need to sacrifice some degree of geographic equity -where patients listed at the most aggressive programs will receive transplants faster than patients listed elsewhere -in order to have a meaningful impact on utility, recognizing that more transplants indirectly benefits all patients in need, as a rising tide lifts all boats. 47 The organ allocation policy development process in the U.S. is intentionally deliberative, involving numerous stakeholders, committee evidence gathering, formal public comment periods, and ultimately Board of Directors' approval and implementation. The OPTN aims to achieve broad consensus in developing and implementing new, often highly complex policies as expeditiously as possible, a colossal and underappreciated balancing act made all the more challenging due to vested interests resistant to change. 48 Achieving an acceptable balance between equity, utility, and efficiency has taken years 49, 50 -a reality that should be recognized by critics who may assume existence of an 'easy button' for quickly improving such a complex system. Concluding thoughts Individuals and institutions responsible for preventable errors should be held accountable to drive down the rate of mishaps. The logistics of organ transportation, for example, remains an area ripe for process improvement and technological innovation. 
51 55,59 The risk of contracting a donor-derived infection should be recognized to be extremely low (approximately 0.18%), 60 far lower than risks associated with remaining on dialysis. 61,62 And OPOs should be held accountable to high standards through better metrics and tangible consequences of underperformance, to ensure that significant opportunities for donation and transplantation are not being missed. [63][64][65] Though the juxtaposition of "wasted organs" and "system errors" makes for a good headline, despite the system's flaws, the number of kidney transplants has increased 37% over the past 6 years, from 18,597 in 2015 to a record 25,490 in 2021. The transplant community and its critics should recognize that the roots of the unacceptably high, 20-25% discard rate in the U.S. run deeper than the soundbites might suggest. Only once the true nature of the kidney discard problem (decision-making complexity) and the challenges in overcoming it (revising an allocation system still largely anchored in the deeply ingrained American ethic: "no cutting in line!") are fully appreciated, will the transplant community be in a position to thoughtfully develop and enact truly impactful solutions. The OPTN should not migrate kidney policy to the continuous distribution framework without incorporating bold policy changes that squarely address the kidney discard problem. Funding None.
Relationship between obstructive sleep apnea-hypopnea syndrome and osteoporosis in adults: A systematic review and meta-analysis
Relationship between obstructive sleep apnea-hypopnea syndrome and osteoporosis in adults: A systematic review and meta-analysis Objective This study is undertaken to explore the relationship between obstructive sleep apnea-hypopnea syndrome (OSAHS) and osteoporosis, including the relationship between OSAHS and osteoporosis incidence, lumbar spine bone mineral density (BMD), and lumbar spine T-score. Method Cochrane Library, PubMed, Embase, Web of Science, and other databases are searched from their establishment to April 2022. Literature published in the 4 databases on the correlation between OSAHS and osteoporosis, lumbar spine BMD, and lumbar spine T-score is collected. Review Manager 5.4 software is used for meta-analysis. Results A total of 15 articles are selected, including 113,082 subjects. Compared with the control group, the OSAHS group has a higher incidence of osteoporosis (OR = 2.03, 95% CI: 1.26~3.27, Z = 2.90, P = 0.004), the lumbar spine BMD is significantly lower (MD = -0.05, 95% CI: -0.08~-0.02, Z = 3.07, P = 0.002), and the lumbar spine T-score is significantly decreased (MD = -0.47, 95% CI: -0.79~-0.14, Z = 2.83, P = 0.005). Conclusion Compared with the control group, the OSAHS group has a higher incidence of osteoporosis and decreased lumbar spine BMD and T-score. In order to reduce the risk of osteoporosis, attention should be paid to the treatment and management of adult OSAHS, and active sleep intervention should be carried out. Introduction Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a sleep disorder characterized by recurrent episodes of apnea that lead to hypoxia, hypercapnia, and sleep disruption (1). Osteoporosis is a bone metabolism disorder characterized by decreased bone mass, destruction of bone microstructure, and susceptibility to fractures (2). It is generally believed that OSAHS is associated with a higher incidence of osteoporosis (3)(4)(5), and spinal deformity is one of its main clinical manifestations, along with kyphosis and limited spinal extension, causing great distress to the affected population and warranting further research. Several studies have explored the relationship between OSAHS and lumbar osteoporosis. Studies by Liguori, Chen et al. (4,6) suggest that OSA may be a risk factor for decreased bone mineral density (BMD), leading to osteopenia and osteoporosis. The reason may be that hypoxia slows down the growth of osteoblasts while promoting the activation of osteoclasts. Sforza et al. (7), by contrast, reported a protective effect of intermittent hypoxia on bone metabolism: after taking into account the age-related decrease in BMD, elderly people with OSAHS had a reduced risk of osteopenia and osteoporosis. The results of these studies are inconsistent, which has not only caused great trouble for clinicians, but also affected the prevention and treatment of lumbar osteoporosis in patients with OSAHS. The purpose of this study is to conduct a meta-analysis of existing clinical studies so as to explore the relationship between OSAHS and the occurrence of lumbar osteoporosis and BMD changes, thereby providing evidence-based prevention and intervention for lumbar osteoporosis in patients with medical evidence of OSAHS.
Literature inclusion and exclusion criteria Inclusion criteria: (1) the subjects of the study were adults over 18 years of age; (2) the article types were cohort studies, case-control studies, and cross-sectional studies, observing BMD and T-score in patients with sleep apnea or obstructive sleep apnea, evaluating the incidence or prevalence of osteoporosis, and comparing them with the control group; (3) OSAHS was diagnosed by polysomnography or portable sleep monitor, and the severity was evaluated by the apnea hypopnea index (AHI), which is the sum of the average number of apnea and hypopnea events per hour (10); (4) lumbar spine BMD (measured in g/cm²) and/or T-score were measured by dual-energy X-ray densitometry, and osteoporosis was defined as BMD and T-score < -2.5 SD (11); (5) based on different reports from the same research population, the articles with the largest sample sizes were included. Exclusion criteria: (1) languages other than English; (2) studies without a control group; (3) studies where the effect size could not be extracted or calculated; (4) studies for which the authors did not respond to contact or could not provide meta-analysis data; (5) application of glucocorticoids or other drugs that affect BMD. Literature screening, quality assessment, and data extraction Two researchers screened the literature independently and provided literature with differences to a third researcher for analysis to decide whether or not it should be included. The methodological quality of the included literature was assessed. The quality of the included studies was assessed using the Newcastle-Ottawa Scale (NOS) (12). Only high-quality articles rated higher than 6 stars were included. The extracted data included the first author, study area, publication time, study type, sample size, age, AHI, OSAHS assessment method, BMD, T-score, outcome measures, and adjustment for confounders. After data extraction, the data was checked, and inconsistent data was extracted again. After checking, the data was analyzed. Outcomes and exposure The lumbar spine BMD (measured in g/cm²) and/or T-score of the subjects were obtained by dual-energy X-ray densitometry, and OSAHS was diagnosed by polysomnography or portable sleep monitor. The incidence of osteoporosis, BMD, and lumbar spine T-score in the OSAHS group and control group were used as outcome indicators. The difference in the incidence of osteoporosis between the OSAHS group and control group indicated the correlation between OSAHS and osteoporosis; the difference in BMD between the OSAHS group and control group indicated the effect of OSAHS on BMD; and the difference in lumbar spine T-score between the OSAHS group and control group indicated the influence of OSAHS on T-score. Statistical methods Statistical analysis was performed using Review Manager 5.4 software. MD and OR values were used for effect evaluation, and 95% CIs were calculated. The heterogeneity of the studies was analyzed using the I² statistic and the Q test. I² < 50% and P > 0.1 indicated no significant heterogeneity among the studies, while I² > 50% and P < 0.1 indicated statistical heterogeneity. If there was obvious heterogeneity, the random effect model was used for analysis. Sensitivity analysis could also be conducted to eliminate articles with obvious heterogeneity, followed by a fixed effect model meta-analysis. The presence of publication bias was estimated by funnel plot and Egger's test.
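To illustrate the heterogeneity test and model-selection rule just described, the following sketch computes Cochran's Q, I², and a DerSimonian-Laird random-effects pooled OR from study-level odds ratios and 95% CIs; the three input studies are placeholders, not data from the included literature:

```python
import numpy as np
from scipy import stats

# Placeholder study-level ORs with 95% CIs (NOT the meta-analysis data).
ors = np.array([1.8, 2.6, 1.4])
ci_low = np.array([1.1, 1.5, 0.9])
ci_high = np.array([2.9, 4.5, 2.2])

y = np.log(ors)                                   # effect sizes on the log scale
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2                                     # fixed-effect (inverse-variance) weights

q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)   # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q) * 100                    # I^2 statistic (%)
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL tau^2

w_re = 1 / (se**2 + tau2)                         # random-effects weights
mu = np.sum(w_re * y) / np.sum(w_re)
se_mu = np.sqrt(1 / np.sum(w_re))
z = mu / se_mu
p = 2 * stats.norm.sf(abs(z))
print(f"I^2 = {i2:.0f}%, pooled OR = {np.exp(mu):.2f} "
      f"(95% CI {np.exp(mu - 1.96*se_mu):.2f}~{np.exp(mu + 1.96*se_mu):.2f}), "
      f"Z = {z:.2f}, P = {p:.3f}")
```

When the estimated τ² is zero (I² small), the random-effects weights coincide with the fixed-effect weights, which is the sense in which the fixed effect model is used for low-heterogeneity groupings.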
For the analysis results with heterogeneity, the included studies were stratified according to differences in countries and regions, population age, gender, and OSAHS severity for subgroup analysis. Literature screening results A total of 887 articles were retrieved, 603 were obtained after deduplication, 248 were excluded by reading the titles and abstracts, and 15 were finally included after reading the full text (Figure 1). The study populations were from China; Taiwan, China; Turkey; Croatia; Italy; and France. The basic characteristics of the literature included in the study are shown in Table 1. Quality assessment of included studies The quality of the included observational studies was assessed using the NOS scale, which is shown in Table 2. Each ★ represents a quality score of 1 point, and the sum of all ★ is the final quality score. The lowest overall rating was 6★ and the highest was 7★, all moderate to high quality, with low to moderate risk of bias, and no studies were excluded for poor quality (< 5★). Results Our results include: ① the relationship between OSAHS and the incidence of osteoporosis; ② the relationship between OSAHS and lumbar spine BMD; ③ the relationship between OSAHS and lumbar spine T-score. We describe the corresponding statistical results in detail below, and we have summarized the effect size for the mean difference of each study, as shown in Table 3. Association of OSAHS with osteoporosis incidence Three studies (4,13,14) provided specific numbers of patients with osteoporosis among their study subjects. All three studies were included in the analysis (Figure 2). The results of the heterogeneity test indicated that there was statistical heterogeneity among the studies (P = 0.1, I² = 57%), so a random effect model was used. The results showed that compared with the control group, the OSAHS group had a higher incidence of osteoporosis (OR = 2.03, 95% CI: 1.26~3.27, Z = 2.90, P = 0.004). Two studies (4,13) provided specific numbers of osteoporosis cases in men, women, elderly people (> 65 years), and middle-aged people (40~65 years). To reduce the clinical heterogeneity of the study subjects, subgroup analyses were performed by gender (Figure 3) and age (Figure 4). In the gender subgroup analysis, the combined heterogeneity of the two groups was (P = 0.36, I² = 7%), and there was no statistically significant heterogeneity within the male group (P = 0.2, I² = 38%) or the female group (P = 0.76, I² = 0%), so a fixed effect model was used. After gender subgroup analysis, the heterogeneity of the osteoporosis studies was significantly reduced: male (OR = 1.90, 95% CI: 1.33~2.72, Z = 3.53, P < 0.001), female (OR = 2.56, 95% CI: 1.96~3.34, Z = 6.95, P < 0.001); the incidence of osteoporosis in the OSAHS group was higher and statistically significant. The combined final effect size of the gender subgroup analysis was (OR = 2.29, 95% CI: 1.86~2.83, Z = 7.68, P < 0.001); the incidence of osteoporosis in the OSAHS group was higher and statistically significant. In the subgroup analysis of age, the combined heterogeneity of the two groups was (P = 0.19, I² = 38%), with slight statistical heterogeneity. The elderly (> 65 years old) group was (P = 0.69, I² = 0%) and the middle-aged (40~65 years old) group was (P = 0.25, I² = 24%), so a fixed effect model was used.
After age subgroup analysis, the heterogeneity of the osteoporosis studies was significantly reduced. The elderly (> 65 years old) group was (OR = 2.62, 95% CI: 1.86~3.71, Z = 0.89, P < 0.001) and the middle-aged (40~65 years old) group was (OR = 1.73, 95% CI: 1.31~2.28, Z = 3.31, P < 0.001), so the OSAHS group had a higher incidence of osteoporosis, which was statistically significant. The combined final effect size of the age subgroup analysis was (OR = 2.02, 95% CI: 1.63~2.51, Z = 6.42, P < 0.001); the OSAHS group had a higher incidence of osteoporosis, which was statistically significant. The forest plot analyses of OSAHS and the incidence of osteoporosis suggest that OSAHS is associated with the prevalence of osteoporosis and is a risk factor for the disease.
Figure 2. Forest plot of the incidence of osteoporosis in the OSAHS group and control group.
Figure 3. Forest plot of the incidence of osteoporosis in male and female subgroups.
Figure 4. Forest plot of the incidence of osteoporosis in elderly (> 65 years old) and middle-aged (40~65 years old) subgroups.
Association of OSAHS with lumbar spine BMD Ten studies (3,6,7,15,16,(18)(19)(20)(21)(22)) were included in a meta-analysis of lumbar spine BMD (Figure 5). Compared with the control group, lumbar spine BMD was significantly lower in the OSAHS group (MD = -0.05, 95% CI: -0.08~-0.02, Z = 3.07, P = 0.002). There was moderate heterogeneity between studies (I² = 66%, P = 0.002), so a random effect model was used. The elderly population is at increased risk for OSAHS (24,25) due to changes in the anatomy and function of the upper airway (26) and the frequent coexistence of other medical conditions such as diabetes, hypertension, and cardiovascular disease. Meanwhile, BMD gradually decreases with age (27). Risk factors such as old age, diabetes, hypertension, and some diseases that affect osteoporosis may affect the results of osteoporosis research, leading to unstable results for the association between OSAHS and lumbar spine BMD. To further verify the relationship between OSAHS and lumbar spine BMD, and to further reduce the clinical heterogeneity of the study subjects, we conducted a subgroup analysis after excluding osteoporosis-related risk factors, including a subgroup analysis of AHI grouping by OSAHS diagnostic criteria and a regional subgroup analysis. The research population of Sforza2013 (7) was older than 65 and accompanied by hypertension, diabetes, and other diseases; in Terzi2015 (16), some of the research subjects had hypertension complications; all of the above may affect the results of a lumbar spine BMD study. AHI subgroup analysis In a subgroup analysis of AHI grouping with OSAHS diagnostic criteria (after the exclusion of osteoporosis-related risk factors) (Figure 6), 8 studies were included (3,6,15,(18)(19)(20)(21)(22)), and the combined heterogeneity of the groups was (P = 0.001, I² = 71%), with moderate statistical heterogeneity. In the subgroup analysis, the heterogeneity of the OSAHS diagnostic criteria AHI > 5~10 events/h group was (P = 0.26, I² = 24%), and the grouping heterogeneity of AHI > 15 events/h was (P = 0.09, I² = 58%), so a random effect model was used. After the subgroup analysis of OSAHS diagnostic criteria AHI grouping, the heterogeneity of the correlation studies between OSAHS and lumbar spine BMD was significantly reduced.
The results of subgroup analysis showed that, compared with the control group, the lumbar spine BMD of the OSAHS group with AHI > 5~10 events/h was slightly lower (MD = -0.02, 95% CI: -0.05~-0.00, Z = 2.19, P = 0.03), the lumbar spine BMD in the AHI > 15 events/h group was significantly decreased (MD = -0.09, 95% CI: -0.14~-0.03, Z = 3.02, P = 0.003), and the difference was statistically significant. The effect size of lumbar spine BMD in the OSAHS AHI > 15 events/h group was higher than that of the AHI > 5~10 events/h group, indicating that in patients grouped by OSAHS diagnostic criteria AHI > 15 events/h, compared with AHI > 5~10 events/h, the risk of lumbar spine BMD decline was higher, so the severity of OSAHS may be related to lumbar spine BMD. The combined effect size of the AHI grouping was (MD = -0.05, 95% CI: -0.08~-0.01, Z = 2.79, P = 0.005); the OSAHS group had lower lumbar spine BMD, the results remained unchanged after excluding risk factors for osteoporosis, and the difference was statistically significant. Regional subgroup analysis In the regional subgroup analysis (after the exclusion of osteoporosis-related risk factors), the results included (MD = -0.08, 95% CI: -0.19~0.02, Z = 1.51, P = 0.13); the lumbar spine BMD was lower in each regional grouping, but the difference was not statistically significant. The combined effect size of the regional grouping was (MD = -0.05, 95% CI: -0.08~-0.01, Z = 2.79, P = 0.005), indicating that the OSAHS group had lower lumbar spine BMD; after the subgroup analysis performed with osteoporosis risk factors excluded, the results remained stable and the difference was statistically significant. The forest plot analyses of OSAHS and lumbar spine BMD studies suggested that OSAHS was associated with lumbar spine BMD, OSAHS was a risk factor for the decrease in lumbar spine BMD, and the severity of OSAHS may be related to lumbar spine BMD.
Figure 6. Forest plot of subgroup analysis of lumbar spine BMD in the OSAHS group and control group (after excluding risk factors related to osteoporosis).
Association of OSAHS with lumbar spine T-score Ten studies (3,6,7,15,17,18,(20)(21)(22)(23)) were included in the meta-analysis of lumbar spine T-score (Figure 8). Compared with the control group, the lumbar spine T-score was significantly lower in the OSAHS group (MD = -0.47, 95% CI: -0.79~-0.14, Z = 2.83, P = 0.005). There was high heterogeneity between studies (I² = 87%, P < 0.001), so a random effect model was used.
Figure 8. Forest plot of lumbar spine T-score between the OSAHS group and control group.
Similarly, risk factors such as old age, diabetes, hypertension, and some diseases that affect osteoporosis may affect the results of osteoporosis research, leading to unstable results for the association between OSAHS and lumbar spine T-score. In order to further verify the relationship between OSAHS and lumbar spine T-score, and to further reduce the clinical heterogeneity of the studies, we conducted a subgroup analysis after excluding risk factors related to osteoporosis, including subgroups grouped by AHI according to the OSAHS diagnostic criteria. The research population of Sforza2013 (7) was older than 65 and accompanied by hypertension, diabetes, and other diseases; the mean age of the research population of Wang2015 (17) was more than 65 and accompanied by chronic obstructive pulmonary disease; as both may affect the results of lumbar spine T-score studies, they were not included in the subgroup analysis.
AHI subgroup analysis

In the subgroup analysis by AHI grouping according to the OSAHS diagnostic criteria (after exclusion of osteoporosis-related risk factors) (Figure 9), 8 studies were included (3, 6, 15, 18, 20–23). Within the subgroups, the heterogeneity of the AHI 5–10 events/h group was P = 0.004, I² = 74%, and that of the AHI ≥ 15 events/h group was P = 0.04, I² = 69%, so a random-effects model was used. After subgrouping by the AHI diagnostic criteria, the heterogeneity of the correlation between OSAHS and lumbar spine T-score was significantly reduced, with both the AHI 5–10 events/h group and the AHI ≥ 15 events/h group falling to moderate heterogeneity.

The results of the subgroup analysis showed that, compared with the control group, the lumbar spine T-score in the OSAHS AHI 5–10 events/h group was decreased (MD = -0.45, 95% CI: -0.82 to -0.09, Z = 2.42, P < 0.001), and the lumbar spine T-score in the AHI ≥ 15 events/h group was significantly decreased (MD = -0.72, 95% CI: -1.22 to -0.21, Z = 2.79, P = 0.005); both differences were statistically significant. The T-score effect size in the AHI ≥ 15 events/h group was larger than that in the AHI 5–10 events/h group, indicating that patients meeting the AHI ≥ 15 events/h criterion had a higher risk of lumbar spine T-score decline than those with AHI 5–10 events/h, so the severity of OSAHS may be related to the lumbar spine T-score. The combined effect size of the AHI grouping was MD = -0.55 (95% CI: -0.86 to -0.24, Z = 3.46, P < 0.001); the lumbar spine T-score of the OSAHS group was lower, the result remained stable after excluding osteoporosis-related risk factors, and the difference was statistically significant.

Regional subgroup analysis

In the regional subgroup analysis (after exclusion of osteoporosis-related risk factors) (Figure 10), 8 studies were included (3, 6, 15, 18, 20–23), and the combined heterogeneity was P = 0.0001, I² = 76%, indicating a high degree of statistical heterogeneity. Within the subgroups, the heterogeneity in East Asia was P = 0.13, I² = 57%; that in the Middle East was P = 0.002, I² = 80%; and that in Europe was P = 0.01, I² = 84%, so a random-effects model was used. After the regional subgroup analysis, the heterogeneity of the OSAHS and lumbar spine T-score correlation analysis was lower than before, with the East Asian grouping reduced to moderate heterogeneity.

The subgroup results showed that, compared with the control group, the difference in the Middle East group was statistically significant (MD = -0.58, 95% CI: -1.02 to -0.13, Z = 2.53, P = 0.01); in the East Asian group the lumbar spine T-score was lower, but the difference was not statistically significant (MD = -0.33, 95% CI: -0.94 to 0.29, Z = 1.04, P = 0.30); and in the Europe group the lumbar spine T-score was also lower without reaching statistical significance (MD = -0.69, 95% CI: -1.55 to 0.17, Z = 1.57, P = 0.12). The combined effect size of the regional grouping was MD = -0.55 (95% CI: -0.86 to -0.24, Z = 3.46, P < 0.001); the lumbar spine T-score of the OSAHS group was lower, and after excluding the risk factors for osteoporosis the result remained stable and the difference was statistically significant.
The forest plot analysis of OSAHS and lumbar spine T-score studies indicated that OSAHS is associated with the lumbar spine T-score, that OSAHS is a risk factor for lumbar spine T-score reduction, and that the severity of OSAHS may be related to the lumbar spine T-score.

[Figure: Forest plot of the lumbar spine T-score "AHI grouping" subgroup analysis between the OSAHS and control groups (grouped according to the OSAHS diagnostic criteria, after exclusion of osteoporosis-related risk factors).]

Sensitivity analysis

In this study, a sensitivity analysis was performed for the results with high heterogeneity. In the sensitivity analyses of the incidence of osteoporosis, lumbar spine BMD and lumbar spine T-score between the OSAHS and control groups, the pooled results did not change materially when any single study was excluded, and there was no significant change in heterogeneity.

Publication bias

The presence of publication bias was assessed using Egger's method (a minimal sketch of this regression is given below) (Figure 11). Three studies addressed OSAHS and the incidence of osteoporosis relative to the control group (P = 0.291 > 0.05; Figure 11A), indicating no publication bias. Ten studies addressed OSAHS and lumbar spine BMD relative to the control group (P = 0.433 > 0.05; Figure 11B), again suggesting no publication bias. Ten studies addressed the lumbar spine T-score between the OSAHS and control groups (P = 0.042 < 0.05; Figure 11C), suggesting mild publication bias. Possible reasons are: (1) the number of studies included in our meta-analysis is small, which can easily introduce bias; and (2) some study populations had comorbid diseases, which may also introduce bias, as was further confirmed by our subgroup analysis.

Discussion

Osteoporosis is a common human skeletal disease characterized by osteopenia, microarchitectural deterioration and fragility fractures (28). According to World Health Organization (WHO) standards, it is estimated that 15% of postmenopausal Caucasian women in the United States and 35% of women over the age of 65 have significant osteoporosis. One in two Caucasian women will experience an osteoporotic fracture at some point in her life. As early as 1994, a study showed that fragility fracture patients accounted for more than 400,000 hospitalizations and more than 2.5 million doctor visits each year, causing a serious economic burden (29). Tomiyama et al. (30) first reported the correlation between OSAHS and abnormal bone metabolism in 2008. They studied abnormal bone metabolism in 50 OSAHS patients and found that, compared with the control group, a marker of bone resorption (urinary type I collagen cross-linked C-terminal peptide) was significantly increased in the OSAHS group, and the elevated bone resorption marker levels decreased somewhat after three months of continuous positive airway pressure therapy. Subsequently, experimental and epidemiological studies have continued to explore the relationship between OSAHS and osteoporosis, BMD and T-score, and its possible mechanisms, but the results have not agreed. Sikarin and Wang et al. (31, 32) conducted meta-analyses of the correlation between OSAHS and bone mineral density. Too few studies were included in Sikarin's analysis, and sensitivity analysis, meta-regression and publication bias assessment were not performed. Wang's analysis is based mainly on the Chinese population, so the study subjects may not be representative of the general population and there may be selection bias.
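As referenced in the publication-bias assessment above, Egger's method regresses each study's standardized effect on its precision and tests whether the intercept differs from zero. A minimal sketch of that regression, using hypothetical study-level effects and standard errors rather than the authors' data:

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect/SE) on precision (1/SE);
    an intercept significantly different from zero suggests small-study
    effects / possible publication bias.
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    y = effects / ses               # standard normal deviates
    x = 1.0 / ses                   # precision
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, k = len(y), 2
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])                # t-statistic for intercept
    p = 2 * stats.t.sf(abs(t), df=n - k)
    return beta[0], t, p

# Hypothetical mean differences and standard errors -- illustrative only.
intercept, t, p = egger_test([-0.02, -0.09, -0.05, -0.04, -0.07],
                             [0.01, 0.03, 0.02, 0.015, 0.025])
print(f"Egger intercept = {intercept:.2f}, t = {t:.2f}, P = {p:.3f}")
```

In the text above, a P value below 0.05 for this intercept (as for the lumbar spine T-score analyses) is read as evidence of mild publication bias.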
Neither meta-analysis conducted a global multi-regional population study or further analyses based on the severity of OSAHS, and their analysis indicators were relatively limited. Therefore, in response to these problems, we conducted an updated meta-analysis of the correlation between OSAHS and bone mineral density. This meta-analysis included 15 studies, all of which were high-quality studies rated 6 stars and above according to the NOS quality evaluation.

[Figure 10: Forest plot of the lumbar spine T-score "regional grouping" subgroup analysis between the OSAHS and control groups (after excluding osteoporosis-related risk factors).]

The results of the meta-analysis on the correlation between OSAHS and osteoporosis showed that in males, females, middle-aged people (40–65 years) and elderly people (>65 years), patients with OSAHS had a higher incidence of osteoporosis. However, only three articles were included in this part of the analysis, so more research is needed to consolidate the results. In addition, the study populations in the included literature may have had comorbid conditions such as old age, hypertension, diabetes, cardiovascular disease and COPD, which are risk factors for osteoporosis, and there are currently no prevalence data that exclude these risk factors; thus, further investigation and analysis could not be carried out, and the results are not stable.

BMD is an important indicator reflecting bone mineral content per unit area. It is mainly used to assess the degree of osteoporosis, predict the risk of fracture and provide a strong laboratory basis for fractures caused by osteoporosis; clinically, BMD is the gold standard for the diagnosis of osteoporosis (33). In recent years, many studies have focused on the relationship between OSAHS and BMD. Tomiyama, Sforza and Chen et al. (7, 18, 30) found that, compared with the control group, OSAHS patients had significantly higher BMD levels, whereas more studies, including those of Liguori, Uzkeser, Yuceege, Terzi, Qiao, Pazarli, Ma and Vilovic et al. (3, 6, 15, 16, 19–22), found that the BMD of OSAHS patients was significantly lower than that of the control group. To further clarify the relationship between OSAHS and BMD, we conducted a meta-analysis of the correlation between OSAHS and lumbar spine BMD, which showed that the OSAHS group had lower lumbar spine BMD than the control group. After further subgroup analysis, the combined effect size still confirmed the lower lumbar spine BMD in the OSAHS group. In the subgroup analysis by AHI grouping according to the OSAHS diagnostic criteria, compared with the control group, lumbar spine BMD was decreased in both the AHI 5–10 events/h and AHI ≥ 15 events/h groups, and the differences were statistically significant. The effect size for lumbar spine BMD in the AHI ≥ 15 events/h group was larger than that in the AHI 5–10 events/h group, meaning a higher risk of lumbar spine BMD decline, so the severity of OSAHS may be related to lumbar spine BMD. In the regional subgroup analysis (after excluding risk factors related to osteoporosis), the results showed that, compared with the control group, the OSAHS group had lower lumbar spine BMD in the East Asian, Middle East and Europe groups, but the differences were not statistically significant.
After subgroup analysis, the heterogeneity of the studies was further reduced, and so was the research bias, indicating that the conclusions of this study are more reliable. Two mechanisms may explain the association between OSAHS and decreased BMD: (1) OSAHS may lead to vitamin D deficiency and induce secondary hyperparathyroidism, which can result in bone demineralization and decreased BMD (34); and (2) hypoxia is closely related to changes in bone turnover, and recent in vitro studies have shown that the lower night-time oxygen levels characteristic of OSAHS promote osteoclast formation and activity while inhibiting osteoblast function, thereby favoring bone resorption (35, 36).

The T-score is also an important basis for assessing the degree of osteoporosis. According to the results of BMD and the WHO standard, patients are divided into three groups: normal BMD (T-score > -1.0 SD), osteopenia (T-score -1.0 to -2.5 SD) and osteoporosis (T-score < -2.5 SD) (37). In recent years, many studies have focused on the relationship between OSAHS and the lumbar spine T-score. Sforza and Chen et al. (7, 18) showed that the lumbar spine T-score of OSAHS patients was significantly higher than that of controls, whereas more studies, by Liguori, Uzkeser, Yuceege, Wang, Terzi, Qiao, Pazarli, Ma and Vilovic et al. (3, 6, 15–17, 20–23), showed that it was significantly lower. To further confirm this relationship, we conducted a meta-analysis of the correlation between OSAHS and lumbar spine T-score levels, which showed that the OSAHS group had lower lumbar spine T-scores than the control group. After excluding osteoporosis-related factors, further subgroup analysis still confirmed the lower lumbar spine T-score in the OSAHS group. In the subgroup analysis by AHI grouping according to the OSAHS diagnostic criteria (after excluding osteoporosis-related risk factors), the lumbar spine T-score was decreased in both the AHI 5–10 events/h and AHI ≥ 15 events/h groups compared with the control group, and the differences were statistically significant. The effect size of the lumbar spine T-score in the AHI ≥ 15 events/h group was larger than that in the AHI 5–10 events/h group, with a correspondingly higher risk of T-score decline, indicating that the severity of OSAHS may be related to the lumbar spine T-score. In the regional subgroup analysis (after excluding risk factors related to osteoporosis), the OSAHS group had lower lumbar spine T-scores than the control group in the East Asian, Middle East and Europe groups, but only the Middle East subgroup reached statistical significance. In conclusion, compared with the control group, the OSAHS group had lower lumbar spine T-score levels in the AHI 5–10 events/h group, the AHI ≥ 15 events/h group and the Middle East group. Heterogeneity and research bias were further reduced, indicating that the conclusions of this study are more reliable. At present, there is a lack of research that clearly clarifies the relationship among the T-score, BMD and osteoporosis.
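The WHO cutoffs quoted above translate mechanically into diagnostic categories; as a trivial sketch of the classification rule exactly as stated in the text:

```python
def who_bmd_category(t_score: float) -> str:
    """Classify BMD status from a T-score using the WHO cutoffs cited
    in the text: normal > -1.0, osteopenia -1.0 to -2.5, osteoporosis
    < -2.5 (all in SD units relative to young-adult reference BMD)."""
    if t_score > -1.0:
        return "normal BMD"
    if t_score >= -2.5:
        return "osteopenia"
    return "osteoporosis"

for t in (0.3, -1.8, -2.9):
    print(t, "->", who_bmd_category(t))
```

This is also why a group-level reduction in mean T-score, as observed for the OSAHS group, implies a shift of patients toward the osteopenia and osteoporosis categories.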
Because the T-score is derived from BMD, the possible mechanisms of the correlation between OSAHS and T-score reduction can be understood along the same lines. This study has certain limitations. First, the number of included studies on the relationship between OSAHS and osteoporosis was small and confounded by osteoporosis-related risk factors, so the results were not stable, and more research is needed to confirm them. Second, the diagnostic and grading methods for OSAHS differed slightly between studies, and the study populations came from different ethnic groups, which may have led to greater heterogeneity. Third, osteoporosis is more common in women (38), but relatively few women with OSAHS were included in our meta-analysis, which may have generated selection bias. Fourth, the study sample size was relatively small compared with a large, multicentric, randomized controlled trial. Fifth, the quality of some of the included literature was not very high, and there may have been selection bias. Therefore, the conclusions should be interpreted with caution.

In conclusion, the results of this study suggest that OSAHS patients have a higher incidence of osteoporosis and that both lumbar spine BMD and lumbar spine T-score are reduced; the severity of OSAHS, as graded by AHI, may be related to lumbar spine BMD and T-score. Understanding the incidence of osteoporosis in patients with OSAHS and the effect of OSAHS on lumbar spine BMD and T-score provides an evidence base for clinical practice. However, a homogeneous, large-scale prospective study with further adjustment for factors such as age and diseases affecting osteoporosis is still needed to clarify whether OSAHS is a risk factor for osteoporosis and whether OSAHS affects lumbar spine BMD and T-score. Many drugs have been developed to treat osteoporosis (39); patients should receive treatment if they have osteoporosis and preventive measures if they have osteopenia. Prevention is clearly preferable to treatment. The findings of this meta-analysis suggest that effective management of OSAHS may reduce the risk of osteoporosis.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors without undue reservation.

Author contributions

JHL, CW, ZPZ, ZZZ, XC and YZ are the guarantors of the manuscript and take responsibility for its content. JHL, QY, RC and CW contributed to the design of the study. HC, JZ, JYL and HZL were involved in the data analysis. ZPZ, JYL, HWL and RC contributed to the acquisition of primary data. ZPZ, CW and RC wrote the initial draft of the manuscript. QY, RC and JHL contributed significantly to the revision of the manuscript. All authors read and approved the final manuscript.

Funding

This study was funded by the Natural Science Foundation of Guangdong Province (2021A1515011373).
Ewing's sarcoma in the spinal canal of T12-L3: A case report and review of the literature
Primary Ewing's sarcoma (ES) is rare, especially when it arises in the spinal canal in middle or old age, and Ewing's sarcoma breakpoint region 1 (EWSR1) fusion-negative ES is likewise rarely reported in the literature. The present case report describes a 60-year-old Chinese patient who was diagnosed with ES originating from the spinal canal in 2016. The patient was hospitalized with intermittent, electric shock-like pain in the waist and buttocks of 1 month's duration and incontinence of 1 week's duration. Magnetic resonance imaging demonstrated multiple inhomogeneous, oval-shaped nodules in the intradural and cauda equina spaces of T12-L3; the largest nodule was ~23×11×10 mm. The patient underwent tumour resection via T12-L3 laminectomy. Histopathological examination of the focal area revealed that the tumour consisted of small, round, haematoxylin-stained cells forming typical Homer-Wright rosettes. Immunohistochemical analysis confirmed that the patient suffered from ES on the basis of positive staining for membranous cluster of differentiation 99 (CD99), cytokeratin (CK) and nuclear Friend leukaemia integration 1 (FLI-1). In conclusion, the histopathological presence of Homer-Wright rosettes and immunohistochemical markers such as CD99, FLI-1 and CK are valuable for the diagnosis of ES, although cytogenetic analysis is considered the gold standard. Complete surgical resection is the most effective treatment option for ES, and adjuvant radiotherapy and combination chemotherapy can further improve postoperative survival.

Introduction

Ewing's sarcoma (ES) commonly affects the metaphyses of growing bones. Although primary ES of the spine is rare (1), it is most commonly observed in the sacrum. The peak incidence of ES is in patients in their 20s, and it mostly involves the long bones and the pelvis. Spinal ES is divided into two groups: (i) the sacral type, involving the sacrum and coccyx, with an incidence of <5% (2); and (ii) the non-sacral type, involving the cervical, dorsal and lumbar vertebrae (3). The incidence of non-sacral ES is >0.9%. In the majority of cases, the vertebrae are affected by metastasis of ES originating elsewhere; excluding the sacrum, primary vertebral ES is very rare. Surgery combined with chemotherapy and radiotherapy to control the progression of neurological deficits is associated with a preferable outcome (3). The treatment of ES is challenging, and there is currently no globally uniform treatment standard. The present report describes the case of a middle-aged patient with multiple ES lesions in the spinal canal of T12-L3 who was admitted to the First Affiliated Hospital of Guangdong Pharmaceutical University (Guangzhou, China) in November 2016.

Case report

A 60-year-old male Chinese patient presented with intermittent, electric shock-like pain in the waist and buttocks of 1 month's duration and incontinence of 1 week's duration. A neurophysical examination demonstrated weakness of the lower extremities (power grade IV/V), decreased sensation below the ankle joint (more marked on the right side than the left), bilateral knee tendon hyperreflexia and a positive Babinski sign. Magnetic resonance imaging (MRI) is a medical imaging technique used in radiology to form pictures of the anatomy and the physiological processes of the body (4).
MRI of the patient revealed multiple inhomogeneous, oval-shaped nodules in the intradural and cauda equina spaces of T12-L3. The largest nodule was ~23x11x10 mm, with a high signal on T2-weighted, T2 spectral presaturation with inversion recovery and T1-weighted imaging (Fig. 1), and enhanced inhomogeneously following gadolinium injection (Fig. 2). A strip-like oedema signal on T1 and T2 was present in the adjacent thoracic cord. The preoperative diagnosis was space-occupying lesions of the spinal cord or cauda equina, considered to represent multiple metastases of the cauda equina in the spinal canal of T12-L3.

The patient underwent a T12-L3 laminectomy. Opening of the dura mater revealed an expansion of the conus medullaris and two non-encapsulated, soft, greyish-white space-occupying lesions of the cauda equina (Fig. 3). The lesions were partially excised using microsurgical tools. Intraoperative frozen-section analysis indicated sarcoma. The nerve tracts were preserved, and the remaining lesions were completely removed (Fig. 4). Opening of the white expansion of the conus medullaris revealed soft, greyish-brown tissue (Fig. 5), which was removed using microsurgical tools; the histopathological results were consistent with ES. Postoperative MRI did not detect any residual tumour. Soft tissue swelling was present after surgery (Fig. 6).

Histopathological examination of the tumour revealed hypercellular areas composed of monomorphic small blue round cells lacking cytoplasm, with focal clearing of the cytoplasm, arranged in sheets and compact nests (Fig. 7). The tumour cells were not arranged in well-formed rosettes. The round-to-oval nuclei had finely distributed chromatin and small nucleoli. Mitotic figures were infrequent. Additionally, foci of necrosis, haemorrhage and oedema were observed within the tumour. Immunohistochemistry was performed on tissue sections to investigate the presence of certain proteins in the tumour cells. The antibodies (OriGene Technologies, Inc.) used in the immunohistochemistry experiments were monoclonal antibodies secreted by B lymphocyte clones. The immunohistochemical analysis revealed positivity for cluster of differentiation 99 (CD99; clone PCB1) (Fig. 8), Friend leukaemia integration 1 (FLI-1; clone MRQ-1) (Fig. 9) and cytokeratin (CK; clone AE1/AE3) (Fig. 10); negativity for CD34, CD20, CD3, SOX-10, glial fibrillary acidic protein, progesterone receptor, α-fetoprotein, CD117, placental alkaline phosphatase, synapsin, glycoprotein hormone α chain, prostate-specific antigen, napsin A, transcription termination factor 1, interleukin-12 subunit β and tumour protein P63; and positivity for Ki-67 in 80% of the cells, which supported the diagnosis. Based on the patient's positive immunohistochemistry results, MRI of the corresponding regions was performed to search for a primary lesion and metastases; no suspicious lesions were identified in the brain, lungs or prostate (Fig. 11).

Fluorescence in situ hybridization (FISH) is a molecular cytogenetic technique that uses fluorescent probes that bind to parts of a nucleic acid sequence with a high degree of sequence complementarity; fluorescence microscopy can then be used to detect the fluorescent probe bound to the chromosomes (5,6). FISH analysis of the Ewing's sarcoma breakpoint region 1 (EWSR1) gene in 200 interphase cells from the patient demonstrated no specific cytogenetic abnormalities.
The signal patterns were distributed as follows: 2F, 32.0%; 1F, 10%; 3F, 45.0%; 4F, 4.0%; 1G1R1F, 2.5%; 1G1R2F, 2.0%; and 5F, 4.5% (F, G and R stand for fusion, green and red signals, respectively; Fig. 12). Cells with split (break-apart) signals accounted for <10% of nuclei, so the specimen was considered FISH-negative for EWSR1 rearrangement. Adjuvant chemotherapy was suggested; the patient accepted radical surgery followed by combination chemotherapy, but the disease continued to progress. Following chemotherapy, the patient suffered from depression and refused any further treatment; he did not return and was lost to follow-up.

Discussion

ES is a developmental tumour characterized by balanced chromosomal translocations and the formation of novel fusion genes. It is an aggressive tumour with a high rate of metastasis in children and young teenagers, and is caused by chromosomal translocations involving the EWSR1 gene (7). ES can affect any bone, but mostly affects the lower extremities (45%), followed by the pelvis (20%), upper extremities (13%), axial skeleton and ribs (13%) and face (2%) (7). The femur is affected most frequently, especially in the midshaft; however, ES is rarely observed in the spinal canal. Typically, the tumour consists of small circular cells with regular circular nuclei containing finely dispersed chromatin and inconspicuous nucleoli, as well as a narrow rim of clear or pale cytoplasm, as observed by light microscopy (2). Ultrastructural examination demonstrates that the tumour includes primitive cells with a smooth nuclear surface, scanty organelles and cytoplasmic glycogen in pools or aggregates (8). Tumours with similar histology also arise in soft tissues, such as peripheral primitive neuroectodermal tumours, neuroepitheliomas and Askin tumours (8).

Pain is the most common symptom in patients with ES. The disease is usually insidious, and pain may be present for a long time before the patient seeks medical attention; initial pain may be mild and intermittent and may respond to non-surgical treatment (8). The average delay between symptom onset and diagnosis is 34 weeks; according to a previous report, the average time is 15 weeks between symptom onset and the first visit, and 19 weeks between the initial visit and a correct diagnosis (8). If the patient continues to experience symptoms, a radiographic examination is important, as it can identify the primary lesion both at first diagnosis and during follow-up. In addition to pain, patients may experience fever, erythema and swelling (9). Laboratory tests may reveal increases in the white blood cell count, the erythrocyte sedimentation rate and C-reactive protein, which may lead to a misdiagnosis of osteomyelitis (9).

The patient described in the present case report was a 60-year-old male whose diagnosis was established by surgical pathology after admission to hospital. The clinical literature classifies this type of pathology as a rare disease (10), and only a limited number of reports of ES in the spine have been published (10,11). ES is rare not only in middle-aged patients but also in the spinal canal. In the present case, no tumours were identified in the spinal cord or vertebrae; the lesions were distributed in the spinal canal, and the patient's EWSR1 FISH test was negative. No previous reports describing similar symptoms, diagnosis and complications were found in the published literature.
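The FISH call described above reduces to a simple threshold on the proportion of nuclei showing a break-apart pattern. A small sketch of that arithmetic using the signal-pattern frequencies reported in the case (interpreting the 1G1R1F and 1G1R2F patterns as the split patterns; this is our reading of the report rather than an explicit statement in it):

```python
# Signal-pattern frequencies (% of 200 interphase nuclei) from the report.
patterns = {"2F": 32.0, "1F": 10.0, "3F": 45.0, "4F": 4.0,
            "1G1R1F": 2.5, "1G1R2F": 2.0, "5F": 4.5}

# Patterns containing separated green and red signals indicate an EWSR1
# break-apart event; fusion-only patterns indicate intact loci.
split = sum(v for k, v in patterns.items() if "G" in k and "R" in k)
print(f"split-signal nuclei: {split:.1f}%")           # 2.5 + 2.0 = 4.5%
print("FISH-negative" if split < 10.0 else "FISH-positive")
```

With 4.5% of nuclei below the 10% cutoff, the computation reproduces the FISH-negative call reported for this specimen.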
The diagnosis is further complicated by the gross similarity of ES biopsy material to pus (11), as a result of which the tissue may be sent to the microbiology department instead of the pathology department; biopsy specimens should therefore be sent for both bacterial culture and pathological examination. A number of immunohistochemical markers commonly expressed in ES, such as CD99, FLI-1 and CK, provide valuable support for the diagnosis. CD99 is a 32 kDa glycoprotein encoded by the MIC2 gene and is used to diagnose ES with high sensitivity (11,12); the sensitivity is 95%, although the specificity is low (12-14). In a previous report by Vural et al (14), FLI-1 expression was detected in 7/8 (87.5%) ES cases and CD99 expression in 10/11 (90%) ES cases, making CD99 the most sensitive immunohistochemical marker for ES. However, the expression of these markers has also been described in T-lymphoblastic lymphoma, rhabdomyosarcoma, synovial sarcoma and small cell anaplastic osteosarcoma (15-18). Therefore, ES may be misdiagnosed if the diagnosis is based solely on the expression of CD99. FLI-1, like CK, is sensitive but less specific for ES than CD99 (11,13). Previous studies (19-25) have demonstrated that both markers are expressed in various other round cell tumours (19). Combined positivity for at least CD99 and FLI-1 provides higher specificity in the diagnosis of ES (20-22).

[Figure 10. Immunohistochemical staining demonstrating that the tumour cells were positive for cytokeratin (magnification, x400).]

EWSR1 is the most commonly translocated gene in sarcoma and is associated with a variety of mesenchymal lesions, such as ES, desmoplastic small round cell tumour, clear cell sarcoma, angiomatoid fibrous histiocytoma, extraskeletal myxoid chondrosarcoma and myxoid liposarcoma (23-25). Detection of EWSR1 break-apart abnormalities by FISH can assist in the differential diagnosis of ES and peripheral primitive neuroectodermal tumours; however, positivity does not necessarily indicate ES, and not all cases of ES are EWSR1-positive, which suggests that EWSR1 is not specific to ES (26,27). EWSR1 rearrangement can be visualized by FISH, and, as soft tissue ES is diagnostically challenging, FISH analysis is a useful confirmatory diagnostic tool (28). However, as in the majority of instances in which a break-apart approach is used, the molecular genetic results must be evaluated in the context of morphology. The patient described in the present report was subjected to FISH analysis to detect a fusion partner, but no rearrangements were identified. Möller et al (28) detected EWSR1 gene rearrangement in only one of the three tumour types studied (Ewing's sarcoma, hidradenoma of the skin and mucoepidermoid carcinoma of the salivary glands); by analogy, the negativity for EWSR1 rearrangement in the present patient suggests that other mechanisms may be involved in the pathogenesis of this tumour type. This case was sent to Professor Anjia Han at the Department of Pathology of the First Affiliated Hospital of Sun Yat-sen University (Guangzhou, China) and other experts, who considered the lesion to be a malignant tumour but did not exclude ES or metastatic cancer. Therefore, this case may be described as an EWSR1-negative ES primary to the spinal canal.
Surgical treatment of tumours in the spine, pelvis or even the spinal canal is controversial (29). In addition to surgery and chemotherapy, several novel molecular targets for ES treatment have recently been identified and investigated in preclinical and clinical settings; treatments targeting receptor tyrosine kinases, the EWSR1 fusion protein and mTOR have demonstrated promising results (30). There has also been increasing interest in the immune responses of patients with ES; immunotherapies using T cells, NK cells, cancer vaccines and monoclonal antibodies have been considered, especially for recurrent cases (30). In the patient described in the present report, owing to the sudden onset of conus medullaris compression, and following multidisciplinary consultation as well as consultation with the patient and his family, it was decided to perform immediate surgery to remove the tumour mass and relieve the spinal cord compression.

The median survival time of patients with all stages of ES is 26.14 months, and the median survival of patients in the late metastatic stage is 5.6 months (31). Although no randomized studies of ES occurring in the spinal canal have been published, ifosfamide-based chemotherapy may have a positive impact on overall survival (32). A multidrug chemotherapy regimen can be administered for bone ES; however, responses to chemotherapy regimens vary. The most effective drugs include vincristine, actinomycin D, high-dose cyclophosphamide, doxorubicin, etoposide and ifosfamide (32). In the present case, radical surgery was performed, followed by combination chemotherapy, but the disease continued to progress. Thus, the best solution for each patient may be established following extensive discussions with the patient and his/her family.

In conclusion, the present report describes for the first time a rare case of a Chinese patient with localized ES occurring in the spinal canal, as confirmed by genetic and ultrastructural analyses. This case and previous reports indicate that surgery combined with chemotherapy and radiotherapy may contribute to significant improvements in survival; however, for each patient, the best treatment plan should be established through discussions with the patient and his/her family.

Ethics approval and consent to participate

This study was approved by the Ethical Committee of the First Affiliated Hospital of Guangdong Pharmaceutical University (Guangzhou, China) and conformed to the relevant regulatory standards.

Patient consent for publication

Written informed consent was obtained from the patient for this study. All patient identifiers have been removed.
Socio-cultural characteristics of the Russian Indigenous communities in the Barents region: Political and legal perspectives
Legal regulation of the socio-cultural processes taking place in the Barents Euro-Arctic Region (BEAR) is not effective without involving indigenous communities, a specific population group representing the traditional ethno-cultural landscape of the region, unchanged for centuries yet now affected by globalization. Owing to the cross-border nature of the BEAR, socio-cultural characteristics acquire special synergetic properties determining the vectors of intergovernmental cooperation. With its land areas making up the largest part of the BEAR, Russia appears to be the key actor in the region's socio-cultural development, and it is difficult to estimate the progress of this unity without understanding the legal and political background of Russia in the field of indigenous issues. This paper sheds light on Russian policy and regulation of indigenous-related issues by examining five regions that are both part of the Russian Arctic Zone and BEAR members. The authors compare the essential actions and measures these Russian Arctic regions have taken to comply with the international indigenous agenda. The paper also examines the potential for cooperation in the BEAR through the establishment of institutional, political and legal mechanisms. The conclusion arrived at is that Russia needs to take major steps to ensure the progress and development of its part of the BEAR, and to contribute to the BEAR's development by fostering further discussion and cooperation on indigenous rights, joint decision-making, a clean environment, education and native languages, and peace and security.

The BEAR framework serves to maintain stability and cooperation, and to create an identity opportunity for civil society development (Hønneland, 1998), with indigenous peoples as the most important group. The cultural and everyday life of the BEAR members provides a certain background upon which to form their common identity: "People in the area all live in a region characterized by a harsh climate, a vulnerable nature, long distances to national centers, and a sparse population, which allegedly gives them some kind of common world" (Barents cooperation, 2017). In order to coordinate approaches to their societies' development, to meet common challenges and to cooperate at the intergovernmental level within the BEAR, specific "governance institutions" have been established, as listed in Table 1 below.

Table 1. Governance institutions of the BEAR (reconstructed from the text):
- Committee of Senior Officials - "operational level", coordinates the BEAC work
- Barents Regional Council (BRC) - regional level, indigenous peoples
- Barents Regional Committee - "operational level", coordinates the BRC work
- International Barents Secretariat - coordination level

It is important to understand that all elements of the BEAR structure function exclusively as an intergovernmental and interparliamentary regional forum. A broad range of issues falling within the scope of the Barents Regional Council, the Barents Regional Committee and the International Barents Secretariat is discussed within various working groups. The structure is constantly changing and being updated, like a "living organism", to meet emerging challenges: one novelty, for example, has been the working group on youth, even though at the time of the Kirkenes Declaration youth issues had not been regarded as a specific area of cooperation (Averianova, 2014).

Participation in regional cooperation in the North and in the Arctic is a major component of Russian foreign policy. The country's government prioritizes this interaction as a useful and effective tool to foster stability, confidence and sustainable development in the North (Program, 2007).
Special attention is paid to the indigenous peoples living in the region and, in particular, to such issues as education, public health, traditional activities, environmental protection and infrastructure improvement. For these purposes, certain administrative measures have been taken: the Indigenous Peoples Office in Murmansk was established, and the Indigenous Peoples Action Plan for 2005-2008 was proposed (Program, 2007). In 2013, the Russian Prime Minister (the Chairman of the Russian Government) Dmitry Medvedev underlined the importance of integration within the BEAR, stating: "We are encouraging the development of tourism, small business, scientific, cultural and educational exchange. We believe that a move to visa-free traveling is in the interest of the Barents Region" (RIA Novosti, 2013). The complication of political relations over the last five years and the international sanctions of 2014-2019 caused by the Ukrainian crisis did not affect cooperation between Russia and the other BEAR countries; a representative of the Ministry of Foreign Affairs of the Russian Federation expressed this very sentiment on the sidelines of the international session "Cooperation in the Arctic", part of the International Murmansk Business Week (Russian Foreign Ministry, 2018). The BEAR is one of the most important directions of Russian foreign policy in the North-West. Russian public authorities perceive the region as a political instrument for sharing best practices of interregional development and supporting democratic institutions in the modern "multilayer" civil society.

"The human dimension" is the key element of the BEAR, shaping its social and cultural landscape and including ethnic, cultural and religious components, e.g. the indigenous communities of the region and their traditional lifestyle, economies and crafts. These are the focus of a special working group on indigenous issues within the BEAR framework. The Russian Federation has the largest territory among the BEAR members and also the greatest number of indigenous peoples. The country is home to three indigenous peoples' groups in the region: (1) the Sámi, (2) the Veps and (3) the Nenets, each with their unique culture and identity. Their social and cultural development is crucial, being the most important element of sustainable development in Northwest Russia, with the potential to ensure national security and to promote further international integration. 'Sustainable development' in this realm means the combination of three dimensions: economic (balance between indigenous economies and non-indigenous industries), social (indigenous communities' development and well-being) and environmental (mitigation of adverse environmental impacts on the territories that are an essential part of the indigenous way of life) (Russian Federation, 2017). The social and cultural diversity of the BEAR is the rationale for strengthening international cooperation and joint projects, as the region has a unique capacity to sustain the ethnic and cultural landscape and to contribute to the protection of the national interests of the BEAR members.
The environmental factor has one of the key impacts on the life and activities of indigenous peoples, since their entire culture is associated with nature and life in the tundra and forest-tundra. The deterioration of the environmental situation resulting from the industrial development of the Arctic directly affects not only the reindeer's food base and the peoples' quality of life (through an increase in harmful toxic substances in the human body via the consumption of fish and poultry, in which these pollutants accumulate), but also their ability to fully realize their right to cultural customs and traditions, as nature degrades and former sacred places of worship begin to collapse. That is why there is a procedure known as "ethnological expert examination", which includes an assessment of the impact of industrial development on the development of an ethnos and its culture. In accordance with Russian law, "Ethnological expert examination is a scientific research into the influence of changes in the original living environment of indigenous small-numbered peoples of the Russian Federation and the socio-cultural situation on the development of an ethnic group."

The objective of our paper is to analyze the political and legal background of the Russian Federation related to its membership and activities in the BEAR. Based on this analysis, we endeavor to estimate the opportunities for Russia's integration into interregional cooperation and its promotion of the social dimension in the Arctic. The main focus is indigenous communities and their role in the region's development. To provide a holistic qualitative study of Russian law and policy within this scope, we employ content analysis of state and regional legislation and strategies, a case study method focusing on five regions of the Russian Federation, and statistical data analysis. Case studies provide a deeper understanding of phenomena, events, people or organizations (Gustafsson, 2017). The comparison of the Russian Arctic regions that are members of the BEAR is introduced in order to understand the ethnic composition and cultural distinctiveness of Russia and to illustrate the specifics of Russian governance in the Arctic, i.e. its priorities, directions and main mechanisms. Locations were selected on the basis of the zoning process that recently took place in Russia, under which the land territories of the Russian Arctic Zone were designated. We explore five regions of the Russian Federation - the Murmansk Oblast, the Republic of Karelia, the Nenets Autonomous District, the Komi Republic and the Arkhangelsk Oblast - describing their social infrastructure, economic activities, relations between indigenous peoples and other social actors, and the particularities of their political and legal frameworks. In total, sixty legislative acts, bylaws and strategies at the federal and regional levels have been analyzed, and statistical data have been examined, in order to present an adequate 'picture' of the socio-cultural situation of the Russian indigenous communities in the Barents Region.
Our study provides the grounds on which to determine the major steps Russia and the BEAR need to take to ensure progress and mutually beneficial development of the integrated Arctic region.

Russian national interests and international cooperation in the Barents region

a. Russian Arctic "concepts" and "strategies"

The geographical area of the BEAR covers 1.75 million km², of which about 75% is located in Russia (The BEAR, 2019). Russia's social and cultural development in the Barents region is contemplated by the Arctic-related strategic legal documents enacted in the country over the last decade. Russian strategic and conceptual documents on the Arctic reflect the interests of both the government and civil society, and justify Russia's activities in the Circumpolar and Polar regions. The documents shown in Table 2 below demonstrate the recent trends in the development of the Russian legal system regulating the main directions of national and foreign policy on the basis of presidential decrees in accordance with the Constitution (part 3, art. 80).

Table 2. Strategic Arctic documents and the short names used in this paper (reconstructed from the text):
- A Presidential act on the foundations of Russian state policy in the Arctic (Russian Federation, 2008b) - "Russian Arctic Foundations"
- The Presidential Act "The Development Strategy of the Arctic Zone of the Russian Federation" (Russian Federation, 2013e) - "Russian Arctic Strategy"
- The Presidential Decree "Foundations of the State Cultural Policy" (Russian Federation, 2014e) - "Cultural Foundations of Russia"
- The Presidential Act "Foreign Policy Concept of the Russian Federation" (Russian Federation, 2016) - "Foreign Policy Concept of Russia"

Once approved by the President in the form of a "legal concept", rather declarative in nature and containing soft norms, such legal acts are further developed and supported by a package of laws and regulations that contribute to the 'hard law' of the country. In addition, federal legislation enables the government to utilize targeted development programs in the most crucial areas (education, industry, etc.). This approach is exemplified by the Acts of the President approving two strategic Arctic documents - the Russian Arctic Foundations and the Russian Arctic Strategy. The acts outline the basic national interests in the Arctic, which are the exploitation of the natural resources of Russia's Arctic, the protection of its ecosystems, the use of the seas as a transportation system, and ensuring that the Arctic remains a zone of peace and cooperation. Established in response to the Russian Arctic Foundations, the Russian Arctic Strategy further identifies the priority areas for the Arctic: integrated socioeconomic development; advancement of science and technology; improvement of infrastructure; environmental security; international cooperation; and military security and protection of the state borders in the Arctic (Gladun, 2015). The "basic set" of Arctic strategic documents is destined to be supplemented by federal laws in the near future. To date, the strategic documents are not backed up by national legislation, as the Russian Arctic Strategy still lacks direct Arctic laws and regulations. The draft law 'Russian Arctic Zone Act', debated intensively in recent years (Jensen and Hønneland, 2015), was introduced in the State Duma in spring 2017. If adopted, the law will provide a regulatory and legal environment for the long-term sustainable development of the Russian Arctic. It will also introduce a special regime of funding and management within the economic, environmental and social dimensions (Gladun, 2019).
The Presidential order of May 2, 2014 "On Land Territories of the Arctic Zone of the Russian Federation" geographically contours the Russian Arctic Zone (Russian Federation, 2014d). It identifies all the territories that are members of the BEAR - the Murmansk Oblast, the Arkhangelsk Oblast, the Nenets Autonomous District and parts of the Republic of Karelia and the Komi Republic - as the Arctic Zone of the Russian Federation (AZRF). Jointly, the legal acts just mentioned influence the BEAR's structure and development.

Russia's strongest interregional cooperation of the last ten years is seen in the formation of the BEAR. The main reasons for this are both geographical and historical in nature: territorial proximity, shared history, a long international partnership dating back to Soviet times, and the general process of forming a single European economic space in the second half of the 20th century (Pelyasov, 2015). The "Foreign Policy Concept of Russia" (Russian Federation, 2016), in its paragraph 76, identifies the BEAR Council as one of the key platforms for the development of cross-border cooperation in the Arctic: Russia considers that the Arctic States that are members of the BEAR have a special responsibility for sustainable development in the region, and in this connection advocates further cooperation in the Arctic Council, the coastal Arctic Five and the Barents Euro-Arctic Council.

The Russian Arctic Foundations outline the national interests and the social and cultural development in the BEAR and the Arctic (sub. "з", para 7, part 6, Chapter II), stating that the strategic priorities of Russian Federation policy in the Arctic are to improve the quality of life of the indigenous population and to provide social conditions for economic activities (Russian Federation, 2008b). In addition, the Foundations propose the key measures for implementing the new social and economic development policy in the AZRF, namely:
- Educational programs for indigenous peoples: training for children to develop the skills needed to survive in extreme environmental conditions and to do well in the challenging circumstances of modern society;
- Equipment for distance learning;
- Programs of rational environmental management.

The Cultural Foundations of Russia constitute the official guidelines of the country's cultural policy, encompassing multiple dimensions:
- The "cultural distinctiveness" of Russia;
- The acceptance and role of the "traditional" religions of Russia;
- Recognition of the phenomenon of "social atomization", i.e. the gap in social connections (friends, families, neighbors), as one of the most serious problems of Russian culture;
- The restoration of "family education" as an important mechanism of quality education;
- The promotion of "traditional family values", etc. (Zaikov et al., 2017).

It is obvious that the key directions of AZRF development reflect the objectives of the BEAR stated in the Kirkenes Declaration (Kirkenes Declaration, 1993): economic cooperation, science and technology, regional infrastructure, environment, tourism, educational and cultural exchange, as well as projects particularly aimed at improving the situation of the indigenous peoples of the North.

b. Russia's international cooperation within the BEAR

The key legal act on the development of the Russian part of the BEAR is the order adopted by the Russian Government, the "Strategy for the Social and Economic Development of the North-West Federal District until 2020" (Russian Federation, 2011).
In conjunction with the Concept of Border Cooperation in the Russian Federation (Russian Federation, 2001), the document outlines the development priorities for the region and defines the main directions of Russia's border cooperation based on the European Outline Convention on Transfrontier Co-operation between Territorial Communities or Authorities. All the BEAR countries have signed this Convention (Council of Europe, 1980), one of whose main objectives, stipulated in its Preamble, is the social progress of border areas. The absence of specialized regional treaties within the framework of the BEAR is compensated by bilateral agreements between the member states, as well as by other agreements related to the BEAR.

The goals and principles of indigenous development, as well as the current situation of the indigenous peoples of the region, are described in the "Action Plan for Indigenous Peoples in the Barents Euro-Arctic Region 2016-2018" worked out by the Working Group of Indigenous Peoples (WGIP) in Murmansk in 2017 (WGIP, 2017). The Action Plan contains measures and projects aimed at the development of the indigenous peoples' communities and societies within the BEAR and at strengthening cooperation between the indigenous peoples of the BEAR, as well as goals striving for wider cooperation between the indigenous peoples of the BEAR regions. The main fields in focus are the development of trade and business, language and media, health and social issues, environment and culture. Paragraph 3 of the Action Plan sets the indigenous-related goals and objectives that serve as formal guidelines for the working groups and other bodies formulating and finalizing program documents. Underlining the general objective of indigenous peoples' development, the Action Plan articulates eight intermediate objectives, which we generalize in Table 3 below. All the objectives echo the global trends in the protection of indigenous peoples' rights since the adoption of the UN Declaration on the Rights of Indigenous Peoples (United Nations, 2007) and do not have any specific regulations within the BEAR framework.

At the same time, one cannot fail to mention that, although Russia is not a signatory state to the UN Declaration on the Rights of Indigenous Peoples, it is a state party to a number of acts that to one degree or another affect the rights of indigenous peoples, in particular the Genocide Convention (1948), among others. All of these instruments can be used to protect the rights of indigenous peoples. Moreover, according to the expert on the rights of indigenous peoples Mikhail Todyshev, after the World Conference on Indigenous Peoples in 2014 and the adoption of its final document, and following the endorsement of the Declaration on the Rights of Indigenous Peoples and the principles contained therein by four resolutions of the UN General Assembly (September 2007, September 2014, December 2016 and September 2017), the Declaration has acquired the status of a universally recognized norm of international law, and the principle of "free, prior and informed consent" has become a universally recognized principle of international law. Furthermore, Article 69 of the Constitution of the Russian Federation comes into play, and according to Article 15 (para 4), "universally recognized principles and norms of international law are an integral part of its legal system".
This does not entail an obligation to execute them, but it theoretically creates a legal opportunity to include the provisions of the Declaration, and to fix guarantees of compliance with the principle of "free, prior and informed consent", in the norms of federal and regional legislation.

Social structure in the Russian part of the BEAR

The Presidential Decree "On Land Territories of the Arctic Zone of the Russian Federation" determines the status of certain Russian regions as "Arctic territories". First of all, the Decree identifies the Arctic Zone of the Russian Federation administratively (Russian Federation, 2014d). Additionally, these regions are meant to be the "foreposts" of the Russian Arctic and engines of the country's economic growth, with substantial financial support allocated through governmental target programs. Each region has a specific economic background and various objectives for its development. This approach can give rise to a new system of effective resource allocation in which people, industries and natural resources provide for comprehensive social and economic development projects aimed at achieving strategic interests and ensuring national security in the Arctic regions (Gladun, 2019). As mentioned earlier, the "Arctic territories" include the Murmansk Oblast, the Arkhangelsk Oblast, the Nenets Autonomous District and parts of the Republic of Karelia and the Komi Republic, which are members of the BEAR. These regions, despite some heterogeneity, have certain commonalities (Zaikov, 2014):
1) They are heterogeneous, with diverse social groups including ethnic groups, indigenous peoples and migrants;
2) They have their own "regional" legislation on indigenous, environmental, cultural and other issues derived from the federal legal frameworks;
3) The scope and coverage of regional legislation vary greatly depending on various factors (the types of ethnic groups residing in the region, the economic potential of the territories, the political will of the regional leaders);
4) All the regions share common legislative gaps and shortcomings (many legal acts are declarative, duplicate federal laws or, conversely, use concepts and terms unknown in federal law, etc.).

The Arctic is home to four million people, most of whom live in northern Scandinavia and Russia. This includes three indigenous peoples in the European part of the Arctic: the Sami, the Inuit and the Nenets. A small percentage of the Komi peoples can also be found in this region. The European Arctic extends from Greenland in the west to the Ural Mountains in Russia in the east; in this part of the Arctic, the Sami live in Northern Norway, Sweden, Finland and Russia (Arctic region briefing, 2015). In Russia, the northern indigenous peoples traditionally inhabit huge territories stretching from the Kola Peninsula in the west to the Bering Strait in the east, which make up about two-thirds of the Russian territory (Batyanova et al., 2009). The northern indigenous peoples use the environment and natural resources for their living sustainably (Park, 2008). They are bearers of valuable and unique knowledge about Arctic landscapes and possess traditional values, culture and skills (AMAP, 2004). Their life-support system is closely linked to traditional lands and land use, and to the challenging conditions of the climate and geography: severe weather, limited natural resources and dispersed settlements.
In small groups the indigenous peoples of the Arctic can easily respond to major climatic and environmental changes by altering group sizes, relocating and being flexible with seasonal cycles in hunting or employment (Park, 2008). Smaller herds and camps of nomadic indigenous peoples are able to respond more flexibly to ecological changes because they can exploit smaller patches of pastures, including those surrounded by industrial installations. Thus their numbers see almost no growth, even though the birth rate remains sufficiently high. However, the same factors that ensure the high degree of adaptability of northern populations to their extreme living conditions also make it difficult for them to integrate with other cultures and to adjust to the continuing development of their ancestral territories (Artyunov, 2015).

The Murmansk Oblast

According to the Russian National Population Census 2010, the Murmansk Oblast was home to 795,400 people (Federal State Statistic Service, 2010). The area had been losing population for the previous six years. At present, no ethnic or religious tension is observed in the Murmansk Oblast, and outbreaks of extremism are rare (Sova, 2019). At the same time, a number of negative tendencies, typical of many Russian regions and having a negative effect on ethnic relations, can be mentioned, such as loss of ethical and traditional values, legal nihilism, and negative ethnic stereotypes (Russian Federation, 2013b). The key ethnic policy measures are described in the bylaw of the Murmansk Oblast "On the Strategy of Social-Economic Development until 2020 and for the Period till 2025" (Russian Federation, 2013b). The Strategy reveals the priorities of the governmental policy aimed at strengthening civil coherence, harmonizing ethnic relations and promoting ethnic and cultural diversity, promoting the all-Russia civic identity, supporting ethnic peace and harmony, supporting the indigenous peoples (the Sámi), cross-border cooperation, etc.

The Charter of the Murmansk Oblast (Russian Federation, 1997) provides the legal framework of ethnic policy and is the basis for indigenous-related regulations: the regional law on support of Sámi traditional resource use (Russian Federation, 2006b), and the regional law regulating land relations and identifying the list of remote areas and Sámi territories, namely the Kovdorsky, Kolsky, Lovozersky and Tersky Districts (Russian Federation, 2003b). A special law focuses on northern reindeer breeding (Russian Federation, 2008). The Charter of the Murmansk Oblast does not mention any specific ethnic or national policies in the oblast, but it stipulates protection of the indigenous peoples' rights, mainly the Sámi's. It is important to note that the Charter's reference to other "representatives of the indigenous minorities of the North" and their right to "traditional environmental management" does not have any real legal effect, since only the Sámi are in the "List of indigenous minorities of the Russian Federation" (Russian Federation, 2000). The Sámi people live in the Murmansk Oblast and enjoy the corresponding legal status (e.g. they have a right to create communities and use the system of preferences). Other indigenous peoples can claim rights and obtain social benefits only if they live on Sámi territories (Russian Federation, 2009b).
The Russian Sámi are represented in the "Sámi Parliament of the Kola Peninsula", a quasi-state representative body (Zadorin, 2015) established by the Sámi peoples and the Department for legal activity and local authority reforms of the Government of the Murmansk Oblast within the program "Economic and social development of the small indigenous peoples of the Murmansk Oblast 2006-2008" (Russian Federation, 2009c). De jure, the Sámi Parliament, called the "Council of Representatives", is a consulting body of the Government of the Murmansk Oblast; the term "Sámi Parliament" is not its official name. Some Sámi people are employed in non-traditional sectors of the economy, i.e. traditional occupations are secondary for them, and, vice versa, non-Sámi people who live on Sámi territories are involved in the traditional Sámi economy. The economic activities of both groups are regulated by the regional legislation (Russian Federation, 2009b).

The regional law "On Northern Reindeer Herding in the Murmansk Oblast" establishes legal, economic, environmental and social norms for northern reindeer herding, a traditional economic activity. The law promotes effective economic measures and supports the traditional way of life and culture. Legal definitions of key concepts are also provided: 'ethnic communities', 'reindeer herding', 'capacity of the reindeer pasture', 'reindeer breeding brigade', etc. (Russian Federation, 2003c). For example, the law defines the term 'ethnic community' as "a group of citizens who are permanent residents of indigenous peoples' territories and are involved in the indigenous economy". According to the law, the right to be involved in reindeer herding activities and to enjoy special guarantees, rights and support is not exclusively limited to indigenous peoples; representatives of other ethnic communities can be engaged in traditional economies (reindeer herding) as well (Russian Federation, 2003c). Both federal (Russian Federation, 1999b) and regional legislation (Russian Federation, 2008) aim to regulate and develop traditional culture, focusing on the main areas:
1) Infrastructure, libraries, museums;
2) Cultural activities;
3) Support of cultural associations of indigenous peoples;
4) Collaboration and networking with other indigenous peoples in Russia and abroad;
5) Support of educational programs with local history studies, folklore and handicrafts of indigenous peoples;
6) Media and radio broadcasts in indigenous languages.

The Republic of Karelia

Article 11 of the Constitution of the Republic of Karelia secures Russian as the state language, but the republican authorities have a right to establish other state languages if the will of the population is expressed through a referendum. The Constitution also secures the right of people to preserve their native languages and provides for their study and support. The regional law "On State Support of the Karelian, Vepsian and Finnish Languages in the Republic of Karelia" establishes a wide range of rights in the sphere of language policy for Karelians, Veps and Finns through the adoption of governmental support programs and the use of these languages for geographical naming, the discussion of political issues, social interaction, and economic, cultural and family relations (Russian Federation, 2004).

The Nenets Autonomous District (NAD)

One of the largest indigenous communities resides in the Nenets Autonomous District: there, the Nenets community numbers 7,504 (18.6% of the total population).
Other ethnic groups of the region are Russians, 26,648 (66.1%); Komi, 3,623 (9.0%); Ukrainians, 987 (2.4%); Belarusians, 283 (0.7%); Tatars, 209 (0.5%); and Azerbaijanis, 157 (0.4%). In the region, 19 Nenets indigenous non-governmental organizations are registered, and they are the most numerous social actors compared to other organizations: one national-cultural autonomy of Dagestanis, and five religious organizations (Orthodox Christians of the Moscow Patriarchate and Old Believers). In addition, there are two active public movements at the regional level: the Union of Reindeer Herders and the Association of the Nenets People "Yasavey" (NAD, 2019). The social and political situation as well as ethnic relations in the NAD have always been balanced and secure; no preconditions for social or ethnic tension have been identified. Local authorities support collaboration with local ethnic associations, and all the district residents are encouraged to keep ethnic peace and to prevent ethnic conflicts. The priority issues of ethnic policy in the district are equality of the NAD residents, keeping the peace, strengthening the social and economic background for effective implementation of ethnic policy, and social and cultural adaptation of newcomers in the district (NAD Administration, 2016).

Legal support of ethnic policy in the region is provided through a set of legal documents including the Charter of the Nenets Autonomous District (Russian Federation, 1995); the NAD Law "On State Support of Socially Oriented Non-Profit Organizations" (Russian Federation, 2011b); and the NAD Law "On the Nenets Language in the Nenets Autonomous District" (Russian Federation, 2013d). In addition, extensive secondary legislation adopted in the region provides well-structured legal and financial support in various areas related to indigenous peoples: governmental target programs for the Nenets (Russian Federation, 2013c); local grants for non-profit, socially oriented organizations (Russian Federation, 2011b; Russian Federation, 2014); and international and interethnic relations programs (Russian Federation, 2014b). The regional government's prioritized functions are to administer the Nenets' territories of traditional use (Russian Federation, 2001b) and to implement the federal Ethnic Policy Strategy at the regional level (Russian Federation, 2014c).

The NAD Charter describes the general principles of ethnic policy in the district and emphasizes the need to protect the Nenets' interests. In its Preamble, the Charter characterizes the Nenets as a key element of the region's society: "The Charter is a legal act of direct action expressing the will and interests of the Nenets and other peoples on the territory of the district" (Russian Federation, 1995). Article 14 of the Charter says: "The state institutions and administration of the district shall recognize and guarantee the rights of the Nenets people, preserve and develop their way of life, culture, language, environment, and traditional industries in accordance with the generally recognized principles and norms of international law and international treaties of the Russian Federation, federal and district legislation, and implement the policy of protectionism." It is important to note here that, from the point of view of international law, the term 'protectionism' is not a politically correct one. It implies some 'civilizational backwardness' of the indigenous peoples, which is a form of hidden discrimination.
More correct scientific and legal terms are 'the right to self-determination' and 'the right to development', and they are currently prevalent, insofar as the indigenous peoples are perceived as carriers of an alternative traditional culture related to sustainable development. Indigenous peoples' practices preserve biological diversity by applying ecologically friendly ways of natural resource management and by transferring their knowledge and skills to future generations. On the other hand, the Russian researcher and lawyer A.V. Akhmetova, referring to her colleague V.A. Kryazhkov, believes that only protectionism contributes to the real equality of peoples and thus ensures social justice (Akhmetova, 2012). Another Russian researcher, P.V. Gogolev, goes further. He supports paternalism and defines it as policy "based on national traditions and the will of the people of a sovereign state … [It is a] … responsible policy towards certain categories of population and ethnic groups in order to ensure the right to development, social equality, preservation and development of additional measures to protect rights through active participation of the interested categories in the government and social management" (Gogolev, 2014).

Article 15 of the NAD Charter states that the Nenets people and other indigenous peoples of the North are involved in the decision-making process at the regional or municipal level not only through representation but also through other forms of direct democracy in accordance with the district laws. An important norm is contained in Article 16 of the Charter: it states that the NAD authorities and the Association of the Nenets People "Yasavey" make collaborative decisions on all social and economic issues of the Nenets people. "Yasavey" is a member of the Coordination Council of the Association of Indigenous Peoples of the North, Siberia and the Far East of the Russian Federation (RAIPON, 2019) and it has a right of legislative initiative under paragraph 1 of Article 29 of the Charter. Article 17 requires the support of the traditional lifestyle and environmental management in accordance with the NAD laws in order to preserve the unique culture of the Nenets. The territories of the traditional habitat are mostly used for traditional activities and occupations of the indigenous peoples according to district laws and regulations (Russian Federation, 2001b). However, the traditional territories are not numerous; the main examples are the District Territory "Dawn of the North", created within the boundaries of traditional indigenous collective farms (Russian Federation, 2002b), and the traditional territory "Kolguev" (Russian Federation, 2002c). It is important to note that the process of arranging traditional territories is at a standstill (Toriya, 2011), and they exist only at the municipal level.

Articles 14, 17 and 18 of the NAD Charter secure the priority of the social and economic interests of the Nenets and other indigenous peoples, especially when it comes to extraction and development of mineral resources. Furthermore, paragraph 2 of Article 57 of the Charter states that the traditional lands of the Nenets are allocated for industrial purposes only after prior consultation and consent of the local authorities or through a local referendum (Russian Federation, 1995). The Nenets language is considered the most important element of the culture and traditional way of life of the Nenets.
Article 7 of the Charter enables the NAD Administration to approve, implement and fund governmental targeted programs to support the Nenets language (Russian Federation, 1995).

The Komi Republic (KR)

The ethnic composition of the Komi Republic is very complex, with more than 100 ethnic groups: Russians, 66%; Ukrainians, 6.8%; Tatars, 2.5%; Komi, 1.5%; Belarusians, 1.3%; and other ethnic groups (Russian Federation, 2015). The other six governmental programs focus on various aspects of the ethnic policy: "Culture of the Komi Republic", "Development of Education", "Development of the State and Municipal Administration", "Protection of the population and territories of the Komi Republic from emergency situations, provision of fire safety and human safety at water facilities", "Economic Development" and "Development of Physical Education and Sport".

The legal fundamentals of the rights of the indigenous peoples of the Komi Republic are set out in its Constitution (Russian Federation, 1994). Article 3 states that "The foundation of the Komi Republic and its name are related to the original residents of its territory - the Komi people." Such a provision is the legal embodiment of the collective right to ethnic identity, which continues to be a constitutional principle. It is worth noting that the legal 'ring-fencing' of the Komi people from 'other' peoples attests to their vital role in the history of the Komi Republic and to their claim to recognition. The claim resulted in equality of cultures and recognition of the Komi language as a second language of the Republic along with Russian. Article 75 of the Constitution of the Komi Republic secures the right of legislative initiative for the Komi indigenous peoples' association "Komi Voityr".

The Law "On State Languages" establishes two official languages of the Republic, Russian and Komi, and clarifies their use and application (Russian Federation, 1992). Legal regulation of language issues in the Komi Republic has a mixed dispositive-mandatory character. The abovementioned law establishes the equal use of the Russian and Komi languages in public institutions, enterprises and organizations. Every resident has a right to choose the language of public service and education. At the same time, the law regulates the use of languages by public authorities and administration only. According to Article 20 of the Law "On Education", all schools of the Komi Republic should have compulsory courses of Komi literature, history and geography: "The study of the Komi and Russian languages - official languages of the Komi Republic - shall be mandatory in all state-accredited educational organizations" (Russian Federation, 2006). However, the results of annual monitoring of ethnic relations show that "Parents are not satisfied with compulsory schooling in the Komi language and the general unpreparedness of the Komi Republican school education system for a high quality of teaching" (Rozhkin, Shabaev, 2014).

The Arkhangelsk Oblast

A contrasting case is that of the Arkhangelsk Oblast, where several large groups of indigenous peoples live: Nenets, 8,020 (0.65%); Komi, 4,583 (0.37%); and Chuvash, 1,357 (0.11%). There are no indigenous organizations or associations in the region, while other minority groups are present among the regional social actors. The social and political situation in the Arkhangelsk Oblast is traditionally safe and non-confrontational.
In the Arkhangelsk Oblast, no specific ethnic or cultural legislation exists; no places of traditional residence or economic activity of the Nenets are legally identified.

RAIPON as the mediator between the government, indigenous communities and industrial companies in ethnic policy, environmental security and related issues

The Association of Small-Numbered Indigenous Peoples of the North, Siberia and the Far East of the Russian Federation (RAIPON) is the oldest community association protecting indigenous peoples' rights in Russia. Indigenous peoples of the BEAR are also members of this organization. RAIPON's representatives in the BEAR are "Izvatas" (the indigenous NGO of the Komi Republic), the "Society of the Veps Culture" (from the Republic of Karelia), the "Association of the Kola Sámi" (representing the Murmansk Oblast) and the Association of the Nenets People "Yasavey" (the NGO uniting representatives of the Nenets Autonomous District and the Arkhangelsk Oblast).

The public authorities of the BEAR member regions function on the basis of the Federal Law "On the General Principles for Legislative (Representative) and Executive Bodies of the Regions in the Russian Federation" (Russian Federation, 1999), which, in its Article 26.3, stipulates, among other things, support measures for socially oriented non-profit organizations; protection of cultural heritage sites; support for "national-cultural autonomies", languages of ethnic groups and other objects of culture in educational institutions; protection of the traditional way of life of the indigenous peoples; and regional and inter-municipal programs and activities for children and youth. Chapter 3 of the Federal Law "On General Principles of the Local Self-Government in the Russian Federation" (Russian Federation, 2003) enables the municipal authorities to interact with the public cultural organizations; its normative content on cultural development issues is essentially identical to, and duplicative of, that assigned to the regional authorities.

Nowadays a great number of ethnic policy models can be observed in the world, but three of them (Table 4 below) are the most common (authors' classification). The territories of the Russian Arctic members of the BEAR use the 'integrative' ethnic policy model, in which RAIPON plays the role of an 'integrating link' in relations between public authorities, indigenous communities and industrial companies. Under the integrative model, the fundamental principle is the state's desire to ensure a common civic identity first, and an ethnic and cultural identity only after that; the hierarchy of legal acts is built on the vertical principle, whereby regional acts must follow federal standards and trends. The list of federal authorities involved in ethnic policy and social and cultural relations is presented in Table 5 below.

Since 2013, RAIPON has contributed to legal discussions on a number of extremely important Arctic issues, for example, preparing the list of small-numbered indigenous peoples and executing ethnological expertise in the territories of traditional habitat allocated for industrial purposes. The Association initiated the Federal Law on Reindeer Breeding, which is presently pending in the State Duma. RAIPON also advocates the use of indigenous traditional knowledge in the environmental management system of Russia, which is one of the hottest issues of environmental security in Arctic governance.
Russian legislation defines 'environmental security' as protection of the environment and vital interests of human beings in respect of the negative effects of economic activities, emergencies caused by technology or human actions, and their consequences (Russian Federation, 2002). The draft Framework Convention on Environmental Security of the CIS defines 'environmental security' as a system of political, legal, environmental, economic, technological and other measures aimed at the protection of the environment and human beings from the possible negative impact of economic and other activities and of natural and technological disasters, in the present and in the future (CIS, 2008).

Table 5. Federal legislative bodies involved in ethnic policy. State Duma: the Committee for Regional Policy and Issues of the North and the Far East; the Committee for Federative Structure and Local Self-Government. Federation Council: the Committee for Constitutional Legislation and State-Building; the Committee for Federative Structure, Regional Policy, Local Self-Government and Northern Affairs; the Committee for Agrarian and Food Policy and Environmental Management.

The focus of the Arctic governance system is on the threat posed by natural and anthropogenic factors to Arctic biodiversity and ecosystems, of which human beings are a part. Special political and legal measures, as well as international and national attention, should be given to indigenous peoples owing to their close integration with nature and dependence on ecosystems. Principle No. 22 of the Rio de Janeiro Declaration on Environment and Development (1992) states that indigenous peoples, their communities and other local communities have a vital role to play in environmental management and development because of their knowledge and traditional practices (United Nations, 1992b). The UN Convention on Biological Diversity (CBD), in paragraph (j) of its Article 8, encourages the member states to preserve and maintain traditional knowledge, innovations and practices of indigenous communities and their traditional lifestyles relevant to the sustainable use of biodiversity (United Nations, 1992). The norm promotes wider application of indigenous knowledge and equal sharing of benefits. Russia is a signatory state to these international documents, and its domestic legislation underlines the importance of indigenous communities and their participation in environmental management, especially in the Arctic. The "Environmental Security Strategy", approved by the President of Russia (Russian Federation, 2017b), indicates that oil and petroleum spills are a primary threat with long-term impact on the environment of the Arctic areas. Paragraph 71 of the "Foreign Policy Concept" explicitly refers to the environmental interests of indigenous peoples (Russian Federation, 2016c).

The Arctic (and the BEAR) is a territory with a highly vulnerable ecosystem, and it is still not the subject of universal international legal regulation (unlike Antarctica). The Permanent UN Forum on Indigenous Issues recognizes the dependence of indigenous peoples of the Arctic on four major traditional activities (hunting, reindeer herding, fishing and gathering), which are not only a means of subsistence but also elements of cultural identity. The changes and threats the indigenous peoples face in the Arctic include, but are not limited to, changes in animal populations, climatic instability and altering of the ice environment (Monks, 2017). A vivid example is the serious climatic changes that have occurred in Finland, Norway and Sweden.
Rains and warm weather in winter make it difficult for reindeer to access their forage base of lichen, which is an essential source of nutrition for them. The weather forces the Sámi herdsmen to switch to expensive feed, which also affects the social, economic and cultural foundations of the Sámi community. The data of the UNESCO Institute for Information Technologies in Education (UNESCO IITE) under the project "Adaptation to climate change: traditional knowledge of indigenous peoples of the Arctic and the Far North" demonstrate a whole set of climate changes, anthropogenic pressures and related environmental problems that directly affect traditional ways of life and indigenous communities (UNESCO, 2015). The BEAR has witnessed climate change and its consequences in the tundra as well (Table 6 below). The UNESCO data encourage the study and employment of the most effective and thoughtful traditional practices that can help mitigate the anthropogenic impact on the environment and be used for industrial purposes.

The role of the indigenous peoples in environmental security can be crucial if they assist in, for example, making traditional 'economic activity calendars' to determine the optimum time and location of a species so as to reduce pressure on its population, and in identifying the most vulnerable sites that play a decisive role in the reproduction of a species. The indigenous communities are capable of sharing their traditional system of grazing and of assessing reindeer-carrying capacity, which can protect the ecosystem from overgrazing and the spread of diseases. Alongside GIS systems, traditional methods of geolocation can be used to fill information gaps, as can traditional methods of monitoring and diagnosing sick and weakened animals.

Russian Barents industry: Influence on social and cultural development of the region

The Arctic region is an area of growing strategic importance in terms of increasing access to natural resources and new transport routes, as ice and snow conditions are undergoing rapid change. Economic developments are accelerating, which can be beneficial for the region and the global economy, yet there will be repercussions for the Arctic's fragile environment if they are not managed with care. In the process of industrial development of the Arctic territory and the rise of hydrocarbon production, new sources of contamination will eventually appear, resulting in a real threat to the fragile Arctic environment (Glomsrød and Aslaksen, 2009). Around 41% of Arctic oil resources and 70% of gas resources are in Russia; these significant economic, security and governance interests make Russia one of the most important players in the Arctic. In order to access, exploit and deliver Arctic natural resources to global markets, Russia also aims to develop critical infrastructure along the Northern Sea Route, including ports, search-and-rescue centers, route administration, ice-breaking capability and oil spill response capabilities (The Global Arctic, 2013). Recently, much effort has been made to regulate these activities and to prioritize the national and indigenous interests of the Russian Federation in the Arctic. The integrated development of the Russian part of the Arctic as a whole, and the BEAR in particular, is impossible without economic "megaprojects", or strategic investment projects. At the legislative level, there is neither a definition of 'megaproject' nor a classification of such projects.
Theoretically, and according to the bylaws regulating industrial development of the Arctic, we can distinguish nine categories of 'megaprojects' (Russian Federation, 2009; Russian Federation, 2010; Russian Federation, 2012; Russian Federation, 2013):
1) Integrated development of the Northern Sea Route (NSR);
2) Arctic ecosystem protection and liquidation of ecological damage;
3) Sustainable use of marine and terrestrial bio-resources;
4) Civil ship-building development;
5) Telecommunications development;
6) Development of solid minerals and hydrocarbons;
7) Tourism development;
8) Air communication development;
9) Development of environmentally safe energy systems.
All directions are applicable to the BEAR territories; specific activities within these 'megaprojects' are presented in the supplementary Table S1.

With the industrial development of the Arctic territory, exploration and production operations are likely to induce economic, social and cultural changes. The extent of these changes is especially important to local groups, particularly indigenous peoples, who may have their traditional lifestyle affected. If controls are not managed effectively, ecological impacts may also arise from other direct anthropogenic influences such as fires, increased hunting and fishing, and possibly poaching. Other complications for sustainable land use are the presence of trash, petrochemicals, noise and feral dogs near human settlements. If related problems occur, much territory is functionally lost. This degradation of the territory is in addition to the indirect effects of roads and infrastructure, such as degradation of vegetation and freshwater systems and increased poaching (Forbes et al., 2009).

Special attention should be paid to the relationship between 'megaproject' operators and local communities in the Arctic territories. The most socially responsible company in the Russian North is the oil company LUKOIL; its activities in the Nenets Autonomous District are in line with sustainable development objectives (Lukoil, 2019). The company cooperates successfully with reindeer-breeding farms, and money transfers to reindeer husbandry are an obligation under the contracts it has been awarded. For example, in 2007-2016 the amount of money transferred to local communities in the region was 306.2 million rubles. The company also provides financial support for various indigenous cultural events, such as the "Snowmobiles and reindeer race", and for a unique medical and social project, "Krasnyi Chum". "Social investing" is a new trend among industrial companies in the Russian Arctic, aimed not only at profit but also at building real partnerships with indigenous communities, moving away from a policy of confrontation.

Conclusion

The social and cultural development of the BEAR depends on a variety of factors and trends taking place in all of its member states, Russia included. In Russia, political and legal initiatives introduced in the last decade aim at balancing the social, economic and cultural interests of the indigenous peoples living in the Arctic regions in a situation where state-supported industrial 'megaprojects' are being developed in the same territories. The recent tendency of Russian Arctic-related legislation is to focus on the eight land territories which have the most potential for the country's economic development. At the same time, most of these territories are members of the BEAR, which underlines their crucial importance in the sustainable development of this entity.
Socio-cultural perspectives of the region can be revealed through various educational, cultural and ethnic projects administered by the governments of the Arctic regions, NGOs and research institutions, for example the "National Arctic Science and Education Consortium", established at the Northern (Arctic) Federal University (NArFU), and "The Arctic Floating University", a project of the NArFU Arctic Centre for Strategic Studies (NAREC, 2019). The political and legal incentives created in the eight Arctic territories have high potential to comply with international standards set for indigenous peoples' rights, in particular when large-scale economic developments are taking place in the regions and the social and environmental dimensions of Arctic sustainable development require special consideration. Joint international efforts should be directed to the key issues: sustaining ethnic peace; enhancing and protecting indigenous rights, for example the right to participate in the decision-making process; and providing for environmental security.

The goal of the BEAR members, in exploiting natural resources in the Arctic, is to maintain a balance between industrial development and the sustainable development of the indigenous peoples. It would be wise to use the opportunities the oil and gas industry brings for socio-economic development, and to create integrated plans of efficiently governed, mutually reinforcing social-ecological-economic development. Special attention should be given to preserving cultures and languages through specialized educational programs, because some of the languages of the BEAR indigenous peoples are in danger of extinction. Traditional knowledge should become a systemic instrument used for 'social investment' and active interaction between industrialists and indigenous peoples. Studying the customs of indigenous peoples and using them to regulate public relations, while respecting federal law, is the way to develop a unique cultural component at the level of regional governance and local self-government.

The five cases which we studied show that indigenous peoples, their communities and associations constitute a considerable part of the Arctic society and social structure of the Russian part of the BEAR. The main issues discussed at the intergovernmental, national and regional levels are cultural and ethnic diversity, peace and security, general education and traditional indigenous education, rational environmental management, and the participation of indigenous peoples in the decision-making process. The Russian Federation, as a member state of the BEAR, responds to all of these issues by adopting relevant strategies, federal laws and sub-laws, and regional legislation. The main attention is given, however, to cultural issues, which are consistently prioritized in the regional sociocultural policy. Generally, Russia's recognition of the importance and value of socio-cultural development in the framework of international cooperation is expressed in a set of policy and strategic documents, as well as in laws and regulations. The socio-cultural potential of the BEAR enables the Arctic Barents countries to act as a bloc and to take the lead in the discourse on sustainable development and the indigenous agenda, which will face inevitable challenges in the coming decades.

Data Accessibility Statement

No new data were generated for this study.

Supplemental file

The supplemental file for this article can be found as follows: • Table S1. Specific activities within the Arctic 'megaprojects'.
A community-based validation of the International Alliance for the Control of Scabies Consensus Criteria by expert and non-expert examiners in Liberia
Background: The International Alliance for the Control of Scabies (IACS) recently published expert consensus criteria for scabies diagnosis. Formal validation of these criteria is needed to guide implementation. We conducted a study to provide a detailed description of the morphology and distribution of scabies lesions as assessed by dermatologists, and to validate the IACS criteria for diagnosis by both expert and non-expert examiners.

Methods: Participants from a community in Monrovia, Liberia, were independently assessed by two dermatologists and six non-expert examiners. Lesion morphology and distribution were documented based on the dermatologist examination. Diagnoses were classified by the IACS criteria, and the sensitivity and specificity of non-expert examiner assessments were calculated.

Results: Papules were the most common lesions (97.8%). Burrows were found in just under half of participants (46.7%) and dermatoscopy was positive in a minority (13.3%). Scabies lesions were found in all body regions, but more than 90% of patients could have been diagnosed by an examination of only the limbs. Severity of itch was associated with lesion number (p = 0.003). The sensitivity of non-expert examiners to detect typical scabies ranged between 69–83% and specificity between 70–96%. The sensitivity of non-expert examiners was higher in more extensive disease (78–94%).

Conclusions: The IACS criteria proved a valid tool for scabies diagnosis. For the purposes of implementation, papules and burrows represent truly 'typical' scabies lesions. Non-expert examiners are able to diagnose scabies with a high degree of accuracy, demonstrating that they could form a key component of population-level control strategies.

Author summary

Scabies is a very common skin condition in both high- and low-income settings, with hundreds of millions of people affected each year. Recently, standardised criteria have been proposed to help improve the quality of scabies diagnosis, in particular in low-income settings where access to a skin specialist is very limited. In this study, conducted in Liberia, expert examiners conducted a thorough examination and recorded the different types of skin problems they found in participants with and without scabies. We then compared the accuracy of a diagnosis of scabies made by dermatologists to that made by non-specialist healthcare workers who had received a short training course over three days. We found that papules were the most common type of scabies lesion and were found in almost every single patient with scabies. A second type of skin lesion, called a burrow, was the next most common and was found in just under half of the participants. Other types of scabies lesions which have been described were rare in this study. We found that after the short training course the non-specialists were able to detect the majority of the cases of scabies correctly.
Our study has helped provide detailed data on exactly what types of skin changes are typical of scabies and demonstrated how short training programmes can help improve the skill of non-specialist examiners in diagnosing scabies.

Introduction

Scabies is a severe pruritic skin disease caused by the mite Sarcoptes scabiei var. hominis, which is a significant public health problem in many low-income settings. Globally, there are believed to be more than 400 million cases of scabies each year [1] and it is one of the commonest dermatoses that a health care provider will encounter in low-income settings [2]. The mainstay of diagnosing scabies is a thorough history and detailed clinical examination. Clinical examination may be complemented by other techniques, including dermatoscopy, non-invasive higher-power imaging devices or light microscopy of skin specimens [3], which allow definitive parasitological diagnosis. However, these techniques have low sensitivity, are time consuming and are impractical in many low-income settings due to financial and personnel constraints [3]. In most low-income settings there is an absence of trained individuals with expertise in skin disease, and health systems are dependent on non-expert examiners such as clinical officers and nurses to diagnose and manage patients with skin disease.

The adoption of scabies as a Neglected Tropical Disease (NTD) by the World Health Organization (WHO) has led to the development of scabies control programmes, which would benefit from robust methods of diagnosis. Strategies have previously been developed and validated to aid non-expert examiners in the diagnosis of scabies, impetigo and other common dermatoses. These approaches have been shown to have acceptable sensitivity and specificity when compared to examination by a reference-standard expert examiner [4][5][6][7]. Whilst promising, one challenge has been the lack of validated diagnostic criteria for scabies. The International Alliance for the Control of Scabies (IACS) [8] developed and recently published a detailed description of diagnostic criteria [3,9] for three levels of diagnostic certainty: 'A-Confirmed Scabies', which requires visualisation of the mite, ova or scybala; 'B-Clinical Scabies'; and 'C-Suspected Scabies'. In addition to the presence of scabies burrows or typical lesions affecting the male genitalia, the 2020 IACS criteria for Clinical Scabies include 'typical lesions' in a 'typical distribution' in an individual with itch and a history of contact with someone with scabies or someone with unexplained itch. The 'typical lesions', other than burrows, are defined as papules, nodules, vesicles and pustules. A typical distribution in adults is defined as lesions affecting one or more of the following sites: the distal forearm and hands, the axilla, the umbilical region, the groin and legs; in infants (children under the age of two) all body sites may be affected [3]. A complete skin examination is recommended, but more limited examinations may still have a high diagnostic yield and are more practical in non-clinical settings [10]. We conducted a prospective study to validate the performance of the IACS criteria by both expert and trained non-expert examiners, including a detailed description of the lesion morphology and distribution in individuals with scabies.

Materials and methods

This was a prospective diagnostic accuracy study conducted in urban Monrovia, Liberia, in February 2020.
Training of non-expert examiners

Training was delivered by two specialists in dermatology with experience of managing skin disease in low-income settings (expert examiners). Five non-expert examiners participated in the two-day training workshop. The first day of training consisted of classroom-based tutorials on the morphology of skin lesions and the clinical features and treatment of scabies, impetigo, infected scabies and dermatophyte infections. The second day consisted of supervised clinical training in People's United Community (PUC), Sinkor district, where the non-expert examiners performed clinical skin examinations and made diagnoses.

Validation study

We conducted a validation exercise with the two expert examiners, the five recently trained non-expert examiners and an additional non-expert examiner who had received training 18 months earlier as part of a separate study on screening for Buruli ulcer, leprosy, yaws and lymphatic filariasis but had not received the most recent training. Residents from the Raymond Field/Barrolle Practice Ground community, Sinkor district, were invited to attend for assessment and treatment of skin problems. Each participant was examined independently by the non-expert examiners and both expert examiners in a private setting. The non-expert examiners recorded whether a participant had itch and/or a history of contact with an individual with itch. They performed a skin examination (excluding the genitals) and recorded the presence of a skin problem and whether skin lesions were typical in morphology and distribution for scabies. They recorded the number of scabies lesions (1-10, 11-49 or ≥50) to assess the extent of disease [11,12]. In individuals diagnosed with scabies, the non-expert examiners recorded the presence of any secondary bacterial infection and classified the number of infected lesions: 1-5, 6-10, 11-49, ≥50.

The two expert examiners also elicited itch and contact history, and each participant was asked about severity of itch using the Severity of Pruritus Scale [13]. A clinical examination with the aid of a dermatoscope (Heine Delta 20 Plus, Herrsching, Germany) was performed. Dermatoscopic findings for scabies were recorded as positive or negative. Positive dermatoscopy was categorised as visualisation of the triangular-shaped dark anterior of the burrowing mite (delta-wing sign) and/or visualisation of the 'V'-shaped scale that may form at the entrance of a burrow (wake sign) [3]. For the purpose of IACS classification, only the former was considered diagnostic of confirmed scabies. Individuals diagnosed with scabies had the morphology of lesions (papules, vesicles, nodules and burrows) and the number of each at 19 predefined body sites recorded by one expert examiner.

The consensus diagnosis of the two expert examiners was used as a reference standard with which to evaluate the performance of the non-expert examiners. The sensitivity and specificity of each non-expert examiner compared to the reference standard were calculated, and their diagnoses were classified by IACS category B1 (burrows present), B3 (typical lesions in a typical distribution and two history features), C1 (typical lesions in a typical distribution and one history feature) or C2 (atypical lesions or atypical distribution and two history features). Non-expert examiners did not document specific lesion sites or examine the genitals, and thus an assessment of their performance in using category B2 (male genital lesions) was not possible.
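To make the classification step concrete, the category logic just described can be written out in code. The following is a minimal sketch in Python, not the study's actual analysis code: the `Assessment` record and its field names are hypothetical, category B2 is omitted because the genitals were not examined by non-expert examiners, and the category definitions are paraphrased from the descriptions given above.

```python
# Hypothetical sketch of the IACS category logic applied to a screening record.
# Not the study's analysis code; the field names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    burrows: bool               # burrows seen on examination
    lesions_present: bool       # any scabies-like lesions found
    typical_morphology: bool    # papules, nodules, vesicles or pustules
    typical_distribution: bool  # lesions at typical body sites
    itch: bool                  # personal history of itch
    contact: bool               # contact with a person with itch or scabies

def iacs_category(a: Assessment) -> Optional[str]:
    history = int(a.itch) + int(a.contact)
    if a.burrows:
        return "B1"  # clinical scabies: burrows present
    if a.lesions_present and a.typical_morphology and a.typical_distribution:
        if history == 2:
            return "B3"  # clinical scabies: typical lesions, typical sites, both history features
        if history == 1:
            return "C1"  # suspected scabies: typical picture, one history feature
    elif a.lesions_present and history == 2:
        return "C2"  # suspected scabies: atypical lesions or atypical distribution
    return None      # criteria for scabies not met

# Example: an itchy participant with papules in a typical distribution, a
# contact history and no burrows classifies as B3.
print(iacs_category(Assessment(False, True, True, True, True, True)))  # -> B3
```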
It was calculated that at least 40 individuals with and without scabies were needed to detect a sensitivity of the non-expert examiners of 90% ± 10% compared to the reference standard. Data were collected anonymously directly on Android devices (Samsung Galaxy Tab A) using the Open Data Kit (ODK, Seattle, USA, 2010) application and uploaded remotely to the dedicated secure server at the London School of Hygiene and Tropical Medicine. Data were analysed using R 3.3.0 [14]. Individuals who were diagnosed with scabies by the expert examiners were offered treatment with oral ivermectin or benzyl benzoate lotion. Ethical approval was obtained from the Ethics Committees of the London School of Hygiene and Tropical Medicine (Reference 17796) and the University of Liberia-Pacific Institute for Research and Evaluation Institutional Review Board (Reference 20-01-195). Written informed consent was obtained from participants aged 18 years and older and from the parents or guardians of children. Verbal assent was obtained from children who were able to provide it.

Results

One hundred and forty-seven individuals were examined by the expert examiners; of these, 135 were examined by all the non-expert examiners and were included in the validation analyses. The 12 participants who were not examined by all the non-expert examiners were excluded from calculations of the sensitivity and specificity of non-expert examiners for the diagnosis of scabies.

Diagnoses of overall cohort

One hundred and forty-seven participants underwent examination by the expert examiners. The median age was 17 years (IQR 6-31) and 97 participants (70%) were female. 128 individuals (87.1%) had a cutaneous diagnosis and 139 individuals (94.6%) reported a history of itch. Scabies was the commonest diagnosis; 44 individuals (29.9%) were diagnosed with scabies by both expert examiners. There were a further two cases where the expert examiners disagreed about a diagnosis of scabies (1.4%). Only four individuals (2.7%) had infected lesions. The median age of participants with scabies was 11 (IQR 3-23) and the majority were female (n = 28, 64.4%). The other two most common diagnoses were dermatophyte infections (n = 23, 15.6%) and atopic dermatitis (n = 17, 11.6%) (Table 1).

Clinical features of the IACS criteria assessed by an expert examiner

Forty-five of the forty-six cases of scabies diagnosed by either expert examiner underwent a comprehensive examination including full body examination, lesion counting and dermatoscopy. The IACS classification for these 45 individuals is shown in Table 2. Papules were the most common lesion type and were present in 45 individuals with scabies (97.8%), followed by burrows (n = 21, 46.7%) (Table 2). The median number of scabies lesions was 117 (IQR 47-164), of which the majority were papules (Table 2). All individuals with scabies reported a personal history of itch and 43 (93.5%) reported a history of contact with an individual with itch. Dermatoscopy was consistent with scabies in 19 individuals, all of whom had burrows on examination, and mites were visualised in 6 of these cases (Table 2). The number of scabies lesions was strongly associated with the degree of reported itch (p = 0.003) graded using the Severity of Pruritus Scale. Individuals with mild itch had a median of 55 lesions (IQR 46-77), those with moderate itch a median of 98 lesions (IQR 41-148), and a median of 159 lesions (IQR 133-256) was found amongst individuals who reported severe itch with sleep disturbance (Fig 1).
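The paper does not state which statistical test produced the p-value of 0.003, so any reconstruction is speculative; a rank-based comparison such as the Kruskal-Wallis test would be one plausible choice for skewed lesion counts across three itch grades. The sketch below uses invented lesion counts purely to show the shape of such an analysis.

```python
# Speculative sketch: comparing lesion counts across itch-severity groups with
# a Kruskal-Wallis test. The counts are invented for illustration, and the
# choice of test is an assumption, not taken from the paper.
from scipy.stats import kruskal

mild     = [55, 46, 77, 52, 63]      # hypothetical per-participant lesion counts
moderate = [98, 41, 148, 110, 87]
severe   = [159, 133, 256, 171, 190]

statistic, p_value = kruskal(mild, moderate, severe)
print(f"H = {statistic:.2f}, p = {p_value:.4f}")
```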
The buttocks and groin (76%), wrist (73%), torso (71%), forearm (67%) and inter-digital web-spaces (64%) were the most common locations for papules (Table 3). Compared to a full body examination for all lesion types, papules could have been detected through a limited examination involving the face and upper limb, including the axilla and fingers, in 91.1% of cases of scabies.

Performance of non-expert examiners

The median age of the 135 participants examined by all non-expert examiners was 18 years (IQR 7-32) and 89 (65.9%) were female. The expert examiners reached a consensus diagnosis of scabies in 42 individuals (31.1%), which included all four cases of infected scabies. When we considered any of the categories B1, B3, C1 or C2 as diagnostic of scabies, the sensitivity of the non-expert examiners ranged between 73% and 93% and the specificity ranged between 56% and 96%. When we excluded IACS category C2 (presence of either atypical lesions or an atypical distribution), the sensitivity of the non-expert examiners ranged between 69% and 83% and the specificity between 70% and 96% (Table 4). The sensitivity of non-expert examiners was lower in scabies-affected individuals with fewer lesions (range 30-60%) and higher in those with more extensive lesions (range 78-94%). The six non-expert examiners made 160 false positive diagnoses. 104 of 160 (65%) of these false positive diagnoses were accounted for by atopic dermatitis/eczema, folliculitis, tinea corporis and capitis, lichen simplex, lichen planus and pityriasis versicolor. These seven pruritic conditions (of the 25 non-scabies diagnoses made) accounted for 65.2% of people with non-scabies diagnoses. The non-expert examiner trained 18 months previously, who attended only for the validation study, was equivalent to the non-expert examiners who had undergone training and assessment in the week of the study (Table 4).

Discussion

This is the first study to undertake validation of the 2020 IACS Consensus Criteria for the Diagnosis of Scabies, which incorporate a comprehensive explanation of how to apply the diagnostic criteria. We found that agreement between the two expert examiners on the presence of scabies was high (96% of cases), providing confidence in our reference standard diagnosis. The symptoms and signs used to define five (A3, B1, B2, B3 and C1) of the six categories tested showed diagnostic validity in this setting with a complete skin examination performed by dermatologists and supplemented with dermatoscopy.
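For reference, the per-examiner sensitivity and specificity values discussed above reduce to simple counts over each examiner's diagnoses cross-tabulated against the expert consensus. A minimal sketch of that calculation follows; the example diagnosis pairs are invented, not study data.

```python
# Minimal sketch of per-examiner sensitivity and specificity against the
# expert consensus reference standard. The example pairs are invented.

def sensitivity_specificity(pairs):
    """pairs: iterable of (reference_positive, examiner_positive) booleans."""
    tp = sum(1 for ref, ex in pairs if ref and ex)          # true positives
    fn = sum(1 for ref, ex in pairs if ref and not ex)      # false negatives
    tn = sum(1 for ref, ex in pairs if not ref and not ex)  # true negatives
    fp = sum(1 for ref, ex in pairs if not ref and ex)      # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Example with made-up diagnoses for six participants:
pairs = [(True, True), (True, False), (False, False),
         (False, True), (True, True), (False, False)]
sens, spec = sensitivity_specificity(pairs)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```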
Non-expert examiners were able to diagnose scabies with a high degree of accuracy after attending a two-day training programme, compared with the reference standard diagnosis made by two skin specialists. When including the IACS 2020 diagnostic criteria for clinical and suspected scabies (categories B1, B3, C1 and C2), the mean sensitivity of non-expert examiner diagnosis was high at 85.4% (range 73-93%), with a specificity of 71.4% (range 56-96%). In LMIC settings there are few trained skin specialists, but our results indicate that in the absence of such experts and specialist diagnostic equipment, trained non-expert examiners are able to diagnose scabies [3]. A non-expert examiner who had received training on the diagnosis of scabies 18 months previously performed to a comparable standard to more recently trained non-expert examiners. This suggests that diagnostic skills can be retained, which would greatly strengthen the benefits of appropriately evaluated, structured training programmes for non-expert examiners. However, this requires further assessment. The sensitivity achieved by the non-expert examiners in this study is higher than that reported in a comparable study examining the diagnostic accuracy of non-expert examiners in the Solomon Islands, which documented a sensitivity of 55.3% (range 41.5-64.9%) [11]. The authors of that study also reported that the sensitivity of non-experts was higher for more extensive scabies and lower in those with fewer skin lesions.

When including only the IACS diagnostic criteria that consider typical lesions or distributions (i.e. excluding category C2: presence of either atypical lesions or an atypical distribution), the mean sensitivity of non-expert examiner diagnosis was moderately lower at 77% (range 69-93%), but specificity increased to a mean of 81.4% (range 70-96%). This suggests, perhaps unsurprisingly, that non-expert examiner diagnosis of scabies is more accurate when patients present with typical features. Future training of non-expert examiners would likely be improved by including training on recognition of common dermatoses which confound the diagnosis of scabies, as these contribute significantly to false positive diagnoses.

We have shown that the C2 categorisation of suspected scabies does not perform well in this setting when used by non-expert examiners. None of the scabies diagnoses made by the experts were categorised as C2. This may be because such a pattern of scabies (atypical lesions or distribution, with a history of itch and contact with an affected individual) is rare in this setting. In this setting, an individual diagnosed with suspected scabies due to an atypical clinical pattern is more likely to have another of the common pruritic skin disorders. The C2 category may prove useful in other settings where atypical clinical patterns have been reported [15]. Of the cases examined by non-expert examiners, the expert examiners identified burrows in 20 individuals (47.6% of cases). Burrows are challenging lesions to locate even for very experienced clinicians [15,16] and our study confirms, as reported elsewhere [11], that this is also the case for non-expert examiners. Four of the non-expert examiners failed to correctly locate any burrows; one non-expert examiner detected burrows in three individuals.
Whilst one non-expert examiner did correctly identify burrows in eight individuals (40% of cases with burrows), they also noted burrows in eight participants where burrows had not been noted by the expert examiners, suggesting that this increased sensitivity came at the cost of a significant decrease in specificity. It is unlikely that detection of burrows will play a significant role in the diagnosis of scabies by non-expert examiners in communities with a high burden of disease. The length of training for the non-expert examiners would likely have needed to be considerably longer to enable them to develop an improved ability to identify and locate burrows.

Limitations

The non-expert examiners did not collect data concerning the distribution of lesions, and we were not able to assess their performance in applying the B2 (male genitalia) criterion. We did not perform light microscopy of skin scrapings or use a high-powered imaging device, and so we have not assessed two of the three criteria for confirmed scabies. We were unable to assess the diagnostic accuracy of the non-expert examiners for impetigo and infected scabies due to the low number of cases seen. In the diagnostic accuracy study undertaken in the Solomon Islands, the non-expert examiners showed broadly similar performance with regard to the sensitivity and specificity of impetigo diagnosis compared with their diagnosis of scabies cases [11].

Conclusions

The 2020 IACS Consensus Criteria for the Diagnosis of Scabies performed well when applied by dermatologists or non-expert examiners in this setting. The high level of diagnostic accuracy achieved by the non-expert examiners in our study indicates that this cadre of health professional is an effective resource for the diagnosis of scabies in LMICs. Further work is needed to demonstrate the validity and reliability of the IACS criteria in other settings, rural and urban, community-based and in health care facilities, and with other cadres of workers. Scabies presents a significant health burden in many low-resource settings, and our study indicates that, with short, focused training sessions, non-expert examiners could form a key component of scabies control strategies. To further inform control strategy development, it will be important to ensure that non-expert examiner training packages focus on the key diagnostic components of the IACS criteria and that attention is given to common differential diagnoses to reduce the likelihood of misdiagnosis. The design and delivery of sustainable educational interventions will need to be evaluated, and appropriate individuals recruited to lead the training of non-expert examiners. Well-trained non-expert examiners who are able to use the clinical diagnostic criteria will be key to the accurate assessment of the burden of scabies in LMICs and to the implementation of strategies to control scabies at the population level.
Phenotypic Heterogeneity in 5 Family Members with the Mitochondrial Variant m.3243A>G
Phenotypic Heterogeneity in 5 Family Members with the Mitochondrial Variant m.3243A>G Case series Patients: — Final Diagnosis: Metabolic acidosis Symptoms: Deafness Medication: — Clinical Procedure: — Specialty: Neurology Objective: Rare disease Background: The pathogenic mitochondrial DNA variant m.3243A>G is associated with a wide range of clinical features, making disease course and prognosis extremely difficult to predict. We aimed to understand the cause of the broad intra-familial phenotypic heterogeneity in a large family carrying the variant m.3243A>G. Case Reports: Thirteen family members were clinically affected. Clinical manifestations occurred in the brain, eyes, ears, endocrine organs, myocardium, intestines, kidneys, muscle, and nerves. Five family members carried the m.3243A>G variant. The 2 most severely affected patients were the index patient, a 60-year-old woman, and her sister, who was deceased. The phenotypic features most frequently found were hypoacusis and cerebellar atrophy. Hypertrophic cardiomyopathy was diagnosed in 3 family members. Short PQ syndrome and gestosis had not been reported to date. The broad phenotypic heterogeneity was attributed to variable heteroplasmy rates and variable mtDNA copy numbers. All affected patients benefited from symptomatic treatment. Conclusions: The mitochondrial DNA variant m.3243A>G can manifest phenotypically with a non-syndromic, multisystem phenotype with wide intra-familial heterogeneity. Rare manifestations of the m.3243A>G variant are gestosis and short PQ syndrome. The broad intra-familial phenotypic heterogeneity may be related to fluctuating heteroplasmy rates or mitochondrial DNA copy numbers and may lead to misdiagnosis for years. Background The mitochondrial DNA (mtDNA) variant m.3243A>G manifests with syndromic or non-syndromic phenotypes [1]. Among the syndromic phenotypes, mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes (MELAS) syndrome is the most well known [2]. Other syndromic phenotypes due to the variant m.3243A>G include maternally inherited diabetes and deafness (MIDD) syndrome, myoclonic epilepsy with ragged red fibers (MERRF) syndrome, Leigh syndrome, and MELAS/KSS (Kearns-Sayre syndrome) overlap syndrome [1]. Non-syndromic phenotypes commonly include multisystem disorders affecting the brain, eyes, ears, endocrine organs, myocardium, gastrointestinal tract, and kidneys in variable combinations. Little is known about the frequency and expression of non-syndromic m.3243A>G-related phenotypes. Here, we present a family carrying the variant m.3243A>G with a non-syndromic phenotype and broad intra-familial phenotypic heterogeneity. Case Reports The index patient (III/3) was a 60-year-old white woman, height 164 cm, weight 45 kg, with a multisystem mitochondrial disorder (MID) initially classified as MIDD syndrome. The initial clinical manifestation was growth retardation resulting in short stature. At age 35 years, hypoacusis became apparent, requiring bilateral hearing devices. At age 39 years, ataxic gait developed, attributed to cerebellar atrophy on imaging. At age 48 years, she underwent subtotal thyroidectomy because of struma multinodosa. Starting at age 52 years, she experienced recurrent episodes of vertigo, which later became permanent and were attributed to vestibular dysfunction. Starting at age 55 years, slowly progressive weakness of the upper limbs and recurrent cramps of the upper and lower limb muscles developed, preventing her from sleeping properly.
Additionally, she experienced easy fatigability. Starting at age 59 years, she experienced paresthesias of the distal lower limbs. Nerve conduction studies revealed axonal, sensorimotor polyneuropathy. A genetic work-up at age 57 years by means of PCR and Sanger sequencing revealed the common variant m.3243A>G in MT-TL1, the most common MELAS mutation, with a heteroplasmy rate of 20% in blood lymphocytes and 75% in buccal mucosa cells. She had a history of smoking 10 cigarettes per day but stopped smoking at age 58 years. Chronic obstructive pulmonary disease (COPD) stage II was additionally diagnosed. Blood chemical investigations revealed erythrocytosis, hepatopathy, and renal insufficiency (Table 1). Additionally, pre-diabetes was diagnosed, not requiring therapy. A neurologic exam at age 60 years revealed short stature, dry eyes, hypoacusis, mild dysarthria, sore neck muscles, diffuse weakness of the upper limbs (M4), weak foot extension (M5-), diffuse wasting of the upper and lower limbs, absent tendon reflexes on the upper and lower limbs, and gait ataxia. The family history was positive for multisystem MID in her mother (II/1), sister (III/2), and brother (III/1); in the sister (II/3) and brother (II/2) of the mother (Table 1); and in her 2 daughters (IV/1, IV/2), 2 female cousins (III/8, III/9), and 2 nieces (IV/5, IV/6) (Figure 1). In 5 of these clinically affected relatives, the m.3243A>G variant was detected (Figure 1). The phenotype varied considerably among these family members (Table 1). The mother of the index patient had died at age 80 years from pulmonary embolism. During adulthood she had developed hypoacusis, cerebellar ataxia, and polyneuropathy. One brother of the mother, aged 81 years, had experienced hypoacusis since age 50 years and had macular dystrophy. He had 3 healthy children (2 girls, 1 boy). The sister of the mother (II/3) had a suspected visual problem and died at age 87 years. The grandmother on the mother's side (I/1) had a history of visual impairment and hearing loss and died at age 91 years. The sister of the index patient (III/2) (height 155 cm, weight 42 kg) had epilepsy, diffuse cerebral atrophy with dementia, cerebellar atrophy with ataxia, leucoencephalopathy, hypoacusis requiring cochlear implants, retinal dystrophy, diabetes (HbA1c up to 12.1%), hyperlipidemia, struma nodosa requiring thyroidectomy, hypertrophic cardiomyopathy (hCMP) with intermittent short PQ syndrome and mild aortic and mitral insufficiency, hepatopathy, muscle weakness with hyper-CKemia, a single pancreatic cyst, surgery for squinting as a child, rectal discharge, near-drowning because of muscle weakness at age 62 years, and renal insufficiency. She carried the m.3243A>G variant with a heteroplasmy rate of 40-50% in buccal mucosa cells. She died at age 53 years from heart failure and renal insufficiency. The brother (III/1), aged 67 years, had a history of epilepsy, hypoacusis, macular dystrophy (fundus flavimaculatus), and hCMP (Table 1). The 2 daughters of the index patient (IV/1, IV/2) also carried the m.3243A>G variant. The older daughter (IV/2), now aged 32 years, manifested with cerebellar atrophy, hearing loss, and gestosis requiring cesarean section in gestational week 29. Her son (V/1) was born by cesarean section in gestational week 25 and presented with mildly delayed motor development. The younger daughter of the index patient (IV/1), aged 25 years, complained of impaired hearing and tinnitus. The index patient has 4 female cousins.
The oldest cousin (III/8), aged 75 years, has diabetes, hearing impairment, and vestibular dysfunction. The second cousin (III/9) died from cancer at age 32 years after the birth of her fourth child (2 boys, 2 girls). One girl, now age 45 years (IV/6), has diabetes and hypoacusis. The second girl (IV/5) had diabetes and hypoacusis and died from aneurysmal bleeding at age 35 years. The 2 other cousins are healthy. Discussion The presented family is notable for multisystem MID due to the variant m.3243A>G in 5 family members, manifesting with a non-syndromic phenotype with high intra-familial variability. MELAS was excluded, as none of the family members carrying the variant manifested with a stroke-like episode (SLE). None of the mutation carriers fulfilled the Japanese or Hirano criteria [3,4] for diagnosing MELAS. MIDD was excluded, as only 1 of the mutation carriers had deafness and diabetes together. The index patient was diagnosed only with pre-diabetes, which never required anti-diabetic medication. MERRF was excluded, as none of the mutation carriers fulfilled the diagnostic criteria for MERRF [5]. Leigh syndrome was excluded based on the non-compatible clinical presentation and the absence of typical features on cerebral MRI [6]. None of the mutation carriers fulfilled the diagnostic criteria for KSS. Based on these considerations, a non-syndromic MID with wide phenotypic variability was diagnosed [7]. The index patient (III/3) manifested in the brain, eyes, ears, endocrine organs, heart, intestines, kidneys, muscle, and nerves (Table 1). Her sister (III/2) manifested in the brain, ears, endocrine organs, liver, muscle, kidneys, and heart (Table 1). Interestingly, the index patient developed gestosis during both of her pregnancies. The brother of the index patient (III/1) manifested clinically in the brain, eyes, ears, and heart (Table 1). The older daughter of the index patient (IV/2) manifested in the brain and ears (Table 1). Her sister (IV/1) manifested only in the ears, with tinnitus and mild hypoacusis. The grandson of the index patient (V/1) became apparent because of mildly delayed motor development.
Residents’ perceptions of household food waste during the COVID-19 outbreak in Korea
Residents’ perceptions of household food waste during the COVID-19 outbreak in Korea Analyzing household food waste data at the global or national level remains a challenge, especially owing to the lack of statistical systems and socio-cultural differences. This study determined the factors affecting the intention of households to reduce food waste on Jeju Island and on the Korean mainland. Socio-demographic factors significantly influence household food waste generation; therefore, studies are often conducted depending on data availability in the corresponding regions. Based on national data and the theory of planned behavior, this study analyzed data using PLS-SEM (Partial Least Squares Structural Equation Modeling) to test the influence of multiple determinants and parameters on dependent variables and investigated the awareness of household food waste in Korea, focusing on Jeju Province, Korea’s largest tourist destination. A survey of 508 local residents established that all factors evaluated in this study, except for risk concerns due to COVID-19, were statistically significant. Among the three antecedents of age, income, and family size, age significantly affected all mediators, directly affecting behavioral intentions. The results are consistent with those of preceding research on the effects of socio-demographic drivers on household food waste generation. The results also indicate that in Korea, where the COVID-19 infection level is lower than that in other countries, residents did not change their food purchasing and waste production patterns. However, a multi-group analysis revealed that the risk concerns caused by COVID-19 differed between residents of Jeju Island and mainland Korea. Overcoming the vulnerability of waste management, including food dumping, is mandatory for locals and tourists on Jeju Island. Introduction The United Nations Environment Program (UNEP) began issuing biannual index reports of global food waste from 2021 onward as part of its plan to establish a food waste scale as one of the 303 indicators of the 17 Sustainable Development Goals (SDGs) [1]. The redistribution of food resources has become an increasingly pressing issue for the equitable survival of the world population, which has been increasing rapidly over the past few decades [2,3,4]. UNEP estimates the present volume of food waste at approximately 931 million tons annually, over 60% of which is generated by household consumers [1]. According to a report by the Food and Agriculture Organization of the United Nations (FAO) covering 2000 to 2017 [5], households have been identified as the primary contributors to food waste generation in the supply chain [6,7,8]. Households in higher-income countries display a strong tendency to produce food waste [5,9] because they frequently buy or cook more food than they can consume [8,9,10]. For instance, the annual food waste generated by an individual varies among countries, from as high as 338 kg in the Kingdom of Saudi Arabia in 2019 to as low as 7 kg in South Africa in 2018 [11]. By applying life cycle assessment, which evaluates the environmental impacts of food waste [12,13], Skaf et al. [11] found that the significant differences in food waste generation across countries are proportional to their impact on climate change.
However, collecting quantitative data from each country is a significant challenge [14,15,16,17] because of the lack of comprehensive knowledge on the quantification of food waste generation and of methodologies related to its composition [17,18,19], and because of the complexity of actors at different stages of the food supply chain [20,21]. Therefore, the regional classifications of food waste types and the quantitative national food waste emissions data presented by research institutes often differ [16,22,23], which hinders the data collection process and its reliability. Even within the same country, there are differences in the amount of food waste attributed to households depending on the methodology used. A 2018 survey revealed that approximately 63% of Korean households generated less than 500 g of food waste daily, equivalent to 180 kg annually [24]. However, the Ministry of Environment of Korea reported that in 2017, an average Korean citizen produced 1.02 kg of daily household food waste, amounting to 370 kg annually [25]. In addition to conflicts over collection procedures and the existence of regional data, food waste has become a social controversy because resources produced for consumption are disposed of before they are completely consumed, even as starvation creates conflicts [26] and concern about unwanted food waste and its serious impacts on the Earth [7] has increased. Although the problem of food waste has been researched and analyzed from economic and environmental perspectives [13,19,27,28], detecting the causes and solutions from a sociological perspective is necessary to understand food consumption and disposal holistically. Numerous factors that contribute to food waste generation cannot be presented as standard provisions or norms due to regional differences and social, cultural, and demographic backgrounds [3,17,19]. As a result, it is clear that sociologists in both developing and developed countries are responsible for establishing a consensus on food waste reduction by understanding general household members' perceptions and behavioral intentions as well as the influential local generators of food waste. The socioeconomic challenges created by COVID-19 lockdowns have made people focus on unavoidable issues such as food hoarding, the shelf life of food, and restrictions on movement [29,30,31]. The increase in the amount of waste caused by changes in living patterns, and the resulting disposal challenges, should also be considered. Therefore, we examined the factors that affect the food waste reduction behavior of local residents in South Korea, including Jeju Province, testing the influence of multiple determinants, among them the public's risk concerns due to COVID-19, on dependent variables. Since the effect of COVID-19 on food waste has emerged as a social issue in many respects, timely implications can be expected from this research. The remainder of this paper is organized as follows: section 2 presents the theoretical literature for the conceptual framework of the research model and a review of food waste, including extended timely factors; sections 3 and 4 present the development of the research design, methodology, hypotheses, and data analysis based on the survey questionnaires (Appendix). Finally, section 5 presents the conclusion, with theoretical and practical implications and scope for future studies.
Food waste: Facts and data Food waste is defined as 'food and the associated inedible parts removed from the human food supply chain in the retail and food service sector as well as households' [1]. Food loss is the reduction in the quantity or quality of food resulting from decisions and actions by food suppliers in the chain, excluding retail, food service providers, and consumers [5]. The sustainable consumption and production agenda aims to halve the amount of food waste produced per capita at the retail and household levels by 2030, across harvest, production, and supply chains [21,24,28,31]. As of 2011, approximately one-third of food produced for human consumption (approximately 1.3 billion tons) was wasted or lost, and the amount lost at the retail and consumer levels is yet to be accurately estimated [4,5,11,32,33]. According to national and sectoral reports on the food and supply chain from 2000 to 2017, food loss and waste at the consumption and household levels, including the retail sector, accounts for up to 37% [5]. In Korea, solid waste is classified into three categories: recyclables, food waste, and general waste. Food waste collection was stopped in 1996 at the landfills that received most of the food waste in Seoul and neighboring districts, and the Ministry of Environment banned direct landfilling of food waste in 2005. Seoul, Korea's capital city, has since faced a major social challenge in disposing of its food waste [34]. Public interest in waste recycling, resources, and circulation has increased [35], and the volume-based waste fee system (VWF), or 'pay-as-you-throw' concept, was implemented nationwide in Korea [32,36,37]. Thus, general and food waste are disposed of in standard garbage bags sold under the VWF, while recycled materials are classified into eight types: paper, plastic, glass, cans, scrap, vinyl, clothing, and Styrofoam [38]. Compared to the average annual food waste generation of 95-115 kg per capita in North America and Europe, Koreans produce 130 kg of food waste per capita annually. It has been suggested that this may be due to the Korean tradition of enjoying side dishes such as kimchi that are typically disposed of after every meal when left over [39]. However, researchers have noted that the lack of statistics on Korean household food waste makes it difficult to make informed policy decisions [24]. Food waste recycling has increased from 2% to 95% [39], and recycling of food waste using special biodegradable plastic bags began in 2013 [38,39]. According to OECD solid waste management data for 2014, Korea has the highest recycling rate (58.1%) among member countries [36]. Under the Resource Circulation Act 2016 [40], the Korean government set the goal of minimizing landfill and incineration for waste disposal, reducing waste generation to below 20% under mid- to long-term plans by 2027, increasing recycling rates to 82%, and reducing final waste disposal from 9% to 3% [35]. The Korean government has been piloting ICT-based Radio-Frequency Identification (RFID) devices that can track the collection status of food waste in real time since 2010, but they cover only 25% of the nation [25]. Jeju Island is the smallest province in Korea, with a population of less than 700,000, yet it attracts 10 to 15 million tourists annually [41,42]. The 628 tons of daily solid waste emissions, of which approximately 34% is food waste, is one of the most controversial social issues on the island [41].
Food waste: intentional drivers based on the theory of planned behavior The theory of planned behavior (TPB) is a modified theory that overcomes the limitation of the aggregation principle, namely that a single sample of behavior reflects the effects of a variety of different factors unique to the particular situation, occasion, and action observed, factors that are more closely related to that action [43]. The insight that behavioral achievement depends on both motivation (intention) and ability (behavioral control) has been applied to many theories related to motivation, such as learning and task performance [43,44]. The flexible research framework of the TPB allows a large number of researchers to extensively design their models to investigate the diverse determinants that support behavioral intentions for reducing food waste in contemporary contexts [17,45,46,47,48]. Economic factors such as income, price concerns, financial attitudes, food surplus, incentives in logistics, management, administration with corporate support, and buying best offers [3,49,50], together with environmental concerns regarding knowledge of the amount and separation of food waste [47,51,52], are the extended predictors that support the original TPB model for researching food waste. Socio-psychological factors such as demographic characteristics [7,48,53,54], risk perception [55], habits, emotions, beliefs, personal norms, and moral elements, including feelings of guilt [3,7,9,56], as well as government policies and restrictions [57], provide broader and more diverse determinants to researchers. The solid construction and impact of the theory have been indisputably applied as a theoretical foundation for consolidating the concept of consumer behavior [58] to reduce waste in food waste-related studies since 2013 [9]. However, the TPB does not always support every aspect of the relationship between human behavior and intention [7,54,59]. For instance, subjective norms fail to stimulate consumers' intention to reduce food waste when they are at restaurants [59]. Russell et al. [56] found that attitude does not affect the intention to reduce food waste; rather, negative emotions and habits intervene in stronger behavioral changes. They report that negative emotions were associated with greater intentions to reduce food waste but, contrary to their predictions, also with higher levels of food waste behaviour; in other words, participants who experienced more negative emotion when thinking about food waste intended to reduce their waste but actually ended up wasting more food. Previous research has suggested that reducing food waste through individual behavior requires non-cognitive determinants in addition to strong self-control and normative support. Evans [27] claims that surplus food results from material, cultural, and social conditions in the community where food is wasted, rather than from individual choices, attitudes, and behaviors. Since developed countries experience greater food loss and waste generation [17], financial causes related to the effects of individual behaviors on food consumption should be considered in the extended research model [3,5,20,59,60,61,62,63,64]. Increasing income affects food consumption patterns for each generation, which has resulted in a variety of studies based on age, education, employment status, family size, and number of children; therefore, these factors should be considered [5,6,48,62,64].
In addition to the above-mentioned variables, pathological factors arising from the COVID-19 pandemic have changed people's food consumption habits. Owing to the unprecedented outbreak of COVID-19 since the end of 2019, many countries no longer allow the routine meeting, eating, drinking, and social contact that people previously took for granted [23,64]. Food consumers panic-buy and hoard food, and mobility restrictions have significantly reduced the number of people meeting or eating out. This worrying situation continues in cities and countries experiencing lockdowns. Although COVID-19 has spread across the country, there has been no nationwide lockdown in South Korea since the end of 2019. However, as the number of confirmed cases has increased day by day, the number of unemployed in the lodging and food service sectors has risen considerably [65]. The consumption pattern of Koreans, which slowed down during surges in coronavirus cases, has returned to normal as the situation has stabilized, though consumers now show signs of avoiding direct social contact by purchasing online or using food delivery services [66]. Therefore, the COVID-19 pandemic has become a common indicator of people's risk concerns and precautions. In this study, we implemented the modified TPB model as an indicator of local residents' intention to reduce food waste, including consumer price consciousness and risk concerns due to COVID-19, with socio-demographic factors as antecedent variables that affect individual behavioral intentions in research related to food waste. The detailed hypotheses based on the conceptual model are discussed in the next section. Research model and hypotheses: study 1 To identify the factors that affect the degree of food waste reduction behavior of general residents in South Korea, including Jeju Province, this study analyzed data through a structural equation model estimated with the partial least squares method, testing the influence of multiple determinants and parameters on dependent variables. Following the basic concept of the TPB, the predictors usually used as dependents become mediators, socio-demographic factors were assumed to be independent variables, and the intention to reduce household food waste was defined as the dependent variable in this model. This study initially determined the relationships between demographic factors and two vulnerable factors: economic factors, highly dominated by the consumer price index, and the impact of COVID-19 as an emerging disaster. The study's conceptual framework is shown in Figure 1, together with the hypotheses of the study to be tested (H1-H8). Hypotheses: study 2 This study was completed by clarifying the differences in residents' perceptions of household food waste in Korea across regional boundaries. Although the amount of regional physical food waste can be measured locally from waste collection companies or RFID devices, it is difficult for researchers to determine residents' perceptions because there is no unified measurement tool available to analyze them as there is for data from the devices. Thus, the researchers conducted an advanced analysis using the multi-group analysis (MGA) method of Partial Least Squares Structural Equation Modeling (PLS-SEM) based on the conceptual research model presented above. Hypothesis 9. (H9) There are statistically significant differences in the effect of socio-demographic factors on the parameters affecting the intention to reduce household food waste between the residents of Jeju Province and mainland Korea.
By comparing the path coefficients exhibiting differences in perceptions between residents of Jeju Province and other provinces on the Korean mainland, practical implications can be drawn from the results. For this analysis, the researchers examined the paths where the cognitive difference between the two groups was statistically significant among the three socio-demographic factors and the five factors used as parameters (a total of 15 paths), applying the model of Study 1. An additional research model for this hypothesis is shown in Figure 2. Samples, data collection, and analyses With a population of 51,672,400, South Korea had 1,273,766 cumulative confirmed cases of COVID-19 on June 30, 2021, and 282 deaths. Meanwhile, the Jeju Special Self-Governing Province has a population of 675,293. Since the first confirmed cases of COVID-19 were reported there in February 2020, the cumulative number of confirmed COVID-19 cases was 1,262 with 1 death, and the average number of daily confirmed cases was 8.7 over the two weeks of the survey period [67,68]. In this study, we used IBM SPSS Statistics for Windows, Version 24.0 (IBM Corp., Armonk, NY, USA). Respondents who participated in the survey indicated the degree to which they agreed with their subjective opinions in a self-marking-style mobile survey form, with two to five questions presented for each factor. The three main independent variables (attitude, subjective norm, and perceived behavioral control) that affect behavioral intention in the TPB, as well as price consciousness and risk concerns due to COVID-19, which were adopted as the main variables in this study, were presented, excluding the question of final intention to reduce food waste. Participants rated each item on a 5-point Likert scale according to their views on the question, from 'strongly negative/disagree' through 'moderate/neutral' to 'strongly positive/agree.' All data collection was conducted via the mobile and online link sites, and it was impossible to proceed to the next question if there was an unanswered question; hence, there were no missing responses. However, only 508 cases were used for the analysis, because 5 of the 513 individual responses were excluded due to uncertain reliability. Among the latent variable groups, one item each from attitude (ATT5), perceived behavioral control (PBC1), and price consciousness (PC5) was eliminated because of low reliability/outer loadings. Regarding the reliability of each factor, it was confirmed using IBM SPSS that the Cronbach alpha value was 0.7 or higher, which was also confirmed in a second test using SmartPLS. The demographic characteristics investigated in this study were gender, age, educational background, monthly income, occupation, number of family members, and regional variables. The need for ethical approval was waived by the XXX [Blinded for Review] University's review board as the research article collected no sensitive personal information. In the process of data collection, informed consent was obtained from individuals on an online form. Frequency analysis and model assessment The socio-demographic characteristics of the participants are presented in Table 1. The male to female ratio of the respondents was 47.8:52.2.
The age group was relatively evenly distributed, with those in their 50s accounting for 24.4% (the largest group), followed by those in their 40s, 20s, and 30s, and those in their 60s (16.7%). University undergraduates accounted for the largest proportion (53.5%), and the educational background of the respondents was distributed in the order of high school, graduate school, and college. The largest monthly income groups were those earning less than US$1,000 and those earning US$2,000 to US$2,999, with 13.2% of respondents earning more than US$5,000 per month. Among the participants, white-collar workers accounted for 28.3%, followed by those belonging to other occupational groups (15.6%), and then housewives and students (14.4% and 13.8%, respectively). Of the participants, 30.9% had three-person households, 27.8% had four-person families, 17.7% each were single-person or two-person households, and 5.9% were large households with five or more members. During the survey period (from June 2021 to July 2021), when the research was conducted, the population of Jeju Province was 675,293, and the total population of the 16 other metropolitan cities and provinces in South Korea was 50,997,107. Accordingly, the total population of the Republic of Korea was 51,672,400. Jeju Province is a self-governing province that accounts for approximately 1.30% of the nation's population, with a unique isolated environment away from the mainland. Table 2 shows the validity and reliability assessment results for each measured variable constituting the latent variables of this research model. Three items in risk concerns due to COVID-19 were unstable, so we removed them and kept the two items with the highest loadings. In particular, two items in each variable, except for risk concerns (RC) and behavioral intention (BI), had relatively lower loadings, resulting in less than 0.50 reliability. However, all four items made above-average combinations that exhibit moderate convergent validity and overall internal consistency reliability. The variance inflation factor (VIF) is an indicator of how many times larger the variance of an estimate is for multicollinear data than for orthogonal data. A VIF value above 5 indicates a potential problem in the inner structure [70]. Multicollinearity was not observed in this model, since all VIF values were below 2. Meanwhile, the Heterotrait-Monotrait ratio of correlations (HTMT) measures discriminant validity in partial least squares modeling and is more stringent than the cross-loadings or the Fornell-Larcker criterion [69]. The researchers applied the HTMT as suggested by Hair et al. [71], based on threshold values between 0.85 and 0.90. All HTMT values are lower than 0.75, and the bootstrapping results show that 1 is not contained within the interval range (based on a 95% confidence level); thus, discriminant validity was also established. Research test (path analysis): study 1 As described in the previous section, three socio-demographic factors were considered as the antecedent/independent variables for the path analysis model in the study, after attempting every factor of the demographic characteristics. This indicates that all other factors, such as gender, academic background, and occupation, did not affect the model.
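As an illustration of the reliability and collinearity checks described above, the following sketch computes Cronbach's alpha for a block of Likert items and VIFs across items. The data here are randomly simulated, since the survey responses are not public, so the printed numbers are placeholders; only the formulas mirror the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(508, 4)).astype(float)  # simulated 5-point Likert items

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

def vifs(X):
    """VIF_j = 1/(1 - R^2_j), from regressing item j on the remaining items."""
    out = []
    for j in range(X.shape[1]):
        y, Z = X[:, j], np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(Z)), Z])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

print(cronbach_alpha(items))  # the study required >= 0.7; random data scores near 0
print(vifs(items))            # values below 5 (here near 1) indicate no multicollinearity
```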
As seen in Table 3, all three independent variables have a certain effect on their mediators (H1, H2, and H3), supporting our hypothesis that socio-demographic factors affect the behavioral intention of residents to reduce food waste. The direct effects should be considered (see Table 3); however, we found that age affects all mediators (p < 0.001, t = 2.906 to 6.845), while the effectiveness of income and family size was limited to certain determinants. For example, income variation can be explained only by risk concerns (p = 0.000, t = 5.278), and variations in residents' family size affect only subjective norms (p < 0.01, t = 2.722) and perceived behavioral control (p < 0.01, t = 2.621). The three mediators from the original TPB model, attitude, subjective norms, and perceived behavioral control, support our hypotheses significantly (p < 0.001, t = 2.660 to 11.868), as does price consciousness (p = 0.000, t = 3.529), the fourth factor in the model. Only risk concerns do not support these hypotheses. Assessing model fit in PLS-SEM is primarily based on predictive efficiency, unlike covariance-based structural equation modeling; the suitability of the research model for explaining the tests of the hypotheses rests on the path coefficients, the level of the R² values, and the effect size f² [71] in Table 3. Choosing the estimated model when summarizing the overall model fit is recommended, as model fit assessment in PLS-SEM remains arguable owing to its early phase of advancement. Standardized root mean square residual (SRMR) values below 0.10 or 0.08 are considered a good fit, while the NFI compares the Chi² value of the model against the Chi² value of the null model; as a result, the NFI produces values between 0 and 1, and the closer the NFI is to 1, the better the fit [71]. The coefficient of determination R², which provides the explanatory power of prediction of the model, is usually considered high at 0.26 or more, moderate at 0.13-0.26, and low below 0.13 [72]. The explanatory coefficient (R²) for behavioral intention to reduce HFW, the dependent variable of this study, is very high at 0.576, and the adjusted R² coefficient reaches 0.572, indicating that the slight difference does not affect the model construction. The results of the overall and specific indirect effects of all predictors in the model showed that age had very strong indirect effects (p = 0.000, t = 4.862) on behavioral intention. However, the paths from age to behavioral intention mediated by price consciousness and risk concerns were insignificant (p = 0.051, t = 1.644 and p = 0.106, t = 1.620), showing no mediating effects of those paths in this case. All paths from income to behavioral intention also had no indirect effects. However, family size is mediated by perceived behavioral control, which indicates that it has indirect effects in the model with p < 0.01 and t = 2.673. Finally, the researchers have drawn the structural model of the path analysis, identifying age as the most effective predictor among socio-demographic factors and both subjective norms and perceived behavioral control as the most effective mediators in the local context for reducing household food waste. ***p < 0.001, **p < 0.01, *p < 0.05. Figure 3 shows the most influential variables for each path for the eight hypotheses of Study 1 raised in this paper. The solid lines indicate significant influence, and the dotted lines indicate insignificant influence.
Among the demographic antecedents investigated during the period in which COVID-19 was influential, the variable showing the most effective path was age, and it was established that age and risk concerns about COVID-19 were directly proportional. However, it was found that risk concerns due to COVID-19 did not affect the intention to reduce food waste. Among the parameters, the variables with the highest influence on the intention to reduce food waste are perceived behavioral control and attitude. In particular, perceived behavioral control was found to have the strongest effect on intention, as it was also affected by age and the number of household members. As the research was conducted in both Jeju Province and on the Korean mainland, the second part of the research assessed the extent to which the differences between the residents of the different areas were significant, using a multi-group analysis (MGA). Research test (multi-group analysis): study 2 The minimum required size of the full dataset in this study was 222 when assuming a moderate effect size (f² = 0.15). The actual and effective sample size for the full dataset was 508, which was sufficiently large to avoid sampling errors. The researchers examined two structural models with separate datasets using categorical variables. The regional barrier was the only variable we considered before conducting the MGA, since Jeju is an isolated self-governing province in Korea. Table 4 displays the coefficients of determination (R²), predictive relevance (Q²), and effect sizes (f²) of the datasets. The R² values are the results for the endogenous constructs. The R² values of the dependent variable (BI) in the three models were 0.530, 0.656, and 0.576, respectively, when comparing Jeju Province (n = 208), the other provinces (n = 300), and the full dataset (n = 508). The Q² values indicate the combined features of in-sample explanatory power and out-of-sample prediction [73]. The Q² values in all cases showed predictive relevance, at 0.518, 0.650, and 0.347, respectively. The f² values rank the relevance of the predictors in the structural model. In this study, we found that PBC plays a major role in behavioral intentions. The effect size of PBC was above the medium threshold in the Jeju Province model (f² = 0.312) and large in the full dataset (f² = 0.520) and the other-provinces model (f² = 0.393). The final step of the MGA examined whether there were differences between the two regional groups and their path coefficients. The purpose of Study 2 was to examine the effects of socio-demographic factors between the island and the Korean mainland. Statistical significance was tested using the newly calculated p-values, as shown in Table 5. Three paths differed significantly between Jeju Province and the other provinces: age and PC (p < 0.05, t = 2.519), age and RC (p < 0.001, t = 3.686), and family size and RC (p < 0.05, t = -2.683). Discussion and conclusions This study established the predictors that can be applied to residents' attitudes toward reducing food waste in Jeju Province, Korea's leading tourist destination, and on the mainland of Korea. Based on Ajzen's TPB, numerous researchers have applied several predictors to fit structural equation research models using the partial least squares method.
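The effect sizes quoted above follow Cohen's f² convention, which compares the model's R² with and without the predictor of interest. A minimal sketch of the computation is given below; the reported R² for BI in the Jeju model is used, but the R² with PBC omitted is a hypothetical value back-computed to match the reported f², since the paper does not list it.

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's effect size f^2 = (R2_in - R2_out) / (1 - R2_in);
    roughly 0.02 = small, 0.15 = medium, 0.35 = large."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# 0.530 is the reported R2 for BI in the Jeju model; 0.383 is a hypothetical
# R2 with PBC omitted, chosen to reproduce the reported f2 = 0.312 for PBC.
print(round(f_squared(r2_included=0.530, r2_excluded=0.383), 3))  # -> 0.313
```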
We evaluated the measurement variables for model fit before analyzing the effect of each variable through path analysis of the 508 mobile survey responses. Socio-demographic factors are predictors that many researchers in food waste research have applied to modified TPB models to present their results and implications. Similar to the findings of Aschemann-Witzel et al. [74], Li et al. [6], Visschers et al. [7], and van der Werf et al. [54], we identified age as the most influential antecedent, in a positive direction, for the behavioral intention to reduce household food waste. The factor most affected by age was risk concerns, with the mean value negative (M = -3.09, p = 0.000), suggesting that senior citizens may be consuming less delivery food, even during the COVID-19 pandemic, and usually eat out less than younger people. Overall, individuals' normative beliefs and attitudes toward reducing food waste are directly and positively involved in determining their behavioral intentions as they age. Income works in the same way as age but in the opposite direction. Individuals who earn more eat out more when there is no quarantine risk and consume more delivery food while maintaining social distance. However, generating household food waste does not always depend on income. Some lower-income households do not produce less food waste than those with higher incomes [3,58]. This implies that there are complex determinants that control the perceptions or behavioral intention to reduce food waste in households [75]. In addition, it was established that family size does not directly affect behavioral intention to reduce household food waste; however, it is significant when mediated by perceived behavioral control in our model. Overall, individuals' incomes affected the risk concerns due to COVID-19. Risk concerns in this research specifically concern the changes in food consumption patterns during COVID-19, so the respondents' habits could be inferred from their answers. Our results suggest that, compared to respondents with lower incomes, respondents with higher incomes used to eat out more before COVID-19 and tend to consume more home delivery food during the COVID-19 pandemic; however, this does not affect the behavioral intention to reduce household food waste. Among the many prior studies that show a clear relationship between income and food waste emissions, we believe it is necessary to reconsider the research methods of the literature that has produced a positive correlation, whereby households with higher incomes produce more food waste [3,58], or vice versa [6]. When people are concerned about household food waste, we generally think of food waste generated before and after cooking. However, considering the waste generated by delivery or takeout food during a pandemic situation such as COVID-19, it is important to measure and track both the food waste and the plastic containers, which are unnecessarily generated waste. Family size affects only perceived behavioral control in our model, which indicates that it is not as effective as in many previous studies in other countries that concluded that larger family sizes generate more food waste [6,7,62]. However, perceived behavioral control in bigger households, including the self-control to buy, cook, and waste less food and the support of family members, did not influence the intention to reduce household food waste.
The total indirect effect of family size on behavioral intention through perceived behavioral control (β = -0.050, p = 0.008) showed that respondents in smaller families have higher self-control perception and intention to reduce food waste, or that they may experience more psychological burden. The results of the total indirect effects in PLS-SEM are useful for establishing the mediating effects of variables in structural equation models. Second, the results of this study confirm the significance of the TPB in this research model. Visschers et al. [7], Graham-Rowe et al. [9], Aktas et al. [49], and Stefan et al. [58] demonstrated the primary role of the three independent variables (attitude, subjective norms, and perceived behavioral control), together with personal norms and moral norms/attitudes, in extended TPB models for reducing food waste. In this study in particular, socio-demographic variables were identified as effective predictors, and among the three variables above, perceived behavioral control was identified as the one most affected by the antecedent variables. Age, in particular, appeared to have the greatest impact on individual norms, in line with the first conclusion presented above. Finally, the mediators added to extend the model in this paper were price consciousness and risk concerns from COVID-19. In conclusion, price consciousness influenced residents' intentions to reduce household food waste, while risk concerns caused by COVID-19 were not statistically significant. Interestingly, the price consciousness of consuming food is also closely connected with food taste in the restaurant context of customers' food waste behavior [59]. The direct effects of all socio-demographic variables are statistically significant to a specific extent, as published by the FAO [5]; the amount of food waste emitted is often proportional to family size in most case studies [6,7,62,64,76]. It can be interpreted that the effects of COVID-19 in Korea changed people's propensity to eat out, order delivery food, and buy groceries online, but were not as strong as in other countries or cities where people experienced lockdowns, such as in Italy [31] and Spain [23]. People in those countries and cities have shown strong intentions and awareness regarding food waste and food availability during an economic crisis, suggesting that such awareness is closely related to the level of COVID-19 transmission in the region. Fortunately, although Jeju Island is exposed to the risk of an influx of tourists amidst the COVID-19 pandemic, COVID-19 did not have a significant impact on the generation of household waste there. Unlike in Jeju Province, however, risk concerns due to COVID-19 differ in the other Korean provinces. The results of the multi-group analysis indicated that the risk concerns associated with age and family size differed significantly between the two regions. As previously noted in this paper, the number of confirmed cases of COVID-19 in Jeju Province only slightly exceeded 1,200 over the preceding 17 months. As of July 13, 2021, the number of confirmed cases in the province was only 1,352, accounting for only 0.85% of the 159,655 confirmed cases in Korea [67]. However, the situation is different considering demographics. The confirmed cases in Jeju Province account for 0.19% of the province's total population of approximately 690,000. This is similar to Busan, Korea's second-largest metropolitan city, with a population of nearly 3.5 million.
This is understandable considering the fact that statistics for Jeju Province count only the resident population, a potentially complicating factor given the influx of tourists [42]. Jeju Province is a tourist destination at the center of economic, social, and environmental phenomena and interests, and seeking cooperation and understanding from residents in all policies and administrative affairs is indispensable. Through this study, we have attempted to draw people's attention to the food waste problem caused by the altered food consumption patterns during the COVID-19 pandemic. Although we tried to compare Jeju, an island tourist destination, with the most densely populated areas, we admit that the amount of data used in this study is insufficient to claim generalizability. Therefore, we acknowledge the need for follow-up studies that actively expand the sample after the pandemic, to confirm whether there was a change in people's perceptions before and after the pandemic. Author contribution statement Mona Chang: Contributed reagents, materials, analysis tools or data; Wrote the paper. Walimuni Arachchilage C. S. M.: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data. Min-cheol Kim: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. Funding statement This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Data availability statement Data will be made available on request. Declaration of interests statement
Information-theoretic limitations on approximate quantum cloning and broadcasting
Information-theoretic limitations on approximate quantum cloning and broadcasting We prove new quantitative limitations on any approximate simultaneous cloning or broadcasting of mixed states. The results are based on information-theoretic (entropic) considerations and generalize the well-known no-cloning and no-broadcasting theorems. We also observe and exploit the fact that the universal cloning machine on the symmetric subspace of $n$ qudits and symmetrized partial trace channels are dual to each other. This duality manifests itself both in the algebraic sense of adjointness of quantum channels and in the operational sense that a universal cloning machine can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. The duality extends to give control on the performance of generalized UQCMs on subspaces more general than the symmetric subspace. This gives a way to quantify the usefulness of a priori information in the context of cloning. For example, we can control the performance of an antisymmetric analogue of the UQCM in recovering from the loss of $n-k$ fermionic particles. A direct consequence of the fundamental principles of quantum theory is that there does not exist a "machine" (unitary map) that can clone an arbitrary input state [1,2]. This no-cloning theorem and its generalization to mixed states, the "no-broadcasting theorem" [3], exclude the possibility of making perfect "quantum backups" of a quantum state and are essential for our understanding of quantum information processing. For instance, since decoherence is such a formidable obstacle to building a quantum computer and, at the same time, we cannot use quantum backups to protect quantum information against this decoherence, considerable effort has been devoted to protecting the stored information by way of quantum error correction [4][5][6]. Given these no-go results, it is natural to ask how well one can do when settling for approximate cloning or broadcasting. Numerous theoretical and experimental works have investigated such "approximate cloning machines" (see [7][8][9][10][11][12][13][14][15][16] and references therein). These cloning machines can be of great help for state estimation. They can also be of great help to an adversary who is eavesdropping on an encrypted communication, and so knowing the limitations of approximate cloning machines is relevant for quantum key distribution. In this paper, we derive new quantitative limitations posed on any approximate cloning/broadcast (defined below) by quantum information theory. Our results generalize the standard no-cloning and no-broadcasting results for mixed states, which are recalled below (Theorems 1 and 2). We draw on an approach of Kalev and Hen [17], who introduced the idea of studying no-broadcasting via the fundamental principle of the monotonicity of the quantum relative entropy [18,19]. When at least one state is approximately cloned, while the other is approximately broadcast, we derive an inequality which implies rather strong limitations (Theorem 4). The result can be understood as a quantitative version of the standard no-cloning theorem. The proof uses only fundamental properties of the relative entropy. By invoking recent developments linking the monotonicity of relative entropy to recoverability [20][21][22][23][24][25], we can derive a stronger inequality (Theorem 5).
Under certain circumstances, this stronger inequality provides an explicit channel which can be used to improve the quality of the original cloning/broadcast (roughly speaking, how close the output is to the input) a posteriori. This cloning/broadcasting-improving channel is nothing but the parallel application of the rotation-averaged Petz recovery map [24], highlighting its naturality in this context. Related results of ours (Theorems 6 and 7) compare a given state of n qudits to the maximally mixed state on the (permutation-)symmetric subspace of n qudits. We establish a duality between universal quantum cloning machines (UQCMs) [7][8][9] and symmetrized partial trace channels, in the operational sense that a UQCM can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. It is also immediate to observe that these channels are adjoints of each other, up to a constant. A context different from ours, in which a duality between partial trace and universal cloning has been observed, is in quantum data compression [26]. As a special case of Theorem 6, we recover one of the main results of Werner [9], regarding the optimal fidelity for k → n cloning of tensor-power pure states $\varphi^{\otimes k}$. We also draw an analogy between these results and former results from [27] regarding photon loss and amplification, the analogy being that cloning is like particle amplification and partial trace like particle loss. The methods generalize to subspaces beyond the symmetric subspace: Theorem 8 controls the performance of an analogue of the UQCM in recovering from a loss of $n-k$ particles when we are given a priori information about the states (in the sense that we know on which subspaces they are supported, e.g., because we are working in an irreducible representation of some symmetry group). As an application of this, we obtain an estimate of the performance of an antisymmetric analogue of the UQCM for k → n cloning of fermionic particles. The methods also yield information-theoretic restrictions for general approximate broadcasts of two mixed states. Background-The well-known no-cloning theorem for pure states establishes that two pure states can be simultaneously cloned iff they are identical or orthogonal. It is generalized by the following two theorems, a no-cloning theorem for mixed states and a no-broadcasting theorem [3,17]. Let $\sigma$ be a mixed state on a system $A$. By definition, a (two-fold) broadcast of the input state $\sigma$ is a quantum channel $\Lambda_{A\to AB}$ such that the output state has the identical marginals $\rho^{\mathrm{out}}_A = \rho^{\mathrm{out}}_B = \sigma$. A particular broadcast corresponds to the case $\rho^{\mathrm{out}}_{AB} = \sigma_A \otimes \sigma_B$, which is called a cloning of the state $\sigma$. We call two mixed states $\sigma_1$ and $\sigma_2$ orthogonal if $\sigma_1\sigma_2 = 0$. Theorem 1 (No cloning for mixed states, [3,17]). Two mixed states $\sigma_1, \sigma_2$ can be simultaneously cloned iff they are orthogonal or identical. Theorem 2 (No broadcasting for mixed states, [3,17]). Two mixed states $\sigma_1, \sigma_2$ can be simultaneously broadcast iff they commute. By a "simultaneous cloning/broadcast," we mean that the same choice of $\Lambda_{A\to AB}$ is made for broadcasts of $\sigma_1$ and $\sigma_2$. These results were essentially first proved in [3], albeit under an additional minor invertibility assumption. Alternative proofs were given in [17,[28][29][30]. Sometimes Theorem 2 is called the "universal no-broadcasting theorem" to distinguish it from local no-broadcasting results for multipartite systems [31]. Quantitative versions of the local no-broadcasting results for multipartite systems were reviewed very recently by Piani [32] (see also [16]).
No-cloning and no-broadcasting are also closely related to the monogamy property of entanglement via the Choi-Jamiolkowski isomorphism [29]. In this paper, we study limitations on approximate cloning/broadcasting, which we define as follows: Definition 3 (Approximate cloning/broadcast). Let $\sigma, \tilde\sigma$ be mixed states. An n-fold approximate broadcast of $\sigma$ is a quantum channel $\Lambda_{A\to A_1\cdots A_n}$ such that the output state has the identical marginals $\tilde\sigma$. That is, we consider the situation where $\rho^{\mathrm{out}}_{A_1\cdots A_n} := \Lambda(\sigma_A)$. An approximate cloning is an approximate broadcast for which $\rho^{\mathrm{out}}_{A_1\cdots A_n} = \tilde\sigma_{A_1}\otimes\cdots\otimes\tilde\sigma_{A_n}$. The main case of interest is n = 2. Our main results give bounds on (appropriate notions of) distance between $\tilde\sigma_i$ and $\sigma_i$ for i = 1, 2, given any pair of input states $\sigma_1$ and $\sigma_2$. Conventions-The notions of approximate cloning/broadcast stated above are direct generalizations of the notions of cloning/broadcasting in the literature related to Theorems 1 and 2. Regarding the input states, these notions are more general than the one used in the cloning machine literature [13]; we allow for the input states to be arbitrary, whereas they are usually pure tensor-power states $\psi^{\otimes n}$ for cloning machines. Our notion of approximate cloning requires the output states to be tensor-product states. Hence, some quantum cloning machines (in particular the universal cloning machine when acting on general input states) are approximate broadcasts by the definition given above. Let us fix some notation. Given two mixed states $\rho$ and $\sigma$, we denote the relative entropy of $\rho$ with respect to $\sigma$ by $D(\rho\|\sigma) := \mathrm{tr}[\rho(\log\rho - \log\sigma)]$, where log is the natural logarithm [33]. We define the fidelity by $F(\rho,\sigma) := \|\sqrt{\rho}\sqrt{\sigma}\|_1^2$, where $\|\cdot\|_1$ is the trace norm. Since all of our bounds involve the relative entropy $D(\sigma_1\|\sigma_2)$ of the input states $\sigma_1$ and $\sigma_2$, they are only informative when $D(\sigma_1\|\sigma_2) < \infty$. This is equivalent to $\ker\sigma_2 \subseteq \ker\sigma_1$, and we assume this in the following for simplicity. We note that if this assumption fails, our results can still be applied by approximating $\sigma_2$ (in trace distance) with $\sigma_2^\varepsilon := \varepsilon\sigma_1 + (1-\varepsilon)\sigma_2$ for $\varepsilon\in(0,1)$, which satisfies $\ker\sigma_2^\varepsilon \subseteq \ker\sigma_1$. Main results-We will now present our main results. All proofs are rather short and deferred to [35]. Restrictions on approximate cloning/broadcasting-Our first main result, Theorem 4, concerns the limitations that arise if $\sigma_1$ is approximately broadcast n-fold while $\sigma_2$ is approximately cloned n-fold by the same channel (2); its inequality (3) bounds the relative entropy of the outputs in terms of that of the inputs. To see that (3) is indeed restrictive for approximate cloning/broadcasting, let n = 2 and suppose without loss of generality that $\sigma_1 \neq \sigma_2$, so that $\delta := \frac{1}{6}\|\sigma_1 - \sigma_2\|_1^2 > 0$. We can use the triangle inequality for $\|\cdot\|_1$ and the elementary inequality $2ab \leq a^2 + b^2$ on the right-hand side in (3) to get a lower bound in terms of $\delta$. Since $\sigma_1$ and $\sigma_2$ are fixed, the same is true for $\delta > 0$. Hence, for any approximate cloning/broadcasting operation (2), at least one of the following three statements must hold: 1. $\sigma_1$ is far from $\tilde\sigma_1$ (i.e., the channel acts poorly on the first state), 2. $\sigma_2$ is far from $\tilde\sigma_2$ (i.e., the channel acts poorly on the second state), or 3. there is a large decrease in the distinguishability of the states under the action of the channel, in the sense that $D(\sigma_1\|\sigma_2) - D(\tilde\sigma_1\|\tilde\sigma_2)$ is bounded from below by a constant. As anticipated in the introduction, we can prove a stronger version of Theorem 4 by invoking recent developments linking monotonicity of the relative entropy to recoverability [20][21][22][23][24][25].
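The two information measures used throughout are easy to compute numerically. The following sketch is our own illustration (not from the paper): it evaluates $D(\rho\|\sigma)$ and $F(\rho,\sigma)$ for qubit density matrices and checks the Pinsker inequality $D(\rho\|\sigma) \geq \frac{1}{2}\|\rho-\sigma\|_1^2$ that underlies the trace-norm arguments above.

```python
# Illustrative numerics for D(rho||sigma) = tr[rho(log rho - log sigma)] and
# F(rho, sigma) = ||sqrt(rho) sqrt(sigma)||_1^2 (natural logarithm throughout).
import numpy as np
from scipy.linalg import sqrtm

def logm_psd(A, eps=1e-12):
    """Matrix logarithm of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(np.clip(w, eps, None))) @ V.conj().T

def rel_entropy(rho, sigma):
    """D(rho||sigma); finite when ker(sigma) is contained in ker(rho)."""
    return float(np.real(np.trace(rho @ (logm_psd(rho) - logm_psd(sigma)))))

def fidelity(rho, sigma):
    """Squared trace norm (sum of singular values) of sqrt(rho) sqrt(sigma)."""
    sv = np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False)
    return float(sv.sum() ** 2)

rho = np.array([[0.8, 0.2], [0.2, 0.2]])
sigma = np.eye(2) / 2                                   # maximally mixed qubit
trace_norm = np.linalg.svd(rho - sigma, compute_uv=False).sum()
print(rel_entropy(rho, sigma))                          # ~0.289
print(fidelity(rho, sigma))                             # in [0, 1]
print(rel_entropy(rho, sigma) >= 0.5 * trace_norm ** 2) # Pinsker inequality: True
```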
The stronger version, Theorem 5, involves an additional non-negative remainder term on the right-hand side of (3), and it contains an additional integer parameter m ∈ {1, …, n} (the case m = n corresponds to Theorem 4; the case m = 1 is also useful, as we explain after the theorem). The remainder term features a recovery channel R^(m)_{A₁⋯Aₘ→A} that satisfies the identity σ₂ = R^(m)(σ₂^⊗m). There exists an explicit choice for such an R^(m) with a formula depending only on σ₂ and Λ [24, 35]. One can generalize Theorem 5 to the case of "k → n cloning" [13], where one starts from k-fold tensor copies σ₁^⊗k and σ₂^⊗k and broadcasts the former and clones the latter to states on an n-fold tensor product; this is Theorem 11 in [35].

To see how the additional remainder term in (4) can be useful, we apply Theorem 5 with m = 1. It implies that there exists a recovery channel R^(1) such that the bound (5) holds. Now suppose that we are in a situation where the left-hand side of (5) is less than some ε > 0. Then (5) implies that −log F(σᵢ, R^(1)(σ̃ᵢ)) < ε for i = 1, 2. In other words, we can (approximately) recover the input states σᵢ from the output marginals σ̃ᵢ. Therefore, in a next step, we can improve the quality of the cloning/broadcasting channel Λ by post-composing it with n parallel uses of the local recovery channel R^(1). Indeed, the improved cloning channel Λ_impr := (R^(1))^⊗n ∘ Λ has output marginals R^(1)(σ̃₁) ≈ σ₁. Here, ≈ again stands for −log F(σ₁, R^(1)(σ̃₁)) < ε. That is, we have found a strategy to improve the output of the cloning channel Λ, namely to the output of Λ_impr.

Universal cloning machines and symmetrized partial trace channels-In our next results, we consider a particular example of an approximate broadcasting channel well known in quantum information theory [9, 11, 13]: the universal quantum cloning machine (UQCM). We connect the UQCM to relative entropy and recoverability. We recall that the UQCM is the optimal cloner for tensor-power pure states, in the sense that the marginal states of its output have the optimal fidelity with the input state [9, 11]. Let k and n be integers such that 1 ≤ k ≤ n. In general, one considers a k → n UQCM as acting on k copies ψ^⊗k of an input pure state ψ of dimension d (a qudit), producing an output density operator ρ^(n), a state of n qudits. From Werner's work [9], the UQCM is known to be

C_{k→n}(ρ^(k)) := (d[k]/d[n]) Π^{d,n}_sym (ρ^(k) ⊗ I^⊗(n−k)) Π^{d,n}_sym,   (6)

where Π^{d,n}_sym is the projection onto the (permutation-)symmetric subspace of (ℂ^d)^⊗n, which has dimension d[n] := (d+n−1 choose n). We note that C_{k→n} is trace-preserving when acting on the symmetric subspace. The main results here are Theorems 6 and 7, which highlight the duality between the UQCM (6) and the following symmetrized partial trace channel:

P_{n→k}(ω^(n)) := tr_{n−k}[Π^{d,n}_sym ω^(n) Π^{d,n}_sym].   (7)

In addition to the operational sense of duality between the partial trace channel P_{n→k} and the UQCM C_{k→n} which is established by Theorems 6 and 7, the two are dual in the sense of quantum channels (up to a constant). That is, C_{k→n} = (d[k]/d[n]) P†_{n→k}, where the adjoint is taken with respect to the Hilbert-Schmidt inner product. Our results quantify the quality of the UQCM for certain tasks in terms of the relative entropy D(ω^(n)‖π^{d,n}_sym), which is between a general n-qudit state ω^(n) and the maximally mixed state π^{d,n}_sym of the symmetric subspace. We consider the maximally mixed state π^{d,n}_sym as a natural "origin" from which to measure the "distance" D(ω^(n)‖π^{d,n}_sym), since it is a (Haar-)random mixture of tensor-power pure states.
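The definitions (6) and (7) and the claimed adjoint duality are easy to verify numerically in the smallest case d = 2, k = 1, n = 2. The following sketch is ours, not from [35]:

```python
# A minimal construction of the 1 -> 2 UQCM (6) and the symmetrized partial
# trace (7) for qubits (d = 2), verifying trace preservation and the adjoint
# duality C_{k->n} = (d[k]/d[n]) * P_{n->k}^dagger described in the text.
import numpy as np
from itertools import product

d = 2
I2 = np.eye(d)
# Projector onto the symmetric subspace of two qubits: (I + SWAP)/2.
SWAP = np.zeros((d*d, d*d))
for i, j in product(range(d), repeat=2):
    SWAP[j*d + i, i*d + j] = 1
Pi_sym = (np.eye(d*d) + SWAP) / 2
dk, dn = d, d*(d+1)//2      # d[1] = 2, d[2] = 3

def C(rho):                  # UQCM, eq. (6)
    return (dk/dn) * Pi_sym @ np.kron(rho, I2) @ Pi_sym

def ptrace2(X):              # trace out the second qubit
    return X.reshape(d, d, d, d).trace(axis1=1, axis2=3)

def P(omega):                # symmetrized partial trace, eq. (7)
    return ptrace2(Pi_sym @ omega @ Pi_sym)

# Trace preservation of C (any single-qubit state is "symmetric" for k = 1):
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
assert np.isclose(np.trace(C(rho)), 1.0)

# Hilbert-Schmidt duality <C(X), Y> = (d[k]/d[n]) <X, P(Y)> for random X, Y:
rng = np.random.default_rng(0)
X = rng.normal(size=(d, d)); Y = rng.normal(size=(d*d, d*d))
assert np.isclose(np.trace(C(X).T @ Y), (dk/dn) * np.trace(X.T @ P(Y)))
print("duality C = (d[k]/d[n]) P^dagger verified for d=2, k=1, n=2")
```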
We recall what one obtains from the standard monotonicity of the relative entropy, namely

D(ω^(n)‖π^{d,n}_sym) ≥ D(P_{n→k}(ω^(n))‖P_{n→k}(π^{d,n}_sym)).   (8)

Our next main result is the following strengthening of the entropy inequality in (8).

Theorem 6. Let ω^(n) be a state with support in the symmetric subspace of (ℂ^d)^⊗n, let π^{d,n}_sym denote the maximally mixed state on this symmetric subspace, let C_{k→n} denote the UQCM from (6), and let P_{n→k} denote the symmetrized partial trace channel from (7). Then

D(ω^(n)‖π^{d,n}_sym) − D(P_{n→k}(ω^(n))‖P_{n→k}(π^{d,n}_sym)) ≥ −log F(ω^(n), (C_{k→n} ∘ P_{n→k})(ω^(n))).   (9)

The entropy inequality in (9) can be interpreted as follows: the ability of a k → n UQCM to recover an n-qudit state ω^(n) from the loss of n − k particles is limited by the decrease of distinguishability between ω^(n) and π^{d,n}_sym under the action of the partial trace P_{n→k}. Thus, a small decrease in relative entropy (i.e., D(ω^(n)‖π^{d,n}_sym) − D(P(ω^(n))‖P(π^{d,n}_sym)) ≈ ε) implies that a k → n UQCM C_{k→n} will perform well at recovering ω^(n) from P_{n→k}(ω^(n)). We can also observe that C_{k→n} is the Petz recovery map corresponding to the state σ = π^{d,n}_sym and the channel N = tr_{n−k} (as defined in [35]).

As an application of Theorem 6, we consider the special case that is most common in the context of quantum cloning [9, 11, 13]. We set ω^(n) = φ^⊗n for a pure state φ. In this case, D(φ^⊗n‖π^{d,n}_sym) = log d[n] and P_{n→k}(φ^⊗n) = φ^⊗k, so the left-hand side of (9) equals log(d[n]/d[k]).   (10)

By estimating D ≥ −log F, we recover one of the main results of [9], which is that the k → n UQCM has the following performance when attempting to recover n copies of φ from k copies:

F(φ^⊗n, C_{k→n}(φ^⊗k)) ≥ d[k]/d[n],   (11)

which in fact holds with equality [9]. Given the above duality between the symmetrized partial trace channel and the UQCM, we can also consider the reverse scenario.

Theorem 7. With the same notation as in Theorem 6, the following inequality holds:

D(ω^(k)‖π^{d,k}_sym) − D(C_{k→n}(ω^(k))‖C_{k→n}(π^{d,k}_sym)) ≥ −log F(ω^(k), (P_{n→k} ∘ C_{k→n})(ω^(k))).   (12)

This entropy inequality can be seen as dual to that in (9), having the following interpretation: if the decrease in distinguishability of ω^(k) and π^{d,k}_sym is small under the action of a UQCM C_{k→n}, then the partial trace channel P_{n→k} can perform well at recovering the original state ω^(k) back from the cloned version C_{k→n}(ω^(k)).

There is a striking similarity between the inequalities in (9) and (12) and those from [27, Sect. III-A], which apply to photonic channels (cf. [38]). This observation is based on the analogy that cloning is like particle amplification and partial trace is like particle loss; we discuss this further in [35].

Restrictions on cloning in general subspaces-We can generalize the discussion in the previous section to arbitrary subspaces. For 1 ≤ k ≤ n, let X_n be a d_{X_n}-dimensional subspace of (ℂ^d)^⊗n and let Y_k be a d_{Y_k}-dimensional subspace of (ℂ^d)^⊗k. We write Π_{X_n}, Π_{Y_k} for the projections onto these subspaces and π_{X_n} and π_{Y_k} for the corresponding maximally mixed states. We generalize the definitions in (6) and (7) to

C_{k→n}(ρ^(k)) := (d_{Y_k}/d_{X_n}) Π_{X_n} (ρ^(k) ⊗ I^⊗(n−k)) Π_{X_n},   (13)
P_{n→k}(ω^(n)) := tr_{n−k}[Π_{X_n} ω^(n) Π_{X_n}].   (14)

The cloning map C_{k→n} is a direct analogue of the UQCM for the specialized task of recovering a state in the subspace X_n from one in the subspace Y_k (previously, X_n and Y_k were both taken to be the symmetric subspace). By inspection, it is completely positive, and if tr_{n−k}[π_{X_n}] = π_{Y_k}, then it is trace-preserving when acting on any operator with support in X_n. The same argument that proves Theorem 6 then gives

Theorem 8. Let ω^(n) be a state with support in X_n, and suppose that tr_{n−k}[ω^(n)] is supported in Y_k. Then

D(ω^(n)‖π_{X_n}) − D(P_{n→k}(ω^(n))‖P_{n→k}(π_{X_n})) ≥ −log F(ω^(n), (C_{k→n} ∘ P_{n→k})(ω^(n))).   (15)

The assumption that tr_{n−k}[ω^(n)] is supported in Y_k is made for convenience. Without it, the quantity tr[P_{n→k}(ω^(n))] < 1 would enter in the statement, cf. [35].
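As a sanity check on the special case (11), the global fidelity of the 1 → 2 qubit UQCM can be evaluated directly and indeed equals d[1]/d[2] = 2/3 for every input pure state. A self-contained sketch with random test states:

```python
# Spot check of (11): for the 1 -> 2 qubit UQCM, the global fidelity
# <phi ⊗ phi| C(phi) |phi ⊗ phi> equals d[1]/d[2] = 2/3 for every phi.
import numpy as np

d = 2
# SWAP[(i,j),(k,l)] = delta_{il} delta_{jk}, built by permuting indices of I.
SWAP = np.eye(d*d).reshape(d, d, d, d).transpose(0, 1, 3, 2).reshape(d*d, d*d)
Pi_sym = (np.eye(d*d) + SWAP) / 2

def C(rho):  # 1 -> 2 UQCM, eq. (6), with d[1] = 2 and d[2] = 3
    return (2/3) * Pi_sym @ np.kron(rho, np.eye(d)) @ Pi_sym

rng = np.random.default_rng(1)
for _ in range(5):                       # a few random pure qubit states
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    phi = v / np.linalg.norm(v)
    phi2 = np.kron(phi, phi)
    F = (phi2.conj() @ C(np.outer(phi, phi.conj())) @ phi2).real
    assert np.isclose(F, 2/3)
print("global fidelity of the 1->2 qubit UQCM is d[1]/d[2] = 2/3 for all phi")
```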
We can obtain a stronger statement under the additional assumption tr_{n−k}[π_{X_n}] = π_{Y_k}: it implies P_{n→k}(π_{X_n}) = π_{Y_k} and that (C_{k→n} ∘ P_{n→k})(ω^(n)) has trace one. Theorem 8 controls the performance of the cloning machine C_{k→n} of (13) in recovering from a loss of n − k particles when a priori information about the states is given (in the sense that we know on which subspaces they are supported). To see this, consider, e.g., the case of perfect a priori information, when dim X_n = 1. Then D(ω^(n)‖π_{X_n}) = 0, and so (15) implies that the cloning is perfect, ω^(n) = (C_{k→n} ∘ P_{n→k})(ω^(n)).

For non-trivial applications of Theorem 8, a natural class of subspaces to consider are those associated with irreducible group representations, e.g., of the permutation group acting on (ℂ^d)^⊗n. To avoid introducing the representation-theoretic background, we focus here on the case when both X_n and Y_k are taken to be the familiar antisymmetric subspace. Physically, the antisymmetric subspace describes fermions, and therefore our results have bearing on electronic analogues of the photonic scenarios mentioned above. For this part, we let d ≥ n. An example system for which d can be larger than n is a tight-binding model on d lattice sites, where each site can host a single electron. The antisymmetric subspace X_n has dimension d_{X_n} = (d choose n). The analogue of a tensor-power pure state in the antisymmetric subspace is a Slater determinant |Φ_n⟩ ≡ |φ₁⟩ ∧ ⋯ ∧ |φ_n⟩, where the states {|φᵢ⟩}ᵢ are orthonormal. The supplemental material [35] reviews the background and shows that the marginal tr_{n−k}[Φ_n] is again antisymmetric and has quantum entropy log (n choose k). Thus, (15) of Theorem 8 applies and, using D ≥ −log F again, we conclude that the performance of the antisymmetric cloning machine C_{k→n} in recovering from a loss of n − k fermionic particles is controlled by

F(Φ_n, (C_{k→n} ∘ P_{n→k})(Φ_n)) ≥ (d choose k) / [(d choose n) (n choose k)].

We mention that (C_{k→n} ∘ P_{n→k})(Φ_n) has trace one; this follows from the identity tr_{n−k}[π_{X_n}] = π_{Y_k} for the antisymmetric subspace (cf. Lemma 12 in [35]). We also mention that the standard symmetric UQCM would produce the zero state in this case and thus yields a (minimal) fidelity of zero.

General restrictions on approximate broadcasts-As mentioned in the introduction, our methods imply new information-theoretic restrictions on any approximate two-fold broadcast. These are relegated to [35].

Conclusion-In this paper, we have proven several entropic inequalities that pose limitations on the kinds of approximate clonings/broadcasts that are allowed in quantum information processing. Some of the results generalize the well-known no-cloning and no-broadcasting results, restated in Theorems 1 and 2. Other results demonstrate how universal cloning machines and partial trace channels are dual to each other, in the sense that one can be used as an approximate recovery channel for the other, with a performance controlled by entropy inequalities. We can also control the performance of an analogue of the UQCM for cloning between any two subspaces. In particular, we obtain bounds on its performance in recovering from a loss of n − k fermionic particles.

ACKNOWLEDGMENTS

We acknowledge discussions with Sourav Chatterjee and Kaushik Seshadreesan and helpful comments by an anonymous referee. After completing the results of this paper, we learned of the related and concurrent work of Marvian and Lloyd [39]. We are grateful to them for passing their manuscript along to us. M.M.W. acknowledges support from the NSF under Award No. 1350397.
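To get a feel for the fermionic fidelity bound above, one can tabulate it for a few small systems; note that it equals 1 at k = n (nothing is lost, so recovery is perfect), as expected. A short sketch:

```python
# Evaluating the fermionic recovery bound stated above,
# F >= C(d,k) / (C(d,n) * C(n,k)), for a few hypothetical (d, n, k).
from math import comb

def fermionic_fidelity_bound(d, n, k):
    return comb(d, k) / (comb(d, n) * comb(n, k))

for d, n in [(6, 3), (10, 4)]:
    for k in range(1, n + 1):
        print(f"d={d}, n={n}, k={k}: F >= {fermionic_fidelity_bound(d, n, k):.4f}")
# k = n gives bound 1 (perfect recovery when no particle is lost).
```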
Appendix A: Monotonicity of the relative entropy and recoverability

We recall the lower bound from [24] on the decrease of the relative entropy for a channel N and states ρ and σ:

Theorem 9 ([24]). Let β(t) := (π/2)(1 + cosh(πt))^(−1). For any two quantum states ρ, σ and a channel N, the following bound holds:

D(ρ‖σ) − D(N(ρ)‖N(σ)) ≥ −∫ dt β(t) log F(ρ, R^{t/2}_{N,σ}(N(ρ))),

where the rotated Petz recovery map R^t_{N,σ} is defined as

R^t_{N,σ}(X) := σ^{(1+it)/2} N†(N(σ)^{−(1+it)/2} X N(σ)^{−(1−it)/2}) σ^{(1−it)/2},

where N† is the completely positive, unital adjoint of the channel N. Every rotated Petz recovery map perfectly recovers σ from N(σ): R^t_{N,σ}(N(σ)) = σ.

In the special case when the applied quantum channel is the partial trace, the inequality becomes as follows:

Theorem 10 ([24]). Let β(t) := (π/2)(1 + cosh(πt))^(−1). For any two quantum states ρ_AB, σ_AB, we have

D(ρ_AB‖σ_AB) − D(ρ_B‖σ_B) ≥ −∫ dt β(t) log F(ρ_AB, R^{t/2}_{A,σ_AB}(ρ_B)),

where the rotated Petz recovery map R^t_{A,X} is defined in (G4).

Appendix B: A generalization of Theorem 5 to k → n cloning

Theorem 11. Consider the more general situation in which we begin with k ≤ n tensor-product copies of the state σᵢ for i ∈ {1, 2}, and suppose that the channel Λ_{A₁⋯A_k→A₁⋯Aₙ} approximately broadcasts σ₁ and approximately clones σ₂ (in the sense of Definition 3 applied to the k-copy input states). Then, for every m ∈ {1, …, n}, there exists a recovery channel R^{(m,k)}_{A₁⋯Aₘ→A₁⋯A_k} such that a bound analogous to (4) holds, and the recovery channel R^{(m,k)} satisfies σ₂^⊗k = R^{(m,k)}(σ₂^⊗m). This can be proved by the same method as for Theorem 5 (see below).

Appendix C: On photon amplification and loss

Here we discuss the analogy between (9) and (12) and the inequalities from Section III-A of [27]. The partial trace channel is like particle loss, which for photons is represented by a pure-loss channel L_η with transmissivity η ∈ [0, 1]. Furthermore, a UQCM is like particle amplification, which for bosons is represented by an amplifier channel A_G of gain G ≥ 1. Let θ_E denote a thermal state of mean photon number E ≥ 0, and let ρ denote a state of the same energy E. A slight rewriting of the inequalities from Section III-A of [27] results in the following:

D(ρ‖θ_E) − D(L_η(ρ)‖L_η(θ_E)) ⪆ −log F(ρ, (A_{1/η} ∘ L_η)(ρ)),   (C1)
D(ρ‖θ_E) − D(A_G(ρ)‖A_G(θ_E)) ⪆ −log F(ρ, (L_{1/G} ∘ A_G)(ρ)),   (C2)

where the symbol ⪆ indicates that the entropy inequality holds up to a term with magnitude no larger than log(1/η) (respectively, log G) and which approaches zero as E → ∞. So we see that (C1) is analogous to (9): under a particle loss L_η, we can apply a particle amplification procedure A_{1/η} to try to recover the lost particles, with a performance controlled by (C1). Similarly, (C2) is analogous to (12): under a particle amplification A_G, we can apply a particle loss channel L_{1/G} to try to recover the original state, with a performance controlled by (C2). Observe that the parameters specifying the recovery channels are directly related to the parameters of the original channels, just as is the case in (9) and (12). Note that an explicit connection between cloning and amplifier channels was established in [38], and our result serves to complement that connection.

Proof of (C1) and (C2). A proof of (C1) is as follows. The Hamiltonian here is a†a, which is the photon number operator. Let ρ be a state of energy E, and let θ_E be a thermal state of energy E (i.e., ⟨a†a⟩_ρ = ⟨a†a⟩_{θ_E} = E). Under the action of a pure-loss channel L_η, the energies of L_η(ρ) and L_η(θ_E) are equal to ηE, and we also find that L_η(θ_E) = θ_{ηE}. Furthermore, a standard calculation gives −tr[ρ log θ_E] = H(θ_E) = g(E) := (E + 1) log(E + 1) − E log E. Putting this together, we find that (C1) holds; the first equality in the computation is a rewriting using what we mentioned above, and the inequality follows from Section III-A of [27]. When E = 0, g(E) − g(ηE) = 0 also.
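The elementary properties of g used in this proof, and continued in the next paragraph, are quick to confirm numerically. A sketch, with arbitrary cutoff values:

```python
# Numerical check of the claims about g(E) = (E+1)log(E+1) - E log E:
# g(E) - g(eta*E) vanishes at E = 0, is monotone increasing in E, and
# saturates at log(1/eta) as E -> infinity.
import numpy as np

def g(E):
    E = np.asarray(E, dtype=float)
    # the max() guard avoids log(0); the E = 0 term is 0 either way
    return (E + 1) * np.log(E + 1) - E * np.log(np.maximum(E, 1e-300))

eta = 0.25
E = np.linspace(0, 1e4, 100001)
diff = g(E) - g(eta * E)
assert np.isclose(diff[0], 0.0)              # zero at E = 0
assert np.all(np.diff(diff) > -1e-12)        # monotone increasing
assert abs(diff[-1] - np.log(1/eta)) < 1e-3  # approaches log(1/eta)
print("g(E) - g(eta E): 0 at E=0, increasing, -> log(1/eta) =", np.log(1/eta))
```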
As E gets larger, g(E) − g(ηE) is monotone increasing and reaches its maximum of log(1/η) as E → ∞. The other inequality, (C2), for an amplifier channel follows similarly. Under the action of an amplifier channel A_G, the energies of A_G(ρ) and A_G(θ_E) are GE. We also find that A_G(θ_E) = θ_{GE}. Proceeding as above, we find that (C2) holds; the first equality is a rewriting, and the inequality follows from Section III-A of [27]. The last inequality follows because g(GE) − g(E) = 0 at E = 0, and it is monotone increasing as a function of E, reaching its maximum value of log G as E → ∞.

Proof of Theorem 6. We observe that π^{d,k}_sym = tr_{n−k}[π^{d,n}_sym], which follows easily from the representation π^{d,n}_sym = ∫ dψ ψ^⊗n [37], the integral being with respect to the Haar probability measure over pure states ψ. A proof of (9) then follows from a few key steps, the last of which reads

… = D(ω^(n)‖(C_{k→n} ∘ P_{n→k})(ω^(n))).   (D12)

The first equality holds by definition of the quantum relative entropy, and in the second equality we used the fact that tr[P_{n→k}(ω^(n))] = tr[tr_{n−k}(ω^(n))] = tr[ω^(n)] = 1, wherein the first step holds because tr_{n−k}[ω^(n)] is supported in the symmetric subspace. The inequality above is a consequence of [27, Thm. 1], which states that D(ρ‖σ) ≥ D(N(ρ)‖N(σ)) for any states ρ, σ and positive, trace-preserving map N. (We remark that P_{n→k} is indeed trace-preserving when considered as a map on states supported on the symmetric subspace.) The last equality in (D12) follows from the property of the relative entropy that D(ξ‖τ) − log c = D(ξ‖cτ) for states ξ, τ and a constant c > 0.

Essentially the same argument, with minor modifications, also proves Theorems 7 and 8. For the former, we use the facts that C_{k→n}(π^{d,k}_sym) = π^{d,n}_sym and that C_{k→n} is trace-preserving when acting on states supported in the symmetric subspace. For Theorem 8, we use the assumption that tr_{n−k}[ω^(n)] is supported in Y_k to get tr[P_{n→k}(ω^(n))] = 1. The details are left to the reader. We close this proof section with a remark on a so-far implicit assumption.

Remark (Non-identical marginals case). Some of our results, Theorems 4, 5, and 14 (see below), apply to approximate clonings/broadcasts in the sense of Definition 3. That is, we always assume that the marginals of the output state are identical, i.e.,

ρ^out_{A₁} = ⋯ = ρ^out_{Aₙ}.   (D14)

We make this assumption for two reasons: (a) it simplifies the bounds in our main results, and (b) we believe that it is a natural assumption for approximate cloning/broadcasting. However, the methods apply more generally, and they also yield limitations on approximate clonings/broadcasts when (D14) is not satisfied.

In the second equality, we used orthonormality. The product of delta functions implies that we only need to consider permutations π and σ which agree on {k + 1, …, n}. To exploit this, we partition the permutations according to which k-set A_k features as the image of {1, …, k}. More precisely, given a k-set A_k, we define

S_n(A_k) := {π ∈ S_n : π({1, …, k}) = A_k}.   (F8)

There is a more useful, affine-like representation of the elements of S_n(A_k) as tuples in S_k × S_{n−k} composed with a fixed bijection f_{A_k} ∈ S_n(A_k). For definiteness, we define f_{A_k} to be the unique bijection in S_n(A_k) which preserves ordering. Then π ∈ S_n(A_k) ⟺ π = f_{A_k} ∘ (π_k, π_{n−k}) for some π_k ∈ S_k, π_{n−k} ∈ S_{n−k}.

However, Δ_CL does not appear to have information-theoretic content, while Δ_R features the Petz recovery map. We close this appendix with the proof of Theorem 14. The proof is based on the following key estimate.
It is a variant of Theorem 10 and was proved in [24]; the rotated Petz recovery map R^t_{A,X} was defined in (G4). (ii) Suppose that the output state ρ^out_{i,AB} has identical marginals, i.e., ρ^out_{i,A} = ρ^out_{i,B}. Consider the last expression. When we apply the partial trace over the A subsystem to both states and use Theorem 10, we obtain

D(ρ^out_1‖ρ^out_2) ≥ D(ρ^out_{1,B}‖ρ^out_{2,B}) −
Effects of Acute Potassium Chloride Administration on Ventricular Dysrhythmias after Myocardial Infarction in a Rat Model of Ischemia/Reperfusion
Effects of Acute Potassium Chloride Administration on Ventricular Dysrhythmias after Myocardial Infarction in a Rat Model of Ischemia/Reperfusion Background: Acute myocardial infarction is an important cause of morbidity. This study aimed to investigate the effects of the administration of potassium chloride (KCl) on reperfusion-induced injuries in a rat model of myocardial ischemia/reperfusion. Methods: Thirty-six male Wistar rats, weighing 200 to 250 g, were randomly assigned to 3 experimental groups: control, K1 (10 µg/kg of KCl), and K2 (20 µg/kg of KCl). Twenty minutes before ischemia, a single dose of 10 and 20 µg/kg of KCl was intraperitoneally administered in the K1 and K2 groups, respectively. The coronary artery was occluded for 30 minutes (ischemia); thereafter, it was opened for 60 minutes (reperfusion) to measure hemodynamic parameters and ventricular arrhythmias. Blood sampling was performed after the reperfusion period to determine the serum levels of lactate dehydrogenase, troponin I, creatine kinase (CK)-MB, malondialdehyde, and pro-oxidant-antioxidant balance. Results: Serological parameters significantly decreased in the potassium groups compared with the control group. In particular, the decline was more pronounced for the serum levels of lactate dehydrogenase (1180.25±69.48 vs 1556.67±77.02 U/L; P=0.011), troponin I (21.98±0.61 vs 28.76±1.65 ng/mL; P=0.020), and pro-oxidant-antioxidant balance (15.51±0.72 vs 20.63±1.42 HK; P=0.041) in the K2 group compared with the K1 group. Moreover, the administration of 20 µg/kg of KCl significantly decreased the incidence of ventricular tachycardias and fibrillations compared with the control group (P=0.002). Additionally, no considerable differences were observed between the control group and the groups with 10 µg/kg and 20 µg/kg of KCl regarding the number of ventricular ectopic beats. Conclusion: The administration of KCl before ischemia could reduce ventricular arrhythmias and reperfusion-induced injuries by reducing oxidative stress. Introduction Acute myocardial infarction (MI) is one of the main causes of death and disability in the world. 1 Following MI, a large number of structural and functional changes inside the myocardium may occur due to the obstruction of the coronary blood flow, which finally may cause irreversible injuries to the heart, hence the significance of treatment protocols aimed at reducing myocardial ischemic injuries. 2 Reperfusion, defined as the rapid restoration of the coronary blood flow, is one of the standard methods for alleviating MI-induced injuries. 1,2 Although the reperfusion of the ischemic heart is more effective in the reduction of the infarct size, the beneficial effects of this approach would be limited due to cardiomyocyte death, which happens in the first few minutes after the provision of oxygen for hypoxic tissues and potentiates excessive injuries known as "myocardial reperfusion injuries". 1, 2 Such injuries may lead to the further production of reactive oxygen species (ROS), causing oxidative stress, apoptosis, and necrosis due to, at least in part, mitochondrial dysfunction. 3 Reperfusion-induced injuries can be prevented by applying some mechanical or pharmacological interventions prior to sustained lethal myocardial ischemic events. 4 Nonetheless, the commonly used antiarrhythmic approaches such as electrical cardioversion may result in minor myocardial injuries or dysfunction, which explains why antiarrhythmic agents should be considered. 
5 Hypokalemia promotes the incidence of ventricular and atrial arrhythmias by distinct mechanisms in cardiomyocytes, especially by inducing progressive Ca²⁺ overload in ventricular cells, as well as in a subpopulation of atrial cells, 6 which finally may affect the contraction of the heart, lead to decreased blood supply through the coronary arteries, and produce lethal arrhythmias. 4 Massive calcium influx into hypoxic and necrotic areas caused by reperfusion can activate phospholipase-A, which breaks down the normal phospholipids of the mitochondrial membrane and consequently results in mitochondrial dysfunction and oxidative stress. 7 Elevated serum levels of K⁺ may alleviate reperfusion-associated injuries by preserving mitochondrial function and may exert an antiapoptotic effect against post-ischemic myocardial injuries. 8,9 This point is further supported by an anecdotal case report of an inguinal hernia repair operation on a 3-year-old boy as early as 1961, which demonstrated that the direct infusion of potassium (K) into the heart chamber decreased ventricular fibrillations. 10 Furthermore, some studies have reported the clinical significance of extra KCl doses on the strength of its antiarrhythmic effects during cardiac surgery for the treatment of post-declamping ventricular arrhythmias. 11-14 Collectively, such evidence indicates that the prior stimulation of the K channels in the process of preconditioning confers cardioprotection against ischemic insults. Therefore, given the important role of K in the pathophysiology of arrhythmias, we aimed to investigate the potential effects of the administration of potassium chloride (KCl) on reperfusion-induced injuries and ventricular arrhythmias in a rat model of myocardial ischemia/reperfusion (I/R).

Thirty-six male Wistar rats, weighing 200 to 250 g, were prepared from a breeding colony and kept for 2 weeks before the experiment with free access to commercial food and tap water at 20 to 25 °C and a 12:12-hour light-darkness cycle. The rats were randomly divided into 3 groups, each composed of 12 rats: control, K1 (10 µg/kg of KCl), and K2 (20 µg/kg of KCl). Twenty minutes before the induction of ischemia, a single dose of 10 and 20 µg/kg of KCl (Pasteur Institute, 1228054808, Tehran, Iran) was administered intraperitoneally in the K1 and K2 groups, respectively.

Methods
Anesthesia was induced intraperitoneally with 50 mg/kg of thiopental (Exir, Tehran, Iran); then, electrocardiography (ECG) was recorded after thoracotomy and ventilation using a rodent ventilator with a tidal volume of 2 to 3 mL and a respiratory rate of 60 breaths per minute (Harvard Rodent Ventilator [model 683], Holliston, MA, USA). Afterward, hemodynamic parameters were monitored by cannulating the right carotid artery. The pulse wave of the artery was recorded before and during I/R periods and then 1 minute after the reperfusion period. MI was induced in all the experimental subjects by the ligation of the left anterior descending artery using a 6-0 silk suture for 30 minutes. Reperfusion was then performed for 60 minutes by loosening the suture. Successful ligation was confirmed by ECG changes consisting of ST-elevation. Post-reperfusion blood samples were collected for serological studies. Finally, the animals were submitted to euthanasia after the reperfusion period. An investigator, who was blinded to the groups' identity, performed all the measurements in this study.
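As a concrete illustration of the group-comparison analyses described later in the Methods (one-way ANOVA with a Tukey post hoc test for serological markers; the Fisher exact test for arrhythmia incidence), the following sketch runs the same kind of pipeline in Python. All numbers here are hypothetical placeholders, not the study's data:

```python
# A sketch of the statistical pipeline of the kind described in the Methods:
# one-way ANOVA + Tukey post hoc on a serological marker, and a Fisher exact
# test on arrhythmia incidence. Values below are hypothetical placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
# Hypothetical LDH values (U/L) for control, K1 (10 ug/kg), K2 (20 ug/kg), n=12 each.
control = rng.normal(1900, 250, 12)
k1      = rng.normal(1550, 250, 12)
k2      = rng.normal(1180, 250, 12)

F, p = stats.f_oneway(control, k1, k2)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate([control, k1, k2])
groups = ["control"]*12 + ["K1"]*12 + ["K2"]*12
print(pairwise_tukeyhsd(values, groups))

# Fisher exact test on hypothetical VT incidence (affected / unaffected):
table = [[2, 10],   # K2 (placeholder counts)
         [9, 3]]    # control (placeholder counts)
odds, p_vt = stats.fisher_exact(table)
print(f"Fisher exact: p = {p_vt:.4f}")
```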
In our previous study (no data published as yet), we subjected a group of animals to sham surgery in the same way as described for MI induction except for the ligation of the coronary artery, and our results concerning serological parameters such as lactate dehydrogenase (LDH), troponin I, and CK (creatine kinase)-MB in this group showed no increase compared with these parameters in the I/R group. Moreover, we observed no incidence of arrhythmias in these animals after the sham surgery. Thus, we did not consider another sham group for the purposes of the present study.

Isolated sera were analyzed for the measurement of LDH (Pars Azmoon, Iran), high-sensitive cardiac troponin I (Siemens, Belgium), and CK-MB (Siemens, Belgium) levels via the colorimetric method in accordance with the manufacturer's instructions. For the evaluation of oxidative stress, malondialdehyde (MDA) levels in the sera were assessed by the thiobarbituric acid reactive substances (TBARS) assay using an ELISA kit (ZellBio, Germany) based on the manufacturer's instructions. For the measurement of the levels of oxidants and antioxidants simultaneously in a single test, a pro-oxidant-antioxidant balance (PAB) assay was employed. PAB values were expressed in arbitrary HK units. While a low PAB value indicates an increased antioxidant level, a high PAB value represents a decreased antioxidant level. 2 As was previously described, ventricular arrhythmias induced by ischemia were determined according to the Lambeth Conventions. 3

The results are expressed as the mean ± the standard deviation (SD). The statistical comparison between all the groups for parametric variables, including ventricular arrhythmias and serological parameters, was performed by 1-way ANOVA and a subsequent Tukey test as needed. Comparison of hemodynamic parameters during the baseline, ischemia, and reperfusion periods between the groups was done by 2-way ANOVA, followed by the Tukey test. The arrhythmia scores were analyzed using the Kruskal-Wallis test (nonparametric test), and the incidence of ventricular tachycardias (VTs) and ventricular fibrillations (VFs) was analyzed using the Fisher exact test. The analyses were performed with the SPSS software (version 20; SPSS Inc, Chicago, IL, USA), and a P value less than 0.05 was statistically considered a significant difference.

Results
The hemodynamic data of the studied groups are shown in Table 1. Our statistical analyses revealed no significant differences between the experimental groups with respect to all hemodynamic parameters. Moreover, no differences were observed between the baseline, ischemia, and reperfusion periods within each group regarding heart rate, systolic blood pressure, and diastolic blood pressure. As is shown in Table 2, CK-MB levels in the 10 µg/kg KCl group (437.84±19.58 U/L; P=0.043) and the 20 µg/kg KCl group (368.84±19.92 U/L; P<0.001) significantly decreased compared with the control group (530.84±35.59 U/L). There was also no significant difference between the K1 and K2 groups concerning the serum levels of CK-MB. The serum level of LDH significantly decreased in the 10 µg/kg KCl group (1556.67±77.02 U/L; P<0.001) and the 20 µg/kg KCl group (1180.25±69.48 U/L; P<0.001) compared with the control group (Table 2).
In addition, a significant decrease in the serum level of LDH was seen in the 20 µg/kg KCl group in comparison with the 10 µg/kg KCl group (P=0.011). The obtained data pertaining to the serum level of troponin I revealed that it significantly fell in the 10 µg/kg KCl group (28.76±1.65 ng/mL) and the 20 µg/kg KCl group (21.98±0.61 ng/mL) compared with the control group (34.37±1.42 ng/mL; P=0.013 and P<0.001, respectively) (Table 2). Additionally, this decrease was more pronounced in the K2 group compared with the K1 group (P=0.020). The obtained data apropos of MDA activity showed that MDA levels significantly decreased in the 10 µg/kg KCl group (4.04±0.12 nmol/mL; P=0.047) and the 20 µg/kg KCl group (3.65±0.11 nmol/mL; P=0.001) by comparison with the control group (4.75±1.05 nmol/mL) (Table 2). There was also no statistical difference between the K1 and K2 groups concerning the serum levels of MDA. The results of the PAB assay on the serum concentrations of PAB are presented in Table 2, which shows that they significantly declined in the 10 µg/kg KCl group (20.63±1.42 HK; P=0.025) and the 20 µg/kg KCl group (15.51±0.72 HK; P<0.001) compared with the control group (26.2±1.88 HK). In addition, a significant decrease was observed in the serum levels of PAB in the K2 group compared with the K1 group (P=0.041).

All arrhythmic events occurred during the reperfusion period and terminated spontaneously with no requirement for cardioversion. Our analysis of the ischemia-induced ventricular arrhythmias yielded no statistical differences between all the experimental groups in terms of the number of ventricular ectopic beats (Figure 1). Moreover, our results showed that the administration of 20 µg/kg of KCl significantly decreased the incidence of VTs compared with the control group (P=0.002) (Figure 2). Still, there was no significant difference in the occurrence of VTs between the K1 and K2 groups. Finally, VFs occurred in 4 rats: 1 animal in the low-dose (10 µg/kg) KCl group (1/12) and 3 animals in the control group (3/12) following the induction of MI, and none of them was sustained. Our nonparametric analysis showed that the score of arrhythmias in the 20 µg/kg KCl group was considerably lower than that of the control group (P=0.016), and the 10 µg/kg KCl group was not statistically different from the control and K2 groups in terms of the arrhythmia score (Figure 3).

Discussion
Ischemic preconditioning through repeated short episodes of ischemia is a phenomenon that renders the myocardium more resistant to the loss of blood supply and protects it against subsequent and more severe insults. However, I/R injuries sometimes occur following the reestablishment of blood flow to previously ischemic tissues, and they are accompanied by further damage to cardiomyocytes, contractile dysfunction of the heart, and ventricular arrhythmias. 4 It is of great importance to prevent these phenomena by interventions applied before and/or at the time of I/R. During the reperfusion phase, the augmented activity of the Na⁺/K⁺ pump creates a sympathomimetic stimulation, leading to the acute and rapid exchange of the K ion. 15 The decrease in the extracellular K level at this stage results in VFs and triggers arrhythmias, 16 which are created by a reduction in cardiac repolarization reserve and an increase in Ca²⁺ accumulation within cardiomyocytes, manifested by premature depolarizations (ie, early or delayed afterdepolarizations).
17 ROS accumulates in response to K deprivation, and pre-treatment of tissues with K channel openers can promote the recovery of mitochondrial function through increased ATP-sensitive mitochondrial K⁺ transport and consequently modulate the mitochondrial production of ROS. 18 With this in mind, in the present study, we evaluated the effects of acute KCl administration on ventricular dysrhythmias after MI in a rat model of I/R and the potential effects of the intraperitoneal administration of KCl (10 and 20 µg/kg) to control reperfusion-induced injuries in a rat model of I/R. Given that ECG has a lower sensitivity to detect ST-segment elevation and new Q-waves after cardiac ischemia, the diagnosis of acute MI by evaluating cardiac enzymes is superior to ECG. 19 Following myocardial injuries, the disruption of the integrity of normal cardiac myocyte membranes may result in the release of a wide variety of biologically active intracellular proteins such as troponin, CK-MB, and LDH, which could be considered cardiac markers (diagnostic markers) of myocardial tissue damage and augment the accuracy of MI diagnosis. 20 Thus, in the current study, with the hypothesis that KCl may decrease the serum levels of enzyme markers, including CK-MB and LDH, we showed that the intraperitoneal administration of 10 and 20 µg/kg of KCl could considerably alleviate the serum concentrations of these two markers. However, due to the lack of the tissue specificity of LDH and CK-MB, it is now generally accepted that the diagnosis of myocardial injuries by evaluating these two enzyme markers is not of great value unless we additionally evaluate more sensitive and specific markers of cardiac injuries such as cardiac troponins. Troponins, which are associated with tropomyosins, regulate the interaction between myosins and actin filaments for muscle contraction and are regarded as indicators of primary arrhythmias after MI. 21 Persistent and mild elevations of troponin levels constitute a common finding in cardiac fibrillations. 22 Accordingly, we assessed the serum concentrations of troponins in all the experimental groups and showed that the intraperitoneal injection of 10 µg/kg and 20 µg/kg of KCl significantly decreased the serum levels of troponin I in the K1 and K2 groups compared with the control group. Hypokalemia is reported to cause myocyte destruction and the leakage of troponin, CK-MB, and LDH from the damaged membranes into the circulation. 23 Hypokalemia-induced myopathy and massive CK elevation as the first presentation of Conn's syndrome may occur due to hypertension. 24 Zhang et al 9 in 2004 reported that the administration of a glucose-insulin-K combination attenuated the accumulation of LDH and CK in rabbits subjected to myocardial injuries. Another study revealed that the activation of ATP-sensitive K channels protected cardiac myocytes against apoptosis. 25 Based on these findings, we can deduce that the pre-MI injection of KCl could prevent further damage to cardiomyocytes and limit the release of troponin I, CK-MB, and LDH after I/R. Although there was no statistical difference in the serum concentrations of CK-MB between the K1 and K2 groups in the present study, our results established a significant reduction in LDH and troponin I serum levels in animals that received a higher dosage of KCl (20 µg/kg) than the K1 (10 µg/kg) group.
Since concurrent skeletal muscle injuries may occur at the time of MI induction and greater amounts of CK-MB could be released from skeletal muscles than cardiac muscles, it is believed that the analysis of the serum CK-MB as a sole biomarker for the detection of cardiac injuries is inappropriate, and it is essential that other biomarkers of MI such as troponin and LDH be evaluated to assess the extent or severity of myocardial injuries in animal models. 26 In further support of this notion, our results revealed that higher amounts of troponin and LDH could be released from cardiomyocytes after the ligation of the left anterior descending coronary artery, which may be affected by KCl dose-dependently. In other words, a higher dosage of KCl is associated with a greater decrease in troponin and, possibly, in the leakage of LDH. Oxidative stress, which is caused by an imbalance between ROS production and impaired antioxidant defense, has a pivotal role in the initiation and expansion of I/R-induced myocardial injuries. 2,19 After prolonged ischemia, due to oxygen and nutrient deprivation, the permeability of the inner mitochondrial membrane is increased, leading to the dysfunction of the electron transport chain, intracellular Ca²⁺ overload, and mitochondrial swelling. The damage to mitochondria is accompanied by a massive burst of ROS production during reperfusion, which exceeds the antioxidative capacity of the cells and exerts detrimental effects on cardiomyocytes. 27 In this regard, a reduced antioxidant response associated with the increased level of pro-oxidants has been observed in patients with acute MI. 28 Although it has not been indicated which compartment of cardiomyocytes is the ultimate end-effector of ischemic preconditioning, some studies have suggested that mitochondria can be considered the main signaling pathway in this process. Mitochondrial K_ATP (adenosine triphosphate-sensitive K⁺) channel opening using ischemic preconditioning may not only inhibit mitochondrial Ca²⁺ overload, followed by the excessive generation of ROS, but also attenuate myocardial reperfusion-induced injuries. 27 We have previously shown that the blockage of mitochondrial K_ATP channels might abolish the protective effects of cardiac preconditioning. 29 In this context, the present study demonstrated that pre-ischemic treatment with KCl significantly attenuated the increased levels of PAB and MDA induced by I/R, suggesting that K might exert cardioprotective effects through the reduced activity of ROS. In further support of these results, Li et al 8 in 2018 indicated that higher serum levels of K caused by KCl administration were coupled with an increase in ATP production, as well as alleviated oxidative stress. These findings indicate, to some extent, that elevated K⁺ outside mitochondria can increase the recovery of the mitochondrial proton gradient and attenuate pro-oxidant markers. The current study also revealed that the level of PAB in the high-dose KCl (20 µg/kg) group was markedly decreased compared with the low-dose KCl (10 µg/kg) group.
In patients with acute coronary syndromes, antioxidant activity for scavenging myocardial free radicals can be increased by the administration of a solution of glucose-insulin-K, 30 indicating that the restoration of intracellular K (ie, K⁺ outside mitochondria) by elevated serum levels of K after the injection of KCl could promote the activity of antioxidant enzymes, especially with the administration of high-dose KCl. Increased heart rate and ventricular ectopic beats are known to be involved in the initiation of a variety of cardiac arrhythmias. 29,31 Nevertheless, in the present study, we found no significant differences in hemodynamic parameters and the number of ventricular ectopic beats between the control, K1, and K2 groups, and it appears that susceptibility to the occurrence of VTs and VFs in our study was not associated with heart rate and ventricular ectopic beats. Although some studies have shown that heart rate and blood pressure changes may not influence the incidence of arrhythmias, further investigation is needed to explain why hemodynamic parameters did not change significantly while troponin and VTs showed considerable differences. Ventricular tachyarrhythmias, which commonly occur early during ischemia, can lead to VFs and may significantly increase the mortality rate after MI. 32 Myocardial ischemia causes dysfunction in K_ATP channels, followed by prolonged effective refractory periods, in the ischemic zone, which may sensitize the myocardium for the initiation of ventricular arrhythmias. Of note, reperfusion can amplify the heterogeneity of membrane potentials caused by ischemia without restoring the refractory period after MI and lead to lethal ventricular arrhythmias. 33 Therefore, it would be desirable to attenuate these arrhythmias by ischemic preconditioning-induced antiarrhythmic strategies. Herein, we found that the administration of 20 µg/kg of KCl significantly decreased the incidence of VTs and VFs compared with the control group, suggesting that K might restore enzymatic activity in the electron transport chain and provide protection against ventricular arrhythmias. Finally, in our preliminary experiment, different doses of KCl (ie, 10, 20, 30, 40, and 50 µg/kg) were injected intraperitoneally into male Wistar rats, and the serum concentration of K was evaluated 10 minutes after the KCl administration. Our results revealed that all the animals at the beginning of the study were normokalemic and serum K⁺ had significantly increased in the rats that received the KCl solution compared with the normal saline group. Additionally, no statistically significant differences were observed between the KCl groups concerning these increased serum levels of K. Moreover, hemodynamic stability was not affected by the administration of the KCl solution at 10 and 20 µg/kg doses; nonetheless, a further increase in the dose of the KCl solution (ie, 30, 40, and 50 µg/kg) led to cardiac arrhythmias and hemodynamic instability. Therefore, in the current study, we used KCl solutions at concentrations of 10 and 20 µg/kg based on the notion that KCl might confer better protection against reperfusion-induced ventricular arrhythmias. Despite its strengths, the present study suffers from the following limitations. Firstly, our findings would have been augmented had we assessed baseline serological parameters and the serum levels of inflammatory mediators such as C-reactive protein.
Secondly, in the initial steps of our experiment, we believed that an evaluation of serological parameters would be sufficient; however, another evaluation, for instance, a histological assessment of the infarct size, was needed to determine whether KCl would prevent I/R injuries by limiting the infarct size. Consequently, we will consider it in our future studies.

Conclusion
Our findings suggested that the pre-ischemia administration of KCl at 10 and 20 µg/kg doses could attenuate increased levels of troponin, LDH, and CK-MB, as well as oxidative stress markers, after ischemia/reperfusion, which would finally alleviate reperfusion-induced injuries. In particular, our results showed that high-dose KCl (20 µg/kg), compared with low-dose KCl (10 µg/kg), might considerably minimize the incidence of VTs and VFs through a further reduction of LDH, troponin I, and pro-oxidant-antioxidant balance.
Deactivation of prospective memory intentions: Examining the role of the stimulus–response link
Deactivation of prospective memory intentions: Examining the role of the stimulus–response link Successful prospective remembering involves formation of a stimulus (e.g., bottle of medication and/or place where the bottle is kept)–response (e.g., taking a medication) link. We investigated the role of this link in the deactivation of no-longer-relevant prospective memory intentions, as evidenced by commission error risk. Experiment 1a contrasted two hypotheses of intention deactivation (degree of fulfillment and response frequency) by holding constant the degree of intention fulfillment (e.g., participants responded to one of two target words) while manipulating the number of times the intention was performed. Findings supported the response frequency hypothesis. Experiment 1b employed novel lure trials to examine what “stimulus” participants link the prospective memory response to—target words and/or the salient contextual cue—and compared commission errors to Experiment 1a. Findings suggested the salient context alone does not always function as the stimulus. Collectively these findings, in conjunction with those of Experiment 2 (a within-experiment replication) and a combined analysis, suggest that (a) intention deactivation is facilitated by prior responding (formation/strengthening of stimulus–response links), but additional research is needed to establish the robustness of this effect, and (b) when responding frequently to targets, participants are more likely to bind the response to the context alone than to the target or target/context combination, possibly because they learn to rely on context to predict target occurrence. The latter finding was robust and indicates that deactivation of the appropriate stimulus (target and/or context)–response link may be a critical component of reducing commission errors. Prospective memory (PM) refers to the act of remembering to perform an intention in the future. In the past several decades, most PM research has addressed the question of how to successfully fulfill PM intentions and tried to understand what causes PM omission errors (i.e., failures to remember to perform an intention). In recent years, PM researchers have become increasingly interested in a different type of PM error: PM commission errors. A PM commission error is the act of erroneously repeating a PM intention when it is no longer relevant (e.g., erroneously taking medication that is no longer appropriate to take). Examining the underlying mechanisms that cause commission errors is crucial to understanding why they happen and how to prevent them from occurring (see Möschl et al., 2020, for a recent review). To examine PM commission errors in lab settings, the "finished paradigm" was developed (Scullin, Bugg, & McDaniel, 2012; for review, see Bugg & Streeper, 2019;cf. Walser, Fischer, & Goschke, 2012; for a related but distinct habitual PM paradigm, see Einstein, McDaniel, Smith, & Shaw, 1998;McDaniel, Bugg, Ramuschkat, Kliegel, & Einstein, 2009). In this paradigm, participants encounter two phases. In the first phase, referred to as the active PM phase, participants perform an ongoing task requiring word and nonword judgments (i.e., a lexical decision task). Along with completing the ongoing task, participants are given a PM intention to press a special key (e.g., the Q key) if they encounter either of two target words (e.g., corn or dancer). Participants are additionally instructed that target words will always appear on a salient, colored background (e.g., red screen). 
Most studies have used the four-target version of this paradigm in which each target word appears twice. Consequently, participants can fulfill the intention (respond to both target words at least once) in the active PM phase. Following the active PM phase, participants are instructed that they no longer need to perform the special action in response to the target words; they simply should continue performing the ongoing task. These finished PM instructions are followed by the second phase of the paradigm, the finished PM phase. During this phase, participants again encounter the target words on the same salient background as during the active PM phase, but the target words are now irrelevant. Pressing the Q key in response to the no-longer-relevant target words indicates a commission error and suggests the PM intention is still accessible. Most studies have examined commission errors following intention fulfillment (i.e., participants are presented with and respond to the target words in the active PM phase). However, a few studies using the finished paradigm have examined whether participants are inclined to perform a previously relevant intention they never had the opportunity to fulfill by employing a zero-target condition (Bugg, Scullin, & Rauvola, 2016; cf. Marsh, Hicks, & Bink, 1998). As in the four-target condition, participants encode the PM intention to press the Q key in response to the target words; however, in the zero-target condition, target words are never presented during the active PM phase. This means participants cannot fulfill the intention. Although it may seem intuitive that a PM intention performed multiple times (as in the four-target condition) would become somewhat habitual and therefore be harder to deactivate than an intention that was never performed (as in the zero-target condition), the findings from these studies were quite the opposite (see also Schaper & Grundgeiger, 2017). For example, Bugg and Scullin (2013) found that participants in the four-target condition deactivated the intention and did not make a commission error (but see Pink & Dodson, 2013). In striking contrast, 56% of participants in Experiment 1 and 46% of participants in Experiment 2 made a commission error in the zero-target condition. The authors concluded that PM intentions that remain unfulfilled are more accessible than intentions that are fulfilled, which is referred to as the intention fulfillment effect (Bugg & Streeper, 2019). An important yet unanswered question concerns the cause(s) of the intention fulfillment effect. A few accounts have been proposed. One is the Zeigarnik (1938) account and refers to the possibility that the heightened accessibility reflects a Zeigarnik-like effect, whereby selectively in the zero-target condition participants experience tension about not fulfilling the intention and perseverate on it. A second account is the episodic trace, or stop tag, account. According to this account, the act of pressing Q in response to target words during the active PM phase yields episodic traces of prior responding (cf. Hommel, 1998) and, accordingly, a richer representation of intention completion. This enables participants to attach a "stop tag" to the intention when the finished instructions are shown, thereby facilitating creation of a no-go memory (cf. Hommel, Müsseler, Aschersleben, & Prinz, 2001; see Anderson & Einstein, 2017, for recent evidence for the stop-tag account).
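The stop-tag idea can be made concrete with a small toy formalization. The following sketch is purely illustrative; the data structure, the accessibility rule, and the numbers are our assumptions, not a fitted model from the PM literature:

```python
# A toy formalization of the episodic trace / stop-tag account: responding to
# a target lays down stimulus-response traces, and the "finished" instruction
# attaches a stop tag only to representations that have traces. The
# accessibility rule below is an illustrative assumption.
from collections import Counter

class IntentionRepresentation:
    def __init__(self, targets):
        self.traces = Counter()          # stimulus -> number of episodic traces
        self.targets = set(targets)
        self.stop_tagged = set()

    def respond(self, stimulus):         # active PM phase: press Q to a target
        if stimulus in self.targets:
            self.traces[stimulus] += 1

    def finish(self):                    # stop tags attach where traces exist
        self.stop_tagged = {s for s in self.targets if self.traces[s] > 0}

    def commission_risk(self, stimulus):
        # Toy rule: tagged links are easier to deactivate the more often they
        # were executed; untagged links remain fully accessible.
        if stimulus in self.stop_tagged:
            return 1.0 / (1 + self.traces[stimulus])
        return 1.0

rep = IntentionRepresentation(["corn", "dancer"])
rep.respond("corn")                      # one-target-style responding
rep.finish()
print(rep.commission_risk("corn"))       # lower: responded-to, tagged target
print(rep.commission_risk("dancer"))     # high: never-responded-to target
```

On this toy account, the never-responded-to target retains full accessibility after the finished instruction, mirroring the pattern described in the preceding paragraph.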
Because responding in the active PM phase occurs only in the four-target condition, this condition benefits from the stop tag while the zero-target condition does not. Although it is difficult to disentangle these accounts when comparing the four-target and zero-target conditions, findings from one prior study that investigated the intermediate case of a partially fulfilled intention are informative. In their third experiment, Bugg and Scullin (2013) once again had participants encode the PM intention to press the Q key in response to the target words. However, in the active PM phase, only one of the two target words (e.g., corn) was shown, and it was presented once. (Hereafter, we refer to this as the one-target condition.) In the finished PM phase, they manipulated whether the first (now, no-longer-relevant) target word shown on the salient background was the presented word (i.e., corn) or the nonpresented word (i.e., dancer). The key finding was that commission errors were 3.5 times more likely when the nonpresented word was shown first in the finished PM phase, although this difference was marginal. Importantly, for present purposes, this finding provided preliminary support in favor of the episodic trace account as opposed to the Zeigarnik account. According to the Zeigarnik account, commission errors should have been equivalent for all participants in the one-target condition because all participants had the opportunity to fulfill the intention to the same degree (once) in the active PM phase. However, errors were lower for those participants that received the presented word first compared with those that received the nonpresented word first. The episodic trace account readily explains this difference. According to this account, prior responding yielded an episodic trace corresponding to the stimulus-response link of pressing Q in response to the presented word (corn) during the active PM phase, and thus a stop tag could be attached to this trace, facilitating deactivation. In contrast, there was no episodic trace corresponding to the stimulus-response link (i.e., dancer-press Q) for the nonpresented word.

Current study
To take stock, prior studies have suggested that the level of persisting activation of a no-longer-relevant intention is related to the degree of intention fulfillment. This is reflected in the intention fulfillment effect, as well as cross-experimental comparisons that have additionally considered commission error rates for partially fulfilled intentions (one-target condition), which appear to fall between the four-target and zero-target conditions (see Anderson & Einstein, 2017, for different results in a one-target condition in a paradigm where participants knew the task was completed after performing the action once). Additional evidence is needed to inform theoretical accounts of the intention fulfillment effect and understand what factors affect the level of persisting activation of an intention once it is no longer relevant. Along these lines, the current study aimed to take a closer look at the stimulus-response link, that is, how the intention is represented. This link is purported to play a central role in the episodic trace account of commission errors. Experiment 1a examined whether differences in commission error risk are due to the degree of intention fulfillment per se or the total number of responses that were made to a target stimulus in the active PM phase (i.e., strengthening of the stimulus-response link).
Indeed, these two factors have covaried in prior experiments contrasting four-target (fulfilled intention/4 responses), one-target (partially unfulfilled intention/1 response), and zero-target (unfulfilled intention/0 responses) conditions. 1 Consequently, it has not been possible to tease apart their effects. Experiment 1b examined what comprises the stimulus component of the stimulus-response link using the novel approach of embedding lure trials in the finished PM phase, and comparing commission errors between Experiment 1b (lure trials) and Experiment 1a (standard trials). Addressing this question is important for further informing the episodic retrieval account and understanding the conditions that may increase susceptibility to commission errors. Experiment 2 further contrasted these conditions, albeit head-to-head within a single experiment, thereby offering an opportunity to replicate the patterns observed across Experiments 1a and 1b.

Experiment 1a
The focus of Experiment 1a was to contrast two hypotheses that fall out of two extant accounts of the intention fulfillment effect: the Zeigarnik account and the episodic trace account. One hypothesis, termed here the degree of fulfillment hypothesis, posits that commission errors should be least likely when an intention is fulfilled, meaning a participant has responded to all targets at least once; more likely when an intention is partially fulfilled, meaning a participant has responded to a subset of the targets at least once; and most likely when the intention is unfulfilled, meaning a participant has responded to no targets. This hypothesis falls out of the Zeigarnik account in that the degree of intention accessibility in the finished PM phase is predicted to be higher to the extent that intentions are left unfulfilled, as this may lead to perseverating on the unfulfilled intentions. The second hypothesis, termed here the response frequency hypothesis, posits that commission errors should be less likely the more frequently a participant performs the intention in the active PM phase. This hypothesis falls out of the episodic trace account in that responding more frequently to a target word should create a stronger stimulus-response link (solidify intention representation through the accumulation of traces) and make it easier to associate a stop tag with this representation, leading to better intention deactivation. To contrast these hypotheses, we compared performance in a one-target condition (e.g., participants encoded an intention to respond to both corn and dancer, but performed the PM intention only once in the active PM phase for corn), which necessarily represented partial completion of the intention, to a novel four single-target condition that also represented partial completion of the intention (e.g., participants encoded an intention to respond to both corn and dancer, but performed the PM intention four times in the active PM phase for corn only). Critically, comparing these two conditions enabled us to hold constant the degree of intention fulfillment (i.e., only one of the two target words was responded to at least once in both conditions) while varying the number of times participants responded to a target word (one vs. four, respectively). We also included a zero-target condition as a theoretically interesting comparison. This allowed us to examine the effects of partial intention fulfillment (one-target and four single-target conditions) relative to a completely unfulfilled intention.
The experimental procedure for these three conditions is depicted in Fig. 1. Theoretically, the critical comparison of interest is between the one-target and four single-target conditions. According to the degree of fulfillment hypothesis, commission error rates should be comparable between these two conditions because both involve partial fulfillment. In contrast, according to the response frequency hypothesis, the four single-target condition should have significantly lower commission error rates compared with the one-target condition. Regarding the other potential comparisons, the degree of fulfillment hypothesis posits that commission error rates should be higher in the zero-target condition, which represents an unfulfilled intention, compared with the one-target condition and four single-target condition. The same prediction holds for the response frequency hypothesis, although it would attribute the difference to the number of times participants responded in each condition. As in our prior research (e.g., Bugg et al., 2016), the primary dependent variable was the number of participants in each condition that made a commission error during the finished PM phase to gauge accessibility of the encoded intention (which was to press Q to either corn or dancer). Additionally, we examined the number of participants who made an error selectively on the first target, which was always the target that was previously responded to in the one-target and four-target conditions (e.g., corn in examples above).

Method
Design and participants
Seventy-seven Washington University in Saint Louis undergraduate students, with normal or corrected-to-normal vision and color vision, and who reported English as their native language, participated in this study for either monetary compensation or course credit. The one-target and four single-target conditions were run simultaneously, and participants were randomly assigned to one of these conditions. The zero-target condition was run as a dangling comparison condition. A priori, we implemented a stopping rule of 24 participants per condition (following prior research) who met PM performance-based inclusion requirements. Those requirements were that participants in the zero-target condition should not have pressed Q in the active PM phase and participants in the four single-target condition should have pressed Q more than once to the single target presented four times in the active PM phase. We reasoned that a participant that responded twice or three times in the four single-target condition still met the purpose of the condition (partial fulfillment) and responded more frequently than a participant in the one-target condition and thus should be included. 2 Later, we similarly decided to exclude participants in the one-target condition who did not press the Q key in response to the single target in the active PM phase. We reasoned that a participant who did not respond once in the one-target condition (i.e., responded zero times) no longer met the purpose of this condition (partial fulfillment) and instead more closely mimicked the zero-target condition (although a target was never shown in the active PM phase of that condition). In the one-target condition, two participants were excluded for failing to press the Q key in response to the target word in the active PM phase.
In addition, three participants were excluded for either failing to read instructions (i.e., one participant in the four single-target condition pressed through the finished instructions without reading them) or failing to understand instructions (i.e., one participant in the one-target condition and one in the four single-target condition did not know how to advance to the next trial when the target appeared in the finished PM phase). The final sample for Experiment 1a (N = 72, 24 per condition) was 73.6% female (one participant did not report sex).

Materials and procedure

The procedure is shown in Fig. 1 (caption: In the active PM phase, participants encountered one encoded target word once (one-target condition), one encoded target word four times (four single-target condition), or neither encoded target word (zero-target condition)). First, all participants were instructed to use only one hand when responding during the task. Then, they were given the opportunity to practice the ongoing lexical decision task for eight trials. They indicated whether they thought the letters onscreen were a word or a nonword by pressing the labeled Y (5) or N (6) keys on the number pad. After practice, participants were given the PM intention instructions. They were instructed to press the Q key if they saw either of the two target words, which would appear on the colored background. There were two possible sets of target words (corn/dancer and fish/writer) and two possible background colors (red and blue). The target words and background colors were counterbalanced across participants. Participants were told they could press the Q key in response to target words before or after making their lexical decision. All participants regardless of condition were given two target words to encode. Once participants finished reading these instructions, they wrote down their target words and completed a demographic form and vocabulary test to create an ~5-min delay between encoding and testing (Einstein & McDaniel, 1990). After completing these forms, participants began the active PM phase. The active PM phase consisted of 76 trials. In the one-target condition, participants saw only one of their target words one time (e.g., only corn one time) on Trial 38. In the four single-target condition, participants also saw only one of their target words, but they saw that target word four times (e.g., only corn four times) on Trials 14, 33, 52, and 71. For the zero-target condition, participants did not encounter either target word in the active PM phase.
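To make the design concrete, the sketch below builds the active PM phase trial list for each condition and scores PM hits under the definition used in this paper. The filler items and function names are hypothetical stand-ins rather than the authors' materials; only the trial counts, target positions, and counterbalanced word sets and colors come from the text.

```python
# Sketch of the active PM phase structure (assumed helper names; filler
# items are placeholders, not the actual lexical decision stimuli).
TARGET_SETS = [("corn", "dancer"), ("fish", "writer")]   # counterbalanced across participants
BACKGROUND_COLORS = ["red", "blue"]                      # counterbalanced across participants

def build_active_phase(condition, target, color, n_trials=76):
    """Return (word, background) tuples for the 76-trial active PM phase.
    condition: 'one', 'four', or 'zero'; target: the single encoded target shown."""
    trials = [(f"filler{i}", "white") for i in range(1, n_trials + 1)]
    if condition == "one":
        trials[37] = (target, color)                     # Trial 38
    elif condition == "four":
        for t in (13, 32, 51, 70):                       # Trials 14, 33, 52, 71
            trials[t] = (target, color)
    return trials                                        # 'zero': no target trials shown

def is_pm_hit(q_press_trial, target_trial):
    """PM hit: a Q press on the target trial or within two trials after it."""
    return 0 <= q_press_trial - target_trial <= 2
```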
Upon completing the active PM phase, all participants were given the following finished instructions before beginning the finished PM phase: "PLEASE NOTE THAT YOU NO LONGER NEED TO PRESS 'Q' IN THE PRESENCE OF TARGET WORDS. THAT TASK IS FINISHED AND SHOULD NOT BE PERFORMED AGAIN. Just as before, you will determine whether a string of letters forms a word or a nonword by pressing the keys marked Y and N on the number pad. YOUR ONLY GOAL is to make word/nonword judgments." In the finished PM phase, all participants experienced a brief delay, during which they first completed a short block of lexical decision trials (24 trials with no targets; see, e.g., Bugg & Scullin, 2013; Bugg et al., 2016; Scullin et al., 2012), and then another vocabulary form (which differed from the first one). Following this ~5-min delay, participants completed a 118-trial lexical decision block that included four trials in which they encountered the no-longer-relevant target words (e.g., corn and dancer each presented twice). In the one-target condition and the four single-target condition, the first no-longer-relevant target word presented in the finished PM phase was the word they saw in the active PM phase (e.g., corn). In the zero-target condition, the finished PM phase matched that of the one-target condition and the four single-target condition. In conditions where the target words were corn and dancer, the target words appeared in the 118-trial lexical decision block on Trials 42, 66, 90, and 113. In conditions where the target words were fish and writer, the target words appeared on Trials 39, 47, 83, and 103. After completing the finished PM phase, participants completed a postexperimental questionnaire.

Results

Active PM phase

PM hits

Following Bugg et al. (2016), PM hits were defined as a Q press that occurred on the target trial or within two trials after the presentation of the target. 3 An independent t test showed a significant difference in the average number of PM hits between the one-target condition (M = 1.00, SD = .00) and the four single-target condition (M = 3.92, SD = .28), t(23) = 50.61, p < .001, as was expected given the difference in target presentation between conditions and the inclusion criteria. 4

Finished PM phase commission errors

Following prior research (Bugg et al., 2016), a commission error was defined as a Q press that occurred in the finished PM phase, 5 and our primary interest was the effect of condition on the number of participants who made at least one commission error (see Fig. 2). Significantly fewer participants made a commission error in the four single-target condition (21%) compared with the one-target condition (50%), χ²(1) = 4.46, p = .035. The number of participants who made a commission error in the zero-target condition (75%) was not significantly higher than the one-target condition, χ²(1) = 3.20, p = .074, but was significantly higher than the four single-target condition, χ²(1) = 14.11, p < .001. Following Bugg et al. (2016), we also examined the number of participants who made a commission error on the first no-longer-relevant target presented in the finished PM phase. In the one-target and four single-target conditions, the first no-longer-relevant target was the same target participants responded to in the active PM phase. The patterns mirrored those found when considering commission errors to any target (preceding analysis), though in this analysis the difference between the four single-target condition (21%) and the one-target condition (46%) was not significant, χ²(1) = 3.38, p = .066. The number of participants who made a commission error on the first no-longer-relevant target in the zero-target condition (71%) was not significantly higher than the one-target condition, χ²(1) = 3.09, p = .079, but was significantly higher than the four single-target condition, χ²(1) = 12.08, p = .001.
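As a sanity check, the reported chi-square statistics can be reproduced from the percentages (with n = 24 per condition, 50% = 12 participants, 21% = 5, 75% = 18). The snippet below is a minimal sketch assuming Pearson's chi-square without Yates' continuity correction, which matches the reported values; it is not the authors' analysis script.

```python
# Reproduce the Experiment 1a contrasts from participant counts
# (counts derived from the reported percentages; n = 24 per condition).
from scipy.stats import chi2_contingency

def compare_conditions(errors_a, errors_b, n=24):
    """2x2 chi-square on made vs. did-not-make a commission error."""
    table = [[errors_a, n - errors_a],
             [errors_b, n - errors_b]]
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    return round(chi2, 2), round(p, 3)

print(compare_conditions(12, 5))    # one-target vs. four single-target: chi2 = 4.46, p = .035
print(compare_conditions(18, 12))   # zero-target vs. one-target:        chi2 = 3.20, p = .074
print(compare_conditions(18, 5))    # zero-target vs. four single-target: chi2 = 14.11, p < .001
```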
Lexical decision task performance

To examine the difference in speeding from the active PM phase, a phase that includes a PM task, to the finished PM phase, a phase that consists only of the ongoing lexical decision task, by condition, we examined reaction times (RTs) in milliseconds (ms) on nontarget trials in the ongoing lexical decision task in both phases (see Table 1). We restricted our analyses to correct trials and trials that did not occur within three trials after a target was presented, and only RTs within 2.5 standard deviations from each participant's mean for each phase were included in analyses (cf. Bugg & Ball, 2017; Lourenço, White, & Maylor, 2013). A 2 (phase: active PM phase, finished PM phase) × 3 (condition: one-target, four single-target, zero-target) mixed-model ANOVA showed a significant effect of phase, F(1, 69) = 23.99, p < .001, but no effect of condition, F(2, 69) = .64, p = .528, and no significant Phase × Condition interaction, F(2, 69) = 1.68, p = .193. These results suggest participants in all conditions sped up from the active PM phase (M = 716, SD = 125) to the finished PM phase (M = 575, SD = 266) at similar rates.
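The screening and analysis pipeline can be sketched as follows. This is a hypothetical implementation on toy data, with assumed column names (subject, condition, phase, rt, correct, within_3_after_target); pingouin's mixed_anova is one way to run a mixed design with one within- and one between-subjects factor, not necessarily the software the authors used.

```python
# Sketch of the RT screening (2.5 SD per participant per phase) and the
# 2 (phase) x 3 (condition) mixed-design ANOVA, run on toy stand-in data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
raw = pd.DataFrame({                                   # hypothetical trial-level data
    "subject": np.repeat(np.arange(12), 8),            # 12 toy subjects, 8 trials each
    "condition": np.repeat(["one", "four", "zero"], 32),
    "phase": np.tile(np.repeat(["active", "finished"], 4), 12),
    "rt": rng.normal(650, 100, 96),
    "correct": True,
    "within_3_after_target": False,
})

def trim_rts(df):
    """Keep correct trials, drop trials within three of a target, and keep
    RTs within 2.5 SD of each participant's mean, separately per phase."""
    df = df[df["correct"] & ~df["within_3_after_target"]]
    kept = []
    for _, g in df.groupby(["subject", "phase"]):
        m, s = g["rt"].mean(), g["rt"].std()
        kept.append(g[(g["rt"] - m).abs() <= 2.5 * s])
    return pd.concat(kept)

cell_means = (trim_rts(raw)
              .groupby(["subject", "condition", "phase"], as_index=False)["rt"].mean())
print(pg.mixed_anova(data=cell_means, dv="rt", within="phase",
                     subject="subject", between="condition"))
```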
Discussion

The key novel finding from Experiment 1a was that fewer participants made at least one commission error in the four single-target condition compared with the one-target condition. That is, in conditions that were equated in the degree of fulfillment (i.e., participants in both conditions responded only to one of the two target words), the condition in which responding occurred more frequently led to fewer participants making at least one commission error. This pattern supports the response frequency hypothesis over the degree of fulfillment hypothesis and highlights the role of responding in intention deactivation. It appears that the more frequently one fulfills an intention, the less accessible the intention will be later, and this is true even under conditions that control for the degree of fulfillment. This accords with the view that episodic traces of prior responding facilitate the linking of a stop tag to a no-longer-relevant intention. Presumably, the strength of these traces is enhanced with repeated responding such that the stronger the stimulus-response link (the more robust the representation of the intention), the higher the likelihood of effectively binding a stop tag to the intention, and thereby deactivating the intention. Surprisingly, and contrary to both accounts, commission error rates did not significantly differ between the one-target condition and the zero-target condition. However, the direction of the difference was as predicted (by both the response frequency and degree of fulfillment accounts), with higher commission error rates in the zero-target condition compared with the one-target condition. This could reflect inadequate power, and we will return to this possibility following Experiment 2. Finally, Experiment 1a produced an intention fulfillment effect in the form of higher rates of commission errors in the zero-target condition compared with the four single-target condition, consistent with both accounts. In prior studies demonstrating the intention fulfillment effect, the four-target condition comprised presentation of both targets (twice each) and thus complete fulfillment of the intention (Bugg & Scullin, 2013; Bugg et al., 2016). The current finding extends this effect to a partially completed intention that was responded to repeatedly (i.e., the four single-target condition). Collectively, the findings of Experiment 1a point to a continuum of intention deactivation that corresponds to the degree of intention fulfillment, with initial evidence suggesting that in intermediate conditions (conditions of partial fulfillment), intention deactivation is greater the more frequently a PM target has been responded to previously (i.e., in the four single-target condition as compared with the one-target condition). We interpret this result to suggest an important role for the strengthening of the stimulus-response link in intention deactivation, consistent with the episodic trace account.

Experiment 1b

While Experiment 1a identified the strength of the stimulus-response link as a factor that affects intention deactivation independently of the degree of intention fulfillment by manipulating the number of responses made to a single target word, Experiment 1b aimed to examine what precisely constitutes the "stimulus" portion of the stimulus-response link and what the implications are for intention deactivation. As previously mentioned, in the finished paradigm participants are informed that target words will appear in a salient context (i.e., uniquely colored background) when they encode the PM intention. Later, when the no-longer-relevant targets are presented in the finished PM phase, that same salient context reappears, and the representation of this context appears to be important for eliciting commission errors (Scullin et al., 2012; Scullin, Bugg, McDaniel, & Einstein, 2011). It has been suggested that the salient context may cause the PM intention to "pop" into mind in the finished PM phase (i.e., to be spontaneously retrieved). In other words, during the active PM phase participants may be associating the PM response not necessarily with the target word but potentially with the salient context, given that it accompanies the target 100% of the time (i.e., it is 100% predictive of target occurrence). However, this is merely speculation, as no study has directly examined whether participants are linking the PM response to the target word itself, to some combination of the target word and salient context, or whether they might indeed be relying primarily on the salient context to guide intention retrieval. This question of how the PM intention is represented, and in particular what constitutes the stimulus in the stimulus-response link, has important implications for understanding and predicting when commission errors will occur. If participants are primarily relying on the salient context, the implications are that (a) commission errors can occur even when the target stimulus (word) is absent so long as participants are in the salient context associated with the intention (i.e., in the real world, if one's bathroom is the context, participants may make a commission error by taking medication B upon walking into the bathroom even if medication A is no longer there), and (b) intention deactivation that focuses on only the target itself will not be sufficient to prevent commission errors. Theoretically speaking, considering the episodic trace account, this means that applying a stop tag to the association between a specific target word and a response (i.e., in the real world, to the association between medication A and the response) may not be sufficient to prevent one from making a commission error when the word is re-presented on the salient background in the finished PM phase (i.e., when medication A is again encountered in the bathroom).
To examine this question, we modified the paradigm used in Experiment 1a to include a "lure word" in the finished PM phase (see Fig. 3). Lure words were not the target words encoded in the active PM phase, but they nonetheless appeared on the same colored background previously linked only to target words. For example, if the target words (e.g., corn) were shown on a red background in the active PM phase, then the lure word (e.g., fish) would also be shown on a red background. Examining how participants respond to the lure words informs the question of what plays the role of the stimulus in the stimulus-response link. If participants are linking their PM response to the salient context, regardless of condition (four single-target, one-target, or zero-target), they should show similar commission error rates for the lure words as for the previously relevant target words, because the identity of the word should not impact whether they respond (press Q); only the background color should matter. However, if they are linking the PM response to the target word alone or to a combination of the salient background and the target word (i.e., if the target word itself plays some role in the stimulus component of the stimulus-response link), then regardless of condition the commission error rates for lure words should be lower than the rates for the previously relevant target words. To test these hypotheses, we compared commission error rates between the lure conditions from Experiment 1b and their control (nonlure) counterparts (with an actual, encoded target word) from Experiment 1a. In this experiment, the dependent variable of interest was selectively the commission error rate to the first "target" in the finished PM phase, because the lure word occurred only on the first target trial in the lure conditions and thus was compared with the nonlure target word that was presented on the first trial in the control conditions.

Method

Design and participants

Seventy-five Washington University in Saint Louis undergraduate students with normal or corrected-to-normal vision and color vision, and who reported English as their native language, participated in this study for either monetary compensation or course credit. Participants were randomly assigned to the lure one-target condition, the lure four single-target condition, and the lure zero-target condition (or the zero-target condition from Experiment 1a). The stopping rule and exclusion criteria from Experiment 1a (i.e., 24 participants who met inclusion criteria per condition) were applied in Experiment 1b. In the lure one-target condition, two participants were excluded for failing to press the Q key to the single target in the active PM phase. 7 In addition, one participant was excluded for repeatedly falling asleep in the lure four single-target condition. The final sample for Experiment 1b (N = 72, 24 per condition) was 63.9% female.

Materials and procedure

The same materials and procedure for the conditions in Experiment 1a were used for the conditions in Experiment 1b (see Fig. 1), with one modification. For all the conditions in Experiment 1b (i.e., lure one-target condition, lure four single-target condition, and lure zero-target condition), participants were presented with one lure word (e.g., fish when the target words encoded were corn and dancer) during the finished PM phase (see Fig. 3). The lures appeared on the same colored background on which participants were told the target words would appear (e.g., red; see Fig. 3).
For the lure one-target and lure four single-target conditions, the lure was later followed by one presentation of the target word previously shown in the active PM phase (e.g., corn) and two presentations of the other target word (e.g., dancer). For the lure zero-target condition, the finished PM phase after the lure word matched that of the lure one-target and lure four single-target conditions.

Results

For each of the analyses below, we applied the same criteria used in Experiment 1a for determining PM hits and commission errors, as well as in the analysis of RT and accuracy.

Active PM phase

PM hits 8

The average number of PM hits was equivalent between the control one-target condition and the lure one-target condition (M = 1.00, SD = .00), as expected given exclusionary criteria. An independent t test indicated no significant difference in the average number of PM hits between the control four single-target condition (M = 3.92, SD = .28) and the lure four single-target condition (M = 3.67, SD = .64), t(31.70) = 1.76, p = .088. 9

7 In the lure four single-target condition, four participants pressed the Q key in the active PM phase three times and two participants pressed the Q key in the active PM phase two times. All other participants in the lure four single-target condition responded four times in the active PM phase.

Finished PM phase commission errors 10

Unlike in Experiment 1a, the single commission error measure of interest was the number of participants who made a commission error on the first "target" shown in the finished PM phase (see Fig. 4). This was because the lure (in the lure conditions) appeared only on the first trial and not on later trials. We compared the lure conditions (whose first "target" was a lure trial) from Experiment 1b to the control conditions (whose first "target" was in fact a target) from Experiment 1a. For the four single-target conditions, the number of participants who made a commission error did not differ between the control (21%) and lure condition (25%), χ²(1) = 0.12, p = .731. For the one-target conditions, numerically more participants made a commission error on the first target in the control one-target condition (46%) compared with its lure counterpart (21%); however, this difference was not significant, χ²(1) = 3.38, p = .066. In the zero-target condition, significantly more participants made a commission error in the control condition (71%) compared with the lure condition (4%), χ²(1) = 22.76, p < .001. Strikingly, only one participant made a commission error on the first "target" in the lure zero-target condition compared with 17 participants in the control zero-target condition.

Lexical decision task performance

We examined RTs during the active PM phase and the finished PM phase by performing a 2 (phase: active PM phase, finished PM phase) × 3 (number of targets: zero, one, four) × 2 (first target type condition: lure, control) mixed-model ANOVA (see Table 1). There was a significant effect of phase, F(1, 138) = 109.07, p < .001, indicating speeding from the active PM phase (M = 709, SD = 115) to the finished PM phase (M = 552, SD = 193). 11 No other main effects or interactions were significant. These patterns suggest participants sped up from the active PM phase to the finished PM phase comparably across all conditions.
As for average accuracy, a 2 (phase: active PM phase, finished PM phase) × 3 (number of targets: zero, one, four) × 2 (first target type condition: lure, control) mixed-model ANOVA showed significant effects of phase, F(1, 138) = 287.00, p < .001, and number of targets, F(2, 138) = 4.42, p = .014, but no significant effect of first target type condition, F(1, 138) = 2.06, p = .154 (see Table 2). 12 None of the interactions were significant. Participants performed worse on the ongoing lexical decision task in the active PM phase (M = 83.88%, SD = 7.06%) than in the finished PM phase (M = 91.90%, SD = 4.91%). Post hoc Tukey HSD tests indicated participants performed significantly better in the four single-target conditions (M = 89.06%, SD = 3.85%) compared with the one-target conditions (M = 86.09%, SD = 6.79%), p = .017. However, average accuracy did not significantly differ between the one-target condition and the zero-target condition (M = 88.53%, SD = 4.62%), p = .061, or between the four-target condition and the zero-target condition, p = .872.
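A minimal sketch of this kind of post hoc comparison is shown below, on illustrative data generated to mimic the reported group means; the variable names are assumptions and the data are synthetic, not the authors'.

```python
# Pairwise Tukey HSD across the three number-of-targets groups, using
# synthetic per-participant accuracy scores centered on the reported means.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
accuracy = np.concatenate([rng.normal(mean, 5.0, 48)          # 48 participants per group
                           for mean in (88.5, 86.1, 89.1)])   # (24 lure + 24 control)
group = np.repeat(["zero", "one", "four"], 48)
print(pairwise_tukeyhsd(endog=accuracy, groups=group, alpha=0.05).summary())
```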
Discussion

The purpose of Experiment 1b was to determine what constitutes the stimulus portion of the stimulus-response link. For the zero-target condition, the commission error rate on the first "target" in the finished PM phase was lower in the lure condition compared with the control condition. While not significant, the commission error rate was also numerically lower in the lure condition compared with the control condition for the one-target condition. These findings suggest that participants in the zero-target condition linked their PM intention to either the target word itself or a combination of the target word and the salient context, a tendency that was apparent though not as robust for participants in the one-target condition. A lower rate of commission errors in a lure condition compared with the corresponding control condition implies that the stimulus portion of the stimulus-response link comprised more than just the salient context. If it was just the salient context, then commission error rates should have been equivalent for the lure and control conditions because the context was present in both cases. Interestingly, in the four single-target condition, commission error rates on the first "target" in the finished PM phase were equivalent for the lure and control conditions. This suggests that in the four single-target condition, the stimulus participants linked their PM intentions to may have been the salient context alone. If the stimulus comprised the word in some form (on its own or in conjunction with the salient context), then commission error rates should have been higher in the control condition. The implication is that, in these cases, the salient context alone may be enough to trigger retrieval of the intention and tempt participants into making commission errors. Before discussing these implications further, we first attempt to replicate these patterns in another experiment.

Fig. 4 Percentage of participants who made a commission error (CE) on the first "target" presented in the finished PM phase by condition. The control conditions are from Experiment 1a, and the first "target" was in fact a target on a salient background; the lure conditions are from Experiment 1b, and the first "target" was a lure trial (nontarget word) on a salient background

Experiment 2

The contrast between the control (nonlure) conditions of Experiment 1a and the lure conditions of Experiment 1b provided novel evidence demonstrating that, in some cases, participants associate the PM response not with a specific target or a target/context conjunction, but merely with the salient context in which the intention is performed. The main purpose of Experiment 2 was to try to replicate the patterns observed in the cross-experimental contrast between Experiments 1a and 1b. Toward this end, in Experiment 2 we randomly assigned participants to one of the six conditions that comprised Experiments 1a and 1b. A related purpose was to collect additional data to allow for a higher-powered test of our primary hypotheses in an analysis that combined the data from all experiments. A final purpose concerned the zero-target condition. In Experiments 1a and 1b, all participants (regardless of condition) received the same instructions at the end of the active PM phase to not perform the PM task "again." As a reviewer noted, it may be possible that participants in the zero-target condition were especially inclined to press the Q key in the finished PM phase (as indicated by higher rates of commission errors) because the instructions to not perform the task "again" may have led them to think the experimenter made an error, since they never actually responded previously (i.e., no targets were shown). To address this possibility, in Experiment 2 all conditions were identical to Experiments 1a and 1b, except the zero-target conditions, which we modified by eliminating the word again from the finished instructions.

Method

Design and participants

One hundred and forty-two Washington University in Saint Louis undergraduate students with normal or corrected-to-normal vision and color vision, and who reported English as their native language, participated in this study for course credit. Participants were randomly assigned to the control one-target condition, the control four single-target condition, the control zero-target modified condition, the lure one-target condition, the lure four single-target condition, and the lure zero-target modified condition. The stopping rule and exclusion criteria from Experiments 1a and 1b were adopted in Experiment 2. In the control one-target condition, one participant was excluded for failing to press the Q key to the single target in the active PM phase. 13 In addition, one participant in the lure four single-target condition was excluded for failing to complete the experiment. Due to the COVID-19 pandemic, we were unable to meet the 24-participant stopping goal for two conditions. The lure four single-target condition consists of data from 21 participants, and the control one-target condition consists of 23 participants. All other conditions consisted of 24 participants. The final sample for Experiment 2 (N = 140) was 76.4% female.

Materials and procedure

The same materials and procedure for the conditions in Experiment 1a and Experiment 1b were used for the conditions in Experiment 2 (see Figs. 1 and 3), with one modification.
For the control zero-target and lure zero-target modified conditions, the finished instructions were slightly modified to exclude the word "AGAIN" and read: "PLEASE NOTE THAT YOU NO LONGER NEED TO PRESS 'Q' IN THE PRESENCE OF TARGET WORDS. THAT TASK IS FINISHED AND SHOULD NOT BE PERFORMED. Just as before, you will determine whether a string of letters forms a word or a nonword by pressing the keys marked Y and N on the number pad. YOUR ONLY GOAL is to make word/nonword judgments." No other changes were made to any of the conditions.

Results

For each of the analyses below, we applied the same criteria used in Experiments 1a and 1b for determining "hits" and commission errors, as well as in the analysis of RT and accuracy.

Active PM phase

PM hits 14

The average number of PM hits was equivalent between the control one-target condition and the lure one-target condition (M = 1.00, SD = .00), as expected. Also as expected, an independent t test showed a significant difference in the average number of PM hits between the control one-target condition (M = 1.00, SD = .00) and the control four single-target condition (M = 3.92, SD = .28), t(23) = 50.61, p < .001. 15 An independent t test indicated no significant difference in the average number of PM hits between the control four single-target condition (M = 3.92, SD = .28) and the lure four single-target condition (M = 4.00, SD = .00), t(23) = 1.45, p = .162. 16

Finished PM phase commission errors 17

The same analyses employed in Experiments 1a and 1b were conducted in Experiment 2.

Control conditions (effects of response frequency manipulation). As in Experiment 1a, our primary variable of interest was the number of participants who made a commission error in the finished PM phase (see Fig. 5). The number of participants who made a commission error in the four single-target condition (42%) was nominally lower than in the one-target condition (52%), χ²(1) = .52, p = .471, but this difference was not significant (unlike in Experiment 1a). Additionally, the number of participants who made a commission error in the zero-target modified condition (54%) was not significantly greater than in the one-target condition, χ²(1) = .02, p = .891, or the four single-target condition, χ²(1) = .75, p = .386. As can be seen in Fig. 5, the percentage of participants who made at least one commission error in the zero-target modified condition was low compared with the standard zero-target condition in Experiment 1a, 18 whereas the commission error rate in the one-target condition was comparable between experiments.

Lure conditions (effects of lure manipulation). As in Experiment 1b, our primary variable of interest was the number of participants who made a commission error on the first "target" that was either a lure (lure conditions) or the no-longer-relevant target word (control). Mirroring the between-experiment contrast, the number of participants who made a commission error in the four single-target conditions did not differ between the control (38%) and lure condition (19%), χ²(1) = 1.86, p = .173, while significantly more participants made a commission error on the first target in the control zero-target modified condition (46%) compared with its lure counterpart (0%), χ²(1) = 14.27, p < .001. Additionally, significantly more participants made a commission error in the control one-target condition (48%) compared with its lure counterpart (4%), χ²(1) = 11.78, p = .001 (this difference was not significant, p = .066, in Experiment 1b; see Fig. 6).
17 Across control conditions, 96.36% of commission errors occurred on the presentation of the no-longer-relevant target word, 2.73% occurred on the subsequent trial, and .91% occurred on the trial after the subsequent trial. Across lure conditions, 80% of commission errors occurred on the presentation of the first "target" word in the finished PM phase, while 20% occurred on the subsequent trial.

18 Comparing Experiment 1a (zero-target condition) with Experiment 2 (zero-target modified condition) revealed that there was not a significant difference in the number of participants who made a commission error in the finished PM phase, χ²(1) = 2.28, p = .131.

Fig. 6 Percentage of participants who made a commission error (CE) on the first "target" presented in the finished PM phase by condition in Experiment 2. The first "target" in the control conditions was in fact a target on a salient background; the first "target" in the lure conditions was a lure trial (nontarget word) on a salient background

Lexical decision task performance

We examined RTs during the active PM phase and the finished PM phase by performing a 2 (phase: active PM phase, finished PM phase) × 3 (number of targets: zero, one, four) × 2 (first target type condition: lure, control) mixed-model ANOVA (see Table 1). There was a significant effect of phase, F(1, 134) = 350.72, p < .001, indicating speeding from the active PM phase (M = 698, SD = 114) to the finished PM phase (M = 549, SD = 79). No other main effects or interactions were significant. These patterns suggest participants sped up from the active PM phase to the finished PM phase comparably across all conditions.

Combined analysis of Experiments 1a, 1b, and 2

As a reviewer pointed out, a sample size of 24 per cell may yield an underpowered test of some of the comparisons of theoretical interest, and a few of our cells were actually smaller in Experiment 2 due to the interruption of data collection. Some of the comparisons of theoretical interest were significant in one experiment, but not the other (though consistently in the same direction). For example, the contrast in commission error rates between the one-target condition and four single-target condition in Experiment 1a revealed a significantly lower rate for the four single-target condition, whereas this difference was not significant in Experiment 2. Similarly, the contrast in commission error rates between the one-target condition and lure one-target condition in the cross-experimental contrast between Experiments 1a and 1b was not significant, whereas this difference was significant in Experiment 2. To provide a higher-powered test of these and the remaining contrasts (which are all between subjects), we combined the data from all experiments and conducted the commission error analyses reported in the individual experiments. Note, however, that the zero-target conditions were not included in this combined analysis, since we modified the zero-target condition in Experiment 2.

Finished PM phase commission errors

Control conditions (effects of response frequency manipulation). The combined percentage of participants who made a commission error in the one-target conditions from Experiments 1a and 2 was compared with the combined percentage of participants who made a commission error in the four single-target conditions from Experiments 1a and 2.
In this combined analysis, significantly fewer participants made a commission error in the four single-target condition (31%) compared with the one-target condition (51%), χ²(1) = 3.85, p = .050. For the first no-longer-relevant target, fewer participants made a commission error in the four single-target condition (29%) compared with the one-target condition (47%), but this difference was not significant, χ²(1) = 3.14, p = .076.

Lure conditions (effects of lure manipulation). As a reminder, the primary variable of interest for the lure condition comparisons was the number of participants who made a commission error on the first "target" that was either a lure (lure conditions) or the no-longer-relevant target word (control conditions; see Fig. 7). The combined percentage of participants who made a commission error on the first "target" in the lure one-target conditions from Experiments 1b and 2 was compared with the combined percentage of participants who made a commission error on the first "target" in the control one-target conditions from Experiments 1a and 2. The same comparison was performed for the four single-target conditions. For the one-target conditions, significantly more participants made a commission error on the first target in the control condition (47%) compared with its lure counterpart (13%), χ²(1) = 13.45, p < .001. In contrast, for the four single-target conditions, the number of participants who made a commission error did not differ between the control (29%) and lure condition (22%), χ²(1) = .59, p = .444.

Fig. 7 Percentage of participants who made a commission error (CE) on the first "target" presented in the finished PM phase by condition in the combined analysis

Discussion

The primary purpose of Experiment 2 was to attempt to replicate or reproduce (zero-target modified) the patterns previously observed in the cross-experimental contrast between Experiments 1a and 1b. Those patterns were closely replicated: there was a significantly higher commission error rate for the control zero-target modified and one-target conditions compared with the lure zero-target modified and one-target conditions, respectively, but there was not a difference between the control and lure four single-target conditions. Experiment 2 also enabled us to attempt to replicate the patterns observed within Experiment 1a, namely, the higher commission error rate for the one-target condition compared with the four single-target condition, which had supported the response frequency account. While the rate was again higher for the one-target condition, it was not statistically higher in Experiment 2. To further test the key theoretical patterns of interest, we combined the data from the one-target and four single-target conditions (control and lure) across all experiments, which enabled higher-powered tests. Not surprisingly, given the results of the individual experiments, the contrast between the control and lure conditions revealed a significant difference for the one-target conditions, with more commission errors in the control condition compared with the lure condition, while the contrast between the control and lure conditions did not differ for the four single-target conditions. Regarding the higher rate of commission errors for the one-target compared with the four single-target condition in Experiment 1a, a difference that was not significant in Experiment 2, the combined analysis revealed a significant difference between these two conditions.
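To illustrate the reviewer's power concern, a rough sensitivity calculation for the original 24-per-cell contrast can be run as below. This is a sketch that treats the observed one-target (~50%) and four single-target (~21%) rates as the true population rates; it is not an analysis reported by the authors.

```python
# Approximate power of a two-proportion test with 24 participants per cell,
# using the Experiment 1a commission error rates as assumed true rates.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

h = proportion_effectsize(0.50, 0.21)   # Cohen's h for the two proportions
power = NormalIndPower().power(effect_size=h, nobs1=24, alpha=0.05,
                               ratio=1.0, alternative="two-sided")
print(f"h = {h:.2f}, power = {power:.2f}")   # roughly h = 0.62, power = 0.57
```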
We will discuss these findings in more depth in the General Discussion. A final purpose of Experiment 2 was to investigate whether the rate of commission errors in the zero-target condition might be lower if the finished instructions were revised to eliminate the word again. In Experiment 1a, the rate in the zero-target condition was 75%, whereas in the present experiment it was 54%. Although the difference was not statistically significant (see Footnote 18), the reduction in Experiment 2 supports the possibility that some participants who made commission errors in that condition in Experiment 1a may have been inclined to do so because they thought the experimenter made an error (see General Discussion for further discussion). Interestingly, the low(er) rate in the zero-target modified condition was still significantly higher than that of the corresponding lure zero-target modified condition in Experiment 2, further reinforcing the stability of that contrast (lure vs. nonlure).

General discussion

The overarching aim of the current research was to better understand the role that the stimulus-response link plays in intention deactivation and commission errors. Prior work has indicated that intention deactivation plays an important role in commission error risk, but the process by which PM intentions become deactivated has been less clear. Different theoretical accounts of intention deactivation have been proposed, but prior work has not directly examined which account better explains how PM intentions are deactivated. One aim was to address this question by comparing two previously proposed accounts of intention deactivation, the Zeigarnik account and the episodic retrieval account, by testing two hypotheses that fell out of these accounts, termed the degree of fulfillment hypothesis and the response frequency hypothesis, respectively. The degree of fulfillment hypothesis suggests commission errors occur due to perseveration of PM intentions that have not been fulfilled. In contrast, the response frequency account posits that performing the PM intention more frequently in the active PM phase creates a stronger stimulus-response link that allows a stop tag to be connected to the link, making it easier to deactivate the PM intention. The evidence was somewhat mixed with respect to these hypotheses. Experiment 1a supported the response frequency account in demonstrating that rates of commission errors differed between two conditions that were matched on the degree of intention fulfillment (i.e., the one-target and four single-target conditions), but varied with respect to the number of responses that were made to the presented target. Responding to the target multiple times (four single-target condition), which presumably strengthened the stimulus-response link for that target, led to fewer commission errors than responding just once (one-target condition). While Experiment 2 again found lower rates of commission errors for the four single-target condition compared with the one-target condition, the difference was not significant. Finally, combining the data from Experiments 1a and 2 to produce a higher-powered test, the commission error rate was significantly lower for the four single-target condition than the one-target condition.
Collectively, the findings indicate an effect of the response frequency manipulation on commission error risk that favors the response frequency account; however, the effect was not stable across the two subsamples (Experiments 1a and 2), and therefore we cannot fully rule out the degree of fulfillment account. While Experiment 1a examined the role that the stimulus-response link plays in intention deactivation, Experiment 1b focused on what exactly the "stimulus" in the stimulus-response link is. Although one might assume that participants link their PM response to the target words, it is possible that their responses are being linked to the context in which the words appear (e.g., the salient background). To examine this possibility, we embedded lure trials in the finished PM phase in Experiment 1b. Lure trials matched the salient background associated with target trials, but contained a different word (e.g., if targets were corn and dancer, the lure was fish). We then compared commission error rates on the lure trials to the control (target) trials from Experiment 1a. Furthermore, we directly contrasted these conditions head-to-head within a single experiment in Experiment 2. Providing evidence that the stimulus itself (the target word), and not solely the context (salient background), was linked to the PM response, participants in the zero-target and one-target conditions were less likely to make a commission error on lure trials compared with target trials. This is an important finding because it suggests that, under these conditions, context alone may not be enough to cause commission errors. However, in contrast, in the four single-target condition, participants were just as likely to make a commission error to a lure trial as to a target trial, suggesting reliance on context in this case. Notably, these patterns were consistent across the various experiments (Experiments 1a vs. 1b, Experiment 2) and in the combined analysis, suggesting a stable and robust effect of the lure manipulation on commission error risk. The findings of Experiment 1b stimulate an interesting question: Why do participants appear to link the target word to the response in some conditions, but not in other conditions? One possible explanation is the predictive nature of the salient context during the active PM phase. Let us first consider the four single-target condition, where participants encountered four target word trials and each one appeared in the salient context. It is possible that participants' representation of the "stimulus" in the four single-target condition initially may have consisted of a combination of the target word and the salient background (similar to the other conditions) given the initial instructions during encoding; however, as the predictive value of the salient context increased (i.e., the salient context always correctly predicted the presence of a target word, and the salient context and target word repeatedly appeared together in the active PM phase), participants' representation of the "stimulus" may have shifted to the salient context alone. In most circumstances, this shift would be both logical and efficient, but in this experiment, it may have made participants in the four single-target conditions susceptible to commission errors on the first target in the finished PM phase, including when that target was a lure.
By contrast, participants in the one-target condition encountered only one target word in the active PM phase, and although that word appeared in the salient context, a single experience with this pairing may not have been sufficient for participants to associate the context alone with the intention (i.e., to rely on the context to trigger intention retrieval). In the case of the zero-target condition, participants did not encounter any targets and thus never encountered the salient context during the active PM phase. On our view that the context is relied upon to the extent that it is predictive of target occurrence, it is unsurprising that the zero-target condition maintained a representation of the target word (possibly in conjunction with the context) as opposed to relying solely on context. The effects of the lure manipulation are also interesting when considered from the perspective of the dual-mechanism account of commission errors. This account has posited a role for the salient context in stimulating spontaneous retrieval of PM intentions in the context of the traditional four-target condition (Scullin et al., 2012; Scullin et al., 2011). However, these prior studies did not include lure trials and could not determine if the salient context alone could lead to commission errors. 19 At least in the current four single-target condition, it appears this is possible. However, in the one-target and zero-target conditions, this is not the case. The implication is that spontaneous retrieval of the intention in these latter two conditions may stem from processing either the target word alone or the conjunction of the target word and the salient context.

Limitations and future directions

Although the findings from Experiment 1a and the combined analysis favored the episodic trace account over the Zeigarnik account, given that the contrast between the four single-target and one-target conditions in Experiment 2 supported the degree of fulfillment hypothesis, it is important for future research to continue to examine both accounts. It is likely that the response frequency hypothesis has boundaries. For example, Pink and Dodson (2013) showed that a PM intention that was performed 10 times for each of eight targets became habitual and was difficult to deactivate. In the current study, the maximum number of times that a PM target was responded to was four. Examining a fuller range of possible responses may elucidate the function relating response frequency to intention deactivation. It is also plausible that there may be some contexts in which perseveration of an unfulfilled or partially fulfilled PM intention may lead to a commission error, providing further support for the degree of fulfillment hypothesis. For example, future research might examine whether participants are more likely to make a commission error in the current four single-target condition compared with the traditional four-target condition. If so, this would support the degree of fulfillment hypothesis, because the current condition is a partial intention fulfillment condition whereas the traditional condition is a complete fulfillment condition. Furthermore, although the total number of responses (four) is equated across these two conditions, in the traditional condition participants respond twice to a given target, whereas in the current condition they respond four times to a given target.
Thus, a response frequency account may, if anything, predict the opposite pattern: higher commission error risk in the traditional four-target condition. As for our findings regarding the effects of the lure manipulation, while we can conclude that participants in the zero-target and one-target conditions were not linking their PM intention to the salient context alone, a limitation of our design is that we are not able to differentiate between the other two possible "stimulus" representations: the target word alone or a combination of the target word and the salient context. One clear prediction that falls out of the explanation we forwarded above, based on the predictive nature of the salient context, is that reliance exclusively on the target word should increase to the extent that one reduces the predictability of the target based on the context (e.g., if the target appeared in a unique context each time). Testing this prediction may prove challenging, however, as prior research has shown that participants are unlikely to commit commission errors unless targets appear in a salient (and, to date, predictive) context (Scullin et al., 2012; Scullin et al., 2011). Finally, the findings of Experiment 2 should motivate additional research to further explore the sources of commission errors in the zero-target condition. At the start of this research, we considered that the high rates of commission errors found in this condition may reflect either the absence of intention fulfillment (creating intention perseveration) or the absence of prior responses (and therefore, representations to bind a stop tag to). The finding that commission error rates were nominally lower in the modified zero-target condition in Experiment 2 raises the possibility of another source: participants may have pressed the Q key in the standard variant of this condition (e.g., Experiment 1a) because they thought the experimenter made an error upon being instructed not to perform the task "again." Considering past research that has demonstrated higher rates of commission errors in zero-target conditions that did not use phrasing such as the word again (see, e.g., Schaper & Grundgeiger, 2017, who told participants they could ignore the red screen in the finished phase because they were chosen for a condition that did not have to react to it), we also cannot rule out the possibility that the Experiment 2 commission error rate reflects sampling error. Thus, future studies should contrast the standard and modified versions head-to-head to determine the stability of this pattern.

Conclusion

Collectively, the findings provided initial evidence demonstrating the importance of different aspects of the stimulus-response link to intention deactivation and, accordingly, commission error risk. Consistent with the episodic trace account, one take-home message is that prior responding appears to facilitate later deactivation of an intention, and the benefits of prior responding can be distinguished from those of intention fulfillment. However, while we provided evidence in Experiment 1a and the combined analysis for the response frequency hypothesis, Experiment 2 did not replicate that pattern, and therefore we cannot yet conclude that it is robust.
The second take-home message is that what precisely is stored and associated in the episodic traces of prior responding is not always a target-response link; rather, in some cases, responses are bound merely to contextual cues that are predictive of target occurrence, as revealed by the novel lure manipulation. The effect of the lure manipulation appears to be robust, as the evidence supporting this conclusion was strong and consistent across experiments and the combined analysis. These novel findings bring us another step closer to understanding the processes underlying intention deactivation. Future research should aim to further test theoretical accounts of intention deactivation and evaluate applications of this knowledge to prevent commission errors from occurring in real-life settings.
Myositis-myasthenia gravis overlap syndrome complicated with myasthenia crisis and myocarditis associated with anti-programmed cell death-1 (sintilimab) therapy for lung adenocarcinoma
Immune checkpoint inhibitors (ICIs) have improved clinical outcomes in a number of advanced malignancies. However, diverse immune-related adverse events (iRAEs) have occurred with the widespread use of ICIs, some of which are rare and life-threatening. Here we report a 66-year-old patient with lung adenocarcinoma who received two doses of sintilimab, a human monoclonal antibody against programmed cell death-1 (PD-1), and experienced a life-threatening storm of iRAEs. He was admitted to the intensive care unit (ICU) for immune-induced myositis/myocarditis and rhabdomyolysis. Despite immediate immunosuppressive therapy with intravenous methylprednisolone (MP) and immunoglobulin, he developed myositis-myasthenia gravis (MG) overlap syndrome complicated with myasthenia crisis. We commenced plasma exchange (PLEX), mechanical ventilation, and immunosuppressive therapy, as well as other supportive therapies. Three months later, the patient's serum creatine phosphate kinase (CPK) and anti-acetylcholine receptor antibody (anti-AChR-Ab) returned to normal despite tumor progression. Herein we discuss the incidence, operating mechanism, and management strategies of these severe iRAEs. Early admission to the ICU and multidisciplinary collaborative treatment for unstable patients with iRAEs could help to achieve a favorable outcome.

Introduction

The application of immune checkpoint inhibitors (ICIs) has ushered in a new era for cancer therapy and has shown efficacy in improving the survival time of patients with advanced malignancies, such as non-small cell lung cancer and metastatic melanoma (1,2). ICIs potentiate T-cell cytotoxicity against cancer cells by targeting cytotoxic T lymphocyte-associated protein-4 (CTLA-4), programmed cell death-1 (PD-1), or programmed cell death-ligand 1 (PD-L1) (3,4). Blockade of PD-1 on T cells, or of PD-L1 on the surface of cancer cells, with antibodies enhances the anti-tumor immune activity of T cells (5). However, more and more immune-related adverse events (iRAEs) have been reported with the widespread use of ICIs, some of which are severe and fatal for the patients (6,7). Sintilimab (Tyvyt®), a human immunoglobulin G4 (IgG4) anti-PD-1 monoclonal antibody co-developed by Innovent Biologics and Eli Lilly Company, approved for the treatment of relapsed Hodgkin's lymphoma after ≥2 lines of systemic chemotherapy in China since December 2018, is now undergoing phase I, II, and III development in various solid tumors, including nonsquamous non-small cell lung cancer (8). Herein we present a case of lung adenocarcinoma that developed a life-threatening storm of iRAEs after two doses of sintilimab.

Case presentation

A 66-year-old man with a 30-year smoking history, previously admitted for myasthenia gravis (MG), underwent radical resection of a thymoma and a left lung adenocarcinoma in 2006, with postoperative pathology suggesting type AB thymoma. He was found to have a right lung adenocarcinoma in 2014 and underwent video-assisted thoracoscopic pulmonary wedge resection followed by three 21-day cycles of chemotherapy, which consisted of pemetrexed 0.8 g and carboplatin 0.5 g on day 1 (PC chemotherapy). Three years later, the patient's left lung adenocarcinoma relapsed.
After two cycles of the same PC chemotherapy followed by one course of radiotherapy to 54 Gy in 27 fractions, the tumor continued to progress. As immunohistochemical examination revealed high PD-L1 expression (tested with the 22C3 antibody) in 50% of the tumor cells from the biopsy of the left lung carcinoma, he was treated with sintilimab at a dose of 200 mg intravenously, at his request, for economic reasons. The second dose was given 21 days later. Four days after two doses of sintilimab, he experienced fatigue, myalgia, and tender muscles in both the upper and lower extremities. He was admitted to the intensive care unit (ICU) for shortness of breath and progressive muscle weakness 8 days later. Blood results showed creatine phosphate kinase (CPK) increased up to 11,919 (normal 55-170) U/L, serum myoglobin up to over 3,000 (normal 28-72) ng/mL, and troponin T (TnT) up to 0.916 (normal 0.013-0.025) ng/mL. Complete right bundle branch block complicated with complete atrioventricular block was seen on the electrocardiogram despite a normal echocardiogram, resulting in the diagnosis of immune-induced myositis/myocarditis and rhabdomyolysis. We immediately started immunosuppressive therapy with intravenous methylprednisolone (MP, 2 mg/kg/d) and immunoglobulin (400 mg/kg/d for 5 days). A temporary pacemaker was inserted for 2 days until the heart rhythm normalized. In the meantime, the patient presented with dysphagia, hypercapnic respiratory failure, ptosis, and ophthalmoplegia. He had to be put on non-invasive positive pressure ventilation (NIPPV), but was eventually intubated. We diagnosed him with myositis-MG overlap syndrome complicated with myasthenia crisis, considering the anti-acetylcholine receptor antibody (anti-AChR-Ab) elevated to 8.97 nmol/L (positive >0.50 nmol/L), and then adjusted the treatment to MP (500 mg daily for 5 days followed by stepwise dose reduction) and pyridostigmine bromide (120 mg, twice a day). In due time the patient was given a tracheotomy, antibiotic therapy, nutrition support, ventilator-weaning rehabilitation, regular diaphragm mobility assessment by ultrasound, and pulmonary hygiene. A lack of diaphragmatic activity was then confirmed. After all these treatments, symptoms in his peripheral limbs and eye opening improved, the serum level of CPK normalized, and anti-AChR-Ab decreased to 2.394 nmol/L a month after admission. However, the weakness in the respiratory muscles barely responded to the treatments. Since the half-life of immunoglobulin is 3-4 weeks, plasma exchange (PLEX) was not carried out until the fifth week after immunoglobulin therapy, to avoid destroying the efficacy of the immunoglobulin (9). After two courses of PLEX, the patient's anti-AChR-Ab normalized and he could breathe spontaneously for 8-12 hours per day. Unfortunately, a repeat CT scan 2 months after sintilimab therapy revealed atelectasis of the left upper lung due to tumor progression (Figure 1). Three months after admission to the ICU, the patient is now receiving maintenance therapy of pyridostigmine bromide without prednisone, as well as mechanical ventilation (12 hours per day), and is continuing rehabilitation in the general ward. Changes in CPK, TnT, and anti-AChR-Ab levels after sintilimab therapy are demonstrated in Figure 2.
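The timing of PLEX can be made concrete with a back-of-the-envelope estimate. Assuming simple first-order elimination with the 3-4-week half-life cited above (a sketch, not a pharmacokinetic model of this patient), the fraction of infused IgG remaining after t weeks is:

```latex
% Fraction of infused IgG remaining, assuming first-order elimination
f(t) = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad
f(5\,\text{weeks}) = \left(\tfrac{1}{2}\right)^{5/4} \approx 0.42
\quad\text{to}\quad
\left(\tfrac{1}{2}\right)^{5/3} \approx 0.31 .
```

That is, roughly 60-70% of the infused immunoglobulin would already have been eliminated by week 5, consistent with deferring PLEX until that point so that the exchange removes comparatively little of the still-active therapeutic IgG.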
Discussion
Although the incidence of high-grade iRAEs such as anti-PD-1-induced myocarditis is less than 1%, death occurs from cardiac failure, arrhythmias, or complications of a prolonged ICU stay, with a mortality of 46% in a large series. The reported patients, mostly with melanoma or lung cancer, had received only one or two doses of anti-PD-1 therapy before the onset of myocarditis, and in 25% of them the myocarditis overlapped with myositis or MG (10). Patients with neuromuscular iRAEs, especially those with preexisting autoimmune disease, seem more likely to develop overlap syndromes in which myositis, myocarditis, and MG are present at the same time, and they have higher frequencies of myasthenic crisis and fatal deterioration (10-12). Combination checkpoint blockade is also far more likely than monotherapy to cause life-threatening adverse effects (13,14). In a large retrospective study of 9,869 cancer patients treated with nivolumab in Japan, 12 (0.12%) developed MG within 6 weeks of starting nivolumab and deteriorated rapidly despite immediate management; two of them died of myocarditis and myasthenic crisis. Of the 12 MG patients, 10 were positive for anti-AChR-Ab, 4 were complicated by myositis, and 3 were accompanied by myocarditis (7). In a systematic review of 85 patients with neuromuscular disorders following anti-PD-1 therapy, more than 30% developed cardiac complications. Among the 23 patients in the review diagnosed with MG, 8 (35%) had a history of preexisting MG, 6 of whom were positive for anti-AChR-Ab (15). Our patient was not tested for anti-AChR-Ab before sintilimab treatment. This reminds us that it would be reasonable to test for anti-AChR-Ab before commencing anti-PD-1 therapy to mitigate the high risk of fatal iRAEs, especially in patients with thymoma or a history of MG. The diagnosis of immune-related MG can be based on the clinical features and the presence of serum anti-AChR-Ab or anti-muscle-specific kinase antibodies (anti-MuSK-Ab); in seronegative MG, however, these autoantibodies are not detectable, and electromyography can then contribute to a prompt diagnosis (9,16). The mechanism underlying iRAEs is not yet clear. Johnson et al. proposed that tumor tissue and striated muscle (myocardium and skeletal muscle) may share antigens, since clonal T-cell receptor sequences were found across tumor and muscle samples, suggesting that the same T-cell clones target both tissues (17). In cases of ICI-induced myositis/myocarditis, skeletal muscle and myocardium biopsies demonstrated marked infiltration of mononuclear cells, especially CD8+ T cells, underlying the development of the iRAEs (7,17,18). Muscle biopsy can also serve as a helpful diagnostic tool to differentiate necrotizing myopathy from MG; regrettably, the patient in our case declined muscle biopsy, precluding a definitive tissue diagnosis. Ji et al. developed a monkey model of anti-PD-1 antibody-induced multi-organ toxicity, including myocarditis, showing increased proliferation of CD4+ and CD8+ T lymphocytes as well as increased cardiac troponin-I and NT-proBNP (19).
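The incidence figure quoted from the Japanese nivolumab cohort is easy to recompute. The minimal Python sketch below rederives the 0.12% MG incidence from the reported counts and adds a Wilson 95% confidence interval; the interval itself is our illustration and is not reported in the cited study.

```python
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Counts reported for the Japanese nivolumab cohort (reference 7).
events, n = 12, 9869
lo, hi = wilson_ci(events, n)
print(f"incidence = {events / n:.4%}")      # 0.1216%
print(f"95% CI    = {lo:.4%} .. {hi:.4%}")  # roughly 0.07% .. 0.21%
```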
Studies in animal models support the idea that both CTLA-4 and PD-1 normally protect against immune-mediated damage, since blockade of CTLA-4 or PD-1/PD-L1 can produce not only an anti-tumor response but also broadened recognition of myocardium and skeletal muscle by CD8+ T cells (20). MG has been characterized as the most frequent neuromuscular manifestation of PD-1 inhibitor-associated iRAEs, and clinicians should be aware of how to manage it (21,22). Patients with ICI-related MG or myasthenic crisis require immediate initiation of intravenous corticosteroid treatment and consideration of treatment escalation with immunoglobulins, PLEX, cyclophosphamide, rituximab, azathioprine, or methotrexate (9,11,23). Although immunoglobulins have proved as effective as PLEX for myasthenic crisis, PLEX works more quickly and performs better in MG patients positive for anti-MuSK-Ab (24,25). Additionally, nearly half of the cases of ICI-related myasthenic crisis require ventilatory support (7). Comprehensive assessment of risk factors such as a history of heart disease, autoimmune disease, and diabetes before starting the medication is important for reducing morbidity (20). In addition, early prediction of iRAEs improves the outcome. An American study of 5,160,064 anti-PD-1 monotherapy cases across 19 different cancer types revealed a significant positive correlation between the reporting odds ratios (RORs) of iRAEs and the corresponding tumor mutational burden (TMB), a biomarker for predicting therapy response. This strongly suggests that cancers with a high TMB, such as melanoma and non-small cell lung cancer, are associated with a higher ROR of iRAEs during anti-PD-1 therapy than cancers with a low TMB, owing to antigen spreading (26). Furthermore, physicians from both the oncology and ICU departments should have a good knowledge of the salient clinical features and make full use of the available examinations to avoid delaying the diagnosis; cardiac magnetic resonance imaging (MRI) is the gold-standard noninvasive test for the diagnosis of myocarditis (27). Unfortunately, the patient in this case refused it for economic reasons. Mortality is high in patients with high-grade iRAEs despite immediate and adequate treatment strategies including corticosteroids, immunoglobulins, PLEX, and immunoadsorption. One case of PD-1-induced myocarditis with malignant arrhythmia was reported to be cured by alemtuzumab added to the basic immunosuppressive therapy. Alemtuzumab is a monoclonal antibody that binds CD52 on peripheral immune cells and leads to rapid cytolysis and induction of immunosuppression (28). Early ICU admission is recommended for unstable patients with iRAEs. A collaborative, multidisciplinary approach led by intensivists together with physicians from related departments can help to achieve a favorable outcome.

Footnote
Conflicts of Interest: The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Written informed consent was obtained from the patient for publication of this manuscript and any accompanying images.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
His bundle capture proximal to the site of bundle branch block: A novel pitfall of the para-Hisian pacing maneuver
Paced QRS morphology should be assessed to differentiate between complete loss of His bundle capture and loss of distal His bundle capture alone. This might be enough in patients with proximal right bundle branch block; however, in patients with proximal left bundle branch block, QRS morphology might be similar with loss of His bundle capture and with proximal His bundle capture. A further reduction in pacing output, or pushing the pacing catheter slightly deeper into the right ventricle to ensure pure right ventricular myocardial capture, may be useful to avoid misinterpretation of proximal His bundle capture as pure right ventricular myocardial capture.

Introduction
The para-Hisian pacing maneuver is useful in determining whether retrograde conduction is dependent on atrioventricular (AV) nodal conduction. Loss of direct His bundle capture results in a longer route for the depolarization wave to reach the AV node and the atrium, as it has to travel through the working myocardium to engage the distal Purkinje fibers. Thus, loss of direct His bundle capture results in obligatory ventriculoatrial (VA) interval prolongation unless a nonphysiological retrograde conduction route (an accessory pathway [AP]) is present. Consequently, a stable VA interval with loss of His bundle capture is considered diagnostic of the presence of an AP. This concept has been regarded as useful, especially when concentric retrograde atrial activation is present (1). Subsequently, however, potentially important pitfalls in the interpretation of this differentiating maneuver were described. These include the recognition of inadvertent atrial capture, pure His bundle capture, the presence of fasciculoventricular pathways, and the impact of retrograde dual AV nodal physiology (2-5).

Case report
A 14-year-old boy presented with recurrent syncope and preexcitation on the surface electrocardiogram. An electrophysiology study was performed under general anesthesia. The AP conducted intermittently in the basal state, unmasking incomplete right bundle branch block. After administration of isoprenaline, however, the AP was capable of 1:1 conduction at a cycle length of 240 ms. The fully preexcited QRS morphology indicated localization of the AP in the left posterior region.
The anterograde Wenckebach point was reached at 230 ms, with conduction only via the His-Purkinje system at this rate, along with normal atrium-His and His-ventricle intervals and right bundle branch block. Retrograde conduction was decremental, with 1:1 conduction up to a cycle length of 240 ms, and was mildly eccentric, with the earliest atrial activation just after the coronary sinus ostium, corresponding to the presumed AP localization. Despite isoprenaline infusion, no arrhythmia was induced. Para-Hisian pacing was performed to determine whether the AP conducted in a retrograde manner. With the decrease in pacing output, the QRS complex suddenly broadened without prolongation of the VA interval (Figure 1).

KEY TEACHING POINTS
Care should be taken in interpreting the results of para-Hisian pacing maneuvers in patients with preexisting, functional, or even transient, mechanically induced bundle branch block. In such cases, proximal His bundle capture with sudden QRS prolongation mimics complete loss of His bundle capture and might lead to misinterpretation of the response as extranodal. Paced QRS morphology should be assessed to differentiate between complete loss of His bundle capture and loss of distal His bundle capture alone. This might be enough in patients with proximal right bundle branch block; however, in patients with proximal left bundle branch block, QRS morphology might be similar with loss of His bundle capture and with proximal His bundle capture. A further reduction in pacing output or pushing the pacing catheter slightly deeper into the right ventricle to ensure pure right ventricular myocardial capture may be useful to avoid misinterpretation of proximal His bundle capture as pure right ventricular myocardial capture. This pitfall may be avoided by recording the retrograde His potential with a second catheter to confirm the loss of direct His bundle capture, rather than relying on the usual observation of sudden QRS prolongation with the decrease in pacing output.

Discussion
This patient showed no evidence of VA interval prolongation despite a sudden change in QRS morphology and an increase in QRS duration from 114 to 140 ms. However, several basic questions must be asked when interpreting the results of a para-Hisian pacing maneuver, including: (1) Was the His bundle truly directly captured? (2) Was there loss of His bundle capture with only pure right ventricular (RV) capture achieved? and (3) Was direct capture of the atrium avoided? The first 2 questions are usually addressed by the observation of a narrow QRS complex during high-output pacing and a sudden QRS complex broadening with a reduction of the pacing output. Observation of changes in the timing of a retrograde His potential is a more certain and elegant method, although using a single catheter for pacing and recording often results in a suboptimal position for recording the His bundle potential. Moreover, saturation of the channel by the pacing stimulus often obscures a small His bundle potential. Direct capture of the atrium is unlikely with a coronary sinus ostium VA interval >85 ms and can be excluded by shortening of the VA by >20 ms with deliberate capture of the right atrium (2). All these "prerequisites" were present in the present case, but the findings in Figure 1 were misleadingly indicative of retrograde AP conduction. The key observation in this patient was that the broad QRS complexes, initially regarded as resulting from pure RV myocardial capture, did not have a morphology compatible with pacing of the high interventricular septum.
After loss of His bundle capture, the QRS complexes are always of left, not right, bundle branch block morphology. The lead V1 QRS complex should be uniformly negative, and the lead I QRS complex should be a monophasic R wave, which is often slurred or notched. The phenomenon observed in Figure 1 can therefore be explained as resulting from loss of distal His bundle capture with unmasking of the underlying right bundle branch block, not as loss of direct His bundle capture. Direct His bundle pacing in the presence of bundle branch block can result in normalization of QRS duration and morphology, especially with higher pacing output. This phenomenon, initially described by Narula in 1977, is currently observed more frequently, as the QRS complex normalizes with pacing in approximately 70% of patients with bundle branch block and permanent direct His bundle pacing (6,7). The classic explanation of this phenomenon involves longitudinal dissociation of the His bundle, in that this bundle consists of fibers that are isolated and "predestined" to form the right or left bundle fascicles, as well as His bundle capture beyond the area of the block. Because the pacing site did not change in this patient, the higher pacing amplitude likely resulted in extension of the area of direct capture beyond the level of the block in the right bundle fibers of the His bundle. Alternative explanations involving hyperpolarization and mobilization of the diseased/dysfunctional tissue with higher output have been proposed (8). This explanation was supported by the subsequent findings. A further reduction in pacing stimulus energy resulted in bona fide pure RV capture, characterized by further QRS prolongation to 160 ms and typical QRS morphology. This corresponded to VA interval prolongation from 85 to 128 ms, indicating an AV nodal response (Figure 2). The patient underwent AP ablation because of the high catecholamine sensitivity of the AP and the history of syncope, after which the para-Hisian pacing maneuver was repeated. The para-Hisian pacing results obtained after ablation were identical to those obtained before ablation. Moreover, adenosine administration during ventricular pacing confirmed the lack of retrograde AP conduction. Figure 3 illustrates the proposed mechanism underlying the para-Hisian pacing results, unifying the observed VA intervals and QRS morphologies with changes in pacing output and capture of different heart structures.

Conclusion
This report describes a novel pitfall of the para-Hisian pacing maneuver: proximal His bundle capture in a patient with intra-Hisian bundle branch block mimics complete loss of His bundle capture and an extranodal response.
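As a rough illustration of the interpretation logic discussed above, the following Python sketch encodes the classical para-Hisian decision rule together with the paced-QRS morphology check this case argues for. The 10-ms "stable VA" cutoff and the function's interface are illustrative assumptions of ours, not values taken from the report.

```python
def classify_para_hisian(va_his_capture_ms: float,
                         va_rv_capture_ms: float,
                         qrs_matches_pure_rv_capture: bool,
                         stable_va_cutoff_ms: float = 10.0) -> str:
    """Classical interpretation of the para-Hisian pacing maneuver.

    va_his_capture_ms: VA interval with direct His bundle capture.
    va_rv_capture_ms:  VA interval after apparent loss of His capture.
    qrs_matches_pure_rv_capture: True only if the broad paced QRS shows
        the LBBB-like pattern expected of pure RV myocardial capture
        (uniformly negative V1, monophasic R in lead I). This guards
        against the pitfall reported here: proximal His capture above a
        bundle branch block broadens the QRS while the His bundle is
        still being captured.
    stable_va_cutoff_ms: illustrative threshold (assumption).
    """
    if not qrs_matches_pure_rv_capture:
        return ("indeterminate: broad QRS may reflect proximal His capture "
                "with unmasked bundle branch block, not pure RV capture")
    delta_va = va_rv_capture_ms - va_his_capture_ms
    if delta_va >= stable_va_cutoff_ms:
        return "AV nodal response (retrograde conduction via the AV node)"
    return "extranodal response (accessory pathway likely present)"

# Values from this case: once bona fide pure RV capture was achieved,
# the VA interval lengthened from 85 to 128 ms, i.e. an AV nodal response.
print(classify_para_hisian(85, 128, qrs_matches_pure_rv_capture=True))
```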
Differential nuclear translocation and transactivation potential of beta-catenin and plakoglobin.
beta-Catenin and plakoglobin are homologous proteins that function in cell adhesion by linking cadherins to the cytoskeleton, and in signaling by transactivation together with lymphoid enhancer factor/T cell factor (LEF/TCF) transcription factors. Here we compared the nuclear translocation and transactivation abilities of beta-catenin and plakoglobin in mammalian cells. Overexpression of each of the two proteins in MDCK cells resulted in nuclear translocation and formation of nuclear aggregates. The beta-catenin-containing nuclear structures also contained LEF-1 and vinculin, while plakoglobin was inefficient in recruiting these molecules, suggesting that its interaction with LEF-1 and vinculin is significantly weaker. Moreover, transfection of LEF-1 translocated endogenous beta-catenin, but not plakoglobin, to the nucleus. Chimeras consisting of the Gal4 DNA-binding domain and the transactivation domains of either plakoglobin or beta-catenin were equally potent in transactivating a Gal4-responsive reporter, whereas activation of LEF-1-responsive transcription was significantly higher with beta-catenin. Overexpression of wild-type plakoglobin or mutant beta-catenin lacking the transactivation domain induced accumulation of the endogenous beta-catenin in the nucleus and LEF-1-responsive transactivation. It is further shown that the constitutive beta-catenin-dependent transactivation in SW480 colon carcinoma cells, and its nuclear localization, can be inhibited by overexpressing N-cadherin or alpha-catenin. The results indicate that (a) plakoglobin and beta-catenin differ in their nuclear translocation and complexing with LEF-1 and vinculin; (b) LEF-1-dependent transactivation is preferentially driven by beta-catenin; and (c) the cytoplasmic partners of beta-catenin, cadherin and alpha-catenin, can sequester it to the cytoplasm and inhibit its transcriptional activity.

Elevation of β-catenin in colon carcinoma cells that express a mutant APC molecule (Powell et al., 1992; Polakis, 1997), or in melanoma where mutations in the NH2-terminal domain of β-catenin were detected (both inhibiting β-catenin degradation), is oncogenic, most probably due to constitutive activation of target genes that contributes to tumor progression (Morin et al., 1997; Peifer, 1997; Rubinfeld et al., 1997). Interestingly, plakoglobin was shown to suppress tumorigenicity when overexpressed in various cells (Simcha et al., 1996; Ben-Ze'ev, 1997), and displays loss of heterozygosity in sporadic ovarian and breast carcinoma (Aberle et al., 1995). Moreover, upon induction of plakoglobin expression in human fibrosarcoma and SV-40-transformed 3T3 cells, β-catenin is displaced from its complex with cadherin and directed to degradation (Salomon et al., 1997). In the present study we characterized the mechanisms underlying nuclear accumulation of β-catenin and/or plakoglobin and identified some of the partners associated with both proteins in the nucleus. Furthermore, we compared the nuclear translocation and transactivation abilities of wild-type (wt) and mutant β-catenin and plakoglobin constructs and found that these two proteins differ considerably in these properties, and demonstrated that N-cadherin, as well as α-catenin, can drive β-catenin from the nucleus to the cytoplasm and consequently block activation of LEF-1-responsive transcription.
We propose that the deregulated transactivation associated with elevated β-catenin in certain tumors can be suppressed by cadherins and α-catenin.

Cell Culture and Transfections
Canine kidney epithelial MDCK cells, human fibrosarcoma HT1080 cells, 293-T human embryonic kidney cells, Balb/C mouse 3T3 cells, and the human colon carcinoma SW480 cell line were cultured in DME plus 10% calf serum (Gibco Laboratories, Grand Island, NY) at 37°C in the presence of 7% CO2. The human renal carcinoma cell line KTCTL60 (Simcha et al., 1996) was grown in RPMI medium with 10% calf serum. Cells were transiently transfected with the cDNA constructs described below, using calcium phosphate or lipofectamine (Gibco Laboratories), and the expression of the transgene was assessed between 24 and 48 h after transfection. In some experiments, the expression of the stably transfected NH2 terminus-deleted β-catenin (ΔN57) (Salomon et al., 1997) was enhanced in HT1080 cells by overnight treatment with 2 mM sodium butyrate. The DNA-binding domain of Gal4 (Gal4DBD) was obtained by PCR using a 5′-ACCTTCTAGAATGAAGCTACTGTCTTCTATC-3′ oligonucleotide with an XbaI site and 5′-ACCTGAGCTCCGATACAGTCAACTGTCTTTG-3′ with a SacI site (see Fig. 1, Gal4DBD β-catenin and Gal4DBD plakoglobin), and with the antisense primer 5′-ACCTGGATCCTACGATACAGTCAACTGTCTTTG-3′ creating a BamHI site (see Fig. 1, Gal4DBD). Fragments corresponding to the COOH terminus of β-catenin (aa 682-781) and plakoglobin (aa 672-745) were obtained by PCR using sense primers containing a SacI site and antisense primers containing a BamHI site. The XbaI/SacI fragment of Gal4DBD and the SacI/BamHI fragments of β-catenin and plakoglobin were joined and cloned into pCGN (see Fig. 1). The XbaI/BamHI fragment of Gal4DBD was cloned into pCGN and used as a control in transactivation assays. A cytomegalovirus promoter-driven N-cadherin cDNA was used (Salomon et al., 1992). The validity of the constructs shown in Fig. 1 was verified by sequencing, and the sizes of the proteins were then determined after transfection into 293-T and MDCK cells by Western blotting with anti-HA and anti-VSV antibodies.

Transactivation Assays
Transactivation assays were conducted with SW480 and 293-T cells grown in 35-mm-diameter dishes that were transfected with 0.5 µg of a plasmid containing a multimeric LEF-1 consensus-binding sequence driving the luciferase reporter gene (TOPFLASH), or a mutant inactive form (FOPFLASH; provided by H. Clevers and M. van de Wetering, University of Utrecht, Utrecht, The Netherlands). A plasmid encoding β-galactosidase (0.5 µg) was cotransfected to enable normalization for transfection efficiency. The relevant plasmid expressing catenin constructs (4.5 µg) was cotransfected with the reporter, or an empty expression vector was included. After 24-48 h, expression of the reporter (luciferase) and the control (β-galactosidase) genes was determined using enzyme assay systems from Promega Corp. The Gal4RE reporter plasmid for determining transactivation by Gal4DBD chimeras was constructed as follows: oligonucleotides comprising a dimer of the 17-nucleotide Gal4-binding sequence (Webster et al., 1988), containing a SacI site at the 5′ end and a BglII site at the 3′ end, were obtained by PCR amplification using 5′-GGAAGACTCTCCTCCGGATCCGGAAGACTCTCCTCC-3′ and 5′-GATCGGAGGAGAGTCTTCCGGATCCGGAGGAGAGTCTTCCAGCT-3′ and subcloned as a SacI/BglII fragment into the pGL3-promoter plasmid (Promega Corp.) driving luciferase expression.
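To make the normalization scheme of these transactivation assays concrete, the short Python sketch below shows the arithmetic: each raw luciferase reading is divided by the beta-galactosidase reading from the cotransfected lacZ plasmid (transfection-efficiency control), and specificity is then expressed as the ratio of the normalized TOPFLASH and FOPFLASH values. The code and the numerical readings are our illustration, not the authors' software.

```python
def normalized_fold_activation(luc_top: float, bgal_top: float,
                               luc_fop: float, bgal_fop: float) -> float:
    """Fold activation of the LEF-1 reporter.

    Each luciferase value is first normalized to the beta-galactosidase
    value of the cotransfected lacZ control; specificity is then the
    TOPFLASH/FOPFLASH ratio of these normalized values.
    """
    return (luc_top / bgal_top) / (luc_fop / bgal_fop)

# Hypothetical readings (arbitrary units), for illustration only:
fold = normalized_fold_activation(luc_top=42000, bgal_top=1.1,
                                  luc_fop=2500, bgal_fop=1.0)
print(f"{fold:.1f}-fold activation")  # ~15.3-fold
```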
Northern Blot Hybridization
Total RNA was extracted from cells by the guanidinium thiocyanate method. Northern blots containing 20 µg per lane of total RNA were stained with methylene blue to determine the positions of the 18S and 28S rRNA markers, and then hybridized with plakoglobin (Franke et al., 1989) and β-catenin (Butz et al., 1992) cDNAs, which were labeled with 32P-dCTP by the random priming technique, as described in Salomon et al. (1997).

Protease Inhibitors
The calpain inhibitor N-acetyl-leu-leu-norleucinal (ALLN, used at 25 µM) and the inactive analogue N-acetyl-leu-leu-normethional (ALLM, used at 10 µg/ml) were purchased from Sigma Chemical Co. (St. Louis, MO). Lactacystin (dissolved in water at 0.4 mg/ml and used at a final concentration of 4 µg/ml) and MG-132 (used at 10 or 20 µM) were purchased from Calbiochem-Novabiochem (La Jolla, CA).

Immunofluorescence Microscopy
Cells cultured on glass coverslips were fixed with 3.7% paraformaldehyde in PBS and permeabilized with 0.5% Triton X-100 (Sigma Chemical Co.). Monoclonal antibodies against human plakoglobin (11E4), β-catenin (5H10), the COOH terminus of β-catenin (6F9), and α-catenin (1G5) were described previously (Johnson et al., 1993; Sacco et al., 1995) and were provided by M. Wheelock and K. Johnson (University of Toledo, Toledo, OH). The secondary antibody was FITC- or Cy3-labeled goat anti-mouse IgG (Jackson ImmunoResearch Laboratories, West Grove, PA). Polyclonal antiserum against β-catenin and monoclonal antibodies against pan-cadherin (CH-19), vinculin (h-VIN 1), and α-actinin (BM75.2) were from Sigma Chemical Co. (Holon, Israel). Monoclonal antibodies against the splicing factor SC35 and the HA epitope were provided by D. Helfman (Cold Spring Harbor Laboratory, Cold Spring Harbor, NY). Polyclonal rabbit antibody against the VSV-G epitope was a gift of J.C. Perriard (Swiss Federal Institute of Technology, Zurich, Switzerland). FITC-labeled goat anti-rabbit IgG antibody was from Cappel/ICN (High Wycombe, UK). Monoclonal antibody against LEF-1 was provided by R. Grosschedl (University of California, San Francisco, CA). Polyclonal anti-HA tag antibody was a gift of M. Oren (Weizmann Institute, Rehovot, Israel). Antibodies to actin were provided by J. Lessard (Children's Hospital Research Foundation, Cincinnati, OH) and I. Herman (Tufts University, Boston, MA). The cells were examined by epifluorescence with an Axiophot microscope (Carl Zeiss, Inc., Thornwood, NY). To determine the level of overexpression relative to the endogenous protein, digitized immunofluorescence microscopy was used. Images of the fluorescent cells were recorded with a cooled, scientific-grade charge-coupled device camera (Photometrics, Tucson, AZ) as described (Levenberg et al., 1998). Integrated fluorescence intensities in transfected and nontransfected cells were determined after the background was subtracted.

Electron Microscopy
Cells were processed for conventional electron microscopy by fixation with 2% glutaraldehyde followed by 1% OsO4. The samples were dehydrated, embedded in Epon (Polysciences, Inc., Warrington, PA), sectioned, and then examined in an electron microscope (model EM410; Philips Electron Optics, Eindhoven, The Netherlands). Samples processed for immunoelectron microscopy were fixed with 3% paraformaldehyde and 0.1% glutaraldehyde in 100 mM cacodylate buffer, pH 7.4, containing 5 mM CaCl2, embedded in 10% gelatin and refixed as above, and then incubated with sucrose, frozen, and cryosectioned as previously described (Sabanay et al., 1991).
The sections were incubated with monoclonal anti-vinculin (h-VIN-1), monoclonal anti-β-catenin (5H10), or polyclonal anti-VSV tag (to recognize the VSV-tagged β-catenin), followed by secondary antibody conjugated to 10 nm gold particles (Zymed Labs, Inc., South San Francisco, CA). The sections were embedded in methyl cellulose and examined in an electron microscope (model CM12; Philips Electron Optics).

PAGE and Immunoblotting
Equal amounts of total cell protein were separated by SDS-PAGE, electrotransferred to nitrocellulose, and then incubated with monoclonal antibodies. The antigens were visualized by enhanced chemiluminescence (Amersham Int., Little Chalfont, UK). In some experiments, cells were fractionated into Triton X-100-soluble and -insoluble fractions as described (Rodríguez Fernández et al., 1992). Briefly, cells cultured on 35-mm dishes were incubated in 0.5 ml buffer containing 50 mM MES, pH 6.8, 2.5 mM EGTA, 5 mM MgCl2, and 0.5% Triton X-100 at room temperature.

Figure 1. Schematic representation of catenin constructs used in this study. The molecules were tagged either with the hemagglutinin tag (HA) at the NH2 terminus or with the VSV-G protein tag (VSV) at the COOH terminus. Numbers 1-13 represent armadillo repeats in β-catenin and plakoglobin, with a nonrepeat region (ins) between repeats 10 and 11. Mutant plakoglobin and β-catenin lacking the COOH transactivation domain (HA plakoglobin 1-ins; HA β-catenin 1-ins) were also constructed. An HA-tagged α-catenin that lacks the β-catenin-binding domain was also prepared (HA α-catenin Δβ). The COOH-terminal (C-term) transactivation domains of β-catenin and plakoglobin were fused to the DNA-binding domain of Gal4 (Gal4DBD) to allow assessment of their transactivation potential.

Overexpression of β-Catenin and Plakoglobin in MDCK Cells Results in Their Nuclear Accumulation and Nuclear Translocation of Vinculin
To determine the localization of overexpressed β-catenin and plakoglobin, MDCK cells, which normally display these molecules at cell-cell junctions, were transiently transfected with VSV-tagged β-catenin or plakoglobin cDNA constructs (Fig. 1) and immunostained with either anti-VSV, anti-β-catenin, or anti-plakoglobin antibodies. The results in Fig. 2 show that when expressed at a very low level, β-catenin was detected at cell-cell junctions (Fig. 2 F), but in cells expressing higher levels (approximately fivefold over the endogenous protein level, as determined by digital immunofluorescence microscopy), most of the transfected molecules were localized in the nuclei, either in a diffuse form or in aggregates of various shapes (Fig. 2, A-C, speckles and rods). These β-catenin-containing aggregates were organized in discernible structures that could be identified by phase microscopy (Fig. 3, E and F). Transmission electron microscopy of Epon-embedded cells revealed within the nucleus highly ordered bodies consisting of laterally aligned filamentous structures, with a filament diameter of ~10 nm and a packing density of ~50 filaments/µm (Fig. 3 A). Immunogold labeling of ultrathin frozen sections indicated that these intranuclear bodies contained high levels of β-catenin (Fig. 3, B and C). Transfection of plakoglobin also resulted in nuclear accumulation of the molecule and, in addition, showed diffuse cytoplasmic (Fig. 2 D) and junctional staining (see below, Fig. 7 H). Whereas nuclear aggregates were observed in plakoglobin-transfected cells (Fig. 2 E), large rods were not detected in the nuclei of these cells.
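The digitized immunofluorescence quantification described in Materials and Methods (integrated intensities after background subtraction, used above to estimate the roughly fivefold overexpression) can be sketched as follows. The NumPy code, the toy image, and the region masks are our illustration under those assumptions, not the authors' software.

```python
import numpy as np

def integrated_intensity(image: np.ndarray, cell_mask: np.ndarray,
                         background_mask: np.ndarray) -> float:
    """Background-subtracted integrated fluorescence of one cell.

    Subtracts the mean pixel value of a cell-free background region from
    every pixel inside the cell mask, clips negatives to zero, and sums.
    """
    background = image[background_mask].mean()
    return float(np.clip(image[cell_mask] - background, 0, None).sum())

# Toy image: a transfected cell ~5x brighter than a neighboring cell.
rng = np.random.default_rng(0)
img = rng.normal(10, 1, (64, 64))   # camera background ~10
img[5:15, 5:15] += 250              # transfected cell ROI
img[40:50, 40:50] += 50             # nontransfected cell ROI
t_mask = np.zeros_like(img, bool); t_mask[5:15, 5:15] = True
n_mask = np.zeros_like(img, bool); n_mask[40:50, 40:50] = True
bg_mask = np.zeros_like(img, bool); bg_mask[25:35, 25:35] = True

fold = (integrated_intensity(img, t_mask, bg_mask) /
        integrated_intensity(img, n_mask, bg_mask))
print(f"overexpression ~{fold:.1f}-fold over endogenous")  # ~5-fold
```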
Similar structures were observed with HA-tagged and untagged β-catenin or plakoglobin (data not shown), suggesting that these structures assembled in the nucleus due to the high levels of these proteins and are not attributable to tagging. The unique assembly of β-catenin into discrete nuclear structures is most probably nonphysiological, yet it enabled us to examine the association of other molecules with β-catenin (Fig. 4). Interestingly, in addition to the transcription factor LEF-1 (Fig. 4 B), which was shown to complex with β-catenin in the nucleus (Behrens et al., 1996; Huber et al., 1996b; Molenaar et al., 1996), vinculin also strongly associated with the β-catenin-containing speckles and rods in the nucleus (Fig. 4, C and D). This was also confirmed by immunogold labeling of ultrathin frozen sections with anti-vinculin antibodies, showing that the labeling was distributed throughout the entire nuclear aggregate (Fig. 3 D). In contrast, other endogenous proteins known to be involved in linking cadherins to actin at adherens junctions, such as α-catenin (Fig. 4 F), α-actinin (Fig. 4 J), and plakoglobin (Fig. 4 H), were not associated with the β-catenin-containing nuclear aggregates. Actin was also missing from these nuclear aggregates (results not shown). Furthermore, the β-catenin-containing rods and speckles were clearly distinct from other nuclear structures such as those containing the splicing component SC35 (data not shown), which also displays a speckled nuclear organization in many cells (Cáceres et al., 1997). The molecular interactions of plakoglobin were distinctly different from those formed by β-catenin. Although nuclear speckles in MDCK cells overexpressing plakoglobin displayed some faint staining for LEF-1 (Fig. 5, compare B with A), this was less pronounced than that seen with β-catenin (Fig. 4 B), and essentially no nuclear costaining for vinculin, α-catenin (data not shown), or α-actinin (Fig. 5 F) was observed. Interestingly, plakoglobin-containing nuclear speckles (Fig. 5 C) were also stained with anti-β-catenin, and the cytoplasm of these cells was essentially devoid of the diffuse β-catenin staining seen in nontransfected cells (Fig. 5 D). This may be explained by the capacity of plakoglobin to compete for and release β-catenin from its other partners (i.e., cadherin or APC), leading to its nuclear translocation.

Nuclear Accumulation of β-Catenin after Induced Overexpression
Transient transfection usually results in very high and nonphysiological levels of expression (and organization) of the transfected molecules. To obtain information on β-catenin that is more physiologically relevant, we isolated HT1080 cells stably expressing a mutant β-catenin molecule lacking the NH2-terminal 57 aa (ΔN57; Salomon et al., 1997) that is considerably more stable than the wt protein (Papkoff et al., 1996; Rubinfeld et al., 1996; Yost et al., 1996). In such stably transfected cells, the level of expression was low and only faint nuclear β-catenin staining was detected, with the majority of β-catenin localized at cell-cell junctions (Fig. 6 C). However, when the cells were treated with butyrate, an approximately twofold increase in the expression of the transgene was observed by Western blot analysis (data not shown), and a dramatic translocation of β-catenin into the nucleus occurred (Fig. 6 D). This translocation was not observed in butyrate-treated control neo-resistant HT1080 cells (Fig. 6 B).
These results suggest that an increase in β-catenin over a certain threshold level, of either transiently or inducibly expressed β-catenin, results in its nuclear translocation and accumulation.

LEF-1 Overexpression Induces Translocation of Endogenous β-Catenin but Not Plakoglobin into the Nuclei of MDCK Cells
Nuclear translocation of β-catenin was shown to be promoted by elevated LEF-1 expression. We therefore compared the ability of transfected LEF-1 to induce the translocation of endogenous β-catenin and plakoglobin into the nuclei of MDCK cells. Cells were transfected with an HA-tagged LEF-1 and, after 36 h, were doubly immunostained for LEF-1 and β-catenin, or for LEF-1 and plakoglobin. As shown in Fig. 7, whereas endogenous β-catenin was efficiently translocated into the nucleus in LEF-1-transfected cells and colocalized with LEF-1 (Fig. 7, A and B), plakoglobin was not similarly translocated into the nucleus (Fig. 7, compare D with C). This difference between β-catenin and plakoglobin could result from the larger pool of diffuse β-catenin (Fig. 8 A) that is available for complexing with LEF-1 and translocation into the nucleus. In contrast, plakoglobin was almost exclusively found in the Triton X-100-insoluble fraction (Fig. 8 A), in association with adherens junctions and desmosomes. Vinculin and α-catenin, which also display a large Triton X-100-soluble fraction (Fig. 8 A), in contrast to α-actinin and plakoglobin, were not translocated into the nucleus after LEF-1 transfection (data not shown). When LEF-1 was overexpressed in MDCK cells together with plakoglobin, both proteins were localized in the nucleus and displayed diffuse staining (Fig. 7, G and H), similar to β-catenin (Fig. 7, E and F). In cells doubly transfected with β-catenin and LEF-1, vinculin was also translocated into the nucleus, displaying diffuse staining (Fig. 8 B, panels a and b), whereas plakoglobin was not detected in the nuclei of such cells (Fig. 8 B, panels c and d). Since LEF-1 transfection did not result in nuclear localization of vinculin (data not shown), this implies that vinculin translocation into the nucleus is related only to that of β-catenin.

Induction of β-Catenin Accumulation and Its Nuclear Localization by Inhibition of the Ubiquitin-Proteasome System
Another treatment through which the β-catenin and plakoglobin content of cells could be elevated is inhibition of degradation by the ubiquitin-proteasome pathway, which apparently controls β-catenin and plakoglobin turnover (Aberle et al., 1997; Orford et al., 1997; Salomon et al., 1997). We used two cell lines for this study: 3T3 cells, which express β-catenin, and KTCTL60 renal carcinoma cells, which do not express detectable levels of cadherin, α-catenin, β-catenin, or plakoglobin (Simcha et al., 1996), and treated them with various inhibitors of the ubiquitin-proteasome system (Fig. 9). In 3T3 cells, such treatment resulted in the appearance of higher molecular weight forms of β-catenin, most likely representing ubiquitinated derivatives of the molecule (Fig. 9 A). This was accompanied by the accumulation of β-catenin in the nuclei of the cells (Fig. 9 D, compare panels a and b). In KTCTL60 cells, which contain minute levels of β-catenin, inhibitors of the ubiquitin-proteasome pathway induced a dramatic increase in the level of β-catenin (Fig. 9 B) and its translocation to the nucleus (Fig. 9 D, compare panels c and d).
Since these cells express no cadherins (Simcha et al., 1996), it is conceivable that free β-catenin is very unstable and rapidly degraded by the proteasome pathway in these cells. In contrast, no plakoglobin was detected in KTCTL60 cells either before or after treating the cells with the proteasome inhibitors (Fig. 9 B), due to the lack of plakoglobin RNA in these cells (Fig. 9 C). When plakoglobin was stably overexpressed in KTCTL60-PG cells (Fig. 9 C), its level was further increased by inhibitors of the ubiquitin-proteasome system (Fig. 9 B), and it accumulated in the nuclei of the cells (Fig. 9 D, panels e and f). These results demonstrate that in some cells the level of β-catenin can be dramatically enhanced by inhibiting its degradation by the ubiquitin-proteasome pathway. Plakoglobin levels were also enhanced by MG-132 treatment, but to a considerably lower extent. Under conditions of excess, both proteins accumulated in the nuclei of cells.

Figure 7. Nuclear translocation of β-catenin but not plakoglobin by LEF-1 overexpression. MDCK cells were transfected with either LEF-1 (A-D), with LEF-1 together with β-catenin (E and F), or with LEF-1 and plakoglobin (G and H). The cells were doubly stained with antibodies against LEF-1 (A, C, E, and G) and antibodies to β-catenin (B) or plakoglobin (D). In doubly transfected cells (E-H), the transfected β-catenin (F) and plakoglobin (H) were detected by anti-VSV tag antibody. Note that LEF-1 efficiently translocated endogenous β-catenin into the nucleus, but not plakoglobin, whereas in cells transfected with both LEF-1 and plakoglobin or β-catenin, both transfected molecules were localized in the nucleus. Bar, 10 µm.

Figure 8. Differential Triton X-100 solubility of various junctional plaque proteins and nuclear translocation of vinculin in cells overexpressing β-catenin together with LEF-1. (A) Equal volumes of total MDCK cell proteins (T), and Triton X-100-soluble (S) and -insoluble (I) cell fractions, were analyzed by gel electrophoresis and Western blotting with antibodies to β-catenin (β-CAT), plakoglobin (PG), vinculin (vinc), α-actinin (α-Act), and α-catenin (α-cat). Note that whereas β-catenin, vinculin, and α-catenin display a large pool of a detergent-soluble fraction, plakoglobin and α-actinin are almost entirely insoluble in Triton X-100. (B) MDCK cells were cotransfected with LEF-1 and β-catenin and doubly stained for β-catenin (a) and vinculin (b), and β-catenin (c) and plakoglobin (d). Note that in cells doubly transfected with β-catenin and LEF-1, vinculin translocated into the nucleus but plakoglobin remained junctional. Bar, 10 µm.

Transcriptional Coactivation by Plakoglobin and β-Catenin of Gal4- and LEF-1-driven Transcription
β-Catenin and its Drosophila homologue armadillo were shown to be able to activate transcription of LEF/TCF-responsive consensus sequences through their COOH termini (Morin et al., 1997; Riese et al., 1997; van de Wetering et al., 1997). To compare the ability of β-catenin to that of plakoglobin in transactivation, the COOH terminus of β-catenin and the corresponding domain in plakoglobin were fused to the Gal4DBD (refer to Fig. 1). Both constructs, when cotransfected with a reporter gene (luciferase) whose transcription was driven by a Gal4-responsive element, showed a similar ability to activate the expression of the reporter gene (Fig. 10 A). This implies that the COOH-terminal domain of plakoglobin, like that of armadillo and β-catenin, has the ability to activate transcription. Next, we compared the capacity of β-catenin and plakoglobin to activate transcription of a reporter gene driven by a multimeric LEF-1-binding consensus sequence in 293 cells. The results summarized in Fig. 10 B demonstrate that β-catenin is a potent transcriptional coactivator of the multimeric LEF-1-responsive sequence. Interestingly, a mutant β-catenin lacking the COOH transactivation domain (refer to Fig. 1, HA β-catenin 1-ins) was also active in promoting LEF-1-driven transcription (Fig. 10 B). We examined whether this resulted from substitution of endogenous β-catenin in its complexes with cadherin and APC by the mutant β-catenin, as seen in Xenopus embryos injected with mutant β-catenin (Miller and Moon, 1997). This could release endogenous β-catenin from cytoplasmic complexes, resulting in its translocation into the nucleus and transcriptional activation of the LEF-1-responsive reporter. Double immunofluorescence using an antibody against the HA tag linked to the mutant β-catenin (Fig. 10 C, panel c) and an anti-β-catenin antibody recognizing the COOH terminus of endogenous β-catenin (but not the mutant HA β-catenin 1-ins, which lacks this domain) demonstrated that the level of endogenous β-catenin was elevated, and part of the endogenous protein translocated into the nucleus, in cells expressing mutant β-catenin (Fig. 10 C, panel d). Plakoglobin could also activate LEF-1-driven transcription, albeit to a three- to fourfold lower extent than β-catenin (Fig. 10 B). A mutant plakoglobin lacking the COOH transactivation domain (refer to Fig. 1, HA plakoglobin 1-ins) was unable to enhance LEF-1-driven transcription (Fig. 10 B). To examine whether full-length or mutant plakoglobin overexpression resulted in nuclear accumulation of endogenous β-catenin, cells transfected with HA-tagged plakoglobin were doubly stained for HA (Fig. 10 C, panel a) and β-catenin (Fig. 10 C, panel b). The results demonstrated that plakoglobin overexpression resulted in nuclear translocation of endogenous β-catenin (Fig. 10 C, compare panel a with b). In contrast, the COOH-deletion mutant of plakoglobin, which was abundantly expressed in the transfected cells (Fig. 10 C, panel e), was unable to cause translocation of endogenous β-catenin into the nucleus (Fig. 10 C, panel f). Taken together, these results strongly suggest that although both plakoglobin and β-catenin have a COOH-terminal domain that can act as a cotranscriptional activator when fused to the Gal4DBD, LEF-1-driven transcriptional activation by mutant β-catenin and wild-type plakoglobin mostly resulted from the release of endogenous β-catenin from its cytoplasmic partners, its nuclear translocation, and induction of LEF-1-responsive transcription. Thus, elevated plakoglobin expression can influence β-catenin-driven transactivation.

Inhibition of β-Catenin Nuclear Localization and Transactivation Capacity by N-Cadherin and α-Catenin
Constitutive transactivation by high levels of β-catenin was suggested to be involved in tumor progression in colon carcinoma (Morin et al., 1997). In addition, the signaling activity of β-catenin in Xenopus development could be blocked by its junctional partners (i.e., C-cadherin and the NH2 terminus of α-catenin; Funayama et al., 1996; Sehgal et al., 1997). We investigated the localization and transcriptional activation capacity of β-catenin in SW480 colon carcinoma cells, which overexpress β-catenin due to lack of APC, before and after transfection with N-cadherin and α-catenin. In these cells, β-catenin is abundant in the nucleus (Fig. 11 B, panels b, d, and f) and a high level of constitutive LEF-1-driven transcription was detected (Fig. 11 A), in agreement with Korinek et al. (1997). This activity of β-catenin was effectively blocked by the cotransfection of N-cadherin or α-catenin (Fig. 11 A). Deletion of the β-catenin binding site on α-catenin (refer to Fig. 1, α-catenin Δβ) abolished the transactivation-inhibition capacity of this molecule (Fig. 11 A). Double immunofluorescence microscopy indicated that both molecules can drive β-catenin out of the nucleus in transfected SW480 cells (Fig. 11 B). In these cells, β-catenin was sequestered to the cytoplasm by α-catenin (Fig. 11 B, panels c and d) or to cell-cell junctions by the transfected N-cadherin (Fig. 11 B, panels a and b). The α-catenin mutant lacking the β-catenin binding site (Fig. 11 B, panels e and f) did not show this activity. The results suggest that the partners of β-catenin that are active in cell adhesion are effective antagonists of the nuclear localization of β-catenin and its function in transcriptional regulation.

Discussion
In mammalian cells, β-catenin and the closely related molecule plakoglobin (Butz et al., 1992; Knudsen and Wheelock, 1992; Peifer et al., 1992) have been shown to complex independently with similar partners, and both are involved in the formation of adherens-type junctions (Butz and Kemler, 1994; Hülsken et al., 1994; Nathke et al., 1994; Rubinfeld et al., 1995). Plakoglobin, in addition, can associate with various desmosomal components (Cowin et al., 1986; Schmidt et al., 1994; Chitaev et al., 1996; Kowalczyk et al., 1997), whereas β-catenin does not normally associate with desmosomes, except in plakoglobin-null mouse embryos, where the segregation between adherens junctions and desmosomes collapses (Bierkamp et al., 1996; Ruiz et al., 1996). Plakoglobin is also unable to substitute for β-catenin during development, since β-catenin-null mouse embryos die early in development (Haegel et al., 1995). In this study we highlight some common features of β-catenin and plakoglobin, as well as considerable differences in their nuclear translocation under various conditions and in their capacity to function in transcriptional activation. For both proteins, an increase in free protein levels induces nuclear translocation. This translocation can be blocked by junctional proteins that bind to β-catenin and sequester it to the plasma membrane or the cytoplasm.

Nuclear Translocation
Under the various conditions that resulted in increased levels of β-catenin and plakoglobin, both proteins translocated into the nucleus independently of, or in complex with, LEF-1. Whereas in β-catenin-overexpressing cells the nuclear complexes formed by excess β-catenin also contained vinculin in addition to LEF-1, plakoglobin overexpression did not result in the recruitment of these molecules into the nuclear speckles. Interestingly, α-catenin and α-actinin, both of which bind β-catenin- and plakoglobin-containing complexes (Knudsen et al., 1995; Huber et al., 1997; Nieset et al., 1997), were not cotranslocated into the nucleus by β-catenin or plakoglobin, probably due to their stronger binding to actin filaments, resulting in a limited soluble pool in cells, in contrast to vinculin, which is mostly in the detergent-soluble fraction.
Our study is the first demonstration that β-catenin can associate with, and recruit into the nucleus, vinculin, but not other components of the cadherin-catenin system (i.e., α-catenin, α-actinin, or cadherin), in a complex that also contains LEF-1. The association between vinculin and β-catenin was recently demonstrated by coimmunoprecipitation of these proteins together with E-cadherin, but was most pronounced in cells lacking α-catenin (Hazan et al., 1997). Taken together, these findings reveal a new interaction of β-catenin with vinculin that under certain conditions may lead to their colocalization in the nucleus, where such a complex may play an important physiological role yet to be determined. Although both plakoglobin and β-catenin exhibited a largely similar nuclear translocation, they were distinct in their ability to colocalize with LEF-1 in the nucleus. Endogenous β-catenin was readily translocated into the nucleus after transfection with LEF-1, in agreement with previous studies (Behrens et al., 1996; Huber et al., 1996b; Molenaar et al., 1996), whereas endogenous plakoglobin remained junctional. This difference may be attributed to the availability of a larger pool of soluble β-catenin in MDCK cells, or to an intrinsic difference between the two molecules in their binding to LEF-1. Plakoglobin and β-catenin also differed in their ability to influence the localization of endogenous LEF-1 when individually overexpressed. Plakoglobin overexpression could drive part of the endogenous β-catenin into the nucleus, most probably by displacing it from cadherin or other cytoplasmic partners, in agreement with results obtained with HT1080 cells (Salomon et al., 1997) and with Xenopus embryos (Miller and Moon, 1997). This implies that plakoglobin may have a regulatory role in the control of the extrajunctional function of β-catenin. In contrast, β-catenin was inefficient in altering plakoglobin's localization in MDCK, 293, and SK-BR-3 cells (all expressing desmosomes; our unpublished results). This was partly expected, since β-catenin is not normally associated with desmosomes and the soluble pool of plakoglobin in these cells is very low.

Figure 10. Activation of Gal4- and LEF-1-driven transcription by β-catenin and plakoglobin. (A) Constructs consisting of the DNA-binding domain of Gal4 (Gal4DBD) fused to the COOH-terminal transactivation domains of β-catenin and plakoglobin were cotransfected with a reporter gene (luciferase) driven by Gal4-responsive sequences into 3T3 cells, and the levels of luciferase activity were determined from duplicate transfections (black and white bars). (B) Transactivation of LEF-1 consensus sequence (TOPFLASH)-driven transcription by full-length and truncated β-catenin and plakoglobin in 293 cells. The values (fold increase) were normalized for transfection efficiency by analyzing the β-galactosidase activity of cotransfected lacZ, and for LEF-1 specificity with an inactive mutant LEF-1 sequence (FOPFLASH, white bars). (C) Double immunofluorescence for β-catenin (panels b, d, and f) in cells transfected with plakoglobin (a), β-catenin 1-ins (c), and plakoglobin 1-ins (e). Note that chimeras consisting of β-catenin and plakoglobin fused to Gal4DBD were both active in transcription stimulation, but LEF-1-responsive transactivation by β-catenin (and a β-catenin mutant) was much more potent than by full-length plakoglobin, and a COOH-deletion mutant of plakoglobin was inactive in LEF-1-driven transactivation. Full-length plakoglobin and the β-catenin mutant (β-cat 1-ins) were effective in translocating endogenous β-catenin into the nucleus, whereas the plakoglobin mutant (PG 1-ins) was not. Bar, 10 µm.

Plakoglobin and β-catenin also responded differently to inhibition of the ubiquitin-proteasome pathway, in particular in the renal carcinoma cell line KTCTL60, which does not express detectable levels of the proteins of the cadherin-catenin system (Simcha et al., 1996). The level of β-catenin could be dramatically induced in these cells with proteasome inhibitors, suggesting that efficient degradation is responsible for the very low level of β-catenin in these cells. Plakoglobin was absent from these cells since there was no plakoglobin RNA, but when it was stably expressed (Simcha et al., 1996), its level was only moderately enhanced by inhibitors of the ubiquitin-proteasome system. It is interesting to note that this stable expression of plakoglobin did not result in elevation of the β-catenin content of KTCTL60 cells, implying that plakoglobin cannot, by itself, effectively protect β-catenin from degradation in cells lacking cadherins. Only when the level of plakoglobin was further increased in these cells by butyrate treatment could some accumulation of β-catenin be detected (our unpublished results).

Figure 11. Inhibition of transactivation and nuclear accumulation of β-catenin in SW480 colon carcinoma cells after transfection with N-cadherin or α-catenin. (A) SW480 cells were transfected with either empty vector (pCGN), N-cadherin, α-catenin, or a mutant α-catenin lacking the β-catenin binding site (refer to Fig. 1, HA α-catenin Δβ), together with a multimeric LEF-1-binding consensus sequence driving the expression of luciferase. The values of luciferase expression were corrected for transactivation specificity with a mutant LEF-1 consensus sequence, and with β-galactosidase activity for transfection efficiency. (B) Cells were transfected with N-cadherin (panels a and b), α-catenin (panels c and d), or mutant α-catenin (α-CATΔβ) (panels e and f) and doubly stained for β-catenin (panels b, d, and f) and N-cadherin (panel a), α-catenin (panel c), and mutant α-catenin (panel e). Note the inhibition of transactivation and the cytoplasmic retention of β-catenin in cells transfected with N-cadherin or α-catenin, but not with mutant α-catenin. Bar, 10 µm.

Transactivation Capacity
Comparison between the presumptive transactivating domains of β-catenin and plakoglobin, fused to the DNA-binding domain of Gal4, indicated comparable transcriptional activation by the two molecules. This demonstrated that plakoglobin, like β-catenin and armadillo (van de Wetering et al., 1997), has a potent transactivation domain. However, the specific transcriptional activation of a LEF-1-driven reporter gene by plakoglobin was severalfold less efficient than that by β-catenin. Interestingly, a deletion mutant of β-catenin that lacked the transactivating domain but retained the cadherin-binding domain was also capable of inducing transcription from the LEF-1 consensus construct. This can be attributed to competition and displacement of endogenous β-catenin from a complex with its cytoplasmic partners, nuclear translocation of the endogenous β-catenin, and, consequently, LEF-1-driven transactivation.
This finding is in agreement with recent studies by Miller and Moon (1997), who found that a variety of membrane-anchored mutant forms of β-catenin can act in signaling for axis duplication in Xenopus embryos by releasing endogenous β-catenin from cell-cell junctions or from a complex with APC, thus enabling its translocation into the nucleus. Overexpression of full-length plakoglobin was capable of inducing nuclear translocation of the endogenous β-catenin in MDCK and 293 cells, whereas a COOH-terminal mutant of plakoglobin, previously shown to be inefficient in displacing β-catenin from its complex with cadherin (Sacco et al., 1995; Salomon et al., 1997), was also unable to induce nuclear localization of the endogenous β-catenin or transcriptional activation of the LEF-1-driven reporter. Since plakoglobin overexpression was inefficient in driving LEF-1 into the nuclear speckles formed by excess plakoglobin, it is conceivable that the majority of the stimulation of LEF-1-driven transcription in plakoglobin-overexpressing cells was due to the endogenous β-catenin that relocated to the nucleus under these conditions. Another recent study, which examined the embryonic signaling abilities of mammalian β-catenin and plakoglobin in Drosophila (rescue of the segment polarity phenotype of armadillo), suggested that although both proteins can rescue the adhesion properties of armadillo mutants, β-catenin had only weak, and plakoglobin no detectable, signaling activity (White et al., 1998). Nevertheless, since we found that the COOH terminus of plakoglobin is potent in transcriptional activation in the Gal4-fusion chimera, that deletion mutants of the COOH terminus were inefficient in transactivation, and that most of the overexpressed plakoglobin was localized in the nuclei of transfected cells, one cannot exclude the possibility that plakoglobin also plays a direct role in the transcriptional regulation of specific genes that are yet to be identified. This possibility is currently being examined by analyzing the transactivation capacities of plakoglobin and β-catenin in cells that lack these endogenous proteins. In human colon carcinoma SW480 cells, which lack APC and therefore accumulate abnormally high levels of β-catenin in the nucleus, transcriptional activation of the LEF-1-driven reporter could be inhibited by members of the cadherin-catenin complex that sequestered β-catenin to the cytoplasm. The role of cadherin in regulating β-catenin levels is complex. On the one hand, an elevation in cadherin content can protect β-catenin from degradation and increase its level. On the other hand, strong binding of β-catenin to cadherin, rather than to LEF-1, may result in its cytoplasmic sequestration and the inhibition of transactivation by it. Furthermore, the cytoplasmic tail of cadherin, which contains the binding site for β-catenin, can also inhibit transactivation by β-catenin even when bound to it in the nucleus of transfected cells (Sadot, E., M. Shtutman, I. Simcha, A. Ben-Ze'ev, and B. Geiger, unpublished results). The association of β-catenin with overexpressed α-catenin in SW480 cells also resulted in the cytoplasmic retention of nuclear β-catenin, through binding of α-catenin to the actin cytoskeleton. These results are in agreement with those obtained for β-catenin signaling in axis specification in developing Xenopus, which is antagonized by overexpression of cadherin (Heasman et al., 1994; Fagotto et al., 1996) or by the NH2 terminus of α-catenin (Sehgal et al., 1997).
These results may have important implications for the possible role of β-catenin in the regulation of tumorigenesis, since E-cadherin (Navarro et al., 1991; Vleminckx et al., 1991) and α-catenin (Bullions et al., 1997) were suggested to have tumor-suppressive effects when reexpressed in cells deficient in these proteins and were shown to affect the organization of cell-cell adhesion. In addition, modulation of vinculin and α-actinin levels in certain tumor cells was shown to influence the tumorigenic ability of these cells (Rodríguez Fernández et al., 1992; Glück et al., 1993) and to affect anchorage independence and tumorigenicity in 3T3 cells (Rodríguez Fernández et al., 1993; Glück and Ben-Ze'ev, 1994; Ben-Ze'ev, 1997). It is possible that such effects are attributable to the capacity of vinculin and α-actinin to bind β-catenin, thus affecting both its localization and its role in regulating transcription. We are grateful to our colleagues for sending reagents: R. Kemler, R.
Molecular bases of morphologically diffused tumors across multiple cancer types
Gastric cancer has two distinct subtypes: the diffuse (DGC) and the intestinal (IGC) subtypes. Morphologically, the former each consists of numerous scattered tiny tumors while the latter each has one or a few solid biomasses. The former tends to be more aggressive and takes place in younger patients than the latter. While these differences have long been documented, little is known about the underlying causes. Our hypothesis is that the level of sialic acid (SA) accumulation on the cancer cell surfaces is a key reason for the observed differences. Our transcriptomic data-based analyses provide evidence that (i) DGCs tend to deploy more SAs on cancer cell surfaces than IGCs; (ii) this gives rise to considerably stronger cell–cell electrostatic repulsion in DGCs due to the negative charge that each SA carries; and (iii) such repulsion drives stronger cell protrusion and metastasis. Similar observations as well as our transcriptomic data-based predictions hold for multiple other cancer types, namely breast, lung and prostate cancers, plus liver and thyroid cancers, each known to have diffuse-like vs. non-diffused subtypes with more aggressive behaviors, as with DGCs vs. IGCs. Hence, we speculate that the discovery presented here applies not only to gastric cancer but to multiple, and potentially even all, cancer types having diffuse-like and non-diffused subtypes.

Diffuse gastric cancer (DGC) and intestinal gastric cancer (IGC) are the two main gastric cancer subtypes [1]. Morphologically, DGC consists of numerous disjoint tiny tumors while IGC forms one or a few solid tumors (Supplementary Fig. S1) [1,2]. In-depth studies have discovered that DGCs generally have an ill-defined or missing glandular structure and reduced cell-cell adhesion, and may tend to consist of a large fraction of signet cells [3] compared to IGCs. They also tend to be more aggressive and occur in younger patients compared to IGCs [1]. A number of studies have been published regarding the genomic and transcriptomic differences between DGC and IGC [4,5]. However, no studies have established how these molecular-level differences are functionally linked to the distinct morphologies and aggressiveness of DGCs vs. IGCs. We have discovered, based on transcriptomic data analyses, that DGCs generally accumulate considerably more sialic acids (SAs) on the cancer cell surfaces compared to IGCs, which will result in stronger cell-cell electrostatic repulsion, since SAs each carry a negative charge, and hence on average larger cell-cell distances, similar to those among red blood cells. Statistical analyses reveal a strong association between the elevated expression levels of the relevant genes and reduced survival time. These lead to our main hypothesis: the level of SA accumulation on the cancer cell surfaces is a key factor that dictates the morphology and the aggressiveness of DGCs vs. IGCs. A literature search has revealed that multiple other cancer types also each consist of subtypes like the diffuse vs. non-diffused tumors of gastric cancer, namely scattered tiny tumors with ill-defined glandular structures, summarized in Supplementary Table S1. Our further analyses provided strong evidence that this hypothesis applies to these cancer types as well.
MORE SAS ON CELL SURFACES IMPLY STRONGER CELL-CELL REPULSION AND METASTATIC POTENTIAL

It has long been observed since the 1960s that overproduction of SAs, which are the capping molecules of cell-surface glycans, is associated with cancer metastasis in general [6]. Each SA carries a negative charge, and hence its over-deployment on the cell surfaces will lead to stronger electrostatic repulsion among neighboring cells, similar to red blood cells (RBCs) [7], which are known to deploy significantly more SAs on their surface than other cell types, preventing them from aggregation [8]. Hence, we have examined the expression levels of genes relevant to SA accumulation on the cell surfaces, namely the sialyltransferase (ST) genes for deploying SAs and the sialidase genes for degrading SAs. We note that DGC samples (Supplementary Table S2) generally have higher levels of ST gene expression and lower sialidase gene expression, suggesting that DGCs have more SAs accumulated on their cell surfaces than IGCs (Supplementary Table S3). This is supported by a quantitative analysis using the Michaelis-Menten kinetic model [7] (Supplementary material), which suggests that the SA accumulation rate, and hence the cell-cell repulsion, in DGCs is significantly higher than in IGCs (Fig. 1A and Supplementary material). Knowing that DGCs have higher death rates compared to IGCs [1], we then analyzed the levels of cell protrusion, a key migration-related activity [7] (Supplementary material), since metastasis is the predominant reason for cancer death, accounting for >93% of cancer-related deaths. Our analyses show that the protrusion level is considerably higher in DGCs than in IGCs (Fig. 1E). A linear regression analysis confirms that the increased protrusion activity can be statistically well explained by the expressions of the ST genes in DGCs (Fig. 1I and Supplementary material). This is further supported by the survival data associated with integrin genes in DGCs and IGCs (Supplementary Table S4).

YOUNGER DGC PATIENTS ARE DUE TO THE SPECIFIC GROWTH FACTOR USED

Our previous study suggests that more malignant cancers tend to happen in younger patients [9]. A cancer requires two types of factors to take place, one being the cancer risk factor in an organ, which goes up with age, and the other the availability level of the circulatory growth factors specifically needed by a cancer type, which decreases with age. Using the program developed in [9], we have predicted the growth factors specifically needed by DGCs and IGCs, respectively (Supplementary material), with PDGFC being the main growth factor needed by DGCs, and EREG and NRG2 the growth factors needed by IGCs (Supplementary Table S3 and Supplementary material), which is supported by published studies (Supplementary Table S5) and by an accurate regression analysis (Supplementary Fig. S3 and Supplementary material). The same was done for IGCs (Supplementary Fig. S3 and Supplementary material). Furthermore, the average level of the growth factor PDGFC drops more sharply from the population with age <60 years to that with age >60 years than those of EREG and NRG2, hence providing a natural explanation for the observation (Supplementary material).

COMPARATIVE ANALYSES ON OTHER CANCER TYPES

A literature search has revealed that multiple other cancer types also each have tumor subtypes similar to DGCs in terms of their morphology and aggressiveness, as summarized in Supplementary Table S1.
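To make the Michaelis-Menten-based comparison concrete, the sketch below computes a toy net SA accumulation rate from ST and sialidase transcript abundances and compares two groups. This is a minimal illustration, not the model in the Supplementary material: taking Vmax proportional to expression and the chosen Michaelis constants (`km_st`, `km_sial`) are assumptions, and all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def net_sa_rate(st_expr, sialidase_expr, s=1.0, km_st=0.5, km_sial=0.5):
    """Toy Michaelis-Menten estimate of net sialic-acid accumulation:
    deployment minus degradation, with Vmax taken proportional to
    transcript abundance (an assumption) and s the substrate level."""
    v_deploy = st_expr * s / (km_st + s)
    v_degrade = sialidase_expr * s / (km_sial + s)
    return v_deploy - v_degrade

# Hypothetical per-sample expression summaries (e.g., mean TPM over genes)
dgc = net_sa_rate(np.array([8.1, 7.4, 9.0]), np.array([1.2, 0.9, 1.5]))
igc = net_sa_rate(np.array([4.0, 3.6, 4.8]), np.array([2.1, 2.4, 1.9]))
# One-sided test: are DGC-like rates higher than IGC-like rates?
print(mannwhitneyu(dgc, igc, alternative="greater"))
```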
For each such cancer type, we call their DGC-like tumors diffuse-like tumors. Specifically, breast cancer (BC), prostate cancer (PC) and lung cancer (LC) each have a relatively large number of diffuse-like tumor samples with transcriptomic data in the public domain (Supplementary Table S2). The other cancer types recorded in Supplementary Table S1 have only very limited numbers of diffuse-like tumor samples, so we do not include them in the following analyses.

SA deployment

The same analyses were conducted on BC, PC and LC. Inflammatory breast cancer (IBC), PC with a Gleason score of ≥8 (DPC) and small-cell lung carcinoma (SCLC) are considered to be the diffuse-like subtypes of BC, PC and LC, respectively, while non-IBC (NIBC), PCs with a Gleason score of ≤6 (NDPC) and non-SCLC (NSCLC) are the corresponding non-diffused subtypes (Supplementary Table S1). Our analyses have revealed that for each of the three cancer types, its diffuse-like tumors, on average, have higher levels of ST gene expression (Supplementary Table S3). Michaelis-Menten kinetics-based calculations show that the SA accumulation rates and cell-cell repulsion in the three diffuse-like tumor groups are consistently higher than those of non-diffused tumors (Fig. 1B-D and Supplementary material). Furthermore, the level of the predicted cell-cell repulsion is consistent with the survival rate of each diffuse-like subtype (Supplementary Table S1).

Increased metastatic potential

Analyses were then conducted to estimate the level of cell protrusion in the diffuse-like vs. non-diffused BC, PC and LC tumors, respectively. Results comparable to those for DGCs vs. IGCs were obtained (Fig. 1F-H). In addition, the increased protrusion activities in diffuse-like tumors can be statistically well explained by the expressions of the SA synthesis gene and the ST genes for each of the three cancer types (Fig. 1J-L and Supplementary material). Based on these, we predict that elevated SA deployment plays important roles in the increased migration activities and metastatic potentials of the diffuse-like subtypes of all three cancer types, which is further supported by published studies (Supplementary Table S5).

SUMMARY

Our data analyses and computational modeling provide support for the hypothesis that the distinct levels of SA accumulation on cancer cell surfaces are the key reason for the different morphologies and aggressiveness of diffuse-like vs. non-diffused tumors of the same cancer types, hence providing new insights into an important and fundamental cancer biology question for the first time. The key novelty of our study lies in the fact that higher levels of SA accumulation, estimated based on transcriptomic data, coupled with a physics-based argument, provide a natural explanation for the possible causes of the distinct morphology and aggressiveness of diffuse-like vs. non-diffused tumors, plus the generality of this discovery. Clearly, this is a computation-based discovery. Further validation is needed by physically measuring the levels of cell-cell repulsion in diffuse-like vs. non-diffused tumors and establishing the detailed relationship between the repulsion levels and the tumor sizes.

SUPPLEMENTARY DATA

Supplementary data are available at NSR online.
Modified hybrid combination synchronization of chaotic fractional order systems
The paper investigates a new hybrid synchronization called modified hybrid synchronization (MHS) via the active control technique. Using the active control technique, stable controllers which enable the realization of the coexistence of complete synchronization, anti-synchronization and projective synchronization in four identical fractional order chaotic systems were derived. Numerical simulations were presented to confirm the effectiveness of the analytical technique.

Introduction

A chaotic system is one whose motion is sensitive to initial conditions [31]. Since different initial conditions lead to different trajectories for the same dynamical system, it is expected that the trajectories cannot coincide. The possibility of two chaotic systems with different trajectories following the same trajectory through the introduction of a control function, as proposed by [29], has been an interesting research area for scientists in nonlinear dynamics. This is partly due to its applicability in different fields such as communication technology, security, neuroscience, atmospheric physics and electronics. There are several methods for the synchronization of chaotic systems. These methods include active control, Open Plus Closed Loop (OPCL), backstepping, feedback control, adaptive control, sliding mode and others. A comparison of the performance of a modified active control method and backstepping control on synchronization of integer order systems has been investigated [23]. The active control method was found "to be simpler with more stable synchronization time and hence more suitable for practical implementation". The active control method was also found to have the best stability and convergence when compared with the direct method and the OPCL method for fractional order systems [21].

Generally, complete synchronization between a drive system $y_i$ and a response system $x_i$ is said to occur if $\lim_{t\to+\infty}\|y_i - x_i\| = 0$, and anti-synchronization if $\lim_{t\to+\infty}\|y_i + x_i\| = 0$. If the error term is such that $\lim_{t\to+\infty}\|y_i - \alpha x_i\| = 0$, where $\alpha$ is a positive integer, we have projective synchronization. According to [32], $\delta$ synchronization is defined by the error $\lim_{t\to+\infty}\|y_i \pm x_i\| \le \delta$, where $\delta$ is small. Other forms of synchronization include phase synchronization, anticipated synchronization, lag synchronization, etc. The possibility of one or more of these synchronization schemes coexisting in a single synchronization has not been explored.

The study of chaotic systems has evolved over time from integer order dynamical systems to cover partial differential equations, time delayed differential equations, fractional order differential equations and even time series data. The prevalence of integer order systems was due to the lack of solution methods for fractional differential equations [6] and their inherent complexity [9]. In the Grünwald-Letnikov definition, the fractional order derivative of order $\alpha$ can be written as [30]

$$ {}_aD_t^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{\lfloor (t-a)/h \rfloor} (-1)^j \binom{\alpha}{j} f(t - jh), $$

where the binomial coefficients can be written in terms of the Gamma function as

$$ \binom{\alpha}{j} = \frac{\Gamma(\alpha+1)}{\Gamma(j+1)\,\Gamma(\alpha-j+1)}. $$

The Riemann-Liouville definition of the fractional derivative is given as

$$ {}_aD_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n} \int_a^t \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, \qquad n-1 < \alpha < n. $$

The Caputo fractional derivative can be written as

$$ {}_a^C D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_a^t \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, \qquad n-1 < \alpha < n. $$

Fractional order systems have been found to be useful models in many engineering, physical and biological systems.
In the present work, we aim to investigate the possibility of the coexistence of different synchronization schemes in the synchronization of four chaotic systems (two drive and two response systems). Specifically, we aim to implement synchronization, anti-synchronization and projective synchronization on different dimensions in a fractional order combination synchronization using the method of active control. We believe that, if implemented, this will enable faster, more robust and more secure information transmission. To the best of our knowledge, this has not been reported in the literature.

System Description

The integer order Chen system was introduced by [4]. The fractional order chaotic Chen system was introduced by [13] as

$$ D^{q_1} x_1 = a(x_2 - x_1), \qquad D^{q_2} x_2 = (c-a)x_1 - x_1 x_3 + c x_2, \qquad D^{q_3} x_3 = x_1 x_2 - b x_3. $$

The system was found to be chaotic when $(a, b, c) = (35, 3, 28)$ and $0.7 \le \alpha \le 0.9$. However, by varying parameter a rather than parameter c as in [13], the system was found to be chaotic in the region 0.1 ≤ α ≤ 0.1 [17]. The phase space of the fractional order Chen system is shown in figure 1. Various successful attempts have been made at synchronization of the integer order, hyperchaotic, and fractional order Chen system [10,15,7,5].

Design and implementation of synchronization scheme

The coexistence of different synchronization schemes within the commensurate fractional order Chen system will be studied. Suitable controllers are designed (Section 3.1) and numerical simulations are presented in Section 3.2 to verify the proposed controllers.

3.1 Design of controllers

Let the two drive systems be defined as

$$ D^{q_1} x_1 = a(x_2 - x_1), \qquad D^{q_2} x_2 = (c-a)x_1 - x_1 x_3 + c x_2, \qquad D^{q_3} x_3 = x_1 x_2 - b x_3 \tag{6} $$

and

$$ D^{q_1} y_1 = a(y_2 - y_1), \qquad D^{q_2} y_2 = (c-a)y_1 - y_1 y_3 + c y_2, \qquad D^{q_3} y_3 = y_1 y_2 - b y_3. \tag{7} $$

The two response systems are defined as

$$ D^{q_1} z_1 = a(z_2 - z_1) + u_1, \qquad D^{q_2} z_2 = (c-a)z_1 - z_1 z_3 + c z_2 + u_2, \qquad D^{q_3} z_3 = z_1 z_2 - b z_3 + u_3 \tag{8} $$

and

$$ D^{q_1} w_1 = a(w_2 - w_1) + u_4, \qquad D^{q_2} w_2 = (c-a)w_1 - w_1 w_3 + c w_2 + u_5, \qquad D^{q_3} w_3 = w_1 w_2 - b w_3 + u_6, \tag{9} $$

where the six active control functions $u_1, u_2, u_3, u_4, u_5, u_6$ introduced in equations (8) and (9) are control functions to be determined. We define the error states $e_1$, $e_2$, $e_3$ in terms of the drive and response states (equation 10). Substituting the drive systems (equations 6 and 7) and response systems (equations 8 and 9) into equation 10 and assuming a commensurate system, the error system (equation 11) is obtained; its second component reads

$$ D^{\mu} e_2 = (a+c)e_1 + c e_2 + 2c(z_1 + w_1) - 2a(\cdots). $$

Active control inputs $u_i$ ($i = 1, 2, \ldots, 6$) are then defined (equation 12) in terms of functions $V_i$ that are to be obtained. Substituting equation 12 into equation 11 yields equation 13. The synchronization error system (equation 13) is a linear system with active control inputs $V_i$. We design an appropriate feedback control which stabilizes the system so that $e_i \to 0$ ($i = 1, 2, 3$) as $t \to \infty$, which implies that synchronization is achieved with the proposed feedback control. There are many possible choices for the control inputs; a standard one is

$$ \begin{pmatrix} V_1 \\ V_2 \\ V_3 \end{pmatrix} = C \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}, $$

where $C$ is a $3 \times 3$ constant matrix. In order to make the closed-loop system stable, matrix $C$ should be selected in such a way that the feedback system has eigenvalues $\lambda_i$ satisfying

$$ \det\big(\lambda I - (A + C)\big) = 0, \qquad \operatorname{Re}(\lambda_i) < 0, $$

where $\lambda$ is the eigenvalue, $I$ is an identity matrix and $A$ is the coefficient matrix of the error states. There are many possible choices for matrix $C$; it is chosen as in equation (16). Using equation (16) in (14), we obtain our control functions (equation 17). Based on the controllers obtained, two unique cases can be observed, and the control system is defined correspondingly for each case.

3.2 Numerical simulation of results

To verify the effectiveness of the synchronization scheme proposed in Section 3.1 using the method of active control, we used the initial conditions $x_i = (-10, 0.001, 37)$, $y_i = (37, -5, 0)$, $w_i = (-5, 0.5, 25)$ and $z_i = (10, -5, 15)$. The order of the system was taken as 0.95. A time step of 0.005 was used.
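As a concrete companion to this setup, the sketch below integrates a single fractional order Chen system with the Grünwald-Letnikov discretization recalled in the next paragraph (binomial-coefficient recurrence plus a full memory sum), using the stated parameters (order 0.95, step 0.005, the $x_i$ initial condition). It is a minimal sketch, not the authors' code: evaluating the right-hand side at the previous step (an explicit variant) and keeping the entire memory are implementation choices made here.

```python
import numpy as np

def gl_chen(q=0.95, h=0.005, n=4000, a=35.0, b=3.0, c=28.0,
            x0=(-10.0, 0.001, 37.0)):
    """Explicit Grunwald-Letnikov integration of the fractional Chen
    system D^q x = f(x). The memory sum is kept in full, so the whole
    run costs O(n^2); fine for a short demonstration."""
    cj = np.empty(n + 1)              # c_j^(q) via the standard recurrence
    cj[0] = 1.0
    for j in range(1, n + 1):
        cj[j] = cj[j - 1] * (1.0 - (1.0 + q) / j)

    x = np.zeros((n + 1, 3))
    x[0] = x0
    hq = h ** q
    for k in range(1, n + 1):
        x1, x2, x3 = x[k - 1]
        f = np.array([a * (x2 - x1),
                      (c - a) * x1 - x1 * x3 + c * x2,
                      x1 * x2 - b * x3])
        mem = cj[1:k + 1] @ x[k - 1::-1]   # sum_{j=1}^{k} c_j x_{k-j}
        x[k] = f * hq - mem
    return x

traj = gl_chen()
print(traj[-1])  # state after n steps
```

Synchronizing two such systems then amounts to running four copies of this loop with the control inputs $u_i$ added to the response right-hand sides.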
In the case of projective synchronization, the scaling parameter was taken to be 5. The parameters of the system are taken as $(a, b, c) = (35, 3, 28)$. According to [30], the general numerical solution of the fractional differential equation $D^q x(t) = f(x(t), t)$ can be expressed as

$$ x(t_k) = f\big(x(t_k), t_k\big)\, h^q - \sum_{j=1}^{k} c_j^{(q)} x(t_{k-j}), $$

where $c_j^{(q)}$ is given by the recurrence

$$ c_0^{(q)} = 1, \qquad c_j^{(q)} = \left(1 - \frac{1+q}{j}\right) c_{j-1}^{(q)}. $$

The results for the two cases considered are shown in figures 2 and 3. From the results presented, the drive and response systems were found to achieve synchronization, as indicated by the convergence of the error terms to zero. The effectiveness of the proposed scheme is hereby confirmed.

Conclusion

In this paper, a new synchronization scheme is proposed and implemented. The modified hybrid synchronization, which allows for the coexistence of different synchronization schemes, was implemented in a compound synchronization of the fractional order Chen system. In particular, the controllers realize the coexistence of complete synchronization, anti-synchronization, and projective synchronization. We believe that this type of synchronization will offer better security and more robustness. There is a need to investigate the performance of this type of synchronization using different synchronization schemes. Furthermore, it would be productive to study the behaviour of this scheme under different types and strengths of noise. Practical implementation of this scheme is also proposed.

Conflict of Interest

The authors hereby declare that there is no conflict of interest.
Evaluation of laboratory assays for anti‐platelet factor 4 antibodies after ChAdOx1 nCOV‐19 vaccination
Abstract

Introduction: Vaccine-induced immune thrombocytopenia and thrombosis (VITT) following the ChAdOx1 nCOV-19 vaccine has been described, associated with unusual site thrombosis, thrombocytopenia, raised D-dimer, and high-titer immunoglobulin-G (IgG) class anti-platelet factor 4 (PF4) antibodies. Enzyme-linked immunosorbent assays (ELISAs) have been shown to detect anti-PF4 in patients with VITT, but chemiluminescence assays do not reliably detect them. ELISA assays are not widely available in diagnostic laboratories, and, globally, very few laboratories perform platelet activation assays.

Methods: Assays that are commercially available in the United Kingdom were evaluated for their ability to identify anti-PF4 antibodies in samples from patients with suspected VITT. Four IgG-specific ELISAs, two polyspecific ELISAs, and four rapid assays were performed on samples from 43 patients with suspected VITT from across the United Kingdom. Cases were identified after referral to the UK Expert Haematology Panel multidisciplinary team and categorized into unlikely, possible, or probable VITT.

Results and Discussion: We demonstrated that the HemosIL AcuStar HIT-IgG, HemosIL HIT-Ab, Diamed PaGIA gel, and STic Expert assays have poor sensitivity for VITT in comparison to ELISA. Where these assays are used for heparin-induced thrombocytopenia (HIT) diagnosis, laboratories should ensure that requests for suspected VITT are clearly identified so that an ELISA is performed. No superiority of IgG-ELISAs over polyspecific ELISAs in sensitivity to VITT could be demonstrated. No single ELISA method detected all possible/probable VITT cases; if a single ELISA test is negative, a second ELISA or a platelet activation assay should be considered where there is strong clinical suspicion.

Authors have described using the Zymutest HIA IgG enzyme-linked immunosorbent assay (ELISA),4 the Lifecodes PF4 IgG ELISA,2,3 and the Asserachrom HPIA IgG ELISA3 to successfully detect anti-PF4 in patients with VITT, but have also reported that the HemosIL AcuStar HIT-IgG (PF4-H) chemiluminescence method does not reliably detect them.3 At the time of writing, there is a single case report of VITT with a negative anti-PF4 assay using an unidentified lateral flow device,5 and another where the results of anti-PF4 assays have not been reported.6

Essentials
• The performance of immunoassays for anti-PF4 antibodies in vaccine-induced immune thrombocytopenia and thrombosis (VITT) was assessed.
• Patients with possible and probable VITT were tested using eleven commercially available immunoassays.
• Rapid methods showed poor sensitivity for anti-PF4 antibodies in VITT.
• No single ELISA method detected all cases of VITT.

The polyspecific ELISAs were also performed on all 43 samples. These assays were Asserachrom HPIA (Stago UK Ltd, Theale, UK) and Lifecodes PF4 Enhanced (Immucor, Solihull, UK). For the HemosIL assays, the cutoff was defined by the manufacturer as 1.0 U/ml. For the remaining ELISAs, a kit-specific cutoff in relation to a kit reference plasma was used (Table 1). GraphPad Prism 9.1 (GraphPad Software, CA, USA) was used for statistical analysis of assay sensitivity and specificity. Table 1 (Results of anti-PF4 assays) shows the results for all assays; the results for patients who were categorized as possible or probable VITT are shown in Figure 1.
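Assay sensitivity and specificity against the expert-panel reference standard reduce to 2×2-table ratios; the sketch below shows the computation with Wilson 95% confidence intervals. The counts in the usage line are hypothetical, not the values reported in Table 2.

```python
from statsmodels.stats.proportion import proportion_confint

def assay_performance(tp, fn, tn, fp):
    """Sensitivity/specificity (with 95% Wilson CIs) of an anti-PF4 assay
    against the clinical reference (possible/probable VITT = positive)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
    spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
    return sens, sens_ci, spec, spec_ci

# Hypothetical counts for one assay over 43 samples (not study data)
print(assay_performance(tp=30, fn=6, tn=5, fp=2))
```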
RESULTS AND DISCUSSION

Of the 43 samples tested, 23 had ODs for all six ELISAs that were above the assay-specific cutoff (positive); all of these 23 samples were from patients with possible or probable VITT. Eight samples were positive by five of the six ELISAs. Seven of these had ODs below the assay-specific cutoff (negative) by AESKULISA HiT II, from six patients with probable VITT and one with possible VITT. One was negative by Asserachrom HPIA IgG, from a patient with probable VITT. Four samples had results for all six ELISAs that were negative, all from patients in whom VITT was unlikely; one of these was positive using the Diamed PaGIA gel (2+), and all the other rapid assays were negative. Comparing test results with the clinical phenotype as evaluated by the clinical expert group enabled calculation of assay sensitivity and specificity for VITT. These data are presented in Table 2. We have demonstrated that the HemosIL AcuStar HIT-IgG, HemosIL HIT-Ab, Diamed PaGIA gel, and STic Expert assays have poor sensitivity for VITT in comparison to ELISA.

All authors contributed to the review and revision of the manuscript.

CONFLICT OF INTEREST

All authors declare no relevant conflicts of interest.
Association of common genetic variation in the protein C pathway genes with clinical outcomes in acute respiratory distress syndrome
Background

Altered plasma levels of protein C, thrombomodulin, and the endothelial protein C receptor are associated with poor clinical outcomes in patients with acute respiratory distress syndrome (ARDS). We hypothesized that common variants in these genes would be associated with mortality as well as ventilator-free and organ failure-free days in patients with ARDS.

Methods

We genotyped linkage disequilibrium-based tag single-nucleotide polymorphisms in the protein C, thrombomodulin, and endothelial protein C receptor genes among 320 self-identified white patients of European ancestry from the ARDS Network Fluid and Catheter Treatment Trial. We then tested their association with mortality as well as ventilator-free and organ failure-free days.

Results

The GG genotype of rs1042580 (p = 0.02) and the CC genotype of rs3176123 (p = 0.002), both in the thrombomodulin gene, and the GC/CC genotypes of rs9574 (p = 0.04) in the endothelial protein C receptor gene were independently associated with increased mortality. An additive effect on mortality (p < 0.001), ventilator-free days (p = 0.01), and organ failure-free days was observed with combinations of these high-risk genotypes. This association was independent of age, severity of illness, presence or absence of sepsis, and treatment allocation.

Conclusions

Genetic variants in the thrombomodulin and endothelial protein C receptor genes are additively associated with mortality in ARDS. These findings suggest that genetic differences may be at least partially responsible for the observed associations between dysregulated coagulation and poor outcomes in ARDS.

Electronic supplementary material: The online version of this article (doi:10.1186/s13054-016-1330-5) contains supplementary material, which is available to authorized users.

Background

Acute respiratory distress syndrome (ARDS) is a common cause of respiratory failure characterized by acute pulmonary edema and lung inflammation [1]. ARDS occurs in both adults and children and has an incidence of approximately 200,000 patients per year in the United States. Estimates of mortality range from 18 % to 58 % [2,3]. The majority of deaths among patients with ARDS are attributed to multiorgan failure [1,4]. A number of experimental and human studies suggest that excessive activation of coagulation is associated with increased mortality in patients with ARDS [5-11]. Activated protein C (PC) is an endogenous regulator of coagulation that has both anticoagulant and antiinflammatory effects [12,13]. Protein C is activated by thrombin in the presence of thrombomodulin (TM), and membrane-bound endothelial protein C receptor (EPCR) potentiates this activation [14]. Human lung epithelial cells express protein C, EPCR, and TM, and the lung epithelium can actively modulate the protein C pathway [15]. Alterations in plasma levels of protein C, TM, and soluble EPCR are associated with increased mortality and greater severity of illness among patients with ARDS [13, 16-18]. It is unclear if these alterations and their associations with clinical outcomes are determined exclusively by environmental factors that precipitate ARDS, such as the virulence of the infection, the extent of aspiration, and the severity of shock, or whether they are also influenced by genetic variation.
Common genetic variation (e.g., polymorphisms with minor allele frequency >5 %) in the genes encoding PC, EPCR, and TM has been well characterized and is associated with adverse clinical outcomes in disorders such as sepsis and cardiovascular disease [19-29]. Since the genetics of diseases such as ARDS are likely complex and therefore likely to involve several low-penetrance loci [30,31], analyzing multiple variants of genes encoding components of the same physiological cascade may prove to be a more powerful approach than studies of single candidates [32-35]. We hypothesized that common genetic variations in the genes encoding protein C, EPCR, and TM are associated with adverse clinical outcomes in patients with ARDS. We also hypothesized that variants in each individual gene may each have a small effect and that a combination of these variants would be associated with adverse clinical outcomes.

Study population

The study population included subjects enrolled in the ARDS Network Fluid and Catheter Treatment Trial (FACTT) from whom DNA was available. FACTT was a multicenter trial in which researchers compared conservative and liberal strategies of fluid management using explicit protocols applied for 7 days in patients with ARDS [36,37]. Participants were also randomly assigned to receive either a pulmonary arterial catheter or a central venous catheter in a two-by-two factorial design [36]. All patients were ventilated using a lung-protective ventilation strategy. The primary outcome was mortality at 60 days [36,37]. As part of the primary enrollment in the FACTT trial, patients were also asked to co-enroll in an ancillary study designed to study the role of genetic biomarkers. Patients who consented to participate in the ancillary study had additional whole blood collected, from which DNA was extracted. DNA was extracted and made available for this study by the ARDS Network DNA repository in the Center for Human Genetics Research at Vanderbilt University (Nashville, TN, USA). DNA was available from 470 patients, 320 of whom were of self-identified white race of European ancestry. We limited the present analysis to white patients of European ancestry to avoid confounding due to population stratification. The institutional review boards of each participating hospital reviewed and approved the primary and ancillary studies. Written informed consent was obtained from participants or their legally authorized surrogates.

Outcome measures

The primary outcome measure was mortality at 60 days. The secondary outcome measures were (1) the number of ventilator-free days (VFDs) [38] and (2) the number of organ failure-free days during the first 28 days of hospitalization [36].

Single-nucleotide polymorphism selection and genotyping

To comprehensively characterize the common genetic variation in these genes, we genotyped linkage disequilibrium (LD)-based single-nucleotide polymorphisms (tag SNPs) in the genomic region and 2000 bp upstream and downstream of the PC, EPCR, and TM genes. We used the resequencing data on these genes available from the Seattle SNPs website (http://snp.gs.washington.edu/SeattleSeqAnnotation144/) and selected tag SNPs using MULTIPOP software [39] with minimum allele frequency set at 5 % and r² set at 0.8.
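Tag-SNP selection of this kind can be illustrated with a small greedy procedure over pairwise r². The sketch below is a simplified stand-in for the MULTIPOP software actually used, not a reimplementation of it: the 0/1/2 genotype encoding, the greedy strategy, and squared genotype correlation as the LD measure are all simplifying assumptions, and the input is assumed to contain only polymorphic SNPs.

```python
import numpy as np

def tag_snps(genotypes, maf_min=0.05, r2_thresh=0.8):
    """Greedy LD-based tag-SNP selection (illustrative only).
    `genotypes`: (n_samples, n_snps) array of minor-allele counts 0/1/2,
    assumed polymorphic. Returns indices of selected tag SNPs."""
    g = np.asarray(genotypes, dtype=float)
    maf = g.mean(axis=0) / 2.0
    keep = [i for i in range(g.shape[1]) if maf_min <= maf[i] <= 1 - maf_min]
    # r^2 approximated by squared genotype correlation (composite LD)
    r2 = np.corrcoef(g[:, keep], rowvar=False) ** 2
    tags, covered = [], set()
    for i in range(len(keep)):
        if i in covered:
            continue
        tags.append(keep[i])  # this SNP tags everything it is in LD with
        covered.update(j for j in range(len(keep)) if r2[i, j] >= r2_thresh)
    return tags
```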
SNPs were genotyped using three commercially available technologies according to the manufacturers' instructions: Illumina Golden Gate 384-plex (Illumina, San Diego, CA, USA), GenomeLab SNPstream 48-plex (Beckman Coulter, Brea, CA, USA), and template-directed primer extension with fluorescence polarization detection using the AcycloPrime II kit (PerkinElmer, Waltham, MA, USA) [40]. Primer sequences are available upon request. Samples were arrayed on 96-well plates with negative and positive controls (duplicates) on each plate. Investigators blinded to the clinical status scored the genotypes.

Data analysis

Nonpolymorphic SNPs and SNPs with minor allele frequency <5 % were removed from the analysis. All analyzed SNPs were tested and found to be in Hardy-Weinberg equilibrium. We assessed genotypic effects at single SNP loci in each of the three genes using the χ² test to compare the associations of genotypes with 60-day mortality. In the univariate analysis of individual SNP genotypes, additive, dominant, and recessive models were considered. Next, we used multivariate logistic regression, incorporating the associated SNPs in a single multivariate model, to assess whether each SNP was independently associated with mortality when tested with other SNPs in the same gene. We included age, presence of nonpulmonary sepsis, fluid management strategy, and Acute Physiology and Chronic Health Evaluation (APACHE) score as covariates in the model because of their previously reported association with clinical outcomes in patients with ARDS. We corrected for multiple comparisons for the multiple SNPs genotyped within each gene by using multiple permutations as implemented in PLINK [41]. SNPs from each gene with an independent effect on mortality were retested in a logistic regression model with the covariates mentioned above and all SNPs with a statistically significant association with outcome. We also conducted an exploratory analysis testing for a haplotype effect. Haplotype frequencies in each gene were estimated from unphased genotype data using the PHASE algorithm in Haploview, and the association of the imputed haplotypes with mortality was assessed using a case-control approach as implemented in Haploview [42,43]. We examined the joint effect of a combination of the "high-risk genotypes" associated with mortality on clinical outcomes. For the purpose of this analysis, we defined the following genotypes with an independent effect on mortality as high-risk genotypes:

1. CC genotype of the rs3176123 SNP in the TM gene
2. GG genotype of the rs1042580 SNP in the TM gene
3. GC/GG genotypes of the rs9574 SNP in the EPCR gene

The association of combinations of genotypes with outcomes was assessed using the Cochran-Armitage trend test to compare clinical outcomes across categories of patients stratified by the number of high-risk genotypes possessed by each individual. We used regression models to adjust for age, severity of illness (APACHE), presence of sepsis, and allocation to treatment arm. In addition, given the a priori hypothesis that all three genes in the protein C pathway (i.e., protein C, EPCR, and TM) would have an effect on mortality in patients with ARDS, we conducted an additional analysis with the rs1799810 SNP of the protein C gene included in the combined model. All analyses were carried out using Stata 9 (StataCorp, College Station, TX, USA), PLINK [41], and Haploview [42,43] software.
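For the trend analysis named above, a minimal Cochran-Armitage implementation looks like the following. It is a sketch under the usual equal-spaced-scores assumption; the counts in the usage line are hypothetical, and the study itself used Stata/PLINK rather than custom code.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(events, totals, scores=None):
    """Cochran-Armitage trend test across ordered groups (here, number of
    high-risk genotypes). events[i]/totals[i] = deaths/patients in group i;
    default scores are 0, 1, 2, ..."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    s = np.arange(len(events)) if scores is None else np.asarray(scores, float)
    n, p = totals.sum(), events.sum() / totals.sum()
    t_stat = np.sum(s * (events - totals * p))       # trend statistic
    s_bar = np.sum(s * totals) / n
    var = p * (1 - p) * np.sum(totals * (s - s_bar) ** 2)
    z = t_stat / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))                    # two-sided p-value

# Hypothetical strata: 0, 1, 2, 3 high-risk genotypes
print(cochran_armitage_trend(events=[2, 6, 10, 5], totals=[120, 110, 70, 20]))
```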
On the basis of an assumption of minimum allele frequency of 5 % and a dominant model, the sample size of 320 patients has a power of 80 % to detect an increase in mortality by a relative ratio of 2.1 or greater at p < 0.05.

Results

The baseline characteristics of the self-identified white patients of European ancestry enrolled in the FACTT trial, stratified by the 320 for whom DNA was available and the 321 for whom DNA was not available, are depicted in Table 1. The baseline characteristics of the study population are similar to those of the white patients of European ancestry in the FACTT trial for whom DNA was not available. Genotype frequencies of the assayed SNPs in the protein C, EPCR, and TM genes, and the frequencies stratified by mortality at 60 days, are depicted in Tables 2, 3 and 4. In the EPCR gene, the GC/GG genotypes of the rs9574 SNP were associated with higher mortality (24 % vs. 11 %) compared with the CC genotype (p = 0.04 after correction for multiple SNPs tested in the EPCR gene). Two other SNPs in the EPCR gene, rs2069952 and rs2069948, which were highly correlated with rs9574 (r² = 0.95), were also associated with mortality at 60 days; however, when tested in a regression model that included these two SNPs and rs9574, only rs9574 retained the association with mortality, suggesting that their effect was not independent of the effect of the rs9574 SNP. The association of the rs9574 SNP with mortality was independent of age, severity of illness, sepsis as the primary cause of ARDS, and the fluid management arm (Table 5). Two SNPs in the TM gene were independently associated with mortality at 60 days. The CC genotype of the rs3176123 SNP was associated with higher mortality (57 % vs. 20 %) compared with the AC/AA genotypes (p = 0.002 corrected for multiple SNPs tested). The GG genotype of the rs1042580 SNP was also associated with higher mortality (35 % vs. 19 %) compared with the AG/AA genotypes (p = 0.02 corrected for multiple SNPs tested). These two SNPs had limited correlation with each other (r² = 0.14), and both SNPs were independently associated with mortality when analyzed in a joint logistic regression model including the two SNPs and age, presence of non-pulmonary sepsis, APACHE score, and allocation to fluid management strategy (Table 5). The association of the rs3176123 and rs1042580 SNPs with mortality was independent of age, severity of illness, sepsis, and the fluid management arm (Table 5). Of the protein C SNPs analyzed, the AA genotype of the rs1799810 SNP showed a trend toward increased mortality (26 % vs. 19 %) compared with the TT/AT genotypes, but this trend was not statistically significant (p = 0.18). We tested the effect of a combination of these unfavorable SNPs on mortality at 60 days in a joint logistic regression model. On the basis of their biological interaction in a common pathway, we hypothesized that multiple unfavorable SNPs would have a more significant impact on the protein C pathway and therefore would be associated with increased mortality. The CC genotype of the rs3176123 SNP and the GG genotype of the rs1042580 SNP in the TM gene, and the GC/CC genotypes of the rs9574 SNP in the EPCR gene, were independently associated with mortality at 60 days.
This association was independent of age, severity of illness, sepsis, and the fluid management arm (Table 5). We also examined mortality at 60 days among patients stratified by the number of high-risk genotypes possessed by each individual. There was an increase in mortality at 60 days with the presence of each additional high-risk genotype (Fig. 1). The association was independent of age, severity of illness, sepsis, and the fluid management arm. The number of VFDs was also compared among patients stratified by the number of high-risk genotypes carried by each individual. There was a decrease in the number of VFDs with the presence of each additional high-risk genotype (p = 0.01) (Fig. 2). This association was also independent of age, severity of illness, sepsis, and the fluid management arm. The number of coagulation, renal, cardiovascular, and central nervous system organ failure-free days was compared among patients stratified by the number of high-risk genotypes carried by each individual. There was also a decrease in the number of coagulation, renal, cardiovascular, and central nervous system organ failure-free days with the presence of each additional high-risk genotype (Fig. 3). Given our a priori hypothesis that genetic heterogeneity in all three genes (protein C, EPCR, and TM) in this pathway would have an effect on mortality in patients with ARDS, we conducted an additional analysis with four SNPs in the model: the three SNPs that were independently associated with mortality at 60 days and a fourth, the rs1799810 SNP in the protein C gene, which had a trend toward increased mortality that did not reach statistical significance. Those with greater numbers of high-risk SNPs had fewer VFDs and fewer organ failure-free days (see Additional file 1). We did not find any joint haplotype effect on either the primary or secondary outcomes in any of these three genes.

Thrombomodulin SNPs and plasma levels

Plasma levels of soluble TM were higher among individuals carrying the GG genotype of the rs1042580 SNP (median 106 ng/ml, interquartile range 75-187 ng/ml) as compared with individuals with the AG and AA genotypes (median 90 ng/ml, interquartile range 59-141 ng/ml) (p < 0.05). The plasma TM levels among individuals with the CC genotype of the rs3176123 SNP (median 94 ng/ml, interquartile range 53-148 ng/ml) were not statistically significantly different from those of individuals carrying the AC and AA genotypes (median 93 ng/ml, interquartile range 61-149 ng/ml) (p = 0.85).

EPCR SNPs and plasma levels

Plasma levels of soluble EPCR were lower among individuals carrying the GG genotype of the rs9574 SNP (median 66.1 ng/ml, interquartile range 45-104 ng/ml) as compared with individuals with the AG and AA genotypes (median 88.5 ng/ml, interquartile range 56-133 ng/ml) (p < 0.01). To test whether the effect of the rs1042580 SNP was mediated via plasma TM levels, we first tested the association of rs1042580 with mortality in a logistic regression model and then added plasma TM levels to the regression model. With the addition of plasma TM to the regression model, the odds of mortality decreased slightly from 2.32 (95 % CI 1.06-5.1, p < 0.04) to 2.15 (95 % CI 0.93-4.9, p = 0.07), suggesting that plasma TM levels account for at most a minor part of the effect of the SNP on mortality.
To test whether the effect of the rs9574 SNP was mediated via plasma soluble endothelial protein C receptor (sEPCR) levels, we tested the association of rs9574 with mortality in a logistic regression model and then added plasma sEPCR as a mediator in the regression model. With the addition of plasma sEPCR to the regression model, the odds of mortality decreased from 2.53 (95 % CI …).

Discussion

The results of this study indicate that common genetic variations in the protein C pathway are associated with adverse clinical outcomes in adult patients with ARDS. The GC/CC genotypes of the rs9574 SNP in the EPCR gene, as well as the GG genotype of the rs1042580 SNP and the CC genotype of the rs3176123 SNP, both in the TM gene, were independently associated with mortality in ARDS. We also found that a combination of these high-risk genotypes had an additive effect on mortality, ventilator-free days, and organ failure-free days in ARDS. However, the effect of the SNPs was not mediated via their effect on the levels of soluble TM or EPCR in plasma, suggesting that the SNPs may have their effect through other mechanisms related to altered functioning of these molecules. This finding has major implications for the current understanding of the pathogenesis of ARDS. Abnormalities of the coagulation pathway and its regulatory proteins, and the association of these abnormalities with clinical outcomes in patients with ARDS, have been described previously [6-11]. However, these abnormalities were thought to be due largely to environmental factors (e.g., severity of illness, virulence of organisms, severity of lung injury). The present findings suggest that genetic susceptibility may contribute to this dysregulated coagulation and the poor clinical outcomes in patients with ARDS. Both intraalveolar and systemic coagulation are activated in patients with ARDS [8-10,44]. Although intraalveolar fibrin deposition may have beneficial effects on gas exchange by sealing leakage sites and compartmentalizing infection, excessive fibrin deposition can be harmful because it can activate neutrophils and fibroblasts, compromise endothelial integrity, contribute to a loss of surfactant activity, decrease alveolar fluid clearance, and induce thrombotic obstruction of the microcirculation [6,7]. The injury to the pulmonary microcirculation via inflammatory and thrombotic mechanisms may contribute to the increase in the pulmonary dead space fraction that is an independent predictor of mortality in ARDS [45]. In addition, systemic activation of coagulation may also contribute to hypercoagulability and the development of multiorgan failure with widespread microvascular thrombus formation [1,4,17]. Activated protein C is an endogenous regulator of coagulation that has both anticoagulant and antiinflammatory effects. Alterations in plasma levels of protein C, EPCR, or TM that may contribute to decreased availability of activated protein C are associated with adverse clinical outcomes in patients with ARDS [16,17]. In the present study, we examined the association of genetic variation in these protein C pathway genes with clinical outcomes in patients with ARDS.
The results therefore add to the significance of the previously reported associations of alterations in protein C pathway protein levels with adverse clinical outcomes, and they suggest that genetic predisposition may play a role in these previously reported associations between abnormalities in protein C pathway proteins and clinical outcomes in ARDS. Although we chose a hypothesis-free approach within the candidate genes, the SNPs that were associated with adverse clinical outcomes in this study have all previously been reported (directly or indirectly via a tightly linked SNP) to have an association with protein levels and/or clinical outcomes in other conditions. The rs3176123 SNP in the TM gene is in tight LD with the rs1042579 SNP (r² = 1), a coding region nonsynonymous SNP. The minor allele of this SNP is associated with increased incidence of venous thrombosis [46,47]. In the present study, the minor allele homozygotes had increased mortality. The GG genotype of the rs1042580 SNP in the TM gene, which was associated with increased mortality in this study, has been associated with increased cardiovascular disease in females in combination with factor V Leiden in previous studies [32]. In our present study, the rs1042580 SNP was associated with variation in the soluble TM levels in plasma, but that explained only a minor part of the overall effect of the SNP on mortality, suggesting that the SNP may have its effect through other mechanisms, including the possibility of altering the functional activity of TM. Finally, in a previous study, the haplotype tagged by the C allele of the rs9574 SNP in the EPCR gene was associated with increased levels of activated protein C and reduced risk of venous thromboembolism [21,22]. In the present study, we found that the G allele at this locus is associated with increased mortality. The rs9574 SNP was associated with variation in the soluble EPCR levels in plasma, but that variation explained only a part of the overall effect of the SNP on mortality, suggesting that the SNP may have its effect through other mechanisms, including the possibility that the SNP may have an additional effect on the functional activity of EPCR. A novel feature of this study is the combined analysis of the three genes. In a complex illness such as ARDS, the impact of genetic factors is likely to be determined by several variants of small effect size and their possible interactions. When acting together, these gene variants may affect the disease outcomes more profoundly than do the single predisposing variants [32]. An analysis comprising several genes that belong to the same pathway may reveal cumulative allelic effects (additive or multiplicative) as compared with single SNPs, which individually may have only a modest effect or no measurable impact on clinical outcomes. The combined analysis of these three proteins is based on the hypothesis that the three proteins of the protein C pathway act together to generate the final product (i.e., activated protein C), which might be the biologically active product responsible for the clinical effect. Given our a priori hypothesis that all three genes in this pathway would have an effect on mortality in patients with ARDS, we carried out an additional exploratory analysis using the rs1799810 SNP from the protein C gene, based on the biological role of protein C in the common pathway, even though this SNP individually showed only a trend toward association with increased mortality.
Interestingly, the combination of high-risk genotypes from all three genes in the pathway was associated with mortality and adverse clinical outcomes upon addition of the protein C SNP (which was not independently associated with mortality) to the model. There are several strengths of our study. These include the well-characterized clinical phenotype and the multicenter cohort of patients from a large, well-designed clinical trial. The previously reported associations of the protein levels of the coagulation genes chosen for this study with clinical outcomes provide biological plausibility for the candidate genes chosen. All the SNPs associated with poor outcomes in ARDS in our study (or a tightly linked SNP) have been previously reported to affect protein levels and clinical outcomes in other populations [19-21, 26, 32, 48]. This finding further supports the biological plausibility and increases the prior probability for the reported associations. This point is important because genetic association studies with high prior probability are likely to have a low probability of reporting false-positive associations [49]. Finally, we found not only an association of genotypes with mortality but also a strong association with multiple-organ failure-free days. This association is significant because multiorgan failure is the most common attributable cause of mortality among patients with ARDS [1,4]. This could be related to the fact that multiorgan failure and death are associated with each other; alternatively, the association of the high-risk genotypes with the number of organ failure-free days may suggest that multiorgan failure is an intermediate phenotype, which in turn leads to the higher mortality associated with the high-risk genotypes. This study also has some limitations. First, the results were obtained only in self-identified white patients of European ancestry. Second, we did not correct for study-wide multiple comparisons. Since our hypothesis was based on the biological plausibility of the three genes in the protein C pathway and previously reported associations with protein levels, we corrected for multiple comparisons for the multiple SNPs tested within each gene rather than for study-wide multiple comparisons, as is the case with hypothesis-free genome-wide association studies. Therefore, it is unlikely that these associations were due to chance alone. However, as is true for all genetic epidemiology studies, these findings need to be tested and validated in other patient cohorts.

Conclusions

On the basis of this cohort, the GC/GG genotypes of the rs9574 SNP in the EPCR gene, as well as the CC genotype of the rs3176123 SNP and the GG genotype of the rs1042580 SNP in the TM gene, were associated with increased mortality in adults with ARDS. A combination of these genotypes had an additive effect on mortality, ventilator-free days, and organ failure-free days in patients with ARDS. These findings suggest that genetic differences may be at least partially responsible for the observed associations between dysregulated coagulation and poor outcomes in patients with ARDS. If confirmed, these findings may support the potential value of testing targeted therapies for ARDS in genetically predisposed patients.

Additional file 1: Figure S1. There is a stepwise increase in mortality with increasing number of high-risk genotypes (p < 0.001).
Figure S2. Patients are stratified on the basis of the number of high-risk genotypes (four-SNP model) each one carries, and the height of the bars represents the number of ventilator-free days in each group. There is a stepwise decrease in the number of ventilator-free days with increasing number of high-risk genotypes (p = 0.01). Figure S3. Patients are stratified on the basis of the number of high-risk genotypes (four-SNP model) each individual possesses. Results are shown by organ system (i.e., coagulation [Coag], renal, cardiovascular [Cardio], and central nervous system [CNS]). The y-axis represents the number of organ failure-free days. There is a stepwise decrease in the number of organ failure-free days with increasing number of high-risk genotypes in all four organ systems (p values for each system are reported in parentheses along the x-axis). Table S1. Characteristics of protein C tag SNPs.

Abbreviations

APACHE: Acute Physiology and Chronic Health Evaluation; ARDS: acute respiratory distress syndrome; CNS: central nervous system; EPCR: endothelial protein C receptor; FACTT: Fluid and Catheter Treatment Trial; FiO2: fraction of inspired oxygen; LD: linkage disequilibrium; PaO2: partial pressure of arterial oxygen; PC: protein C; PEEP: positive end-expiratory pressure; PIP: peak inspiratory pressure; SNP: single-nucleotide polymorphism; THBD: thrombomodulin gene; TM: thrombomodulin; VFD: ventilator-free day.

Competing interests

The authors declare that they have no competing interests.
Coagulation markers and echocardiography predict atrial fibrillation, malignancy or recurrent stroke after cryptogenic stroke
Abstract

We evaluated the utility of the left atrial volume index (LAVI) and markers of coagulation and hemostatic activation (MOCHA) in cryptogenic stroke (CS) patients to identify those more likely to have a subsequent diagnosis of atrial fibrillation (AF), malignancy or recurrent stroke during follow-up. Consecutive CS patients who met embolic stroke of undetermined source (ESUS) criteria and underwent transthoracic echocardiography and outpatient cardiac monitoring following stroke were identified from the Emory cardiac registry. In a subset of consecutive patients, D-dimer, prothrombin fragment 1.2, thrombin-antithrombin complex and fibrin monomer (the MOCHA panel) were obtained ≥2 weeks post-stroke and repeated ≥4 weeks later if abnormal; an abnormal MOCHA panel was defined as ≥2 elevated markers which did not normalize when repeated. We assessed the predictive abilities of LAVI and the MOCHA panel to identify patients with a subsequent diagnosis of AF, malignancy, recurrent stroke or the composite outcome during follow-up. Of 94 CS patients (mean age 64 ± 15 years, 54% female, 63% non-white, mean follow-up 1.4 ± 0.8 years) who underwent prolonged cardiac monitoring, 15 (16%) had new AF. Severe LA enlargement (vs normal) was associated with AF (P < .06). In 42 CS patients with MOCHA panel testing (mean follow-up 1.1 ± 0.6 years), 14 (33%) had the composite outcome and all had abnormal MOCHA. ROC analysis showed that LAVI and abnormal MOCHA together outperformed either test alone, with good predictive ability for the composite outcome (AUC 0.84). We report the novel use of the MOCHA panel in CS patients to identify a subgroup of patients more likely to have occult AF, occult malignancy or recurrent stroke during follow-up. A normal MOCHA panel identified a subgroup of CS patients at low risk for recurrent stroke on antiplatelet therapy. Further study is warranted to evaluate whether the combination of an elevated LAVI and an abnormal MOCHA panel identifies a subgroup of CS patients who may benefit from early anticoagulation for secondary stroke prevention.

Introduction

Of the 87% of strokes that are ischemic in origin, 30% to 40% are classified as cryptogenic [1]. In the absence of a clear cause, current American Heart Association/American Stroke Association guidelines recommend the combination of antiplatelet therapy and risk factor modification, given that prior studies have shown no benefit to anticoagulation [2]. However, recent studies suggest that cryptogenic stroke (CS) patients may have thromboembolic causes, including occult atrial fibrillation (AF) and occult malignancies, and an estimated recurrent stroke rate of 4% per year despite antiplatelet therapy [1,3]. Left atrial structural abnormalities, including enlarged left atrial size, have been associated with patients more likely to have occult AF; however, they are limited in identifying other causes of CS [4,5]. Markers of coagulation and hemostatic activation (MOCHA) have previously been shown to increase in patients with AF, cancer or cardioembolic stroke; however, there are limited data on their use in CS patients [6-10]. The objective of our study was to evaluate left atrial size and the MOCHA panel in their ability to identify a subgroup of CS patients who are more likely to have subsequent detection of occult AF, occult malignancy or recurrent stroke.
Participants Consecutive CS patients meeting embolic stroke of undetermined source (ESUS) criteria [11] seen in the Emory Clinic from January 1, 2015 to December 31, 2016 were included in this analysis if they were ≥18 years of age and completed prolonged outpatient cardiac monitoring with either 30-day mobile cardiac outpatient telemetry (MCOT) and/or an implantable loop recorder (ILR) (Reveal LINQ, Medtronic, Minneapolis, MN) from the Emory cardiac registry. Briefly, all patients underwent brain imaging with CT or MRI demonstrating a non-lacunar brain infarct; extra- and intracranial arterial stenosis or occlusion due to atherosclerosis, vasculitis, or dissection was excluded; and a documented cardioembolic source was excluded by 12-lead ECG, cardiac monitoring for ≥24 h with automated rhythm detection, and echocardiography. Beginning January 1, 2016 we initiated the MOCHA panel as part of our CS workup, measuring serum levels of d-dimer (reference value <500 ng/mL), prothrombin fragment 1.2 (reference value 65-288 pmol/L), thrombin-antithrombin complex (reference value 1.0-5.5 mcg/L) and fibrin monomer (reference value <7 mcg/mL) ≥2 weeks after stroke onset. If any of the initial 4 markers was elevated, the panel was repeated ≥4 weeks after initial testing to determine whether there was persistent elevation or normalization. For this analysis we excluded patients on anticoagulation therapy at the time of MOCHA testing and patients with a history of venous thromboembolism. Echocardiography Standard 2-dimensional and Doppler transthoracic echocardiography (TTE) was performed on a GE Vivid 7 or E9 (General Electric, Milwaukee, WI) or Philips IE 33 (Philips, Andover, MA). We evaluated LA echocardiographic parameters obtained by TTE, including left atrial volume index (LAVI) and left atrial diameter. A bubble study was performed to evaluate for the presence of a patent foramen ovale, which was considered positive if seen on TTE or transesophageal echocardiography. All echocardiographic imaging was reviewed by a board-certified cardiologist. Measurement of plasma concentrations of MOCHA markers All assays were done using 3.2% citrated plasma. Plasma d-dimer levels were measured using a high-sensitivity latex d-dimer assay (Instrumentation Laboratories, Bedford, MA). Prothrombin fragment 1.2 and thrombin-antithrombin complexes were both measured using the Enzygnost ELISA kit (Siemens Healthcare, Tarrytown, NY). Soluble fibrin monomer was measured using a latex immunoassay (Stago, Parsippany, NJ). Patient monitoring and follow-up Outpatient follow-up after hospitalization was performed according to our CS algorithm (Fig. 1) and included appropriate cancer screenings as suggested by the US Preventive Services Task Force (USPSTF) [12]; cardiac monitoring reports were reviewed for evidence of new AF, and history and neurological examination were obtained to identify potential signs of new stroke. All diagnoses were verified by specialists: a board-certified cardiac electrophysiologist for AF, a board-certified oncologist for malignancy and a board-certified neurologist for stroke. Standard protocol approvals, registrations, and patient consents This study was approved by the Emory Institutional Review Board. Statistical analysis This is a retrospective analysis of prospectively collected data. Baseline characteristics and vascular risk factors were compared between those who underwent MOCHA panel testing and those who did not. 
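As a concrete illustration of the testing protocol above, the following minimal Python sketch (ours, not the authors' code; all function and field names are hypothetical) applies the stated reference values and the repeat-testing rule used to define an abnormal panel:

```python
# Illustrative sketch of the MOCHA flagging rule described above.
# Reference upper limits are taken from the Methods; names are hypothetical.
from typing import Dict, Optional

REFERENCE_UPPER = {
    "d_dimer": 500.0,        # ng/mL
    "pf_1_2": 288.0,         # pmol/L
    "tat": 5.5,              # mcg/L
    "fibrin_monomer": 7.0,   # mcg/mL
}

def n_elevated(panel: Dict[str, float]) -> int:
    """Count markers above their upper reference limit."""
    return sum(panel[m] > limit for m, limit in REFERENCE_UPPER.items())

def abnormal_mocha(initial: Dict[str, float],
                   repeat: Optional[Dict[str, float]]) -> bool:
    """Abnormal = >=2 elevated markers that do not normalize on repeat testing."""
    if n_elevated(initial) == 0:
        return False  # nothing elevated, no repeat testing triggered
    if repeat is None:
        return False  # per protocol, any elevation triggers repeat testing
    return n_elevated(repeat) >= 2

initial = {"d_dimer": 620.0, "pf_1_2": 310.0, "tat": 4.1, "fibrin_monomer": 5.0}
repeat = {"d_dimer": 710.0, "pf_1_2": 305.0, "tat": 4.3, "fibrin_monomer": 6.0}
print(abnormal_mocha(initial, repeat))  # True: two markers remain elevated
```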
All continuous variables were assessed for normality of distribution; specifically, if the Shapiro-Wilk test P-value was <.05, medians and IQRs were reported and non-parametric statistical tests were performed. For pairwise non-parametric comparisons, the Mann-Whitney U test was used. For comparisons of >2 groups, the Kruskal-Wallis test was used, with post-hoc pairwise comparisons using Bonferroni correction. Two-sample t tests were used for continuous variables and the Chi-square test (or Fisher exact test) was used for categorical variables. A univariable analysis was performed to identify baseline characteristics and echocardiographic parameters associated with newly diagnosed AF during follow-up. Within the CS subgroup of patients who had MOCHA testing, we assessed the number of elevated MOCHA markers in each patient based on initial testing and then based on repeat testing. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were quantified at 0, 1, 2, 3, and 4 elevated markers, for both the initial MOCHA test and repeat testing. The usefulness of echocardiographic and MOCHA markers was tested using receiver operating characteristic (ROC) analysis. In patients with MOCHA testing, we employed forward stepwise logistic regression (likelihood ratio method, entry threshold P < .2 and P < .15 for retention in the model) to identify independent predictors of each outcome, including the univariate predictors as well as other potential risk factors such as diabetes, hypertension, hyperlipidemia, PFO and migraine. Statistical significance was set at P < .05. ROC analysis showed that abnormal MOCHA markers (AUC = 0.72) and elevated LAVI (AUC = 0.69) had higher discriminative power for the detection of AF than left atrial diameter (AUC = 0.50) (Fig. 3). For the detection of malignancy, MOCHA abnormalities also had moderate discriminative power on initial testing (AUC 0.76) as well as on repeat testing showing persistent elevation (AUC 0.83). For the detection of stroke, MOCHA abnormalities were associated with an AUC of 0.63 based on initial testing and 0.64 when repeat MOCHA testing showed persistent elevation. Together, the combination of elevated LAVI and a persistently abnormal MOCHA panel was associated with a higher AUC for the composite outcome (0.84) than any single test. We measured levels of each marker, comparing patients with AF or malignancy to those with none of the composite outcomes (Fig. 2). Fibrin monomer levels were significantly higher in patients with malignancy (P < .02) and AF (P < .05) compared with patients who did not have the composite outcome. Thrombin-antithrombin levels showed a trend toward higher levels in patients with malignancy compared with no composite outcome (P < .10). Levels of d-dimer were significantly higher in AF patients compared to those with none of the composite outcomes (P < .04), with a trend toward higher levels in malignancy patients (P < .11). Prothrombin fragment 1.2 levels showed a trend toward increased levels in patients with AF compared to patients with no composite outcome (P < .08) but no significant difference in patients with malignancy. 
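To make the ROC comparison concrete, here is a minimal sketch on synthetic data (illustrative only; it does not reproduce the study dataset or its AUCs). It scores LAVI alone, the MOCHA marker count alone, and their combination via a logistic model:

```python
# Illustrative ROC/AUC comparison on synthetic data (not the study cohort).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
outcome = rng.integers(0, 2, n)                        # composite outcome (0/1)
lavi = 30 + 8 * outcome + rng.normal(0, 6, n)          # mL/m^2, shifted in cases
mocha = np.clip(rng.poisson(1 + 1.2 * outcome), 0, 4)  # elevated-marker count

print("LAVI alone :", roc_auc_score(outcome, lavi))
print("MOCHA alone:", roc_auc_score(outcome, mocha))

X = np.column_stack([lavi, mocha])
model = LogisticRegression().fit(X, outcome)
combined = model.predict_proba(X)[:, 1]                # combined risk score
print("Combined   :", roc_auc_score(outcome, combined))
```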
In patients with MOCHA testing, we also employed stepwise regression to identify independent predictors of each outcome. Univariate predictors (P < .2) of the composite outcome included MOCHA abnormalities (continuous variable, P < .04), age (P < .12) and left atrial size (P < .16). Using forward stepwise logistic regression including these univariate predictors as well as the other potential risk factors (diabetes, hypertension, hyperlipidemia, PFO, and migraine), the final model retained only an abnormal MOCHA profile as a significant predictor of the composite outcome (OR = 1.74, 95% CI 1.004-3.015, P < .048). For AF as the outcome measure, only severe LA dilatation was identified as a significant predictor (OR = 3.51, 95% CI 1.17-10.5, P < .025), while MOCHA was not. For new diagnosis of malignancy, MOCHA abnormalities trended towards significance as an independent predictor (Table 2 reports endpoints stratified by MOCHA markers). Discussion We found that CS patients had a high rate of occult AF, occult malignancy or recurrent stroke, with 31% of patients having the composite outcome during follow-up while on antiplatelet therapy. MOCHA marker elevation combined with elevated LAVI on echocardiography had good predictive ability for identifying patients with the composite outcome. Notably, patients with normal MOCHA levels post-stroke had no subsequent endpoints during follow-up, for an NPV of 100%. Our study has several important implications for the evaluation and treatment of CS patients: (1) patients have a relatively high frequency of occult AF detected when prolonged outpatient cardiac monitoring is performed, similar to prior studies [3,4,13-15]; (2) MOCHA panel elevation of ≥2 markers post-stroke effectively predicted patients subsequently diagnosed with occult AF, suggesting that an underlying left atrial cardiopathy in these patients may contribute to a prothrombotic state detected by the MOCHA panel before the arrhythmia is ever detected; (3) given that non-cardiac causes such as occult malignancy can contribute to CS, a combination of cardiac markers such as the LAVI on TTE and non-cardiac markers such as the MOCHA panel will be more effective at identifying patients who may benefit from early anticoagulation than cardiac markers alone; and (4) a normal MOCHA panel on antiplatelet therapy may identify a subgroup of CS patients who are unlikely to benefit from early anticoagulation. We chose to evaluate the MOCHA panel in our study because CS is primarily thought to be mediated through a thromboembolic event. Because the four markers in the panel are associated with coagulation activation (prothrombin fragment 1.2, thrombin-antithrombin complex, fibrin monomer) or fibrinolysis (d-dimer), we anticipated that persistent elevation of these tests beyond 2 weeks post-stroke would be a marker of an underlying coagulopathic state. Additionally, previous studies have shown the individual markers to be elevated in AF, coronary artery disease, malignancy, and cardioembolic stroke. [6-9] All of our patients were placed on antiplatelet therapy after their CS based on current treatment guidelines; however, we chose these prespecified endpoints because they were considered indications that would prompt providers to switch patients from antiplatelet to anticoagulation therapy. Further, CS patients with abnormal MOCHA markers on antiplatelet therapy who were transitioned to anticoagulation after having an endpoint in the study had normalization of all of their markers, suggesting that their hypercoagulable condition was suppressed with anticoagulation therapy. 
Given the recent cessation of the NAVIGATE-ESUS study, with no benefit seen in CS patients placed on rivaroxaban 20 mg daily versus aspirin 325 mg daily, [16,17] evaluation of biomarkers soon after stroke may be useful to identify patients who could require early anticoagulation in the other ongoing trials, including RESPECT-ESUS [18] and ATTICUS. [19] Our study has several limitations: (1) our small sample of CS patients who underwent MOCHA evaluation requires further validation in a larger cohort study; (2) given that 38% of patients did not want to undergo ILR placement, we may have missed detection of occult AF in some of these patients; and (3) our patients were all treated initially after their CS with antiplatelet therapy, which may affect the generalizability of our recurrent stroke rates compared with other cohorts that allowed anticoagulation therapy. In summary, abnormal MOCHA levels identified CS patients who were more likely to have a subsequent diagnosis of AF, malignancy or recurrent stroke during follow-up and may be complementary to LA structural abnormalities in identifying patients who could benefit from early anticoagulation. Given that normal MOCHA levels in CS patients on antiplatelet therapy had a 100% NPV for our composite outcome, the MOCHA panel may additionally identify a subgroup of patients who are unlikely to benefit from early anticoagulation. Evaluating MOCHA in a larger CS cohort is warranted.
2019-01-18T14:14:35.170Z
2018-12-01T00:00:00.000Z
58009400
s2orc/train
v2
Postcards from Beijing: Annual Meeting Abstracts
Postcards from Beijing: Annual Meeting Abstracts The following are highlights from the scientific presentations of the 8th Annual Congress of the Asia-Pacific Association of Medical Toxicology, which was held in Beijing, October 2009. Clinicians and researchers from over a dozen countries attended this meeting, where more than 100 abstracts were showcased as either oral platform or poster presentations. The following are highlights from the scientific presentations of the 8th Annual Congress of the Asia-Pacific Association of Medical Toxicology, which was held in Beijing, China, October 2009. 
Clinicians and researchers from over a dozen countries attended this meeting, where more than 100 abstracts were showcased as either oral platform or poster presentations. Although it is challenging to distill a whole meeting into a collection of abstracts, we feel that these selections share qualities common to all innovative research in our field: they each compel us to think differently about how best to care for the poisoned patient. Collectively, these brief reports also provide a window into an exciting current development: the emergence of medical toxicology as a vital subspecialty in many countries where the burden of poisoning is tremendous. We hope that these abstracts will encourage Journal of Medical Toxicology readers to contribute to future international toxicology meetings and research collaborations worldwide. For those who are interested, the next congress of the Asia-Pacific Association of Medical Toxicology will be held in Hanoi, Vietnam, on November 17 to 19, 2010 (see http://www.apamt2010.vn/ for details). All presentations from recent APAMT meetings, and other useful information, can also be found at http://www.asiatox.org. Introduction: Some well-defined neurological syndromes are seen following acute organophosphorus (OP) poisoning. However, it is not clear whether there is medium- to long-term autonomic nervous system dysfunction. Therefore, we aimed to examine autonomic nervous system function in patients with acute OP poisoning. Method: A case-control follow-up study was conducted. Sympathetic skin response (SSR) latency and amplitude of the dominant hand, and R-R (heart rate) interval variation during standing, deep breathing and the Valsalva maneuver, were measured in 21 patients with acute OP poisoning around the time of discharge (participants were otherwise well) and 1-2 months later. Assessments were performed a mean of 8±8 days (first assessment) and 46±9 days (second assessment) from exposure. The first assessment was done a mean of 3±2 days after cessation of atropine therapy. Twenty-one controls matched for age and gender were also examined. ANOVA and post hoc comparisons were used for the analysis. Results: The mean age of cases (and matched controls) was 31±13 years, and there were 16 males in each group. The mean HbA1c of cases and controls was 5.2±0.32% and 5.4±0.51%, respectively. Atropine was commenced in six patients at a peripheral hospital before transfer. All others had cholinergic features before the commencement of atropine therapy. Three patients were admitted to the intensive care unit (ICU) during the hospital stay and two were ventilated. All patients were treated with atropine. Nineteen patients received pralidoxime. The mean latency of SSR in controls, and at the first and second assessments of cases, was 1,527±125 ms, 1,634±123 ms, and 1,532±123 ms, respectively (F=4.08, p<0.05). The mean amplitude of SSR in controls, and at the first and second assessments of cases, was 1.48±1.0 mV, 0.33±0.30 mV, and 1.05±0.81 mV, respectively (F=12.25, p<0.01). Post hoc comparison showed statistically significant differences in amplitude between the controls and the first assessment (p<0.01), and between the first and the second assessments (p=0.01). The first assessment latency was also significantly different from that of controls (p<0.05). Heart rate variability analysis did not show any statistically significant difference between cases and controls. 
Conclusion: Statistically significant amplitude reduction and prolongation of latency were observed in sympathetic skin responses at the time of discharge (mean 8 days following acute exposure to OP), which were not present 1 to 2 months later. ACUTE FORMIC ACID POISONING IN SOUTH INDIA Ashish J Mathew, Dae Dalus, Department of Internal Medicine, Medical College Hospital, Trivandrum, Kerala, India Introduction: Complications of ingestion of formic acid, the diluted form of which is used in the coagulation of rubber latex, are not well described in the literature. Kerala, a state in south-western India, is well known for its rubber plantations. Easy access to formic acid makes it liable to be used for deliberate self-harm in this region. This retrospective study was conducted to examine the patterns of presentation and identify the predictors of morbidity and mortality of acute formic acid poisoning. Methods: Data regarding patients admitted to the medical wards from January 2007 to December 2008 (2 years) with formic acid ingestion were retrieved and analyzed for symptoms at presentation, clinical parameters, and complications. Results: Of the 302 patients (181 males), with a mean age of 42.78 years (range 13-85 years), accidental ingestion was reported in 23 patients (7.6%). The mean time to presentation at our center after consumption was 2.5 h. Formic acid was mixed in alcohol for consumption by 24.2% of patients. Common symptoms at presentation were vomiting (78.1%), respiratory distress (44%), hematemesis (42.1%), and hematuria (30.1%). Complications of the poisoning were oral cavity burns (87.7%), metabolic acidosis (70.2%), septicemia (51.3%), dysphagia (51%), esophageal stricture (ES; 32.5%), gastro-intestinal perforation (GIP; 12.9%), aspiration pneumonia (47.4%), ARDS (33.8%), acute renal failure (38.7%), chemical pneumonitis (25.5%), and shock (24.2%). Rare complications were tracheo-esophageal fistula (four), pneumomediastinum (two), and chemical injury to the cornea (one). Of the 33 patients who underwent hemodialysis, nine developed deep vein thrombosis. Logistic regression was employed to predict morbidity (ES). Metabolic acidosis with pH <7.3 (OR 27.78, 95% CI 3.5-223.2), hematemesis (OR 5.5, 95% CI 2.7-11.1), and age >40 years (OR 0.976, 95% CI 0.95-0.99) were independent predictors of morbidity. Hematemesis (p<0.001) and melena (p<0.001) had significant associations with ES. Hematuria (p<0.001), respiratory distress (p<0.001), hematemesis (p<0.001), and GIP (p<0.001) at presentation were significantly associated with mortality. Conclusion: Easy availability of formic acid should be curtailed by enforcing statutory limitations on its distribution. Metabolic acidosis, if corrected by intravenous administration of sodium bicarbonate at local medical centers before referring the patient to a tertiary setup, may reduce mortality and morbidity in acute formic acid poisoning. Patients with hematemesis or melena, if they survive, should be followed up with serial esophagogastroduodenoscopy for the diagnosis and early treatment of strictures. Introduction: Fructose-1,6-diphosphate (FDP) was shown to be effective in oleander-induced cardiac toxicity in dogs. It is widely used for other indications and regarded as safe. We wished to explore its potential as an antidote in humans. The phase II study was designed to choose the optimal dose, and the phase III study to examine effectiveness. 
Methods: Phase II: We conducted a double-blind, placebo-controlled phase II dose-ranging study of four dose levels of FDP in two rural hospitals. Patients received one of the four doses of FDP (30, 60, 125, 250 mg/kg) or placebo (normal saline). At each dose tested, six subjects received FDP and two placebo. Phase III: This study is a randomised double-blind clinical trial of FDP (250 mg/kg loading dose over 20 minutes followed by 6 mg/kg/h for 24 hours) versus placebo in 240 patients with acute yellow oleander poisoning. All patients admitted to Kurunegala Teaching Hospital are initially resuscitated following the national guidelines. Consenting patients with AV block are randomised to receive FDP or placebo. The primary outcome is sustained reversion to sinus rhythm with a heart rate greater than 50/min within 2 hours of the FDP/placebo bolus. Secondary outcomes include death, reversal of hyperkalaemia in the 6, 12, 18 and 24 hour samples, and maintenance of sinus rhythm on the Holter monitor. Analysis will be by intention to treat. Results: Phase II: FDP was well tolerated and there were no adverse reactions observed at any dose level. Our primary outcome measure, reversion of atrioventricular block to sinus rhythm within 2 hours, proved impractical as most (28/32) patients were transferred for cardiac pacing to a tertiary hospital within this time and there was frequent electrical interference with Holter readings. Favorable dose-related falls were seen in the serum calcium and potassium within 30 minutes of the infusion (p=0.09 and p=0.03, ANOVA). These findings supported the following aspects of the phase III study design: use of the highest bolus dose plus an infusion; conduct of the study only in a tertiary hospital with pacing facilities; use of a standard ECG machine as well as Holter monitors; and use of serum potassium as a secondary outcome measure. Phase III: This study has randomised 41 patients since February 2009, out of 319 oleander self-poisoning admissions. There have been 5 deaths and no adverse reactions to FDP. The study remains blinded. An interim analysis will be conducted after 120 patients. Conclusions: Our findings from the phase II study suggested that FDP is well tolerated and could have favorable effects on electrolytes. This study greatly helped to design a phase III study well placed to determine the effectiveness of FDP in oleander-induced cardiac toxicity, which is now progressing well and should be completed in 2011. MUSHROOM POISONING FOLLOWING CONSUMPTION OF INOCYBE SPECIES Shyam P. Lohani, Arjun Subedi, Umesh R. Aryal, Surath Upadhaya, Saroj Nepal, Nepal Drug and Poison Information Center, Shamakhushi, Kathmandu, Nepal Introduction: Mushroom poisoning is quite frequent in Nepal during the rainy season. Approximately 40 species of poisonous mushrooms are found in Nepal. There are currently eight recognized classes of mushroom poisonings, seven of which are caused by specific known toxins. Inocybe patouillardii is the only muscarine-containing mushroom found in Nepal. The clinical course of patients with typical muscarinic presentations after consumption of muscarine-containing mushrooms is reported. Methods: A retrospective analysis was done of all calls to a poison center related to muscarinic symptoms following mushroom ingestion during the period July 1998 to June 2008. A total of 77 consecutive cases were reported to the Nepal Drug and Poison Information Center. 
Results: Fifty-eight percent of cases were female (n=45) and the remainder male (42%, n=32). Ages ranged from 4 to 67 years, mean 24.70 (±17.96). Combinations of nausea, vomiting, diarrhea, abdominal pain, urination, hypersalivation, bradycardia, hypotension, lacrimation, blurred vision, and miosis were the initial presenting symptoms. Time to onset of toxicity ranged from 30 min to 2 h after consumption of the mushrooms. Treatment was symptomatic and supportive, including intravenous fluids and intravenous atropine; a maximum of 1.8 mg of atropine was needed for reversal of muscarinic symptoms. In all cases, full recovery occurred within 10 h post exposure. Conclusion: Supportive and symptomatic treatment along with atropine for patients with ingestion of muscarine-containing mushrooms resulted in a favorable outcome. Introduction: QRS and QTc prolongation, and associated cardiac arrhythmias, following cocaine use are due to cocaine-related cardiac ion channel dysfunction. The simultaneous use of cocaine and ethanol leads to an increased production of the cocaethylene metabolite, which has greater binding to cardiac ion channels and therefore potentially a greater risk of cardiac arrhythmias. The effects of simultaneous cocaine-ethanol use on QRS and QTc duration compared to those seen with lone cocaine use have not been reported. Methods: A 24-month retrospective review of patients with acute toxicity related to self-reported lone cocaine or simultaneous cocaine-ethanol use was undertaken. Data on sex, presenting symptoms/signs and physiological parameters were extracted for these presentations. ECGs were reviewed for all presentations, where available, and the QRS duration and the QTc calculated using Bazett's formula were extracted. The QRS and QTc durations were compared between the two groups. Results: There were 48 and 31 presentations with acute toxicity related to self-reported simultaneous cocaine-ethanol use and self-reported lone cocaine use, respectively. There was no significant difference in the mean (SD) age of those with simultaneous cocaine-ethanol use (29.8±10.2 years) compared to those with lone cocaine use (29.3±7.7 years; p=0.80). There were no significant differences between the mean (SD) heart rate (p=0.90), systolic blood pressure (p=0.81), and temperature (p=0.61) in the simultaneous cocaine-ethanol and lone cocaine use groups. The mean (SD) QRS and QTc durations were 87.3±10.8 ms (range 60-108) and 397.4±32.0 ms (range 323-484) for the simultaneous cocaine-ethanol use group and 86.9±12.5 ms (range 67-126) and 396.2±34.6 ms (range 317-488) for the lone cocaine use group (p=0.87 and p=0.88, respectively). There were no QTc- or QRS-related cardiac arrhythmias in either group. Conclusions: In this study, we did not detect a significant difference in QRS and QTc durations between those with self-reported simultaneous cocaine-ethanol use and those with lone cocaine use. Further studies correlating the concentrations of cocaine and its metabolites, including cocaethylene, are needed to confirm the findings seen in in vitro and animal models of cardiac ion channel dysfunction. 
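For reference, the heart-rate correction used in the cocaine study above is Bazett's formula, QTc = QT/√RR; a minimal sketch of the calculation:

```python
# Minimal sketch of the QTc calculation referenced above (Bazett's formula):
# QTc = QT / sqrt(RR), with QT in ms and the RR interval in seconds.
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Heart-rate-corrected QT interval (ms) by Bazett's formula."""
    return qt_ms / math.sqrt(rr_s)

# Example: QT 360 ms at 75 beats/min (RR = 60/75 = 0.8 s) gives QTc ~ 402 ms.
print(round(qtc_bazett(360.0, 60 / 75)))  # 402
```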
Introduction: Phenylpropanolamine (PPA), which is often combined with acetaminophen in cold preparations, has been known to cause many adverse effects, including cardiovascular involvement, and is an independent risk factor for hemorrhagic stroke in women. This study prospectively described the features of PPA poisoning. Method: This is a prospective, descriptive study of patients who became ill after the use of a cold preparation containing phenylpropanolamine and were treated at our poison control center from September 2002 to September 2005. Poisoning or adverse effects were considered if the drug had been used very recently (within 6 h of the last dose) in a previously healthy person. Clinical and laboratory parameters were evaluated and monitored where available. PPA-poisoned patients were treated symptomatically and discharged when all signs and symptoms had resolved. Results: Forty-six patients, including 21 males (45.7%) and 25 females (54.3%), were enrolled. The mean age of the patients was 32.89±10.13 years (range 17-55). Rhumenol® (Tenamyd, Canada; 30 mg phenylpropanolamine per tablet) and Decolgen Forte® (United Pharma; 25 mg phenylpropanolamine per tablet) were the two most common proprietary products leading to PPA poisoning (95.65%). The reason for taking the medicine was self-treatment of a cold in 43/46 cases (93.5%); 37 patients (80.4%) presented with symptoms of PPA poisoning after the first dose of PPA. Thirty-three cases (71.7%) were poisoned at PPA doses of 60 mg or less. The symptoms of PPA poisoning were headache (93.5%), nausea (50%), dizziness (37%), and elevated systolic blood pressure (higher than 140 mmHg; 95.7%); bradycardia occurred in 39.1%, and hypokalemia was present in 11 patients (23.3%). Adalat (oral liquid nifedipine) was given at doses of 3-10 mg. All the patients recovered from hypertension after 7.6±5.03 h, and 43 (93.5%) were discharged on the day of admission. No complications of hypertension and no deaths were observed. Conclusions: The manifestations of PPA poisoning include acute hypertension, which can occur even if patients take the recommended dose of PPA for cold management. Poisoned patients recover quickly after PPA discontinuation and the use of a simple, rapid-acting antihypertensive agent. The results of this study contributed to the decision by the Ministry of Health of Vietnam to remove all pharmaceutical agents containing phenylpropanolamine from the domestic market in 2003. Introduction: This study was carried out in the Medicine and Pediatrics departments of Rajshahi Medical College Hospital and Natore Sadar Hospital, both located in the northern territory of Bangladesh. Methods: On 8 June 2008, 83 patients (50 male, 33 female) from Singra Upazila, Natore, were admitted to Rajshahi Medical College Hospital and Natore Sadar Hospital with a history of puffer fish consumption. A presumptive diagnosis of puffer fish poisoning was made on the basis of classical clinical presentations following puffer fish ingestion. Blood and urine samples were taken from 38 patients and sent for toxicological analysis to Frankfurt, Germany. Results: Important symptoms observed were peri-oral paresthesia (71), tingling over the entire body (50), nausea and vomiting (43), dizziness (35), headache (20), and abdominal pain (13). Muscular paralysis of the limbs was noted in 13 patients, of whom seven developed respiratory involvement. All the patients who developed respiratory involvement died. Of the 83 patients, 76 improved with conservative management and seven died. Of the 38 blood samples sent for toxicological analysis, 27 patients had detectable levels of tetrodotoxin (TTX) in their blood, and in 11 patients the blood TTX level was not detectable (<1.6 ng/ml). The blood TTX level appeared strongly correlated with the development of neuromuscular paralysis. 
The average TTX concentration in patients who developed neuromuscular paralysis was 8.1 ng/ml. Conclusion: Early diagnosis and supportive management could ensure a safe and favorable outcome. Although puffer fish poisoning is uncommonly encountered in daily practice, physicians should be familiar with its clinical presentation and management and be prepared to handle such potentially life-threatening intoxications. LOW MOLECULAR WEIGHT HEPARIN OVERDOSE Adeline Ngo Su-Yin, Daryl Tan Chen Lung, Kent R Olson, California Poison Control System-San Francisco Division, San Francisco, USA Introduction: Low molecular weight heparin (LMWH) has been used for the treatment and prevention of several disorders, including deep vein thrombosis, pulmonary embolism, unstable angina, and myocardial infarction. Its anticoagulant effect creates the potential for bleeding. Several studies have examined the risk of major bleeding from therapeutic anticoagulant use, but there have been no reports in the literature on acute overdose in adults to date. As the California Poison Control Center (PCC) has been consulted on several cases of LMWH overdose, this series aimed to provide data that may help poison center practice. We also believe this to be the first reported case series of acute LMWH overdose. Method: A retrospective chart review of the PCC database (Visual Dot Lab) between 1997 and 2007 was performed. Inclusion criteria included all patients with a reported overdose of LMWH; the route of exposure was subcutaneous. Cases were excluded if therapeutic doses of LMWH were administered. Results: There were 21 patients (mean age 42.4 years). The reasons for overdose included medical miscalculation (three cases, all infants), intentional misuse (two patients), accidental overdose (seven cases), suicide attempt (seven cases), and unknown (two patients). Seven cases were documented to have overdosed at more than two times the therapeutic dose. The overdoses ranged from 0.1 to 80 times the therapeutic dose. No patients were documented to have bleeding or thrombocytopenia. Six patients were documented to have no bleeding and were well after at least 36 h. Reassurance was given to patients with less than 0.14 times the therapeutic dose. Two patients in the series received protamine because they had received more than 2.5 times the therapeutic dose of LMWH. Conclusion: Given the rarity of overdose, there is no clear consensus on its management. Most patients had no complications and were not treated with protamine. This series suggests that a large dose of LMWH is unlikely to result in life-threatening complications. ROTUNDIN POISONING IN VIETNAM Thu Hong Be, Due Pham, Poison Control Center, Bach Mai Hospital, Hanoi, Vietnam Introduction: Rotundin (L-tetrahydropalmatine) is extracted from the plant Stephania rotunda, which is known for its sedative and analgesic effects and is used widely in Vietnam. However, this agent has been increasingly used without prescription, and poisoning has ensued as a result. Methods: This is a prospective, descriptive study. All patients with rotundin overdose (by history and samples of medications from the patients), with urine or gastric fluid positive for rotundin by thin-layer chromatography, admitted to our poison control center from December 2003 to January 2005, were included. Results: 122 patients (27 males, 22.1%, and 95 females, 77.9%) were included. The mean age was 23.6±6.3 years (range 12-52); the reason for poisoning was suicide in 120 patients (98.4%). 
The mean dose was 1,258.7±1,082.81 mg (range 300-6,000 mg). Severity of poisoning was mild in 69.7%, moderate in 25.4%, and severe in 2.5%; no deaths occurred. The most common symptoms were CNS depression (32.8%), nausea (22.1%), vomiting (20.5%), sinus bradycardia (3.3%), and hypotension (0.8%). ECG abnormalities were seen in 74.6% of the patients, including prolonged QT (27%), sinus extrasystole (3.3%), and first-degree AV block (1.6%). Sinus bradycardia, sinus tachycardia, elevated ST segment, and T-wave inversion were also observed. Very mild elevations in AST and ALT were seen in 7.4% of the patients. The dose of rotundin was 77.8±474.09 mg in patients with a normal ECG and 1,432.0±1,185.8 mg in patients with ECG abnormalities (p=0.004). The dose of rotundin was 1,172.0±1,026.67 mg in fully conscious patients and 1,346.9±1,139.23 mg in patients with CNS depression (p=0.385). Conclusion: Rotundin causes only mild CNS inhibition (if any). However, in our patients, it also caused cardiac abnormalities, which were associated with higher doses. Objective: Opium body packing is a common cause of admission to our Medical Toxicology ward. Since body packers are drug smugglers, they are mostly brought to the hospital by police. Those who are alert usually deny body packing. Ultrasonography, plain X-ray, and CT scan are recommended for the diagnosis of body packing and stuffing. We aimed to compare the diagnostic value of the three techniques in opium body packing. Methods: A questionnaire was designed to record all clinical and paraclinical findings of all body packers admitted to the ward between 10 October 2000 and 11 October 2008. Ultrasonography, plain X-ray, and CT scan were performed for all body packers on admission and at intervals as clinically indicated. Magnesium sulfate was used as a cathartic in all admitted body packers; naloxone was administered in symptomatic patients. The asymptomatic cases were kept under close observation, both medically and forensically. The packets recovered in the feces were counted, weighed, and collected by the police. Comatose patients with many packets who did not respond to medical treatment, or who developed bowel obstruction, were referred for surgery. The surgically removed packets were also counted, weighed, and collected by the police. The results of the three techniques were compared with the clinical findings and the number of recovered packets. Statistical analysis (Chi-square test) was performed using SPSS. Results: Of 3,281 poisoned patients admitted to the ward over this period, 490 patients (15%) had narcotic poisoning, of whom 50 (5%) were opium body packers. There were two female body packers (a 16-year-old girl and a 35-year-old woman), one of whom had severe opium poisoning; she underwent emergent surgery and died a day later in the ICU. Of the 48 male patients, two (30 and 69 years old) also had surgery and died. The other 47 patients, aged 17 to 58 (mean 31) years, were treated medically and all survived despite severe intoxication in 18 of them. The body packers were either illiterate (28%), primary educated (32%), or secondary educated (40%). More than 44% of them were drug addicts. The number of packets varied between 1 and 48 (mean 21), with weights of 6 to 102 g (mean 46 g). Ultrasonography did not show any clearly countable packets, whereas plain abdominal X-ray revealed the packets in 24 patients (48%) and abdominal CT scan was positive in 48 patients (96%) (p<0.001). 
Conclusion: (1) Ultrasonography is of no value in the diagnosis of opium body packing. (2) Plain abdominal X-ray is simple but not sufficiently sensitive. (3) CT scan is the best diagnostic technique in opium body packing. (1) Department of Emergency Medicine, Bangkok Metropolitan Administration Medical College and Vajira Hospital, Bangkok, Thailand. Introduction: Centipede envenomation occurs commonly in tropical countries. At present, however, there are no epidemiologic or clinical studies of centipede envenomations in Thailand. This study is the first of its kind, examining the epidemiology, clinical manifestations, and treatments of envenomations by centipedes in patients registered at the Department of Emergency Medicine, Bangkok Metropolitan Administration Medical College and Vajira Hospital, Bangkok, Thailand. Method: We retrospectively analyzed 104 cases of definite envenomation by centipedes among patients who presented to the hospital between 1 January 2004 and 30 June 2009. Demographic data, data on local and systemic effects, and treatments after centipede envenomations were collected. Results: There were 104 cases included in this study. Fifty-two percent were female. Mean age was 27.8±17.8 years (range 1 month to 76 years). The time from envenomation to presentation at the Emergency Department ranged between 15 min and 48 h (median 40 min). Most of the envenomations (85.9%) occurred at night, between 6 pm and 6 am. The incidence of envenomations was highest in the summer months (April and May) and the winter months (October through December). Envenomation sites were recorded in 96% of patients, and 91 out of 100 cases were stung only once. Feet (32%) and hands (25%) were the parts of the body most often envenomated. Local effects were common: 96% of patients had localized pain and 78% had swelling at the site of envenomation. Systemic effects consisted of nausea (7.7%), vomiting (5.8%), rash (2.9%), fever (1.9%), systemic swelling (1.9%), abdominal pain (1%), palpitations (1%), and wheezing (1%). Anaphylaxis was diagnosed in three patients with two or more systemic effects, but neither wheezing on auscultation nor shock was found. For pain control, 98.1% received analgesic drugs, while 33.7% received local anesthetic injections. Antibiotics, antihistamines, and steroids were prescribed in 73.1%, 24%, and 9.6%, respectively. Fortunately, no deaths occurred in this study. Conclusion: Most of the centipede envenomations seen in Thailand occur in two clusters each year: the first in April and May and the second in October through December. Patients were most vulnerable to centipede envenomations during night-time hours. Nearly all patients had local effects, whereas systemic effects were rare. Most of the patients had a favorable outcome. Introduction: Ophthalmic injury by the venom of the Chinese cobra (Naja atra) is an uncommon but well-recognized mode of injury. The clinical value of local antivenom for this type of injury is uncertain, although animal studies have shown possible benefits. We report a case of ocular injury by a spitting Chinese cobra with rapid relief of symptoms after local antivenom irrigation. Case Report: A 50-year-old man sustained a left eye injury from a spitting Chinese cobra at a distance of 3 ft and attended the Accident and Emergency Department 30 min later. He had persistent symptoms of left eye pain and blurred vision after local irrigation with a copious amount of normal saline solution. 
Examination showed bilateral eye congestion; visual acuity was 6/21 in the left eye compared with 6/9 in the right. There was no corneal abrasion. Irrigation with Naja Antivenin® (Shanghai Institute of Biological Products, Ministry of Health, China), diluted in 500 mL of normal saline, was performed for the persistent symptoms. The patient had relief of his symptoms almost immediately after irrigation with the diluted antivenom. A topical antibiotic solution containing polymyxin, neomycin, and gramicidin (tetracycline eyedrops were not available in our center) was also given, and his left eye congestion and visual acuity improved overnight. He remained asymptomatic on assessment by an ophthalmologist 3 days after the injury. Discussion: It is generally believed that the cardiotoxin component of the venom of the cobra family causes ocular injury. Local toxicities reported for other species of cobra range from pain, redness, and corneal injury to blindness. Animal models and case reports have shown a possible beneficial effect from the use of tetracycline eyedrops and local administration of antivenom. Although the role of local administration of antivenom is controversial for other species of cobra, the antivenom used in other parts of the world was not species-specific. By contrast, Naja Antivenin® is specific to the Chinese cobra, and it is possible that treatment with this antivenom is more specific. In fact, local antivenom eyedrops have been used for ocular injury by the Chinese cobra in some centers in China with good outcomes. Conclusion: Apart from irrigation with water or normal saline solution and application of local antibiotics, the administration of local antivenom, either as eyedrops or as a diluted irrigation, can be considered in cases of ocular injury by a spitting Chinese cobra with persistent symptoms or severe injury. Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
2016-05-12T22:15:10.714Z
2010-06-12T00:00:00.000Z
17310810
s2orc/train
v2
Unifying Gaussian LWF and AMP Chain Graphs to Model Interference
Unifying Gaussian LWF and AMP Chain Graphs to Model Interference An intervention may have an effect on units other than those to which it was administered. This phenomenon is called interference and it usually goes unmodeled. In this paper, we propose to combine Lauritzen-Wermuth-Frydenberg and Andersson-Madigan-Perlman chain graphs to create a new class of causal models that can represent both interference and non-interference relationships. Specifically, we define the new class of models, introduce global, local and pairwise Markov properties for them, and prove their equivalence. We also propose an algorithm for maximum likelihood parameter estimation for the new models, and report experimental results. Finally, we adapt Pearl's do-calculus for causal effect identification in the new models. Motivation Graphical models are among the most studied and used formalisms for causal inference. Some of the reasons for their success are that they make explicit and testable causal assumptions, and that there exist algorithms for causal effect identification, counterfactual reasoning, mediation analysis, and model identification from nonexperimental data [16,20,23]. However, in causal inference in general and graphical models in particular, one assumes more often than not that there is no interference, i. e. an intervention has no effect on units other than those to which the intervention was administered [28]. This may be unrealistic in some domains. For instance, vaccinating the mother against a disease may have a protective effect on her child, and vice versa. A notable exception is the work by Ogburn and VanderWeele [11], who distinguish three causal mechanisms that may give rise to interference and show how to model them with directed and acyclic graphs (DAGs). In this paper, we focus on what the authors call interference by contagion: One individual's treatment does not affect another individual's outcome directly but via the first individual's outcome. Ogburn and VanderWeele [11] also argue that interference by contagion typically involves feedback among different individuals' outcomes over time and, thus, it may be modeled by a DAG over the random variables of interest instantiated at different time points. Sometimes, however, the variables are observed at one time point only, e. g. at the end of a season. The observed variables may then be modeled by a Lauritzen-Wermuth-Frydenberg chain graph, or LWF CG for short [5,7]: Directed edges represent direct causal effects, and undirected edges represent causal effects due to interference. Some notable papers on LWF CGs for modeling interference by contagion are Ogburn et al. [12], Shpitser [21], Shpitser et al. [22], and Tchetgen et al. [27]. For instance, the previous mother-child example may be modeled with the LWF CG in Figure 1 (a), where V_1 and V_2 represent the dose of the vaccine administered to the mother and the child, and D_1 and D_2 represent the severity of the disease. This paper only deals with Gaussian distributions, which implies that the relations between the random variables are linear. Note that the edge D_1 − D_2 represents a symmetric relationship but has some features of a causal relationship, in the sense that a change in the severity of the disease for the mother causes a change in severity for the child and vice versa. This seems to suggest that we can interpret the undirected edges in LWF CGs as feedback loops, i. e. that every undirected edge can be replaced by two directed edges in opposite directions. 
However, this is not correct in general, as explained at length by Lauritzen and Richardson [8]. In some domains, we may need to model both interference and non-interference. This is the problem that we address in this paper. In our previous example, for instance, the mother and the child may have a gene that makes them healthy carriers, i. e. the higher the expression level of the gene, the less severe the disease, but the load of the disease agent (e. g., a virus) remains unaltered and, thus, so does the risk of infecting the other. We may model this situation with the LWF CG in Figure 1 (b), where G_1 and G_2 represent the expression level of the healthy carrier gene in the mother and the child. However, this model is not correct: In reality, the mother's healthy carrier gene protects her but has no protective effect on the child, and vice versa. This non-interference relation is not represented by the model. In other words, LWF CGs are not expressive enough to model both interference relations (e. g., intervening on V_1 must have an effect on D_2) and non-interference relations (e. g., intervening on G_1 must have no effect on D_2). To remedy this problem, we propose to combine LWF CGs with Andersson-Madigan-Perlman chain graphs, or AMP CGs for short [1]. We call these new models unified chain graphs (UCGs). As we will show, it is possible to describe how UCGs exactly model interference in the Gaussian case. Works such as Ogburn et al. [12], Shpitser [21], Shpitser et al. [22], and Tchetgen et al. [27] use LWF CGs to model interference and compute some causal effect of interest. However, they do not describe how interference is exactly modeled. The rest of the paper is organized as follows. Section 2 introduces some notation and definitions. Section 3 defines UCGs formally. Sections 4 and 5 define global, local and pairwise Markov properties for UCGs and prove their equivalence. Section 6 proposes an algorithm for maximum likelihood parameter estimation for UCGs, and reports experimental results. Section 7 considers causal inference in UCGs. Section 8 discusses identifiability of LWF and AMP CGs. Section 9 closes the paper with some discussion. The formal proofs of all the results are contained in Appendix A. Preliminaries In this paper, set operations like union, intersection and difference have the same precedence. When several of them appear in an expression, they are evaluated left to right unless parentheses are used to indicate a different order. Unless otherwise stated, the graphs in this paper are defined over a finite set of nodes V. Each node represents a random variable. We assume that the random variables are jointly normally distributed. The graphs contain at most one edge between any pair of nodes. The edge may be undirected or directed. We consider two types of directed edges: solid (→) and dashed (⇢). The parents of a set of nodes X in a graph G is the set Pa(X) = {A | A → B or A ⇢ B is in G with B ∈ X}, and the adjacents of X are Ad(X) = {A | A → B, A ⇢ B, A − B, B → A or B ⇢ A is in G with B ∈ X}. Moreover, the subgraph of G induced by X is denoted by G_X. A route between a node V_1 and a node V_n in G is a sequence of (not necessarily distinct) nodes V_1, . . . , V_n such that V_i ∈ Ad(V_{i+1}) for all 1 ≤ i < n. Moreover, V_j, . . . , V_{j+k} and V_{j+k}, . . . , V_j with 1 ≤ j ≤ n − k are called subroutes of the route. The route is called undirected if V_i − V_{i+1} for all 1 ≤ i < n. A route of distinct nodes is called a path. A chain graph (CG) is a graph with (possibly) directed and undirected edges, and without semidirected cycles. 
A set of nodes of a CG G is connected if there exists an undirected route in G between every pair of nodes in the set. A chain component of G is a maximal connected set. Note that the chain components of G can be sorted topologically, i. e. for every edge A → B or A ⇢ B in G, the component containing A precedes the component containing B. The chain components of G are denoted by Cc(G). CGs without dashed directed edges are known as LWF CGs, whereas CGs without solid directed edges are known as AMP CGs. We now recall the interpretation of LWF CGs. Given a route ρ in a LWF CG G, a section of ρ is a maximal undirected subroute of ρ; a section V_j − ⋯ − V_{j+k} of ρ is a collider section of ρ if A → V_j − ⋯ − V_{j+k} ← B is a subroute of ρ. We say that ρ is Z-open with Z ⊆ V when (i) all the collider sections in ρ have some node in Z, and (ii) all the nodes that are outside the collider sections in ρ are outside Z. We now recall the interpretation of AMP CGs. Given a route ρ in an AMP CG G, C is a collider node in ρ if ρ has a subroute A ⇢ C ⇠ B or A ⇢ C − B. We say that ρ is Z-open with Z ⊆ V when (i) all the collider nodes in ρ are in Z, and (ii) all the non-collider nodes in ρ are outside Z. Let X, Y and Z denote three disjoint subsets of V. When there is no Z-open route in a LWF or AMP CG G between a node in X and a node in Y, we say that X is separated from Y given Z in G and denote it as X ⊥_G Y|Z. Moreover, we represent by X ⊥_p Y|Z that X and Y are conditionally independent given Z in a probability distribution p. We say that p satisfies the global Markov property with respect to G when X ⊥_p Y|Z for all X, Y, Z ⊆ V such that X ⊥_G Y|Z. When X ⊥_p Y|Z if and only if X ⊥_G Y|Z, we say that p is faithful to G. Finally, let X, Y, W and Z be disjoint subsets of V. A probability distribution p that satisfies the following five properties is called a graphoid: symmetry (X ⊥_p Y|Z ⇒ Y ⊥_p X|Z), decomposition (X ⊥_p Y ∪ W|Z ⇒ X ⊥_p Y|Z), weak union (X ⊥_p Y ∪ W|Z ⇒ X ⊥_p Y|Z ∪ W), contraction (X ⊥_p Y|Z ∪ W and X ⊥_p W|Z ⇒ X ⊥_p Y ∪ W|Z), and intersection (X ⊥_p Y|Z ∪ W and X ⊥_p W|Z ∪ Y ⇒ X ⊥_p Y ∪ W|Z). If p also satisfies the following property, then it is called a compositional graphoid: composition (X ⊥_p Y|Z and X ⊥_p W|Z ⇒ X ⊥_p Y ∪ W|Z). Unified chain graphs In this section, we introduce our causal models to represent both interference and non-interference relationships. Let p denote a Gaussian distribution that satisfies the global Markov property with respect to a LWF or AMP CG G. Then

p(V) = ∏_{K ∈ Cc(G)} p(K | Pa(K))    (1)

because K ⊥_G (K_1 ∪ ⋯ ∪ K_n) \ Pa(K) | Pa(K), where K_1, . . . , K_n denote the chain components of G that precede K in an arbitrary topological ordering of the components. Assume without loss of generality that p has zero mean vector. Moreover, let Σ and Ω denote respectively the covariance and precision matrices of p(K, Pa(K)). Each matrix is then of dimension (|K| + |Pa(K)|) × (|K| + |Pa(K)|). Bishop [2, Section 2.3.1] shows that the conditional distribution p(K|Pa(K)) is Gaussian with covariance matrix Λ_K, and that the mean vector is a linear function of Pa(K) with coefficients β_K. Then, β_K is of dimension |K| × |Pa(K)|, and Λ_K is of dimension |K| × |K|. Moreover, β_K and Λ_K can be expressed in terms of Σ and Ω as follows:

β_K = Σ_{K,Pa(K)} Σ_{Pa(K),Pa(K)}^{-1}    (2)

with

β_K = −Ω_{K,K}^{-1} Ω_{K,Pa(K)}    (3)

and

Λ_K = Ω_{K,K}^{-1}    (4)

where Σ_{K,Pa(K)} denotes the submatrix of Σ with rows K and columns Pa(K), and Σ_{Pa(K),Pa(K)}^{-1} and Ω_{K,K}^{-1} denote the inverses of the corresponding submatrices. Furthermore, if G is an AMP CG, then (β_K)_{i,j} = 0 for all i ∈ K and j ∈ Pa(K) \ Pa(i), because i ⊥_G Pa(K) \ Pa(i) | Pa(i). Therefore, the directed edges of an AMP CG are suitable for representing non-interference. On the other hand, the directed edges of a LWF CG are suitable for representing interference. To see it, note that if G is a LWF CG, then Ω_{i,j} = 0 for all i ∈ K and j ∈ Pa(K) \ Pa(i); for instance, (Ω_{K,Pa(K)})_{4,1} = 0 if the edge 1 → 4 is not in G. In other words, if there is a path in a LWF CG G from j to i through nodes in K, then (β_K)_{i,j} is not identically zero. Actually, (β_K)_{i,j} can be written as a sum of path weights over all such paths. This follows directly from Equation 3 and the result by Jones and West [6, Theorem 1], who show that (Ω_{K,K}^{-1})_{i,l} can be written as a sum of path weights over all the paths in G between l and i through some (but not necessarily all) nodes in K. Specifically,

(Ω_{K,K}^{-1})_{i,l} = ∑_{ρ ∈ π_{i,l}} (−1)^{|ρ|+1} ( ∏_{n=1}^{|ρ|−1} (Ω_{K,K})_{ρ_n,ρ_{n+1}} ) det((Ω_{K,K})_{\ρ}) / det(Ω_{K,K})    (5)

where π_{i,l} denotes the set of paths in G between i and l through nodes in K, |ρ| denotes the number of nodes in a path ρ, ρ_n denotes the n-th node in ρ, and (Ω_{K,K})_{\ρ} is the matrix with the rows and columns corresponding to the nodes in ρ omitted. Moreover, the determinant of a zero-dimensional matrix is taken to be 1. As a consequence, a LWF CG G does not impose zero restrictions on β_K, because K is connected by definition of chain component. Previous works on the use of LWF CGs to model interference (e. g., [12,21,22,27]) focus on developing methods for computing some causal effects of interest, and do not give many details on how interference is really being modeled. In the case of Gaussian LWF CGs, Equations 3 and 5 shed some light on this question. For instance, consider extending the mother-child example in Section 1 to a group of friends. The vaccine for individual j has a protective effect on individual j developing the disease, which in turn has a protective effect on her friends, which in turn has a protective effect on her friends' friends, and so on. Moreover, the protective effect also works in the reverse direction, i. e. the vaccine for the latter has a protective effect on individual j. In other words, the protective effect of the vaccine for individual j on individual i is the result of all the paths by which the vaccine's effect can reach individual i through the network of friends. This is exactly what Equations 3 and 5 tell us (assuming that the neighbors of an individual's disease node in G are her friends' disease nodes). Appendix B gives an alternative decomposition of the covariance in terms of path coefficients, which may give further insight into Equation 5. 
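To make the algebra concrete, the following small numeric sketch (ours; the matrices are arbitrary illustrations, not taken from the paper) evaluates Equation 3 for a two-child component: although Ω_{K,Pa(K)} contains zeros, every entry of β_K is non-zero, which is the algebraic signature of interference:

```python
# Numeric sketch of Equation 3: a zero in Omega_{K,Pa(K)} (no edge j -> i)
# does not give a zero in beta_K when the children are connected. The
# matrices below are arbitrary illustrations, not taken from the paper.
import numpy as np

omega_kk = np.array([[2.0, -0.8],      # children D1 - D2 (connected)
                     [-0.8, 2.0]])
omega_kpa = np.array([[-0.5, 0.0],     # V1 -> D1 present, V2 -> D1 absent
                      [0.0, -0.5]])    # V1 -> D2 absent, V2 -> D2 present

beta_k = -np.linalg.solve(omega_kk, omega_kpa)  # Equation 3
print(beta_k)
# All four entries are non-zero: V1 influences D2 (and V2 influences D1)
# through the undirected D1 - D2 edge, i.e. through interference.
```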
The discussion above suggests a way to model both interference and non-interference by unifying LWF and AMP CGs, i. e. by allowing →, ⇢ and − edges as long as no semidirected cycle exists. We call these new models unified chain graphs (UCGs). The lack of an edge − imposes a zero restriction on the elements of Ω_{K,K}, as in LWF and AMP CGs. The lack of an edge ⇢ in an UCG imposes a zero restriction on the elements of β_K, as in AMP CGs. Finally, the lack of an edge → imposes a zero restriction on the elements of Ω_{K,Pa(K)} but not on the elements of β_K, as in LWF CGs. Therefore, ⇢ edges may be used to model non-interference, whereas → edges may be used to model interference. For instance, the mother-child example in Section 1 may be modeled with the UCG in Figure 1 (c). Every UCG is a CG, but the opposite is not true due to the following requirement. We divide the parents of a set of nodes X in an UCG G into mothers and fathers as follows. The fathers of X are Fa(X) = {A | A → B is in G with B ∈ X}, and the mothers of X are Mo(X) = {A | A ⇢ B is in G with B ∈ X}. We require that Fa(K) ∩ Mo(K) = ∅ for all K ∈ Cc(G). Therefore, the lack of an edge ⇢ imposes a zero restriction on the elements of β_K corresponding to the mothers, and the lack of an edge → imposes a zero restriction on the elements of Ω_{K,Pa(K)} corresponding to the fathers. In other words, G imposes the following zero restrictions:

(Ω_{K,K})_{i,j} = 0 for all i, j ∈ K such that i − j is not in G    (6)

(β_K)_{i,j} = 0 for all i ∈ K and j ∈ Mo(K) such that j ⇢ i is not in G    (7)

(Ω_{K,Pa(K)})_{i,j} = 0 for all i ∈ K and j ∈ Fa(K) such that j → i is not in G    (8)

The reason why we require that Fa(K) ∩ Mo(K) = ∅ is to avoid imposing contradictory zero restrictions on β_K: e. g., the edge j → i excludes the edge j ⇢ i by definition of CG, which would imply that (β_K)_{i,j} is identically zero if j were also a mother of K, whereas j → i implies the opposite. 
Every UCG is a CG, but the opposite is not true due to the following requirement. We divide the parents of a set of nodes X in an UCG G into mothers and fathers as follows. The fathers of X are the parents of X via solid directed edges, and the mothers of X are the parents of X via dashed directed edges, i.e.

Fa(X) = {A | A → B is in G for some B ∈ X} and Mo(X) = {A | A ⇢ B is in G for some B ∈ X}.

We require that Fa(K) ∩ Mo(K) = ∅ for all K ∈ Cc(G). Therefore, the lack of an edge ⇢ imposes a zero restriction on the elements of β_K corresponding to the mothers, and the lack of an edge → imposes a zero restriction on the elements of Ω_{K,Pa(K)} corresponding to the fathers. In other words, G imposes the following zero restrictions:

(β_K)_{i,j} = 0 for all i ∈ K and j ∈ Mo(K) \ Mo(i), (6)

(Ω_{K,Pa(K)})_{i,j} = 0 for all i ∈ K and j ∈ Fa(K) \ Fa(i), (7)

(Ω_{K,K})_{i,j} = 0 for all i, j ∈ K with i ≠ j such that i − j is not in G. (8)

The reason why we require that Fa(K) ∩ Mo(K) = ∅ is to avoid imposing contradictory zero restrictions on β_K: e.g. the edge j → i excludes the edge j ⇢ i by definition of CG, and the lack of j ⇢ i would imply that (β_K)_{i,j} is identically zero, whereas j → i implies the opposite. In other words, without this constraint, UCGs would reduce to AMP CGs. The following lemma formalizes this statement.

Global Markov property

In this section, we present a separation criterion for UCGs. Given a route ρ in an UCG, C is a collider node in ρ if ρ has a subroute A ⇢ C ⇠ B or A ⇢ C − B; sections and collider sections of ρ are defined as for LWF CGs in Section 3. We say that ρ is Z-open with Z ⊆ V when (i) all the collider nodes in ρ are in Z, (ii) all the collider sections in ρ have some node in Z, and (iii) all the nodes that are outside the collider sections in ρ and are not collider nodes in ρ are outside Z. Let X, Y and Z denote three disjoint subsets of V. When there is no Z-open route in G between a node in X and a node in Y, we say that X is separated from Y given Z in G and denote it as X ⊥_G Y|Z. Note that this separation criterion unifies the criteria for LWF and AMP CGs reviewed in Section 3. Finally, we say that a probability distribution p satisfies the global Markov property with respect to an UCG G when X ⊥_p Y|Z for all X, Y, Z ⊆ V such that X ⊥_G Y|Z. The next theorem states the equivalence between the global Markov property and the zero restrictions associated with an UCG.

Theorem 3. A Gaussian distribution p satisfies Equations 1 and 6-8 with respect to an UCG G if and only if it satisfies the global Markov property with respect to G.

We have mentioned before that Jones and West [6] prove that the covariance between two nodes i and j can be written as a sum of path weights over the paths between i and j in a certain undirected graph (recall Equation 5 for the details). Wright [29] proves a similar result for DAGs. The following theorem generalizes both results.

Block-recursive, pairwise and local Markov properties

In this section, we present block-recursive, pairwise and local Markov properties for UCGs, and prove their equivalence to the global Markov property for Gaussian distributions. Equivalence means that every Gaussian distribution that satisfies any of these properties with respect to an UCG also satisfies the global Markov property with respect to the UCG, and vice versa. The relevance of these results stems from the fact that checking whether a distribution satisfies the global Markov property can now be performed more efficiently: we do not need to check that every global separation corresponds to a statistical independence in the distribution; it suffices to do so for the local (or pairwise or block-recursive) separations, which are typically considerably fewer. This is the approach taken by most learning algorithms based on hypothesis tests, such as the PC algorithm for DAGs [23] and its posterior extensions to LWF CGs [24] and AMP CGs [14]. The results in this section pave the way for developing a similar learning algorithm for UCGs, something that we postpone to a future article.

We say that a probability distribution p satisfies the block-recursive Markov property with respect to an UCG G if for any chain component K ∈ Cc(G):

(B1) i ⊥_p Nd(K) \ K \ Fa(K) \ Mo(i) | Fa(K) ∪ Mo(i) for all i ∈ K;

(B2) i ⊥_p Nd(K) \ K \ Fa(i) \ Mo(K) | K \ {i} ∪ Fa(i) ∪ Mo(K) for all i ∈ K;

(B3) i ⊥_p j | K \ {i, j} ∪ Pa(K) for all i, j ∈ K such that i − j is not in G.

Theorem 5. A Gaussian distribution p satisfies Equations 1 and 6-8 with respect to an UCG G if and only if it satisfies the block-recursive Markov property with respect to G.

We say that a probability distribution p satisfies the pairwise Markov property with respect to an UCG G if for any non-adjacent vertices i and j of G with i ∈ K and K ∈ Cc(G):

(P1) i ⊥_p j | Nd(K) \ K \ {j} if j ∈ Nd(K) \ K;

(P2) i ⊥_p j | Nd(K) \ {i, j} otherwise.

Theorem 6. The pairwise and block-recursive Markov properties are equivalent for graphoids.

We say that a probability distribution p satisfies the local Markov property with respect to an UCG G if for any i ∈ K with K ∈ Cc(G):

(L1) i ⊥_p Nd(i) \ K \ Fa(K) \ Mo(i) | Fa(K) ∪ Mo(i);

(L2) i ⊥_p Nd(i) \ {i} \ Pa(i) \ Ne(i) \ Mo(Ne(i)) | Pa(i) ∪ Ne(i) ∪ Mo(Ne(i)).

Theorem 7. The local and pairwise Markov properties are equivalent for Gaussian distributions.
Theorems 3 and 5-7 imply the following corollary, which summarizes our results.

Maximum likelihood parameter estimation

In this section, we introduce a procedure for computing maximum likelihood estimates (MLEs) of the parameters of an UCG G, i.e. the MLEs of the non-zero entries of (β_K)_{K,Mo(K)}, Ω_{K,K} and Ω_{K,Fa(K)} for every chain component K of G. Specifically, we adapt the procedure proposed by Drton and Eichler [4] for computing MLEs of the parameters of an AMP CG. The procedure combines generalized least squares and iterative proportional fitting. Suppose that we have access to a data matrix D whose column vectors are the data instances and whose rows are indexed by V. Moreover, let D_X denote the data over the variables X ⊆ V, i.e. the rows of D corresponding to X. Note that thanks to Equation 1, the MLEs of the parameters corresponding to each chain component K of G can be obtained separately. Moreover, recall from Equation 3 that (β_K)_{K,Fa(K)} is determined by Ω_{K,K} and Ω_{K,Fa(K)}. Therefore, we may compute the MLEs of the parameters corresponding to the component K by iterating between computing the MLEs Ω̂_{K,K} and Ω̂_{K,Fa(K)}, and thus (β̂_K)_{K,Fa(K)}, while fixing (β_K)_{K,Mo(K)}, and computing the MLE (β̂_K)_{K,Mo(K)} while fixing the remaining parameters. Specifically, we initialize (β_K)_{K,Mo(K)} to zero and then iterate through the following steps until convergence:

1. compute the MLEs Ω̂_{K,K} and Ω̂_{K,Fa(K)} given the current (β_K)_{K,Mo(K)};
2. set (β̂_K)_{K,Fa(K)} = −Ω̂_{K,K}⁻¹ Ω̂_{K,Fa(K)};
3. compute the MLE (β̂_K)_{K,Mo(K)} given Ω̂_{K,K}, Ω̂_{K,Fa(K)} and (β̂_K)_{K,Fa(K)}.

The first step corresponds to computing the MLEs of the parameters of a LWF CG and, thus, it is solved by running the iterative proportional fitting procedure as indicated in Lauritzen [7]. This procedure optimizes the likelihood function iteratively over different sections of the parameter space. Specifically, each iteration adjusts the covariance matrix for one clique marginal. The second step above is solved by Equation 3. The third step corresponds to computing the MLEs of the parameters of an AMP CG and, thus, it is solved analytically by Equation 13 in Drton and Eichler [4]. Note that to guarantee convergence to the MLEs, the first and third steps should be solved jointly. Therefore, our procedure is expected to converge to a local rather than a global maximum of the likelihood function. As Drton and Eichler [4] note, this is also the case for their procedure, upon which ours builds. As for convergence, note that our procedure consists of interleaving the iterative proportional fitting procedure in the first step and the analytical solution to generalized least squares in the third step. Drton and Eichler [4, Proposition 1] prove that each of these steps increases the likelihood function, which implies convergence since the likelihood function is bounded. See also Lauritzen [7, Theorem 5.4]. The experiments in the next section confirm that our procedure converges to satisfactory estimates within a few iterations.
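For illustration only, the following self-contained R sketch implements the alternating scheme above under the simplifying assumption that no structural zeros are imposed within Ω_{K,K}, Ω_{K,Fa(K)} or (β_K)_{K,Mo(K)}; in that special case step 1 has the closed form used below, whereas with structural zeros one would run iterative proportional fitting in step 1 and a constrained generalized least squares in step 3.

```r
# Sketch (not the authors' code): D_K, D_Fa and D_Mo are variables-by-samples
# data matrices for the children, fathers and mothers, respectively.
fit_component <- function(D_K, D_Fa, D_Mo, max_iter = 50, tol = 1e-8) {
  beta_mo <- matrix(0, nrow(D_K), nrow(D_Mo))  # initialize (beta_K)_{K,Mo(K)} to zero
  k  <- seq_len(nrow(D_K))
  fa <- nrow(D_K) + seq_len(nrow(D_Fa))
  for (it in seq_len(max_iter)) {
    R <- D_K - beta_mo %*% D_Mo                # remove the mothers' contribution
    # Step 1: with no structural zeros, the precision blocks come from the
    # sample covariance of (R, Fa) (n-1 denominator kept for brevity; the
    # exact MLE rescales by (n-1)/n).
    Omega <- solve(cov(t(rbind(R, D_Fa))))
    # Step 2: (beta_K)_{K,Fa(K)} via Equation 3.
    beta_fa <- -solve(Omega[k, k]) %*% Omega[k, fa]
    # Step 3: unconstrained least-squares update of (beta_K)_{K,Mo(K)}.
    Y <- D_K - beta_fa %*% D_Fa
    new_beta_mo <- Y %*% t(D_Mo) %*% solve(D_Mo %*% t(D_Mo))
    if (max(abs(new_beta_mo - beta_mo)) < tol) { beta_mo <- new_beta_mo; break }
    beta_mo <- new_beta_mo
  }
  list(beta_mo = beta_mo, beta_fa = beta_fa, Omega_KK = Omega[k, k])
}
```

The structural zeros are precisely what makes the general problem non-trivial: they turn step 1 into the clique-marginal adjustments of iterative proportional fitting and step 3 into a constrained generalized least squares problem, as described above.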
Experimental evaluation

First, we generate 1000 UCGs as follows. Each UCG consists of five mothers, five fathers and 10 children. The edges are sampled independently and with probability 0.2 each. If the edges sampled do not satisfy the following two constraints, the sampling process is repeated: (i) there must be an edge from every parent to some child, i.e. the parents are real parents, and (ii) the children must be connected by an undirected path, i.e. they form a chain component.

Then, we parameterize each of the 1000 UCGs generated above as follows. The 10 children are denoted by K, and the 10 parents by Pa(K). The non-zero elements of β_K corresponding to the mothers are sampled uniformly from the interval [−3, 3]. The non-zero elements of Ω_{K,K} and Ω_{K,Fa(K)} are sampled uniformly from the interval [−3, 3], with the exception of the diagonal elements of Ω_{K,K}, which are sampled uniformly from the interval [0, 30]. If the sampled Ω_{K,K} is not positive definite, then the sampling process is repeated. The reason why the diagonal elements are sampled from a wider interval is to avoid having to repeat the sampling process too many times. Finally, note that each of the 1000 parameterized UCGs generated above specifies one probability distribution p(K|Pa(K)).

The goal of our experiments is to evaluate the accuracy of the algorithm presented before in estimating the parameter values corresponding to p(K|Pa(K)) from a finite sample of p(K, Pa(K)). To generate these learning data, we sample first p(Pa(K)) and then p(K|Pa(K)). Each of the 1000 probability distributions p(Pa(K)) is constructed as follows. The off-diagonal elements of Ω_{Pa(K),Pa(K)} are sampled uniformly from the interval [−3, 3], whereas the diagonal elements are sampled uniformly from the interval [0, 30]. As before, we repeat the sampling process if the resulting matrix is not positive definite. Note that no element of Ω_{Pa(K),Pa(K)} is identically zero. For the experiments, we consider samples of size 500, 2500 and 5000.
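The rejection sampling of positive definite precision matrices described above can be sketched in R as follows (the dimension and the intervals mirror the experimental setup; this is an illustration, not the authors' code):

```r
# Draw a random precision matrix: off-diagonal entries from [-3, 3],
# diagonal entries from [0, 30], rejecting until positive definite.
sample_precision <- function(d, lo = -3, hi = 3, diag_hi = 30) {
  repeat {
    M <- matrix(0, d, d)
    M[lower.tri(M)] <- runif(d * (d - 1) / 2, lo, hi)
    M <- M + t(M)                    # symmetric off-diagonal part
    diag(M) <- runif(d, 0, diag_hi)  # wider interval keeps rejections rare
    if (all(eigen(M, symmetric = TRUE, only.values = TRUE)$values > 0)) return(M)
  }
}

Omega_KK <- sample_precision(10)     # ten children, as in the experiments
```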
For each of the UCGs and corresponding samples generated above, we run the parameter estimation procedure until the MLEs do not change in two consecutive iterations or 100 iterations are performed. We then compute the relative difference between each true parameter value θ and its MLE θ̂, which we define as |(θ − θ̂)/θ|. We also compute the residual difference, which we define as the residual with the parameter estimates minus the residual with the true parameter values. The procedure is implemented in R and the code is available at https://www.dropbox.com/s/b9vmqgf99da3qxm/UCGs.R?dl=0.

The results of the experiments are reported in Table 1. The difference between the quartiles Q1 and Q3 is small, which suggests that the column Median is a reliable summary of most of the runs. We therefore focus on this column for the rest of the analysis. Note, however, that some runs are exceptionally good and some others exceptionally bad, as indicated by the columns Min and Max. To avoid a bad run, one may consider a more sophisticated initialization of the parameter estimation procedure, e.g. multiple restarts. The sample size has a clear impact on the accuracy of the MLEs, as indicated by the decrease in relative difference: half of the MLEs are less than 27%, 12% and 8% away from the true values for the sample sizes 500, 2500 and 5000, respectively. The effect of the sample size on the accuracy of the MLEs can also be appreciated from the fact that the residual difference does not grow with the sample size. The parameters (β_K)_{K,Mo(K)} seem easier to estimate than (β_K)_{K,Fa(K)}, as indicated by the smaller relative difference of the former. This is not surprising, since the latter may accumulate the errors in the estimation of Ω_{K,K} and Ω_{K,Fa(K)} (recall Equation 3).

Next, we repeat the experiments with an edge probability of 0.5 instead of 0.2, in order to consider denser UCGs. The results of these experiments are reported in Table 2. They lead to essentially the same conclusions as before. When comparing the two tables, we can see that the MLE accuracy is slightly worse for the denser UCGs. This is expected because the denser UCGs have more parameters to estimate from the same amount of data. However, the residual difference is better for the denser UCGs. This is again expected because the denser UCGs impose fewer constraints. All in all, both tables show that the quartile Q3 of the residual difference is negative, which indicates that the MLEs induce a better fit of the data than the true parameter values. We therefore conclude that the proposed parameter estimation procedure works satisfactorily.

Finally, we conduct a sanity check aimed at evaluating the behavior of the proposed parameter estimation procedure when the UCG contains spurious or superfluous edges, i.e. edges whose associated parameters are zero and thus may be removed from the UCG. To this end, we repeat the previous experiments with a slight modification. As before, the elements of (β_K)_{K,Mo(K)}, Ω_{K,Fa(K)} and Ω_{K,K} associated with the edges in the UCG are sampled uniformly from the interval [−3, 3]. However, we now set 25% of these parameters to zero. We expect the estimates for these zeroed parameters to get closer to zero as the sample size grows. The results of these experiments with an edge probability of 0.5 are reported in Table 3. The results for an edge probability of 0.2 are similar. The first three rows of the table report the number of edges. Each edge has one associated parameter, and 25% of these parameters have been zeroed in the experiments. In the next rows, the table reports the absolute difference between the zeroed parameters and their estimates, i.e. the absolute values of the estimates. As expected, the larger the sample size, the closer to zero the MLEs of the zeroed parameters become: 75% of the MLEs of the zeroed parameters take a value smaller than 0.76, 0.33 and 0.24 for the sample sizes 500, 2500 and 5000, respectively. Note that these numbers are much lower for the zeroed (β_K)_{K,Mo(K)} parameters (specifically 0.06, 0.03 and 0.02) because, recall, our parameter estimation procedure initializes (β_K)_{K,Mo(K)} to zero. Table 3 also shows the residual difference, which is comparable to that in Table 2, suggesting that the existence of spurious edges does not hinder fitting the data. We therefore conclude that the parameter estimation procedure behaves as it should in this sanity check.

Causal inference

In this section, we show how to compute the effects of interventions in UCGs. Intervening on a set of variables X ⊆ V modifies the natural causal mechanism of X, as opposed to (passively) observing X. For simplicity, we only consider interventions that set X to constant values. We represent an intervention that sets X = x as do(X = x). Given a chain component K of an UCG G, we have that p(K|Pa(K)) = N(β_K Pa(K), Λ_K), as discussed at length in Section 3, or equivalently

K = β_K Pa(K) + ϵ_K with ϵ_K ∼ N(0, Λ_K).

We interpret the last equation as a structural equation, i.e. it defines the natural causal mechanism of the variables in K. Specifically, the natural causal mechanism of V_i ∈ K is given by the i-th row of this equation, i.e. V_i = (β_K)_i Pa(K) + ϵ_i. Note that all of these subroutes are part of the natural causal mechanism of V_i, which has been replaced, and, thus, they are inactive, i.e. they are not really {V_i}-open. The single-variable interventions described in the paragraph above can be generalized to sets by simply replacing the corresponding equation for each variable in the set. Graphically, we can represent the natural and interventional causal mechanisms of the variables in an UCG G by adding a new parent to each variable. We denote the resulting UCG by G′.
The new parent of a variable V_i is a variable F_{V_i} that has the same domain as V_i plus a state labeled idle: F_{V_i} = a represents the intervention do(V_i = a), whereas F_{V_i} = idle represents no intervention on V_i. In other words, F_{V_i} = a represents that the natural causal mechanism is inactive and the interventional causal mechanism is active, whereas F_{V_i} = idle represents the opposite. See Pearl [16, Section 3.2.2] for further details. In order to decide whether to augment G with F_{V_i} → V_i or F_{V_i} ⇢ V_i, we have to study the interventional causal mechanism. This is unlike previous works, where the value set by the intervention is all that matters. Specifically, if F_{V_i} = idle, then the interventional causal mechanism is inactive and, thus, it does not matter whether we add F_{V_i} → V_i or F_{V_i} ⇢ V_i. We may even decide not to add F_{V_i} at all. On the other hand, if F_{V_i} = a, then the interventional causal mechanism is active and, thus, the type of arrow added matters: if the interventional causal mechanism is such that the intervention delivered may affect (i.e., interfere with) the rest of the variables in K, then we add F_{V_i} → V_i; otherwise, we add F_{V_i} ⇢ V_i.

We illustrate this with the mother-child example from Section 1. An intervention that immunizes the mother against the disease (i.e., do(D_1 = 0)) may or may not protect the child (i.e., interfere with D_2), depending on how the intervention is delivered: it protects the child when the interventional causal mechanism is the intake of some medication (different from the vaccination V_1 but comparable), but it does not protect the child when the interventional causal mechanism is a gene therapy (different from the healthy carrier genotype G_1 but comparable). Then, the former case should be modeled as F_{D_1} → D_1, whereas the latter should be modeled as F_{D_1} ⇢ D_1.

As mentioned, our goal is to compute or identify a causal effect p(Y|do(X = x)) with the help of a given UCG. That is, we would like to leverage the UCG to transform the causal effect of interest into an expression that contains neither the do operator nor latent variables, so that it can be estimated from observational data. Pearl's do-calculus consists of three rules whose repeated application, together with standard probability manipulations, does the job for acyclic directed mixed graphs [16, Section 3.4]. We show next that the calculus carries over into UCGs. Specifically, the calculus consists of the following three rules:

- Rule 1 (insertion/deletion of observations);
- Rule 2 (intervention/observation exchange);
- Rule 3 (insertion/deletion of interventions).

Theorem 9. Rules 1-3 are sound for UCGs.

Corollary 10. Consider an intervention on a set of variables X in an UCG G. If the interventional causal mechanism of each variable V_i ∈ X implies interference (i.e., it is modeled by augmenting G with the edge F_{V_i} → V_i), then the causal effect p(Y|do(X = x)) is identifiable for any Y ⊆ V \ X.

Note that the corollary above implies that the causal effect of any intervention that involves interference is identifiable in a parameterized UCG. The parameter values may be provided by an expert or estimated from data as shown in Section 6. In the latter case, all the variables in the model are assumed to be measured in the data. When the model contains latent variables, we may still perform causal effect identification via rules 1-3. It is also worth mentioning that the corollary above is a generalization of causal effect identification in LWF CGs as proposed by Lauritzen and Richardson [8], which in turn is a generalization of causal effect identification in DAGs [16].
This may come as a surprise because undirected edges in UCGs represent interference relationships whereas, in the work by Lauritzen and Richardson [8], they represent dependencies in the equilibrium distribution of a dynamic system with feedback loops. However, it has been suggested that interference is nothing but dependencies in an equilibrium distribution [11,12,21].

Identifiability of LWF and AMP CGs

This section proves that identifiability of LWF and AMP CGs is possible when the error variables have equal variance. The error variable ϵ_A associated with a variable A ∈ V represents the unmodelled causes of A. Recall that, given a probability distribution p that is faithful to a LWF or AMP CG G, we can identify the Markov equivalence class of G (recall Theorem 1) from p by running, for instance, the learning algorithm developed by Studený [24] for LWF CGs and by Peña [14] for AMP CGs. We prove below that we can actually identify G itself from p if the error variables have equal variance. We discuss this assumption at the end of the section. Our result generalizes a similar result reported by Peters and Bühlmann [18] for DAGs.

Specifically, let G denote a LWF or AMP CG. Assume that the non-zero entries of β_K and Λ_K⁻¹ in Equation 2 have been selected at random. This implies that p is faithful to G with probability almost 1 by Peña [13, Theorems 1 and 2] and Levitz et al. [10, Theorem 6.1]. (Peña [13] proves this result for LWF CGs using a different parameterization than β_K and Λ_K⁻¹; however, there is a one-to-one mapping between both parameterizations by Equations 3 and 4, so his result applies to the parameterization used in this paper.) We therefore assume faithfulness hereinafter. Moreover, we rewrite Equation 2 as

K = β_K Pa(K) + ϵ_K (9)

in distribution, where ϵ_K ∼ N(0, Λ_K). For any V_i ∈ V, we have then that

V_i = (β_K)_i Pa(K) + ϵ_i (10)

in distribution, where K denotes the chain component of G that contains V_i, (β_K)_i denotes the row of β_K corresponding to V_i, and ϵ_i denotes an error variable representing the unmodelled causes of V_i. All such error variables are jointly denoted by ϵ, which is distributed according to N(0, Λ), where Λ is a block diagonal matrix whose blocks correspond to the covariance matrices Λ_K for all K ∈ Cc(G). Moreover, we assume that the errors ϵ_i have equal but unknown variance λ². Note that if the error variances are unequal and unknown but have the form Λ_{i,i} = λ_i² λ² for some known ratios λ_i², then we can satisfy the equal error variance assumption by rescaling each variable V_i by dividing it with λ_i, i.e. V_i → V_i/λ_i. This implies that the linear coefficients and the errors get rescaled as β_{i,j} → β_{i,j} λ_j/λ_i and ϵ_i → ϵ_i/λ_i. The following lemma proves that, after the rescaling, the error covariance matrix is still positive definite and keeps all the previous (in)dependencies, which implies that, after the rescaling, p is still faithful to G with probability almost 1.

Lemma 11. Consider the rescaling ϵ_i → ϵ_i/λ_i for all i. Then, the error covariance matrix represents the same independences before and after the rescaling. Moreover, the error covariance matrix is positive definite after the rescaling if and only if it was so before the rescaling.
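A quick numerical check in R (toy numbers, not from the paper) of the rescaling rule β_{i,j} → β_{i,j} λ_j/λ_i stated above:

```r
# Rescaling V_i -> V_i / lambda_i turns a coefficient matrix B into
# D B D^{-1} with D = diag(1 / lambda), entrywise B[i,j] * lambda[j] / lambda[i].
B <- matrix(c(0,   0,   0,
              0.8, 0,   0,
              0.3, 0.5, 0), nrow = 3, byrow = TRUE)  # a lower-triangular (DAG) example
lambda <- c(1, 2, 0.5)
D <- diag(1 / lambda)
B_rescaled <- D %*% B %*% solve(D)
all.equal(B_rescaled,
          B * outer(lambda, lambda, function(li, lj) lj / li))  # TRUE
```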
We are now ready to state formally the main result of this section.

Theorem 12. Let p be a Gaussian distribution generated by Equation 10 with equal error variances. Then, G is identifiable from p.

Note that the theorem above implies that two LWF CGs or AMP CGs that represent the same separations are not Markov equivalent under the constraint of equal error variances, i.e. Theorem 1 does not hold under this constraint. The suitability of the equal error variances assumption should be assessed on a per-domain basis. However, it may not be unrealistic to assume that it holds when the variables correspond to a relatively homogeneous set of individuals. For instance, in the case of a contagious disease, the error variable represents the unmodelled causes of an individual developing the disease, e.g. environmental factors. We may assume that these factors are the same for all the individuals, given their homogeneity. Therefore, we may assume equal error variances. We conjecture that the theorem above also holds for UCGs. However, a formal proof of this result requires first a characterization of Markov equivalence for UCGs, something that we postpone to a future article.

Discussion

LWF and AMP CGs are typically used to represent independence models. However, they can also be used to represent causal models. For instance, LWF CGs have been shown to be suitable for representing the equilibrium distributions of dynamic systems with feedback loops [8]. AMP CGs have been shown to be suitable for representing causal linear models with additive Gaussian noise [15]. LWF CGs have been extended into segregated graphs, which have been shown to be suitable for representing causal models with interference [21]. In this paper, we have shown how to combine LWF and AMP CGs to represent causal models of domains with both interference and non-interference relationships. Moreover, we have defined global, local and pairwise Markov properties for the new models, which we have coined unified chain graphs (UCGs), and shown that these properties are equivalent for Gaussian distributions. We have also proposed and evaluated an algorithm for computing MLEs of the parameters of an UCG. Finally, we have shown how to perform causal inference in UCGs.

It is worth mentioning that we are not the first to unify LWF and AMP CGs. Lauritzen and Sadeghi [9] recently proposed a new class of graphical models that unify many existing classes, including LWF and AMP CGs. Specifically, they consider acyclic graphs with four types of edges: directed edges, bidirected edges, and solid and dotted undirected edges. Several edges between any pair of nodes are allowed. If their graphs only contain directed and solid undirected edges, then they coincide with LWF CGs. If they only contain directed and dotted undirected edges, then they coincide with AMP CGs. The authors develop global and pairwise Markov properties for the new models, and prove their equivalence. However, the pairwise Markov property only applies to graphs that have no dotted undirected edges. So, it does not apply to AMP CGs or superclasses thereof. Other differences with our work are that no local Markov property is proposed, no parameterization or parameter learning algorithm is proposed, and no causal interpretation is given. However, the main difference with our work is that their models cannot accommodate both interference and non-interference relationships, because they rely on a single type of directed edge. For instance, our mother-child example may be modeled with a graph that contains the edges V_1 → D_1, G_1 → D_1 and D_1 − D_2 or D_1 ··· D_2 or both.
However, this graph cannot represent that intervening on V_1 must have an effect on D_2 while intervening on G_1 must not, because the paths from V_1 and G_1 to D_2 contain the same types of edges in the same order, namely a directed edge followed by a solid or dotted undirected edge. This leads us to conclude that the models proposed by Lauritzen and Sadeghi [9] do not subsume UCGs.

Finally, we would like to mention some questions that we have not studied in this paper but which we will. We plan to extend UCGs to categorical random variables. When dealing with continuous random variables, assuming that these are jointly Gaussian simplifies the problem by restricting the relations to be linear. However, in our opinion, the main simplification that the Gaussian assumption brings is that checking whether an independence holds reduces to checking whether a linear coefficient or an entry in a precision matrix is identically zero. Discrete UCGs will not enjoy this advantage. In any case, finding a suitable and amenable parameterization of discrete UCGs is of utmost importance. Moreover, Drton [3] has shown that discrete LWF CGs are smooth models but discrete AMP CGs are not. Non-smoothness implies that some standard asymptotic distribution results (e.g., normal distribution limits for MLEs and χ²-limits for likelihood ratios) may not hold for the model at hand. Therefore, we need to investigate whether non-smoothness hinders discrete UCGs from representing interference and non-interference relationships. Other questions that we plan to study are: (i) characterizing Markov equivalent UCGs, (ii) developing structure learning algorithms based on the local and pairwise Markov properties, (iii) extending UCGs to model confounding via bidirected edges, and (iv) making use of the linearity of the relations for causal effect identification in UCGs along the lines in Pearl [16, Chapter 5].

A pure collider route is a route all of whose intermediate nodes are collider nodes or in a collider section of the route.

Lemma 14. Given an UCG G and three disjoint subsets X, Y and Z of V such that X ⊥_G Y|Z, there is no pure collider route in G_{X∪Y∪Z} between some A′ ∈ X′ and B′ ∈ Y′.

Proof of Lemma 13. Assume to the contrary that there is a pure collider route ρ in G_{X∪Y∪Z} between some A′ ∈ X′ and B′ ∈ Y′. For the same reason, C_2 ∈ Y′ or C_2 ∈ Z. However, any of the four combinations contradicts that X′ ⊥_{G_{X∪Y∪Z}} Y′|Z. Moreover, C_1, …, C_n ∉ Z, to avoid contradicting that X′ ⊥_{G_{X∪Y∪Z}} Y′|Z. However, this implies that some node in X′ is adjacent to some node in Y′, which contradicts that X′ ⊥_{G_{X∪Y∪Z}} Y′|Z.

The following observation follows from the lemma above and will be used later. Let p(W) and p′(W) denote the distribution of W before and after setting (β_L)_{i,j} = 0 in the lemma, i.e. N(0, Σ) and N(0, Σ′). Then, the lemma implies that p(W \ {i}) = p′(W \ {i}).

Assume that i, j ∈ U. Then, Equation 11 applies, and we show below that its first (respectively second) condition holds if there are no pure collider routes in G between i and j through nodes in U (respectively L).

- By the induction hypothesis, (Λ_U⁻¹)_{i,j} = 0 if there are no pure collider routes in G between i and j through nodes in U. These conditions rule out the existence of pure collider routes in G between i and j through nodes in L.

Now, assume that i ∈ U and j ∈ L. Then, by Equation 11, either case holds if there are no pure collider routes in G between i and j. Finally, assume that i, j ∈ L. Then, Equation 11 applies again.
Finally, for any non-adjacent vertices i, j ∈ K, clearly i ⊥_G j|K \ {i, j} ∪ Pa(K) and thus i ⊥_p j|K \ {i, j} ∪ Pa(K), which implies Equation 8.

We now prove the only if part. Consider three disjoint subsets X, Y and Z of V such that X ⊥_G Y|Z. Let K_1, …, K_n denote the topologically sorted chain components of G_{X∪Y∪Z}. Note that G_{K_1∪⋯∪K_n} and G_{X∪Y∪Z} only differ in that the latter may not have all the edges in the former. Note also that K_1, …, K_n are chain components of G, too. Consider a topological ordering of the chain components of G, and let Q_1, …, Q_m denote the components that precede K_n in the ordering, besides K_1, …, K_{n−1}. Note that the edges in G from any Q_i to any K_j must be of the type ⇢ because, otherwise, Q_i would be a component of G_{X∪Y∪Z}. Therefore, G_{Q_1∪⋯∪Q_m∪K_1∪⋯∪K_n} and G_{Q_1∪⋯∪Q_m} ∪ G_{X∪Y∪Z} only differ in that the latter may not have all the edges in the former. In other words, the latter may impose additional zero restrictions on the elements of some β_{K_j} corresponding to the mothers. Consider adding such additional restrictions to the marginal distribution p(Q_1, …, Q_m, K_1, …, K_n) obtained from p via Equations 1 and 2, i.e. consider setting the corresponding elements of β_{K_j} to zero (recall that β_{K_j} are such that the mean vector of p(K_j|Pa(K_j)) is a linear function of Pa(K_j) with coefficients β_{K_j}). Call the resulting distribution p′(Q_1, …, Q_m, K_1, …, K_n). Finally, recall again that G_{Q_1∪⋯∪Q_m∪K_1∪⋯∪K_n} and G_{Q_1∪⋯∪Q_m} ∪ G_{X∪Y∪Z} only differ in that the latter may not have all the edges in the former. Note also that every node in X ∪ Y ∪ Z has the same mothers in G_{Q_1∪⋯∪Q_m∪K_1∪⋯∪K_n} and G_{Q_1∪⋯∪Q_m} ∪ G_{X∪Y∪Z}. Then, p(X ∪ Y ∪ Z) = p′(X ∪ Y ∪ Z) by Lemma 15 and, thus, X ⊥_p Y|Z if and only if X ⊥_{p′} Y|Z. Now, recall from Lemma 14 that X ⊥_G Y|Z implies that there is no pure collider route in G_{X∪Y∪Z} between any vertices A′ ∈ X′ and B′ ∈ Y′. Then, there is no such route either in G_{Q_1∪⋯∪Q_m} ∪ G_{X∪Y∪Z}, because this UCG has no edges between the nodes in V(G_{Q_1∪⋯∪Q_m}) and the nodes in V(G_{X∪Y∪Z}). This implies X ⊥_{p′} Y|Z by contraction and decomposition which, as shown, implies X ⊥_p Y|Z.

By the induction hypothesis, (Λ_U)_{k,j} can be written as a sum of products of weights over the edges of the open paths between k and j in G. Moreover, as discussed in Section 3 in relation to Equation 5, (β_L)_{i,k} can be written as a sum of products of weights over the edges of the paths from k to i through nodes in L. Since the latter paths all start with a directed edge out of k, the previous observations together imply the desired result. Finally, as discussed in Section 3, (Λ_L)_{i,j} can be written as a sum of products of weights over the edges of the paths between i and j through nodes in L. By the induction hypothesis, (Λ_U)_{k,l} can be written as a sum of products of weights over the edges of the open paths between k and l in G. Again, as discussed in Section 3, (β_L)_{i,k} (respectively, (β_L)_{j,l}) can be written as a sum of products of weights over the edges of the paths from k to i (respectively, from l to j) through nodes in L. Since the latter paths all start with a directed edge out of k (respectively, out of l), the previous observations together imply the desired result.

We now prove the only if part.
Note that if p satisfies Equations 1 and 6-8 with respect to G, then it satisfies the global Markov property with respect to G by Theorem 3. Now, it is easy to verify that the block-recursive property holds. Specifically, the separations i ⊥_G Nd(K) \ K \ Fa(K) \ Mo(i)|Fa(K) ∪ Mo(i) for all i ∈ K with K ∈ Cc(G) imply B1 by the global Markov property. Likewise, the separations i ⊥_G Nd(K) \ K \ Fa(i) \ Mo(K)|K \ {i} ∪ Fa(i) ∪ Mo(K) for all i ∈ K with K ∈ Cc(G) imply B2 by the global Markov property. Finally, the separations i ⊥_G j|K \ {i, j} ∪ Pa(K) for all i, j ∈ K with K ∈ Cc(G) and such that i − j is not in G imply B3 by the global Markov property.

Proof of Theorem 6. Note that Nd(i) = Nd(K). Then, the properties P1 and P2 imply respectively B1 and B2 by repeated application of intersection. Similarly, P2 implies i ⊥_p Nd(K) \ {i} \ Pa(K) \ Ne(i)|Pa(K) ∪ Ne(i) by repeated application of intersection, which implies i ⊥_p K \ {i} \ Ne(i)|Pa(K) ∪ Ne(i) by decomposition, which implies B3 by weak union. Finally, the property B1 implies P1 by weak union. Likewise, B2 implies P2 by weak union if j ∉ K. Assume now that j ∈ K and note that B2 implies i ⊥_p Nd(K) \ K \ Pa(K)|K \ {i} ∪ Pa(K) by weak union. Note also that B3 implies that p(K|Pa(K)) satisfies the global Markov property with respect to G_K by Lauritzen [7, Theorem 3.7] and, thus, i ⊥_p K \ {i} \ Ne(i)|Pa(K) ∪ Ne(i). Then, i ⊥_p Nd(K) \ {i} \ Pa(K) \ Ne(i)|Pa(K) ∪ Ne(i) by contraction, which implies P2 by weak union.

Proof of Theorem 7. The properties L1 and L2 imply respectively P1 and P2 by weak union. Note also that the pairwise Markov property implies the global property by Theorems 3, 5 and 6. Now, it is easy to verify that the local property holds. Specifically, the separations i ⊥_G Nd(i) \ K \ Fa(K) \ Mo(i)|Fa(K) ∪ Mo(i) for all i ∈ K with K ∈ Cc(G) imply L1 by the global Markov property. Likewise, the separations i ⊥_G Nd(i) \ {i} \ Pa(i) \ Ne(i) \ Mo(Ne(i))|Pa(i) ∪ Ne(i) ∪ Mo(Ne(i)) for all i ∈ K with K ∈ Cc(G) imply L2 by the global Markov property.

Proof of Theorem 9. Recall from the main text that p(V \ X|do(X)) satisfies the global Markov property with respect to G_{V\X}. Then, it also satisfies the global Markov property with respect to (G_{V\X})′, because this UCG is a supergraph of G_{V\X}. Then, rule 1 holds. Actually, G_{V\X} and (G_{V\X})′ represent the same separations over V \ X, because all the variables F_{V_i} are observed (i.e., F_{V_i} = a or F_{V_i} = idle) and, thus, they do not open new routes. For the proof of rule 2, assume that Z and Y are singletons. The generalization to sets of variables is trivial. First, assume that F_Z ⇢ Z is in (G_{V\X})′. The antecedent of rule 2 implies a condition on all the W-open routes in (G_{V\X})′.

Then, there is an edge Y → L that is in G but not in G′. Let Q denote all the parents of L in G except Y. Then, we have from G that L = β_Q Q + β_Y Y + ϵ_L by Equation 10. Note that β_Y ≠ 0 by the faithfulness assumption. Define L* = L|_{Q=q} and Y* = Y|_{Q=q} in distribution. Since ϵ_L ⊥_p Q ∪ {Y} by construction, we have from G, by [19, Lemma 2], that var(L*) > var(ϵ_L) = λ². However, we have from G′ that var(L*) ≤ λ² [18, Lemma A1]. This is a contradiction and, thus, N = N′. Note that the undirected edges between the nodes in N must be the same in G and G′, because Markov equivalent CGs have the same adjacencies, as shown by Frydenberg [5, Theorem 5.6] for LWF CGs and by Andersson et al. [1, Theorem 5] for AMP CGs.
For the same reason, the directed edges from N to V \ N must be the same in G and G′. Since G_{V\N} and G′_{V\N} generate p(V \ N|N = n) via Equation 10, we can repeat the reasoning above replacing G, G′ and p by G_{V\N}, G′_{V\N} and p(V \ N|N = n). This allows us to conclude that the nodes with no parents in G_{V\N} and their corresponding edges coincide with those in G′_{V\N}. By continuing with this process, we can conclude that G = G′.

Here, the first equality is due to Jones and West [6, Theorem 1], and the third is due to Lauritzen [7, p. 130]. The variance ratio in the lemma is an inflation factor (≥ 1) that accounts for the overreduction of the variance of ρ_n when conditioning on the rest of the variables in K. In other words, conditioning on the rest of the variables not only blocks all the rest of the paths from ρ_{n−1} to ρ_n but also overreduces the variance of ρ_n, which biases the causal effect of ρ_{n−1} on ρ_n represented by β_{ρ_n,ρ_{n−1}|K\{ρ_n,ρ_{n−1}}}. The inflation factor remedies this. See Pearl [17, Section 3] for some related phenomena (e.g., selection bias) in acyclic directed mixed graphs.
Trends and Determinants of Up‑to‑date Status with Colorectal Cancer Screening in Tennessee, 2002‑2008
Background: Screening rates for colorectal cancer (CRC) are increasing nationwide, including in Tennessee (TN); however, the up-to-date status of the screened population is unknown. The objective of this study is to determine the trends and characteristics of TN adults who were up to date with CRC screening during 2002-2008. Methods: We examined data from the TN Behavioral Risk Factor Surveillance System for 2002, 2004, 2006 and 2008 to estimate the proportion of respondents aged 50 years and above who were up to date with CRC screening, defined as an annual home fecal occult blood test and/or a sigmoidoscopy or colonoscopy in the past 5 years. We identified trends in up-to-date status in all eligible respondents. Using multivariable logistic regression models, we delineated key characteristics of respondents who were up to date. Results: During 2002-2008, the proportion of respondents up to date with CRC screening increased from 49% in 2002 to 55% in 2006 and then decreased to 46% in 2008. The screening rates were higher among adults aged 65-74 years, those with some college education, those with annual household income ≥$35,000 and those with health-care access. In 2008, the respondents who were not up to date with CRC screening included those with no health-care coverage (adjusted odds ratio [OR] 0.46, 95% confidence interval [CI] 0.33-0.63), those aged 50-54 years (OR 0.62, 95% CI 0.46-0.82) and those with annual household income <$25,000 (OR 0.65, 95% CI 0.52-0.82). Conclusions: The proportion of TN adults who are up to date with CRC screening is increasing, but not across all socio-demographic subgroups. The results identify specific subgroups to be targeted by screening programs, along with continued efforts to educate the public and providers about the importance of CRC screening.

INTRODUCTION

Colorectal cancer (CRC), defined as a neoplasm of the colon and rectum, contributes to significant morbidity and mortality in the United States (US). It ranks second among the most commonly diagnosed cancers and causes of cancer death among older adults in the US. [1,2] In 2008, 142,950 people (73,183 men and 69,767 women) in the US were diagnosed with CRC and 52,857 people (26,933 men and 25,924 women) died from the disease. [3]

Among all existing prevention strategies, the most effective strategy for reducing morbidity and mortality from cancer is screening. [4,5] Screening tests identify individuals with precancerous lesions, including adenomatous polyps, that are asymptomatic and amenable to cure at an early stage, thereby preventing them from progressing to invasive cancer. In addition, screening for CRC has been identified as highly impactful and cost-effective in the general population. [6][7][8][9] It has been estimated that if all adults aged 50 years and above were screened for CRC regularly, approximately 10,000 additional deaths could be prevented annually at an expenditure of $11,900 per life year. [10] In comparison with other prevention strategies, such as risk factor reduction and increased diagnostic and treatment measures, several modeling studies demonstrated screening for CRC to be the most effective strategy, with an impact greater than the others. [11,12] The key to reducing the incidence and mortality of CRC is regular screening, beginning at 50 years of age.
In March 2008, the American Cancer Society, the US Multi-Society Task Force on CRC and the American College of Radiology recommended that all adults 50 years and older be screened for CRC regularly. [13][14][15] In October 2008, the US Preventive Services Task Force (USPSTF) updated its 2002 recommendations for CRC screening to include only adults aged 50-75 years. [16] Routine CRC screening is not recommended in adults aged 76 years and above, except on an individual basis. [4] Multiple modalities of CRC screening tests have been recommended: an annual fecal occult blood test (FOBT); flexible sigmoidoscopy (FS) or double-contrast barium enema every 5 years; a combination of FS every 5 years with FOBT every 3 years; or colonoscopy every 10 years.

Despite strong effectiveness, expert group recommendations and multiple screening modalities, CRC screening rates remain far below the rates of other screening procedures, such as mammography for breast cancer, prostate-specific antigen screening for prostate cancer and Pap smear screening for cervical cancer. [17,18] The screening rates for CRC were low during the 1990s; however, recent reports indicated a moderate increase in screening rates during the 2000s, with rates currently leveling off. [19] During 2002-2008, the percentage of adults who reported FOBT screening within the past 12 months or lower endoscopy within the past 10 years increased from 53.8% in 2002 to 64.2% in 2008. [20] This signifies that the incidence and mortality rates for CRC are decreasing as screening rates increase nationwide.

Similar patterns were identified in Tennessee (TN), but not at a similar rate. In 2008, the age-adjusted incidence rate for CRC in both males and females was higher in TN (47.6/100,000) than in the US overall (45.5/100,000). Moreover, the TN mortality rate due to CRC in both males and females was 18.7/100,000, higher than the national average of 16.7/100,000. Several studies have been conducted to estimate CRC screening rates in TN by gender, [21] race, including African Americans, [22] health literacy, [23] and response to colonoscopy; [24] however, to date, no study has been conducted to identify the trends of up-to-date status with CRC screening among Tennesseans. Identifying trends and factors associated with up-to-date status with CRC screening among Tennesseans will assist public health professionals and health-care providers in earlier detection of cancer, reduce the incidence and mortality from the disease and improve quality of life by providing support and resources. Moreover, disparities across demographic subgroups continue to play a vital role in screening for cancers, especially among certain racial/ethnic minority populations, those without health insurance or health-care access, those with lower household income and those with less education, and these disparities need to be evaluated. [20,[25][26][27] Therefore, it is important to identify those populations who are less up to date with CRC screening and in need of support and resources, in order to improve the performance of CRC screening and reduce the incidence and mortality from the cancer. In this study, we used the TN Behavioral Risk Factor Surveillance System (BRFSS), a state-representative data source, not only to identify trends in up-to-date status with CRC screening in TN adults, but also to identify key factors associated with such status, so that scarce resources can be directed toward those populations in need.
METHODS

We used the TN BRFSS to identify trends and key factors associated with up-to-date status with CRC screening in TN. The BRFSS is a multistage, random-digit-dialing, state-based telephone health survey of adult US residents 18 years and older that collects information on risk behaviors, clinical preventive health practices and health-care access, primarily related to chronic diseases and injury. [28,29] The BRFSS survey questionnaire consists of approximately 80 core questions with additional optional modules for topics including CRC screening. [30] Individual states have the option to supplement these additional modules based on the assessment and data needs of their respective states.

Measures

During the 4 survey years, the interviewers asked TN BRFSS participants four questions related to their CRC screening status. Participants were asked whether they had ever been screened for CRC, either with sigmoidoscopy/colonoscopy or a home FOBT, and if so, when they received their screening [Table 1].

[Table 1. TN BRFSS colorectal cancer screening questions and response options]
• How long has it been since you had your last blood stool test using a home kit? (within the past year; within the past 2 years; within the past 3 years; within the past 5 years; 5 or more years ago; don't know/not sure; refused)
• Sigmoidoscopy and colonoscopy are exams in which a tube is inserted in the rectum to view the colon for signs of cancer or other health problems. Have you ever had either of these exams? (yes; no; don't know/not sure; refused)
• For a sigmoidoscopy, a flexible tube is inserted into the rectum to look for problems. A colonoscopy is similar, but uses a longer tube, and you are usually given medication through a needle in your arm to make you sleepy and told to have someone else drive you home after the test. Was your most recent exam a sigmoidoscopy or a colonoscopy? (sigmoidoscopy; colonoscopy; don't know/not sure; refused)
• How long has it been since you had your last sigmoidoscopy or colonoscopy? (within the past year; within the past 2 years; within the past 3 years; within the past 5 years; within the past 10 years; 10 or more years ago; don't know/not sure; refused)

In 1999, the endoscopy questions were revised to reflect the evidence regarding colonoscopy and proctoscopy; therefore, participants were asked about their screening with "sigmoidoscopy/colonoscopy" instead of "sigmoidoscopy/proctoscopy." Furthermore, in 2008, a new question was added to the BRFSS questionnaire to differentiate between the sigmoidoscopy and colonoscopy tests that the survey participants underwent. This question was added to reflect the new CRC screening guidelines, which identify either sigmoidoscopy during the past 5 years or colonoscopy during the past 10 years. [14] In this study, we defined up-to-date status with CRC screening for those individuals who were screened with a home FOBT in the past 12 months and/or a sigmoidoscopy or colonoscopy in the past 5 years. Although the updated screening guidelines in 2008 restricted the age category to 50-75 years, we included all survey respondents aged 50 years and above for uniformity in the data analysis. Moreover, we restricted colonoscopy screening data to the past 5 years for uniformity in the data analysis, although the updated guidelines allow a colonoscopy screening test within the past 10 years. The Institutional Review Board at East Tennessee State University approved the research study.

Data analysis

For each year, 2002, 2004, 2006 and 2008, we calculated the proportion of respondents who were up to date with screening for CRC, along with socio-demographic characteristics of the respondents including age, gender, race, education and annual household income. Those participants who either did not respond or who responded "do not know/not sure" or "refused" to the questions [Table 1] were not included. In concordance with the screening guidelines, the responses of survey participants aged 49 and younger were dropped from the study. To identify key factors of up-to-date status with CRC screening, we conducted multivariable logistic regression analyses for the 4 years separately. Adjusted odds ratios (ORs) along with 95% confidence intervals (CIs) are reported. A 2-sided 5% significance level was used for all statistical inferences. SAS 9.2 (SAS Institute, Cary, NC, USA) was used for data analysis.
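For illustration, the up-to-date indicator defined in the Measures subsection can be recoded as follows in R (the data frame and field names here are hypothetical, not the actual BRFSS variable names):

```r
# Hypothetical example records; fobt_when/scope_when hold the response options.
brfss <- data.frame(
  age        = c(52, 67, 58),
  fobt_when  = c("within past year", "5 or more years ago", NA),
  scope_when = c(NA, "within past 5 years", "within past 10 years")
)
within5 <- c("within past year", "within past 2 years",
             "within past 3 years", "within past 5 years")
# Up to date: aged 50+ with a home FOBT in the past 12 months and/or a
# sigmoidoscopy/colonoscopy in the past 5 years.
brfss$up_to_date <- with(brfss, age >= 50 &
  ((!is.na(fobt_when)  & fobt_when == "within past year") |
   (!is.na(scope_when) & scope_when %in% within5)))
brfss$up_to_date  # TRUE TRUE FALSE
```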
RESULTS

Table 2 presents the proportion of all respondents who were up to date with CRC screening in TN, i.e. the proportion of survey respondents 50 years old or older who reported a home FOBT in the past 12 months and/or a sigmoidoscopy/colonoscopy in the past 5 years. During 2002-2006, up-to-date status increased across subgroups, including all age groups (50 years and above), Caucasians and African Americans, those with any type of education, those with household income below $50,000 and those having access to health care. Subsequently, the proportion of adults up to date with CRC screening decreased during 2006-2008, similar to the trend in the overall study population. In addition, for those who did not have health insurance and those with household income $50,000 and above, the trend in up-to-date status was negative, then positive and finally negative during 2002-2008. Males were more likely to be up to date with CRC screening than females, except during 2006. Those aged 65-74 years, those having more than college-level education, those with household income ≥$35,000 and those having access to health care had higher rates of up-to-date status.

DISCUSSION

We found that the proportions of TN BRFSS survey respondents with up-to-date screening status for CRC were below the national rates. In 2002, 52% of US adults aged 50-75 years were up to date with screening, defined as FOBT in the past year or a lower endoscopy in the past 10 years, [1] compared to 48.6% of adults in TN. Similarly, in 2008, approximately 64% of adults aged 50-75 years were up to date with CRC screening nationwide, [1] in comparison to 46% in TN. Socio-economic differences may partly explain this gap: while nationally 16% of adults had annual household incomes below $15,000 and 12.4% lived below the poverty level, approximately 24%, 19% and 13.5% of Tennesseans, respectively, had less than high school education, had annual household income below $15,000 or lived below the poverty level. [31] Although the up-to-date screening rates for CRC in TN were below the national rates, we found that there was an increase in the percentage of TN respondents with up-to-date screening for CRC during 2002-2006.
This could be attributed to significant nationwide and state health promotion efforts to encourage screening tests for CRC among the adult population in TN. [32,33] The subsequent decrease during 2006-2008 could be attributed to the updated screening guidelines, [14] changes in BRFSS sampling procedures by regional health departments and changes to TN Medicaid reform in 2005. [34]

We found higher rates of up-to-date status with CRC screening among individuals with high levels of education, high annual household income or health insurance. These findings regarding the relationships between socio-demographic characteristics and screening patterns were consistent with results from prior studies. [35][36][37] In 2008, the TN BRFSS respondents aged 50-54 years had low rates of up-to-date screening compared to those aged 75 years and above, and individuals with low annual household incomes had lower rates of up-to-date screening status compared to those with higher levels of annual household income. Similarly, rates of up-to-date screening status among the insured or those with access to health care were almost twice as high as among those with no health-care access. These findings indicate that current public health education and awareness programs to promote CRC screening may not be reaching these subgroup populations, which may subsequently allow these adults to progress to invasive cancer. The lack of education and promotion initiatives is also compounded by poor access to health care or insurance, or lack of income to pay for screening tests.

The differences in up-to-date status with CRC screening across socio-demographic groups reflect the disparities stated above and should be addressed at the individual, community and policy levels. At the individual level, it is important that all Tennesseans receive at least some education, which will not only improve their quality of life, but also contribute to their annual household income and increased access to health care. At the community and policy levels, public health professionals, community workers and policy makers should effectively communicate the importance of screening, campaign for increasing education and awareness and advocate for state-funded resources for all unemployed or uneducated Tennesseans, thereby increasing screening rates. Thus, collective action by everyone, such as the "TN Cancer Coalition Network," is necessary to reduce the disparities among populations, increase up-to-date status with CRC screening and reduce the burden of CRC in TN. [38]

Non-Caucasians, especially African Americans, are diagnosed at a more advanced stage of CRC than Caucasians and have higher mortality rates than Caucasians. These disparities may be attributed in part to low rates of screening. [39,40] However, in this study, we found that in 2008 non-Caucasians were more likely to be up to date with CRC screening than Caucasians, as identified in previous studies. [41] Although the finding is encouraging, with more than a quarter of African Americans being up to date with screening, the lack of significance and the fluctuations during 2000-2008 could be attributed to sampling errors, changes in the definition of up-to-date CRC screening status, and family history of cancer or other risk factors that might have increased this group's perception of the benefits of screening, thereby contributing to increased screening rates.
Instead, efforts to educate and promote screening for CRC among non-Caucasians, especially African Americans, should continue, as these subgroup populations have higher rates of diagnosis and mortality at advanced stages of cancer. In addition, although the difference was not significant, we identified that the proportion of males who were up to date with CRC screening was higher than that of females, similar to earlier studies that reported a higher prevalence of screening tests among males than females. [42][43][44] Moreover, greater use of FOBT has been reported by females, while men favored endoscopy more often. [45] It is noteworthy that several factors need to be considered while addressing the gender gap in screening rates, in terms of preference, complications and efficacy of a screening modality, effective health communication, level of comfort, frequency, time and cost; future health education and promotion efforts could thus be targeted to deal with such factors while addressing the gender differences.

The study is subject to merits and limitations, and to our knowledge it is the first investigation to identify trends and characteristics of TN adults associated with up-to-date status with CRC screening. We utilized the TN BRFSS survey data, a state-representative data source, to conduct this study; therefore, the study findings can be generalized to the entire state population. Although the study has significant strengths and draws important conclusions, limitations do exist. First, the updated screening guidelines for CRC in 2008, which identify individuals screened with sigmoidoscopy within the past 5 years or colonoscopy within the past 10 years, along with changes in TennCare reforms and sampling procedures, may have resulted in lower screening rates in 2008. The extent to which these changes may have affected the results remains unclear and needs further evaluation. The definition of up-to-date screening status as either colonoscopy or sigmoidoscopy within the past 5 years may underestimate the actual percentage of those who are up to date, since individuals who had a colonoscopy within the past 6-10 years are in compliance with current guidelines. Moreover, we cannot distinguish from the BRFSS questions and responses whether a CRC test was used for a diagnostic or a screening purpose, possibly resulting in under- or overestimation of the actual screening rates. Second, the TN BRFSS survey is cross-sectional in nature; hence, no causal relationships can be established. Third, the TN BRFSS survey is a telephone-based survey; therefore, responses are limited to individuals who owned home telephones. The survey response rates are low, and the respondents may have answered differently than those who either did not own a telephone or chose not to participate, a source of non-response bias. Another limitation is recall bias, as the responses of survey participants are self-reported and may not accurately reflect the actual screening status; however, previous studies identified fair-to-good agreement between self-reports and medical records. [46,47] Finally, other influencing factors or confounders, such as transportation, accessibility to health education and screening initiatives, physician recommendations for CRC screening, family history of CRC and patient compliance with sigmoidoscopy/colonoscopy screening procedures, were not taken into consideration, which may affect the accuracy of these up-to-date screening status estimates in TN adults.
CONCLUSION

Although the CRC screening rates in TN are lower than the national rates, the percentage of TN adults who are up to date with CRC screening is increasing. While this is an encouraging finding, many adults aged 50 years and above are still not up to date with current guidelines, and some socio-demographic groups, such as the uninsured, those aged 50-54 years and those with household income less than $25,000, have particularly low rates of up-to-date status with CRC screening. Therefore, there is a need for public health awareness programs that promote screening to the public, especially targeted toward the subgroup populations with low percentages of up-to-date respondents, and for public health education for health-care providers to promote and encourage screening for CRC, thereby improving quality of life among adults in TN.
Historical School Buildings. A Multi-Criteria Approach for Urban Sustainable Projects
Abstract
It is recognized, in Europe and elsewhere, that sustainable urban intervention policies need to be implemented, based also on the recovery of existing public real estate assets. In Italy, schools are a significant part of public property. At this time (2019), many buildings used for teaching need to be redeveloped, both from a structural and plant-engineering point of view and with regard to the management of the spaces available for teaching and social activities. Although there have been many attempts by the legislator to regulate the modus operandi in the school construction field, there is clearly no unified regulatory system in which the technical and functional-managerial aspects of the same school are considered together. On this basis, this study proposes a multi-criteria evaluation protocol to support intervention planning for the redevelopment of existing school buildings. The study defines an evaluation framework with which the design priorities can be established in accordance with the building's features and the community's needs. The evaluation framework is tested on a renewal project for a school building located in the historic center of Rome (Italy).
Introduction
The urban policies of many countries, both European and not, are characterized by sustainable intervention practices [1,2]. Among these, some specifically concern the preservation of ecosystems, while others concern the conservation of territorial infrastructure and existing buildings [3-7]. Since 2007, with the Leipzig Charter [8], the European Union Member States have promoted development policies based on integrated planning actions, mainly for the regeneration and upgrading of both buildings and urban areas [9,10]. Many European strategies and plans jointly consider the aspects linked to the physical regeneration of the territory and those regarding its economic, social and environmental system [11,12]. In Italy, Legislative Decree 102/2014 [13], which implements the European Directive 2012/27/EU, outlines a strategic reference framework specifically aimed at promoting urban renewal projects on public real estate assets in order to improve their energy efficiency [14-17], also through the use of renewable resources present in nature and of building systems and technological solutions capable of producing a low environmental impact [18-22]. With regard to the Italian context, the stock of public buildings consists of 1,056,404 units divided into 11 homogeneous clusters [23].
The clusters with more than 10,000 buildings include offices. Among the legislative measures concerning school building are:
• Law n. 23/1996 (Regulations on school building) [32];
• Legislative Decree n. 106/2009 (concerning the protection of health and safety in the workplace) [33];
• the Agreement of 18 November 2010, according to Article 9 of Legislative Decree n. 281 of 27 August 1997, between the Government, the regions, the autonomous provinces of Trento and Bolzano, the provinces and the municipalities (Guidelines for the prevention of indoor risk factors for allergies and asthma in schools) [34];
• the Ministerial Decree (Ministry of Infrastructure and Transport) of 17 January 2018 (Technical Regulations for Construction, in particular Chapter 8, which contains the anti-seismic safety parameters to be followed by existing structures) [35];
• Law n. 107/2015 (Reform of the national education and training system and delegation for the reorganization of current legislative provisions) [36];
• Law n. 81/2019 (art. 4-bis: Adaptation of school buildings to fire regulations) [37] (for a more specific description see Section 2).
Significant among the guidelines that directly affect the construction and redevelopment of school buildings are those relating to:
• the internal architecture of schools, issued by the Education, University and Research Ministry after hearing the Unified Conference (MD 11 April 2013), containing the technical framework standards with the minimum and maximum indices of urban, construction and teaching functionality, also with reference to technologies for efficiency, energy saving and production from renewable energy sources, to ensure appropriate and homogeneous reference design guidelines across the national territory [38];
• the recommendations for the energy upgrading of school buildings, produced by the Italian Ente per le Nuove tecnologie, l'Energia e l'Ambiente (ENEA), aimed at disseminating the knowledge and operational tools underlying the energy upgrading of buildings for training, following an approach updated to the latest regulations and the current possibilities for economic incentives [39];
• the environmental quality of the external and internal spaces of European schools, for a healthy school environment in Europe: in 2015 the pilot project SINPHONIE (Indoor Pollution in Schools and Health-European Observatory) [40], funded by the European Parliament and supported by the European Commission, investigated the quality of air inside and outside school environments and the effects that pollutants can have on the health of school users. Through the SINPHONIE pilot project, it was possible to establish methodologies and define standardized tools for characterizing school interiors and assessing health risks for students and staff, in order to maintain a healthy and livable school environment [41-43].
Each of these legislative acts takes into account one of the many aspects to be considered with reference to the life cycle of the school building, following mono-dimensional logics that treat the design and construction components of a transformation or redevelopment project in a disjointed manner, preferring a problem-solving approach to one of integrated, problem-based solving [44-46]. The many investment programs aimed at upgrading the existing school heritage and building new schools, promoted at the national and/or regional level since the second half of the last century, have followed the same problem-solving logic.
The allocations for interventions on the school building stock have been directed to finance the construction, renovation, safety, anti-seismic adaptation, energy efficiency and innovation of school buildings [47]. From 2015, however, according to Art. 10 of Legislative Decree n. 104/2013 (now Law n. 128/2013), the three-year national planning of school building interventions was introduced. The substantial funding allocated to school building in recent years has favoured a modus operandi that solves specific problems, often without taking into account the mutual relations that may exist between the multiple technical and regulatory aspects of a transformation/redevelopment intervention. Given such a complex and articulated technical-regulatory framework, carrying out actions on the same school building at different times, in a non-integrated way, may mean intervening in a manner that is not functionally and aesthetically congruent, with the need to interrupt or move educational activities and a consequent increase in costs [48,49]. In the present work, therefore, a multi-criteria evaluation protocol for the definition of integrated action strategies regarding new or existing school buildings is proposed. With reference to school buildings already in use, the implementation of the proposed methodology is aimed at identifying, on the one hand, the need for and degree of functional and structural adaptation of the school and, on the other, the re-modelling of the internal and external spaces to be used by the community. This takes into account both the technical and structural features of the building and the socio-economic characteristics of the urban context. Using the proposed evaluation methodology, the types of intervention to be adopted in order to bring the building into conformity with the regulations in force and to satisfy people's needs are defined. The validity and flexibility of the proposed method are tested by implementing the evaluation approach on a redevelopment project concerning a school building located in the historic center of Rome (Italy). In the following, Section 2 describes the technical-regulatory material for the execution of initiatives on school buildings, with particular reference to the Italian case, Section 3 defines the phases of the multi-criteria methodology, and Section 4 illustrates the case study. Finally, conclusions are drawn, and the potential applications of the proposed instrument and future research prospects are outlined.
Premise
In order to highlight the multi-dimensional character of initiatives aimed at enhancing and/or modifying existing schools in an integrated manner, it is necessary to take into account, in the planning and design phase, multiple aspects according to a multi-criteria logic [50-54]. For the recovery and/or enhancement of existing school buildings, it is advisable to verify at the planning and design stage the correct sizing of the teaching spaces in compliance with the minimum per-capita area endowments, compliance with the parameters of seismic safety and fire prevention, the removal of any architectural barriers present, the energy efficiency of the building, and the healthiness of the indoor spaces.
These checks allow one to define the objectives to be pursued and the actions to be carried out for the requalification/enhancement of the building. In addition to technical and regulatory considerations, it is also important to highlight the primary needs of the urban context to be answered, both through the reorganization of the existing educational offer and by allocating some areas to extra-curricular activities. In the following, the technical and regulatory aspects of the Italian context are taken into account. Preferring a logic of integration between the design aspects related to the same intervention, the proposed multi-criteria evaluation approach includes each of them in order to establish the priorities for action in compliance with the current regulatory system and the technical-structural characteristics of the building, as well as the economic and social conditions of the urban context in which the school is located. A careful survey of the state of the building allows us to establish its degree of transformability from the interventions that need to be carried out for its adaptation, in compliance with the technical and legislative provisions in force and the community's needs. The main aspects to be considered in order to upgrade the existing building in an integrated manner are listed below. We do not go into the details of each one, especially those of a technical nature, because each aspect requires a specific sector study by experienced professionals (such as a study on the energy performance of the building, and/or the rehabilitation of its structure). The proposed methodology supports the programming of sustainable interventions in existing school buildings, as well as access to the funds for the redevelopment and renovation of schools, taking into account multiple aspects in an integrated way.
Regulatory Overview and In-Depth Analysis of the Main Italian Legislative Measures for School Construction
As previously specified, in Italy, since the 1970s, there have been many legislative measures in the field of school building aimed at providing planning and/or design indications for the execution of interventions on existing schools, preferring a problem-solving-based approach rather than one of integration and compensation between the multiple effects deriving from the same settlement transformation intervention. The main reference standard documents currently in force (2019) for school building are:
• the Ministerial Decree (MD) of 18 December 1975, which illustrates the Technical Standards for school buildings, including the educational, building and urban planning functionality indices to be observed in the design and verification of interventions on existing schools. These indices vary according to the type of study courses and the morphological-urban characteristics of the context in which the school building is located.
The same MD also distinguishes between indoor spaces (units for teaching and special activities, sanitary facilities, indoor gyms, administrative offices, classrooms for common events) and outdoor ones (outdoor sports fields and parking areas);
• Law n. 23/1996 (School Building Regulations), aimed at the construction of a national information system on existing school buildings, resulting from the collection of data on the state of maintenance, the safety level of existing school structures and the rate of usability of the same by the community in extra-educational time. With the above law, the Anagrafe dell'Edilizia Scolastica (AES), through which interventions on school buildings are managed at the national and regional level, is established.
These standards can be used as a reference both for existing buildings and for new constructions. From each one, the technical-regulatory objectives to be pursued can be defined, and the corresponding parameters (Evaluation Criteria) can be identified with which it is possible to express the degree of achievement of each objective. Table 1 shows some of the main regulatory references in the field of Italian school building. For each of them, the basic purpose is described and the corresponding evaluation criterion is specified, with which one can express the level of conformity between the actual state of the school building and the legal requirements to be complied with. With respect to the list of normative references described above, Table 1 takes into consideration those containing provisions for the Italian school field that are still in force. Among them, Law n. 23/1996 (Norms on school building) underlines the necessity of carrying out a planning phase of interventions aimed at the conservation of the existing building, considering the survey of the characteristics of the school structures in use and of the urban context. Through this law, the Anagrafe Regionale dell'Edilizia Scolastica (ARES) is established, with the aim of systematizing the information system on the regional school property assets. Each region and autonomous province independently manages access to ARES, and the provinces and municipalities are responsible for compiling, updating and implementing the data collection forms for each individual school building. This is done through direct surveys and inspections conducted at the school of interest. The module for collecting data on school buildings contains the elements needed to acquire the information ascertained through the completion of two questionnaires: (i) the Questionario Edificio (QE), aimed at collecting elements to evaluate the school in use quantitatively and qualitatively; (ii) the Questionario istituzione scolastica, aimed at collecting information on the individual school units, i.e., whether or not there are several school units in the same building, of what type, and how they are organized. The data that can be deduced from (i) on the school building are summarized in the Appendix of the Questionario Edificio in ARES (Law 23/1996). For a more detailed specification, please refer to the Instruction Manual of the QE in ARES for the compilation of the school building stock survey sheets.
In order to have a clear view of the current suitability of the school building for its functions when new work is to be carried out, particularly relevant data include the most recent works on the structure (Point 9.0 - Subsequent transformations). For each intervention, it is important to establish the intervention class and the year of execution. The classes of intervention referred to in the survey, aimed at determining the state of maintenance and, consequently, the adequacy of the building to the functions for which it is intended, both from the technological point of view and in terms of amount of space, are:
• extension and/or super-elevation: a complex of works that have the effect of enlarging an existing building, creating additional spaces or volumes. The extension can be achieved by "horizontal addition" (involving an increase in coverage), by "vertical addition" (i.e., elevation), or by a combination of both expansion and elevation;
• building renovation: interventions aimed at transforming building organisms through a systematic set of works that can lead to a building organism that is wholly or partly different. These interventions include the restoration or replacement of certain elements of the building, or the elimination, modification and insertion of new elements and systems;
• integral and conservative restoration: interventions aimed at preserving the building organism and ensuring its functionality through a systematic set of works that, in compliance with the typological, formal and structural elements of the building organism, allow a use compatible with them. These interventions include the consolidation, restoration and renewal of the building's constituent elements, the insertion of ancillary elements and systems required by the needs of use, and the elimination of elements extraneous to the building organism;
• extraordinary maintenance: works aimed at renovating and replacing parts, including structural ones, of buildings, as well as the construction and integration of sanitary and technological services, while respecting the volumes and surfaces of the individual building units.
The state of maintenance of the building works and systems (Point 13.0 - State of conservation) is evaluated qualitatively by attributing a score according to the following classification: 6 = does not require any intervention; 5 = requires partial maintenance; 4 = requires complete maintenance; 3 = requires replacement or partial refurbishment; 2 = requires replacement or complete refurbishment; 1 = requires ex-novo installation; X = the system is not necessary.
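As a compact illustration of how the Point 13.0 conservation scale could be encoded for automated processing, a minimal Python sketch follows; the class and helper names are our own, not ARES identifiers.

```python
from enum import IntEnum

class ConservationState(IntEnum):
    """ARES conservation scores (Point 13.0 - State of conservation)."""
    EX_NOVO_INSTALLATION = 1      # requires ex-novo installation
    COMPLETE_REFURBISHMENT = 2    # requires replacement or complete refurbishment
    PARTIAL_REFURBISHMENT = 3     # requires replacement or partial refurbishment
    COMPLETE_MAINTENANCE = 4      # requires complete maintenance
    PARTIAL_MAINTENANCE = 5       # requires partial maintenance
    NO_INTERVENTION = 6           # does not require any intervention

NOT_NEEDED = "X"  # 'the system is not necessary': non-numeric code kept apart

def needs_work(state) -> bool:
    """Any numeric score below 6 signals that some intervention is required."""
    return state != NOT_NEEDED and int(state) < ConservationState.NO_INTERVENTION

print(needs_work(ConservationState.PARTIAL_MAINTENANCE))  # True
print(needs_work(NOT_NEEDED))                             # False
```

Keeping the non-numeric "X" code outside the enumeration avoids mixing a "not applicable" flag with the ordinal maintenance scale.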
The functional and dimensional characteristics of the rooms include the location, functional destination and size of the rooms on each floor of the building, their shape (when the floor plan of a room clearly differs from a regular rectangular shape, making it difficult to carry out the educational activities), natural and artificial lighting, hygienic conditions (dependent not on poor cleaning but on a physical deficiency of the inner shell), and the acoustic conditions of each space. On the basis of the observations and specifications made above, it is clear that, in order to plan interventions on existing school buildings, it is necessary to comply with a number of regulatory requirements and to obtain information on the building to be recovered. In general, the information concerning the school structure can be of various types. Each type regards a specific aspect of the school, both technical and organizational-functional, especially regarding the way in which the available space is used. On the basis of the information system proposed by Law 23/1996, it can be observed, however, that the reconnaissance phase concerning the state of the school building does not include the acquisition of data on the urban context. With reference to the Constitutional Court (judgments 62/2013, 284/2016 and, lastly, 71/2018), within the discipline on school buildings several subjects intersect, such as territorial governance, energy and civil protection. Thus, the proposed evaluation methodology, as specified in the following paragraph, aims to consider jointly the regulatory aspects, the features of the school building, and the characteristics of the territory. Some of the information illustrated in Table 2 is identified from the data list contained in the Questionario Edificio. For each item, an alternative reference document to the QE is indicated, from which the data relating to the information to be quantified can be extrapolated. This is done with a view to creating an information system useful for the planning and execution of projects that respect the characteristics of the building to be recovered, and that can also satisfy the citizens' needs in view of the distinctive characteristics of the local economy, expressed in terms of existing services and those potentially to be settled in the area. In order to establish what further activities can be offered to the community by taking advantage of the internal spaces and the external ones connected to the school, it is necessary to identify the users' class to which the type of service provided is allocated (as specified in the following Section 3.1.2).
Evaluation Framework
The proposed evaluation method seeks to verify the technical, regulatory and management conditions of existing school buildings (especially with regard to the use of the available space for educational activities) in order to plan transformation and/or conservation activities, observing not only the regulatory requirements in force and the physical characteristics of the building, but also the needs of the community. The method consists of two steps: (a) a knowledge phase; (b) an evaluation phase. In the first phase, the reference legislation is analyzed and technical-management information on the school is collected. In the second phase, the regulatory requirements are verified and the main methods of intervention are defined, with regard to both the physical and management system of the building and the services that the school can offer to the community. Figure 1 shows the outline of the proposed evaluation methodology and highlights the mutual relations between each of its sub-phases. The diagram in Figure 1 graphically illustrates how the physical and functional characteristics of school buildings are checked and evaluated in accordance with the reference regulations, as well as how interventions compatible with the community's existing needs are programmed. Compatibility and consistency assessments provide useful information on the types of interventions to be implemented for the renovation/recovery of the building. Each step (knowledge phase and evaluation phase) is made up of specific sub-phases, analysed in the following.
Knowledge Phase
3.1.1. Collection of Data to Describe the School of Interest from a Historical, Technical (Structural, Technological, Plant Engineering) and Architectural Point of View, According to Its Own Training Plan and the Active Extra-Didactic Services Offered to the Community
As already highlighted in Section 2.1, it is through the ARES, established by Law n. 23/1996, that the verification and collection of information about the consistency (surface and volume) and the management system of the school spaces take place, also for the purpose of planning interventions compatible with the building and the training system in force. In many cases, this database is still incomplete, inaccurate and not always up to date. It is therefore necessary to verify the data contained in the database on the school building under evaluation through inspections and field surveys, in order to obtain more up-to-date and complete data to be used in the implementation of the proposed methodology. Taking into account the data contained in the QE of ARES, the main information to be considered when implementing the procedure concerns the geographical location and morphological characteristics of the building (year of construction, architectural layout, overall dimensions, etc.), the type of training offered and the total number of students enrolled both in the last year of activity and in previous years, the surface consistencies of the spaces (internal and external) used for teaching and not, the safety conditions (earthquake-proof, fireproof, hygienic-sanitary and environmental) of the school, and the types of services offered to students and/or to people not attending the school, during ordinary teaching hours and not. Table 2 specifies, for each type of information, the reference source from which to find and quantify the data of interest, even in the event that the descriptive sheet of the school in question is not present in the ARES information system. For each type of data, the usefulness (expressed in terms of objectives to be pursued) is also illustrated.
Table 2. Types of information to be collected for describing the state of the school.
Information Type | Reference Source | Usefulness
Geographical location and morphology of the building | - | -
Surface consistencies of the internal and external spaces | - | Knowledge of the surface area of each environment inside and/or outside the school is necessary to verify the per-capita endowment that must be guaranteed to each student
Types of services offered (didactic and extra) | School Self-Assessment Report (SAR) | The specification of the services offered by the school during extra-curricular hours shows how the school is used
Figure 1. Diagram of the proposed evaluation methodology for the characterization of the types of interventions to be carried out on existing school buildings.
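As a sketch of how the information in Table 2 might be organized for processing, the following dataclass groups the main fields named in the text; the field names, coordinates and surface figures are our own illustrative shorthand, not QE/ARES identifiers or surveyed values.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SchoolRecord:
    """Container for the knowledge-phase data listed in Table 2."""
    name: str
    latitude: float
    longitude: float
    year_of_construction: int
    students_enrolled: Dict[int, int] = field(default_factory=dict)     # year -> count
    indoor_surfaces_m2: Dict[str, float] = field(default_factory=dict)  # space -> m2
    outdoor_surfaces_m2: Dict[str, float] = field(default_factory=dict)
    services_offered: List[str] = field(default_factory=list)           # didactic and extra

    def surface_per_student(self, space: str, year: int) -> float:
        """Per-capita endowment, to be checked against the MD 18 December 1975 indices."""
        return self.indoor_surfaces_m2[space] / self.students_enrolled[year]

# Hypothetical example record (coordinates and surfaces are placeholders):
school = SchoolRecord("Example high school", 41.908, 12.489, 1908,
                      students_enrolled={2019: 919},
                      indoor_surfaces_m2={"classrooms": 1800.0})
print(round(school.surface_per_student("classrooms", 2019), 2))  # m2 per student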
3.1.2. Socio-Economic Analysis of the Urban Context
With the aim of carrying out interventions that can also satisfy the community's needs for social aggregation, it is appropriate to examine the urban context of reference in terms of services for the population. This is done by analyzing the market conditions, in terms of demand and supply of services, that characterize the territory in which the school assumes a polarizing function. To do this, it is necessary to demarcate the territorial area of interest in which the demand and supply of services characterizing the market of the reference urban context are to be detected and quantified. The criteria for identifying the analysis area can concern the composition and characteristics (economic, social, etc.) of the local population, the economic-productive system of the territory, the socio-cultural apparatus of the place, or the morphology of the urban fabric of which the school is a part. On the basis of morphological aspects, the field of investigation can coincide either with the perimeter of a single district, or a part of it, in which the school can assume a catalytic and barycentric function with respect to the evolutionary dynamics of the surrounding urban fabric, or with larger areas in the case of schools with sites spread across the city and distant from each other. This also depends in part on the demographic dimension and territorial extension of the municipality and of the parts of it in which the school structure is located. The definition of the gravitational field can therefore be carried out on the basis of data relating to: settlement type; socio-demographic information (age, sex, nationality, educational qualification/level, social class, employment, income, etc.); geographical information (region/province/municipality, urban/suburban/rural area, city size, population, climate, etc.); psychographic information (lifestyle, habits, etc.); and historical, cultural and productive information. Operationally, the quantification of these indicators and, in particular, of the number and location of the services present in the urban environment, can be carried out by identifying a cluster of circular analysis areas centred at the point where the school of interest is located and with a given radius. Appendix C of Circular n. 425/1967 specifies the radii of influence, for each school level, to be considered in order to identify the territorial area containing the catchment area of the school building in question. Depending on the territorial scope of the survey thus identified and the type of supply/demand characterizing the cluster in which the school falls, the types of services offered by the school institution are compared with those present in the territory. Within the cluster analysis, the types of services considered are those illustrated in the Ministerial Decree of 14 April 2013 and include libraries, commercial activities, accommodation facilities, gyms, bars and restaurants, and spaces for the community.
3.2.1. Evaluations of Consistency According to the Technical and Regulatory Requirements to Be Complied with at the Design Stage and the Actual State of the School
From the identification and collection of data, both on the building to be redeveloped and on the urban context in which the school is located, it is necessary to proceed to the verification of congruence between the actual state of the school and the provisions of the reference law. Specifically, on the basis of the surface areas measured for each space intended for teaching (frontal and laboratory), the minimum surface area per student for each type of environment (classroom, laboratory, gym) is verified on each floor of the building, in compliance with the minimum regulatory targets to be respected.
The value of the per-capita surface area to be considered during the verification phase varies with the school's training offer, as regulated by the Ministerial Decree of 18 December 1975; in compliance with the provisions of the law on school building, the level of compliance of the school's functional, seismic, fireproof and hygienic-sanitary system with the safety and use conditions to be guaranteed is assessed. By means of an evaluation index (L_i), the degree of adequacy of the spatial-functional (L_f), structural (L_s), fireproof (L_a) and hygienic-sanitary (L_i-s) systems of the school building to the requirements expressed in the i-th reference standard is qualitatively measured, as well as the level of correlation between the services currently present in the school and those found in the territorial area of investigation (L_ser). For each aspect, the corresponding L_i is measured qualitatively by assigning a score (p_i) according to the scale of values from 1 to 6 used in filling in the ARES questionnaire (6 = does not require any intervention; 5 = requires partial maintenance; 4 = requires complete maintenance; 3 = requires partial replacement or renovation; 2 = requires complete replacement or renovation; 1 = requires ex-novo installation). On the basis of this scale, the score attributed to the L_i parameter for each aspect is a function of the greater or lesser level of adaptation of the state (spatial, physical, functional, plant engineering, sanitation, environmental, structural) of the school to the reference regulatory requirements, evaluated according to a suitable technical-regulatory criterion (C_i) (see Table 2), and of the degree of correspondence between the types of services (S_i) currently present in the school and those found in the urban context in which the building to be renovated is located. Using an algebraic-linear formulation, the L_i parameter can thus be expressed as a function of the scores p_i assigned with respect to the criteria C_i and the services S_i. In the following, for each L_i concerning the structural, plant engineering, sanitary, environmental and spatial-functional aspects of the building and the services currently present in the school, the corresponding scoring system is illustrated according to the scale of values from 1 to 6. For each L_i, especially those referring to the technical-regulatory aspects, the measurement parameters considered are such that a qualitative evaluation approach can be used. For the evaluation of sector-specific technical and plant engineering aspects, which must be taken into account when the planned interventions on the existing building are implemented, a judgment expressed through more detailed design drawings, drawn up by technical professionals in the fields of seismic adjustment, energy and plant engineering and providing quantitative information, is mandatory. In the case of this work, aimed at providing an evaluation methodology to support the definition and planning of sustainable projects compatible with the physical-spatial apparatus of the school building, also with a view to encouraging a more correct completion of the procedures for obtaining public funding, a qualitative evaluation methodology is proposed.
The use of a reference parameter L allows us to express the level of correspondence between the actual state of the school building and the regulatory requirements to be complied with at the design stage. In particular, for each evaluation index (a code sketch of these scoring mechanisms follows the list):
a) the value of the parameter L_f is assigned on the basis of the increase or decrease of the surface index resulting from the direct survey of the school spaces (internal and external) (I_f) with respect to the legal parameter I_f*. Figure 2 shows the extremes of the incremental and decremental intervals (∆I_f), expressed as percentages, defined starting from the I_f* value, and the average L_f score to be assigned to the i-th space according to the corresponding ∆I_f,i.
b) With regard to the structural aspect, the corresponding evaluation index (L_s) is measured according to the number of rooms, used for teaching or not, on each floor of the school building that at the time of the on-site inspection are unusable and pose a risk to the safety of school users. Figure 3 shows the interval extremes, expressed as percentages, established on the basis of the number of currently unusable environments (N_ai) relative to the total number of spaces (N_tot) on each school level, and the average score to be assigned, on the 1-6 value scale, in descending order according to the number of spaces found to be inaccessible at the time of the inspection.
c) As far as fire safety is concerned, the corresponding evaluation index (L_a) is measured taking into account the legal obligation (Law 81/2019) to provide each environment with appropriate fire protection devices (e.g., fire extinguishers) with particular performance characteristics. In particular, the attribution of a high or low numerical value to the L_a index can be related to the frequency (in percentage terms) of spaces on each floor of the school building in which the presence, or absence, of fire extinguishers or other fire-risk prevention devices is found, compared to the total number of rooms on the same floor. Figure 4 shows the reference diagram for assigning the score to the parameter L_a as a function of the frequency of rooms without fire-fighting devices (N_e) compared to the total number (N_tot) of spaces on the i-th floor.
d) With reference to the hygienic-sanitary aspect, it is possible to refer to the presence of surface condensation inside the spaces specifically intended for teaching activities. Similar to the evaluation indexes described above, the value assumed by L_i-s is related to the number of rooms on each floor in which forms of surface condensation are found (N_s), compared to the total spaces (N_tot) on the i-th floor. Figure 5 shows the scale of scores (from 1 to 6) and the corresponding intervals of the measurement parameter considered (N_s/N_tot), expressed as percentages.
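The exact interval limits of Figures 2-5 are not reproduced in the text, so the thresholds in the sketch below are illustrative placeholders; the code shows only the two scoring mechanisms the indexes share: a) maps the signed deviation of the surveyed surface index I_f from the legal value I_f* onto the 1-6 scale, while b)-d) map the share of problematic rooms (N_ai, N_e or N_s) over N_tot on a floor.

```python
def score_Lf(If: float, If_star: float) -> int:
    """1-6 score for the spatial-functional index L_f from the signed relative
    deviation of the surveyed index If from the legal parameter If*.
    Band limits are illustrative placeholders, not the intervals of Figure 2."""
    delta = (If - If_star) / If_star
    for lower_bound, score in [(-0.50, 1), (-0.30, 2), (-0.15, 3), (-0.05, 4), (0.00, 5)]:
        if delta < lower_bound:
            return score
    return 6  # the endowment meets or exceeds the legal parameter

def score_by_ratio(n_bad: int, n_tot: int) -> int:
    """Shared mechanism of L_s, L_a and L_i-s: the larger the share of
    unusable (N_ai), unequipped (N_e) or condensation-affected (N_s) rooms
    over N_tot on a floor, the lower the score. Placeholder bands again."""
    if n_tot <= 0:
        raise ValueError("a floor must contain at least one room")
    ratio = n_bad / n_tot
    for upper_bound, score in [(0.00, 6), (0.10, 5), (0.25, 4), (0.50, 3), (0.75, 2)]:
        if ratio <= upper_bound:
            return score
    return 1  # more than 75% of the rooms on the floor are affected

print(score_Lf(If=1.71, If_star=1.80))    # 5% shortfall -> 5
print(score_by_ratio(n_bad=2, n_tot=12))  # L_s example: ~17% unusable -> 4
```

The qualitative reading of both helpers matches the ARES convention: 6 means no intervention is required, 1 calls for ex-novo work.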
With reference, instead, to the evaluation of the level of congruence between the services currently offered through the use of the internal spaces of the building and those present in the analysis buffer (with a radius of 1000 m), a qualitative score from 1 to 6 is attributed to the corresponding evaluation index (L_ser), according to the number of similar services found in the urban area of interest and their relative distance (within the 1000 m analysis buffer) measured from the point where the building is located. Figure 6 shows the scale of values from 1 to 6 according to the distance of the i-th service from the school. The maximum distance is assumed to be 1000 m, as indicated in Appendix C of Circular n. 425/1967.
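Along the same lines, a hedged sketch of the distance-based attribution of L_ser follows; since Figure 6 is not reproduced in the text, both the direction of the scale (a closer comparable service yielding a higher correlation score) and the even banding of the 1000 m buffer are our assumptions.

```python
from typing import List

def score_Lser(distances_m: List[float], buffer_m: float = 1000.0) -> int:
    """Illustrative L_ser: level of correlation between a service offered by
    the school and the similar services georeferenced around it. Assumptions
    (ours): the nearest similar service drives the score, and six evenly
    spaced distance bands stand in for the Figure 6 scale."""
    inside = [d for d in distances_m if 0 <= d <= buffer_m]
    if not inside:
        return 1  # no comparable service within the 1000 m analysis buffer
    band = buffer_m / 6
    return max(1, 6 - int(min(inside) // band))

print(score_Lser([250.0, 900.0]))  # nearest similar service at 250 m -> 5
```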
Identification of the Methods of Action and Types of Interventions to Be Implemented in the Event of Non-Compliance with the Minimum Regulatory Requirements Regarding Security and Physical-Functional Management of the School Space
Following the L_i measurement operation, the intensity of the intervention to be carried out for the overall requalification of the school is identified, also taking into account the characteristics (in terms of existing services) of the urban context in which the school is located. In order to encourage requalification practices that follow an integrated logic, it is necessary to jointly consider the actions of spatial reorganization of the teaching and laboratory environments (where the minimum surface endowment per student for each environment is not satisfied), the physical interventions aimed at adapting the school to the reference regulations, and the enhancement and/or integration of the services currently offered and/or potentially to be added to the existing ones. From this perspective, three macro-types of intervention are outlined (Total Renovation, Regulatory Compliance, Distributional Challenge), which can be implemented in order to fully upgrade existing school buildings.
Total Renovation (TR) is carried out when the school needs substantial interventions in terms of structure, plant engineering, health and hygiene, management and use of teaching space, and integration of new services for the community. For existing school buildings whose date of construction precedes the year in which the first anti-seismic regulations were issued, it is mandatory to carry out preliminary consolidation and adaptation work on the structure in order to ensure the safety of the direct and indirect users of the school.
Regulatory Compliance (RC) applies when it is necessary to act on the school building in order to adapt it to specific legal requirements regarding seismic risk, fire prevention, energy requalification and indoor quality improvement, together with a remodelling and re-functionalization of the internal and external spaces serving the building and the enhancement and/or integration of additional services identified according to the supply/demand characteristics of the analysis area.
The Distributional Challenge (DC) is carried out when a complete and/or partial reorganization and redefinition of the intended use of the internal and/or external school spaces is necessary, while the correspondence between the structural, plant-engineering and hygienic-sanitary state of the school and the regulatory requirements to be considered in the planning phase of the requalification interventions is verified.
It is possible to associate the intervention methods described above (Total Renovation, Regulatory Compliance, Distributional Challenge) with a synthetic reference index (K_i). This index expresses the level of intensity of the action that needs to be implemented to upgrade the school, from the point of view of both the functional and structural-plant-hygienic-sanitary factors (K_t) and the services offered to the public (K_ser). Both K_t and K_ser are obtained by algebraically aggregating the corresponding L_i values previously specified in Section 3.2.1; the K_i parameter is thus obtained through a mathematical formulation that aggregates these L_i scores. The K_i intervals for each intervention mode are shown in Table 3 (range of values of the K_i parameter and corresponding intervention modes). Figure 7 illustrates a double-entry scheme with which it is possible to identify the intervention mode (TR, RC, DC), referring both to the system of services and to the technical-plant-hygienic-sanitary one, on the basis of the corresponding scores. The proposed diagram shows on the abscissa and ordinate axes the intervals of values, referred respectively to K_t and K_ser, which identify the proposed macro-categories of intervention.
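Neither the aggregation formula for K_i nor the Table 3 ranges survive in the extracted text, so the sketch below assumes a simple arithmetic mean of the relevant L_i scores and invented cut-offs; it is meant only to illustrate the Figure-7-style double-entry reading of (K_t, K_ser) into TR, RC or DC.

```python
from statistics import mean
from typing import Sequence

def classify_intervention(L_technical: Sequence[float],
                          L_services: Sequence[float]) -> str:
    """Aggregate the L_i scores into K_t (spatial-functional, structural,
    fire, hygienic-sanitary) and K_ser (services), then read the intervention
    mode off a Figure-7-style double-entry scheme. The mean and the cut-offs
    are illustrative assumptions, not the values of Table 3."""
    K_t, K_ser = mean(L_technical), mean(L_services)
    if K_t < 3 and K_ser < 3:
        return "TR"  # Total Renovation: both systems deeply inadequate
    if K_t < 4.5:
        return "RC"  # Regulatory Compliance: technical adaptation prevails
    return "DC"      # Distributional Challenge: mainly spatial/service reorganization

# A structurally sound school whose services poorly match the urban supply:
print(classify_intervention([5, 6, 5, 5], [2, 3]))  # -> "DC"
```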
In Section 4, the proposed evaluation methodology is applied to the case study of the redevelopment of the Torquato Tasso classical high school, located in the historic center of Rome (Italy).
Case Study
We applied the proposed evaluation method to the project of redevelopment, recovery and conservation of a school located in the historic center of Rome, Italy. The building houses three institutes: a Middle School, a classical High School and a Scientific High School. The methodology of Section 3 is tested on the part of the building occupied by the Torquato Tasso classical high school. Figure 8 shows (a) a historical picture of the part of the building where the Torquato Tasso classical high school is located, and (b) the façade on Sicilia Road where the principal entrance to the school is. It should be specified that the data on the student population in the last five years, the information on the educational offer of the institute, and the survey of the geometric dimensions of the spaces used for teaching were obtained through investigation campaigns carried out in the school.
Description of the School from the Historical, Technical and Architectural Point of View, According to Its Own Training Plan and the Extra-Educational Services Offered to the Community
The school building is located within the first Town Hall in the historic center of the city of Rome, specifically in the Ludovisi district. The building is strategically located with respect to both the main roads leading to the city center and the Roma Termini train station, as shown by calculating the travel time between the school and the points of greatest infrastructural interest through Google Maps. With specific regard to the Torquato Tasso classical high school, the institute, founded in 1908, was conceived from the beginning as a school serving the Ludovisi district and the neighboring ones.
Over the years, a series of enlargements, transformations and building maintenance interventions were carried out, especially on the structure, such that the original internal and external conformation of the building has been partially modified. The school rooms are distributed over two floors as well as on the ground floor, where most of the administrative offices are located. As described in the Self-assessment Report (2019) of the institute, there are 44 classrooms for frontal teaching, three laboratory spaces, a great hall, a library and a Natural Sciences Museum, administrative offices, and restrooms on each floor. Figure 9 shows a typical floor plan of the school dating from 1964. The plan in the figure was acquired from the school's administrative offices after an on-site inspection.

To date (2019), the school has 919 students enrolled in the current academic year (a.y.). Table 4 shows the number of students enrolled in the first year, and the number of classes and sections, from the academic year 2015-2016 to the academic year 2019-2020. On the basis of the data in Table 4, a progressive increase in the student population is evident. In particular, the increase in the number of first-year enrollments in each academic year has produced, over time, the gradual fragmentation of classrooms into environments of different sizes.
Table 5 reports the results of on-site inspections of three classroom types (large, medium and small), showing the dimensions (both linear and superficial) of the three types of classroom together with those of the other rooms serving the school.

-Survey of the superficial consistencies of the internal and external spaces of the school

In order to verify the per capita surface endowment of each space inside and outside the school building, a preliminary consistency survey of the building was carried out. Table 5 shows the geometric measurements of the rooms located on a typical floor (the first and second floors, on which the teaching activities take place, have the same planimetric and distributive-spatial composition). Table 6, instead, shows the data on the surfaces of the internal and external spaces in an aggregate manner.

The Ludovisi district is characterized by the significant presence of elements with a strong historical-artistic and architectural value. Not far from the school there are the Boncompagni-Ludovisi Museum, the National Gallery of Ancient Art in the Barberini Building and the Borghese Gallery. The historical connotation of the territory contributes to defining a context of strong cultural value through which the school encourages the education and training of students. The building's position in relation to some research institutes (for example, the National Research Centre) and university buildings (Sapienza University) has also made it possible to establish collaborative relationships, developing a number of joint educational initiatives in order to support the educational growth of students and encourage the local development of the territory.

In order to analyze and characterize the territorial context in which the school is placed, also with regard to the types of existing services, the territorial area of analysis was defined as a buffer with a radius of influence of 1000 m from the point where the school is located (Circular N° 425/1967). With the aim of identifying the prevailing services within the survey area thus defined, a phase of georeferencing of the commercial, accommodation, cultural and sports services was conducted.
After the georeferencing of the information, an information map (see Figure 10) was created using GIS instrumentation (Google Maps) to support the identification and analysis of the services within the 1000 m buffer. Through this cartography, it was possible to see that the territory is characterized by a high density of commercial activities and accommodation facilities. This is due to the presence of elements with strong tourist and infrastructural attractiveness (the Roma Termini railway station) that influence the market dynamics. Within the 1000 m analysis buffer, there are no services for the community (for example, and non-exhaustively, a neighborhood library, headquarters for cultural associations, or spaces for social gathering).

Congruence Check between the Technical and Regulatory Requirements and the Actual State of the School

After the cognitive phase aimed at defining the state of the school, the congruence of the surface consistencies of the surveyed rooms (see Table 6) was verified, taking into account the regulatory requirements relating to their sizing in relation to the number of students they host. This was done by comparing the value of the per capita surface endowment of each space (If), obtained by dividing the areas surveyed on site by the capacity of each space expressed in terms of number of students, with the corresponding regulatory standards (If*) contained in the Ministerial Decree of April 18, 1975. For each type of space, the value of L_i_f was defined according to the corresponding percentage deviation ∆If_i. Table 7 shows the surface consistencies of each type of internal and external space used by the school, the theoretical surface indices and those deriving from the on-site measurements, and the corresponding ∆If_i and L_i_f values. The score L_i_f is assigned with the diagram in Figure 2. The final L_f parameter was obtained as the arithmetic mean of the L_i_f of each type of space. In this case, the average value of L_f was 2.3.
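A minimal sketch of this congruence check, assuming ∆If_i is the percentage deviation of the surveyed index If from the regulatory index If*, and using hypothetical deviation-to-score bands in place of the actual diagram of Figure 2 (the (If, If*) pairs below are also placeholders; Table 7 holds the real values):

def delta_if(if_surveyed, if_standard):
    # Percentage deviation of the surveyed per capita index from the standard.
    return 100.0 * (if_surveyed - if_standard) / if_standard

def l_i_f(delta, bands=((0, 6), (-10, 5), (-25, 4), (-50, 3), (-75, 2))):
    # Map a percentage deviation to an ordinal score (the bands are hypothetical
    # stand-ins for the diagram in Figure 2).
    for threshold, score in bands:
        if delta >= threshold:
            return score
    return 1  # worst score when the deviation exceeds every band

# Hypothetical (If, If*) pairs per space type.
spaces = {"classroom": (1.72, 1.96), "laboratory": (2.40, 2.90), "gym": (3.10, 4.50)}

l_f = sum(l_i_f(delta_if(s, t)) for s, t in spaces.values()) / len(spaces)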
From the structural, plant engineering, environmental, and health and hygiene points of view, on the other hand, a qualitative evaluation of the school's degree of conformity with the content of the reference standards was carried out. In order to estimate the level of compliance of the school with the laws on fire prevention, structural safety and indoor quality, the presence of fire-fighting devices in each room inside the school was verified, as well as the possible unavailability of some spaces at the time of the on-site inspection and the formation of condensation on the walls inside the building.

With regard to the actions implemented within the school in order to reduce the fire risk, at the date of the inspection in March 2019 the fire-fighting system serving the building had recently been brought into conformity with the technical regulations of reference. Through a series of on-site inspections, it was possible to ascertain that each room was equipped with a dedicated fire-fighting device. With regard, instead, to the safety of the school for the prevention of seismic risk, the campaign of on-site inspections revealed that some of its internal environments were not accessible (in particular, two rooms on the first floor). The interventions necessary for their safety were being carried out so that they could again host frontal and laboratory didactic activities.
Finally, again through visits to the school, it was possible to verify the presence of superficial condensation in the spaces of each floor. At the time of the inspection, some classrooms had spots of surface condensation on the internal perimeter walls (five on the first floor and two on the second). On the basis of the ordinal scale of values used to assign a score to the parameter L_i concerning the level of compliance with the technical-normative system in force, compliance with the fire regulations was scored 5 (parameter L_a). The level of congruence between the actual configuration of the school building and the reference standard for the structural safety of the spaces was instead evaluated by assigning the corresponding parameter L_s a score of 4. From the sum of L_a, L_s, L_i-s and L_f, the value of K_t was obtained, equal to 15.3.

With reference, instead, to the estimate of the level of correspondence between the services currently present in the school and those found in the context of reference, an L_ser score is defined for each type of service available inside the building to be upgraded. The services present in some rooms inside the school include a library, an aggregative space (aula magna), a museum of natural sciences, and an art laboratory. From the context analysis previously illustrated in Section 4.1.2, there are no similar services within the 1000 m analysis buffer around the school. Therefore, each L_i_ser is given a score of 6. The corresponding K_ser, deriving from the aggregation of the L_i_ser values, is equal to 24.

Identification of Types of Interventions to Be Implemented in Case of Non-Compliance with the Minimum Regulatory Requirements Regarding Safety and Use of School Space

After the evaluation of the level of correspondence between the actual state of the school building and the reference regulations, on the basis of the values obtained for K_ser and K_t, the methods of intervention to be followed for the planning and/or design of interventions aimed at the functional and structural-plant requalification of the school were identified. The identification of the intervention modalities was carried out using the double-entry diagram of Figure 7.
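The classification against the diagram can be sketched as follows. The per-criterion interval bounds are hypothetical placeholders for Table 3 (not reproduced here); as described in the next paragraph, each bound is scaled by the number of criteria or services considered. The aggregate scores are those computed above (K_t = 5 + 4 + 4 + 2.3 = 15.3, where the indoor-quality score of 4 is inferred from the stated total, and K_ser = 4 × 6 = 24):

TABLE3_INTERVALS = {          # hypothetical per-criterion ranges
    "TR": (0.0, 2.0),         # Total Renovation
    "RC": (2.0, 4.0),         # Regulatory Compliance
    "DC": (4.0, 6.0),         # Distributional Challenge
}

def intervention_mode(k, n):
    # Classify an aggregate score k over n criteria/services against the
    # Table 3 intervals, scaled by n as in the double-entry diagram.
    for mode, (lo, hi) in TABLE3_INTERVALS.items():
        if lo * n <= k <= hi * n:
            return mode
    raise ValueError("score outside the defined intervals")

mode_t = intervention_mode(15.3, 4)   # technical-regulatory axis
mode_ser = intervention_mode(24, 4)   # services axis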
By reporting the value of K_t on the x-axis and that of K_ser on the y-axis, the combination of the types of action to be implemented for the integrated requalification of the school is defined. In this case, four types of service were considered (n° Si = 4) and four regulatory criteria were examined (n° Ci = 4). The extremes of the numerical intervals that identify the proposed types of intervention (TR, RC, DC) were therefore multiplied, respectively, by the total number of services and of technical-regulatory criteria considered. Figure 11 shows the analysis scheme of Figure 7 applied to the part of the school building occupied by the Torquato Tasso classical high school.

In the light of the diagram in Figure 11, and on the basis of the K_ser and K_t values obtained from the evaluation procedure described in Section 3.2.1, it can be observed that for the school under study it would be appropriate to encourage interventions aimed, in particular, at making better use of the spaces dedicated to teaching and sports activities (Distributional Challenge), as well as actions for the safety of the environments used for the students' training activities (Regulatory Compliance). From the analysis carried out on the types of service existing within the territorial buffer of 1000 m (see Section 4.1.2), the need emerges to enhance some environments inside and/or outside the institution in order to open cultural and social services to the community. These include, for example, opening the library inside the building to the public, or using some spaces (classrooms and/or corridors) for the exhibition of art and sculpture objects made by students in the art education courses included in the educational training plan.

Conclusions

In the context of settlement transformation processes aimed at the redevelopment, recovery and enhancement of existing buildings, the need is recognized to take into account, from the planning phase, multiple aspects of various kinds in an integrated manner, considering the mutual relations existing among them. The complexity of jointly considering multiple characteristics of the same project, of both technical and spatial-functional types, has encouraged the use of intervention practices often based on problem-solving approaches rather than on multidimensional ones.
This can be observed especially in the field of Italian school buildings (2019), where a single legislative framework is lacking in which to jointly include technical and functional evaluations together with those that specifically concern the services characterizing the market of the urban context. To support the planning, design and execution of interventions through an integrated approach, especially with regard to the redevelopment of existing school buildings, the proposed methodology attempts to include within a single multi-criteria evaluation logic aspects of various kinds: structural, plant engineering, sanitation, spatial-functional, and the strengthening of existing services and integration of others on the basis of the market characterizing the city within the 1000 m buffer around the school. Appropriate criteria and qualitative measurement systems are used for each one.

The application of the evaluation methodology to the case of the redevelopment of the school located in the historic center of the city of Rome (Italy) attests to the practicality of the proposed evaluation framework. The in-depth analysis and careful choice of the criterion with which to express the level of adaptation of the technical-structural features and of the planimetric distribution of the school spaces to the regulatory requirements, together with the possibility of expressing the use of spaces for additional services for the community, outline future research prospects. Specifically, it would be interesting to implement participatory procedures for the identification of needs by the student community, for example through the administration of questionnaires, or to recast the proposed methodology in the form of mathematical expressions as the basis for the implementation of linear optimization systems from operations research.
The reverse mathematics of the Tietze extension theorem
The reverse mathematics of the Tietze extension theorem We prove that several versions of the Tietze extension theorem for functions with moduli of uniform continuity are equivalent to WKL_0 over RCA_0. This confirms a conjecture of Giusto and Simpson that was also phrased as a question in Montalbán's "Open questions in reverse mathematics."

Introduction

The Tietze extension theorem states that if X is a metric space, C ⊆ X is closed, and f : C → R is continuous, then there is a continuous F : X → R extending f (meaning that F(x) = f(x) for all x ∈ C). It is a fundamental theorem of real analysis and topology, and, as such, the question of its logical strength is natural and ripe for consideration. In this work, we analyze the logical strengths of formalized versions of the Tietze extension theorem in the setting of reverse mathematics, a foundational program designed by Friedman to classify mathematical theorems according to the strengths of the axioms required to prove them [2]. In reverse mathematics, we fix a weak base axiom system WeakSystem for second-order arithmetic and consider the implications that are provable in WeakSystem. If ϕ and ψ are two statements in second-order arithmetic, typically expressing two well-known theorems, and WeakSystem ⊢ ϕ → ψ, then we say that ϕ implies ψ over WeakSystem and think of the logical strength of ϕ as being at least that of ψ. We also like to appeal to the equivalence of WeakSystem ⊢ ϕ → ψ and WeakSystem + ϕ ⊢ ψ in order to think of the strength of ϕ in terms of the additional statements ψ that become provable once ϕ is considered as a new axiom and added to the axioms of WeakSystem. Often, as in this work, we wish to compare a theorem ϕ to an axiom system StrongSystem that is stronger than WeakSystem and proves ϕ. In this situation, if WeakSystem + ϕ ⊢ ψ for every axiom ψ of StrongSystem, then we say that ϕ is equivalent to StrongSystem over WeakSystem. The proof of (the axioms of) StrongSystem from WeakSystem + ϕ is called a reversal, from which 'reverse mathematics' gets its name. It is a remarkable phenomenon that equivalences of this sort are the usual case: a theorem is typically either provable in the standard WeakSystem or equivalent to one of four well-known stronger systems. These five systems together are known as the Big Five. There are, however, many fascinating examples of misfit theorems as well, and we refer the reader to [4] for the tip of that particular iceberg.

It is possible to formalize the Tietze extension theorem in second-order arithmetic in several different ways, and different formalizations may exhibit different logical strengths. The logical systems in play are the first three of the Big Five, which are
• the base system RCA_0 (for recursive comprehension axiom), which corresponds to computable mathematics and is the standard WeakSystem;
• the stronger system WKL_0 (for weak König's lemma), which adds the ability to make compactness arguments; and
• the yet stronger system ACA_0 (for arithmetical comprehension axiom), which adds the ability to form sets defined by any number of first-order quantifiers (but no second-order quantifiers).
The differences among the formalizations of the Tietze extension theorem that we consider arise from two sources. The first source is the problem of coding closed subsets of complete separable metric spaces, which are most naturally thought of as third-order objects, as second-order objects.
Closed sets can be coded by negative information (in which case they are simply called closed), positive information (in which case they are called separably closed), or both simultaneously (in which case they are called closed and separably closed). In a compact complete separable metric space, a set is closed if and only if it is separably closed, but both directions of this equivalence are themselves equivalent to ACA_0 over RCA_0 [1, Theorem 3.3]. These notions of closedness are thus distinct when working in RCA_0. The second source of differences is the fact that the statement "every continuous function f : X → R on a compact complete separable metric space X is uniformly continuous" is equivalent to WKL_0 over RCA_0 (see [7, Theorem IV.2.2 and Theorem IV.2.3]) and therefore has non-trivial logical strength. Here 'uniformly continuous' means having a modulus of uniform continuity, which is a function that, when given an ǫ > 0, returns a δ > 0 such that (∀x, y ∈ X)(d(x, y) < δ → d(f(x), f(y)) < ǫ). Thus though the two statements
(1) For every compact complete separable metric space X, every closed C ⊆ X, and every continuous f : C → R, there is a continuous F : X → R extending f.
(2) For every compact complete separable metric space X, every closed C ⊆ X, and every uniformly continuous f : C → R, there is a uniformly continuous F : X → R extending f.
are obviously equivalent in ordinary mathematics, the situation over RCA_0 is more complicated. Following Giusto and Simpson's terminology from [3], we call statement (1) the Tietze extension theorem and statement (2) the strong Tietze extension theorem. The following list summarizes some of the known results.
• The Tietze extension theorem for closed sets (i.e., the negative information coding) is provable in RCA_0 (see [7, Theorem II.7.5]). In this case the assumption that X is compact may be dropped if f is assumed to be bounded.
• The Tietze extension theorem for separably closed sets is equivalent to ACA_0 over RCA_0 [3, Theorem 6.9].
• The strong Tietze extension theorem for separably closed sets is equivalent to WKL_0 over RCA_0 [3, Theorem 6.14].
• The strong Tietze extension theorem for closed sets is provable in WKL_0 because the Tietze extension theorem for closed sets is provable in RCA_0, and WKL_0 proves that continuous functions on compact complete separable metric spaces are uniformly continuous.
• The strong Tietze extension theorem for closed and separably closed sets is not provable in RCA_0. In fact, it implies the existence of diagonally non-recursive functions [3, Lemma 6.17].
Notice that the above list of results leaves open the precise logical strength of the strong Tietze extension theorem for closed sets and for closed and separably closed sets. Giusto and Simpson conjecture that these theorems are equivalent to WKL_0. Specifically, they make the following conjecture.

Conjecture 1.1 (Giusto and Simpson [3]). The following are equivalent over RCA_0:
(1) WKL_0.
(2) Let X be a compact complete separable metric space, let C be a closed subset of X, and let f : C → R be a continuous function with a modulus of uniform continuity. Then there is a continuous function F : X → R with a modulus of uniform continuity that extends f.
(3) Same as (2), with C closed and separably closed.
(4) Same as (2), with X = [0, 1].
(5) Same as (2), with X = [0, 1] and C closed and separably closed.

The question of whether or not this conjecture holds also appears as Question 16 in Montalbán's Open questions in reverse mathematics [6]. Let sTET[0,1] denote statement (5) in Conjecture 1.1 (the notation is chosen to evoke the strong Tietze extension theorem for [0, 1]). We prove that Conjecture 1.1 is true by proving that RCA_0 + sTET[0,1] ⊢ WKL_0.
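As a concrete (and entirely standard) illustration of a modulus of uniform continuity, not taken from the paper: for f(x) = x² on [0, 1] we have |x² − y²| = |x + y| · |x − y| ≤ 2|x − y|, so δ = ǫ/2 always suffices.

def modulus(eps):
    # Modulus of uniform continuity for f(x) = x**2 on [0, 1]:
    # |x - y| < eps / 2 implies |x**2 - y**2| <= 2 * |x - y| < eps.
    return eps / 2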
Before continuing, we remark that Giusto and Simpson's Located sets and reverse mathematics [3], which contains Conjecture 1.1, is largely concerned with the notion of a located set, where a closed or separably closed subset C of a complete separable metric space X is called located if there is a continuous distance function f : X → R, where f(x) = inf{d(x, y) : y ∈ C} for every x ∈ X. With the assumption of locatedness, the equivalence between closed and separably closed becomes provable in RCA_0: for compact complete separable metric spaces, RCA_0 proves that every closed and located set is separably closed and that every separably closed and located set is closed. Furthermore, the strong Tietze extension theorem for closed and located sets (and thus for separably closed and located sets) is provable in RCA_0. These results and many others appear in [3]. However, located sets are not relevant to Conjecture 1.1, so we make no use of them here.

For ease of comparison, the table below displays the strengths of eight versions of the Tietze extension theorem, taking into account the confirmation of Conjecture 1.1 proven here. The row labeled 'Tietze extension theorem' corresponds to versions of the theorem where f and its extension are not required to be uniformly continuous, and the row labeled 'strong Tietze extension theorem' corresponds to versions of the theorem where f and its extension are required to be uniformly continuous. The columns represent the different assumptions on the domain C of f. The column labeled 'located' means that C is assumed to be closed and located, which in RCA_0 is equivalent to assuming that C is separably closed and located.

                                  | located | closed & separably closed | closed | separably closed
Tietze extension theorem          |  RCA_0  |           RCA_0           | RCA_0  |      ACA_0
strong Tietze extension theorem   |  RCA_0  |           WKL_0           | WKL_0  |      WKL_0

We introduce the systems RCA_0, WKL_0, and ACA_0 and then define in RCA_0 the analytic and topological notions relevant to Conjecture 1.1. The standard reference for reverse mathematics is Simpson's Subsystems of Second Order Arithmetic [7], and almost all of this section's material can be found in expert detail therein. Simpson's book also contains many, many examples of theorems that are provable in RCA_0, theorems that are equivalent to WKL_0 over RCA_0, and theorems that are equivalent to ACA_0 over RCA_0.

2.1. RCA_0, WKL_0, and ACA_0. The axioms of RCA_0 are: a first-order sentence expressing that N is a discretely ordered commutative semi-ring with identity; the Σ^0_1 induction scheme, which consists of the universal closures (by both first- and second-order quantifiers) of all formulas of the form

(ϕ(0) ∧ ∀n(ϕ(n) → ϕ(n + 1))) → ∀n ϕ(n),

where ϕ is Σ^0_1; and the ∆^0_1 comprehension scheme, which consists of the universal closures (by both first- and second-order quantifiers) of all formulas of the form

∀n(ϕ(n) ↔ ψ(n)) → ∃X ∀n(n ∈ X ↔ ϕ(n)),

where ϕ is Σ^0_1, ψ is Π^0_1, and X is not free in ϕ. RCA_0 is the standard base system and captures what might be called effective mathematics. The name 'RCA_0,' which stands for recursive comprehension axiom, refers to the ∆^0_1 comprehension scheme because a set X is ∆^0_1 in a set Y if and only if X is recursive in Y. The subscript '0' refers to the fact that induction in RCA_0 is limited to Σ^0_1 formulas. RCA_0 proves enough number-theoretic facts to implement the codings of finite sets and sequences that are ubiquitous in recursion theory.
Therefore, in RCA_0 we can represent the set N^{<N} of all finite sequences as well as its subset 2^{<N} of all finite binary sequences, and we can give the usual definition of a tree as a subset of N^{<N} that is closed under initial segments. Thus, in RCA_0 we can formulate (but not prove) weak König's lemma, which is the statement "every infinite subtree of 2^{<N} has an infinite path." WKL_0 is then the system RCA_0 + weak König's lemma. The fact that there is a recursive infinite subtree of 2^{<N} with no recursive infinite path can be used to show that WKL_0 is strictly stronger than RCA_0. WKL_0 captures the mathematics of compactness. For example, WKL_0 is equivalent to the Heine-Borel compactness of [0, 1] (see [7, Theorem IV.1.2]), a fact that is crucial for our analysis of sTET[0,1]. An important strategy for proving that a theorem implies WKL_0 over RCA_0 is to employ the following lemma, which states that WKL_0 is equivalent over RCA_0 to the statement that for every pair of injections with disjoint ranges, there is a set that separates the two ranges.

Lemma 2.1 ([7, Lemma IV.4.4]). The following are equivalent over RCA_0.
(1) WKL_0.
(2) For all injections f, g : N → N such that ∀m∀n(f(m) ≠ g(n)), there is a set X ⊆ N such that ∀n(f(n) ∈ X) and ∀n(g(n) ∉ X).

For comparison, ACA_0, introduced next, is equivalent over RCA_0 to the statement that for every injection there is a set consisting of exactly the elements in the injection's range. The axioms of ACA_0 are those of RCA_0, plus the arithmetical comprehension scheme, which consists of the universal closures (by both first- and second-order quantifiers) of all formulas of the form

∃X ∀n(n ∈ X ↔ ϕ(n)),

where ϕ is an arithmetical formula in which X is not free. Jockusch and Soare's famous low basis theorem [5] can be used to prove that ACA_0 is strictly stronger than WKL_0. The strength of ACA_0 is great enough to provide a natural and extensive development of most classical mathematics. Though we do not make further use of ACA_0 here, it is relevant to the discussion in the introduction.

2.2. Analytic and topological notions in RCA_0. Following [7, Section II.4], we code integers as pairs of natural numbers and rational numbers as pairs of integers. A real number is then coded by a sequence of rational numbers ⟨q_k : k ∈ N⟩ such that ∀k∀i(|q_k − q_{k+i}| ≤ 2^{−k}). The expression 'x ∈ R' abbreviates the predicate "x codes a real number." Two real numbers coded by ⟨q_k : k ∈ N⟩ and ⟨q′_k : k ∈ N⟩ are defined to be equal if ∀k(|q_k − q′_k| ≤ 2^{−k+1}). The definition of a complete separable metric space generalizes the coding of reals by rapidly converging Cauchy sequences: a complete separable metric space is coded by a non-empty set A ⊆ N together with a metric d on A, and a point of the space is coded by a sequence ⟨a_k : k ∈ N⟩ of elements of A such that ∀k∀i(d(a_k, a_{k+i}) ≤ 2^{−k}). If x = ⟨a_k : k ∈ N⟩ and y = ⟨b_k : k ∈ N⟩ code points, then d(x, y) is defined to be lim_k d(a_k, b_k), and (the points coded by) x and y are defined to be equal if d(x, y) = 0. For example, the unit interval [0, 1] is the complete separable metric space coded by {q ∈ Q : 0 ≤ q ≤ 1} with the usual metric, and, with compactness as in Definition 2.3 below, the sequence ⟨⟨j2^{−i} : j ≤ 2^i⟩ : i ∈ N⟩ witnesses that [0, 1] is compact.

Definition 2.3 (RCA_0). A complete separable metric space A is compact if there is a sequence ⟨⟨x_{i,j} : j ≤ n_i⟩ : i ∈ N⟩ of finite sequences of points of the space such that (∀x)(∀i ∈ N)(∃j ≤ n_i)(d(x, x_{i,j}) < 2^{−i}).

An open set in a complete separable metric space is coded by a set U ⊆ N × A × Q^+. The idea here is that the pair (a, q) ∈ A × Q^+ codes the open ball B(a, q) of radius q centered at a, and that a set U ⊆ N × A × Q^+ codes the sequence of open balls ⟨B(a_k, q_k) : k ∈ N⟩ that it enumerates, the open set being the union of these balls. A closed set C in a complete separable metric space is then coded by a set U ⊆ N × A × Q^+ coding the open complement of C. We can now define a complete separable metric space to be Heine-Borel compact if for every sequence ⟨U_k : k ∈ N⟩ of open sets such that (∀x ∈ A)(∃k ∈ N)(x ∈ U_k), there is an N ∈ N such that (∀x ∈ A)(∃k < N)(x ∈ U_k). Although RCA_0 proves that [0, 1] is a compact complete separable metric space in the sense of Definition 2.3, the Heine-Borel compactness of [0, 1] is equivalent to WKL_0 over RCA_0.

Theorem 2.6. The following are equivalent over RCA_0.
(1) WKL_0.
(2) Every compact complete separable metric space is Heine-Borel compact.
(3) [0, 1] is Heine-Borel compact.
(4) Every covering of [0, 1] by a sequence of open intervals with rational endpoints has a finite subcover.
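Returning to the coding of reals by rapidly converging Cauchy sequences, here is a small illustrative sketch (ours, not the paper's): a real is represented by a function k ↦ q_k with |q_k − q_{k+i}| ≤ 2^{−k}, and the equality test implements the 2^{−k+1} criterion up to a finite bound.

from fractions import Fraction

def sqrt2(k):
    # k-th rational approximation of sqrt(2), accurate to within 2**-(k+1)
    # by bisection, so the sequence satisfies |q_k - q_{k+i}| <= 2**-k.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** (k + 1)):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

def equal(x, y, k_max=30):
    # Codes x and y denote the same real iff |x_k - y_k| <= 2**(-k+1) for
    # all k; here the condition is only checked up to k_max.
    return all(abs(x(k) - y(k)) <= Fraction(2, 2 ** k) for k in range(k_max))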
Finally, we define continuous functions and moduli of uniform continuity.

Definition 2.7 (RCA_0; see [7, Definition II.6.1]). Let A and B be complete separable metric spaces. A continuous partial function from A to B is coded by a set Φ ⊆ N × A × Q^+ × B × Q^+ that satisfies the properties below. Let (a, r) Φ (b, s) denote ∃n(⟨n, a, r, b, s⟩ ∈ Φ). For a, a′ ∈ A and r, r′ ∈ Q^+, let (a′, r′) ≺ (a, r) denote d(a, a′) + r′ < r, and similarly for b, b′ ∈ B and s, s′ ∈ Q^+. The properties that Φ must satisfy are that, for all a, a′ ∈ A, all b, b′ ∈ B, and all r, r′, s, s′ ∈ Q^+,
• if (a, r) Φ (b, s) and (a, r) Φ (b′, s′), then d(b, b′) ≤ s + s′;
• if (a, r) Φ (b, s) and (a′, r′) ≺ (a, r), then (a′, r′) Φ (b, s); and
• if (a, r) Φ (b, s) and (b, s) ≺ (b′, s′), then (a, r) Φ (b′, s′).
The domain of the function f coded by Φ is the set of all x ∈ A such that for every ǫ > 0 there are (a, r) and (b, s) with (a, r) Φ (b, s), d(x, a) < r, and s < ǫ. The idea behind Definition 2.7 is that Φ enumerates pairs of open balls B(a, r), B(b, s) (i.e., the pairs of balls coded by the (a, r) and (b, s) such that (a, r) Φ (b, s)) with the property that if f is the function being coded by Φ and x is in both B(a, r) and dom f, then f(x) is in the closure of B(b, s).

Reversing the strong Tietze extension theorem to weak König's lemma

In their analysis of sTET[0,1], Giusto and Simpson first show that RCA_0 ⊬ sTET[0,1] by showing that sTET[0,1] fails in REC, the model of RCA_0 whose first-order part is the standard natural numbers and whose second-order part is the recursive sets [3, Lemma 6.16]. To do this, they take advantage of Theorem 2.6, the fact that WKL_0 fails in REC, and the fact that RCA_0 proves that a continuous real-valued function on [0, 1] has a modulus of uniform continuity if and only if it has a Weierstraß approximation (see [7, Theorem IV.2.4]). Here, a Weierstraß approximation of a continuous function f : [0, 1] → R is a sequence of polynomials ⟨p_n : n ∈ N⟩ from Q[x] such that (∀n ∈ N)(∀x ∈ [0, 1])(|f(x) − p_n(x)| < 2^{−n}). The goal in proving that REC ⊭ sTET[0,1] is thus to produce a recursive code for a closed and separably closed C ⊆ [0, 1], a recursive code for a continuous f : C → R, and a recursive modulus of uniform continuity for f such that no continuous extension of f to [0, 1] has a recursive Weierstraß approximation. To this end, let I_e = [2^{−(2e+1)}, 2^{−2e}] for each e ∈ N, and let D = {0} ∪ ⋃_{e∈N} I_e. We call D the pre-domain of f, as C ⊆ D is obtained from D by enumerating additional open intervals into the complement of C. The plan is to define f(0) = 0, then for each e ∈ N to define C and f on I_e to diagonalize against Φ_e being a Weierstraß approximation to an extension of f. Thus on each I_e we implement the following strategy. First, by the fact that Theorem 2.6 item (4) fails in REC, fix a recursive enumeration ⟨(a_k, b_k) : k ∈ N⟩ of open intervals with rational endpoints that covers (the recursive reals in) [0, 1] but has no finite subcover. Transfer this cover to a cover ⟨(a^e_k, b^e_k) : k ∈ N⟩ of I_e that has no finite subcover by the linear transformation x → (x + 1)/2^{2e+1}. Enumerate the intervals of ⟨(a^e_k, b^e_k) : k ∈ N⟩ into the complement of C until a stage s is reached that witnesses Φ_{e,s}(2e + 1)↓ = p, where p is (a code for) a polynomial in Q[x]. If Φ_e(2e + 1)↑, then s is never found, and all the intervals in the sequence ⟨(a^e_k, b^e_k) : k ∈ N⟩ are enumerated into the complement of C. In this case, I_e is erased from the domain of f, so we do not need to take any action to define f there.
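Before continuing with the case in which the stage s is found, a quick check (ours, not the paper's) that the linear transformation used above behaves as claimed:

$$x \mapsto \frac{x+1}{2^{2e+1}}: \qquad 0 \mapsto 2^{-(2e+1)}, \qquad 1 \mapsto \frac{2}{2^{2e+1}} = 2^{-2e},$$

so [0, 1] is carried onto I_e = [2^{−(2e+1)}, 2^{−2e}], and, the map being linear with positive slope, a cover of [0, 1] with no finite subcover is carried to a cover of I_e with no finite subcover.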
If s is found, then at stage s only the intervals of ⟨(a^e_k, b^e_k) : k < s⟩ have been enumerated into the complement of C. We then stop the enumeration, which makes C ∩ I_e = I_e \ ⋃_{k<s}(a^e_k, b^e_k). As no finite set of intervals from ⟨(a^e_k, b^e_k) : k ∈ N⟩ covers I_e, we can find a rational q ∈ I_e \ ⋃_{k<s}(a^e_k, b^e_k). Then we define f on I_e \ ⋃_{k<s}(a^e_k, b^e_k) by making it be constantly 2^{−2e} if p(q) ≤ 0 and making it be constantly −2^{−2e} otherwise. In both cases we ensure that |f(q) − p(q)| ≥ 2^{−2e}, which successfully diagonalizes against Φ_e because if Φ_e were a Weierstraß approximation to an extension of f, then we would have |f(q) − p(q)| < 2^{−(2e+1)}. Furthermore, C is closed and separably closed by Lemma 3.1 below, and it is easy to write down a modulus of uniform continuity for f.

Our plan to prove that RCA_0 + sTET[0,1] ⊢ WKL_0 is to formalize and elaborate upon the preceding argument. Observe, however, that the above argument relies very heavily on the fact that [0, 1] is not Heine-Borel compact in REC. To replicate this style of argument, we appeal to Theorem 2.6 and work in RCA_0 + ¬WKL_0. The overall strategy is thus to produce the contradiction RCA_0 + ¬WKL_0 + sTET[0,1] ⊢ WKL_0. Let g_0, g_1 : N → N be two injections with disjoint ranges. By Lemma 2.1, we wish to separate the ranges of g_0 and g_1 using sTET[0,1]. A first idea would be to follow the proof that REC ⊭ sTET[0,1] and use I_e to code whether or not e should be in a separating set. Enumerate the intervals of ⟨(a^e_k, b^e_k) : k ∈ N⟩ into the complement of C until a stage s is reached that witnesses either g_0(s) = e or g_1(s) = e. If g_0(s) = e, then define f to be 2^{−2e} on the remaining portion of I_e; and if g_1(s) = e, then define f to be −2^{−2e} on the remaining portion of I_e. The idea would then be to decode a separating set from an extension F of f by checking whether or not F is ≥ 0 on I_e. The problem is of course that not every F(q) for q ∈ I_e correctly codes whether or not e should be in a separating set. We would need to find a q ∈ I_e that is sufficiently close to a member of C, where the meaning of 'sufficiently close' is determined by F's modulus of uniform continuity.

We refine this idea by replacing each I_e with a sequence of disjoint closed intervals ⟨I_{e,m} : m ∈ N⟩, where the length of each I_{e,m} is at most 2^{−m}, and we choose a rational q_{e,m} ∈ I_{e,m} for each e, m ∈ N. The pre-domain for our f is {0} ∪ ⋃_{e,m∈N} I_{e,m}. The refined strategy is to implement the above naïve coding plan for I_e on each interval I_{e,m}. In the end, if e is in the range of g_0 or g_1, then C ∩ I_{e,m} is non-empty for every m ∈ N. So in this case, for every m ∈ N, q_{e,m} is a point in I_{e,m} that is within 2^{−m} of a point in C. Thus we are able to decode whether or not e should be in a separating set from an extension of f and the extension's modulus of uniform continuity.

The first lemma says that the closed sets we consider are also separably closed. It is implicit in [3], but we make it explicit as a matter of convenience.

Lemma 3.1 (RCA_0). If ⟨J_e : e ∈ N⟩ is a sequence of pairwise disjoint closed intervals in [0, 1] with rational endpoints such that C = {0} ∪ ⋃_{e∈N} J_e is closed, then C is also separably closed.

Proof. Let Q = ⟨q_n : n ∈ N⟩ be an enumeration of the rationals in {0} ∪ ⋃_{e∈N} J_e. We show that the closure of Q is C. Clearly 0 is in the closure of Q, and if x ∈ J_e it is easy to see that x is in the closure of the rationals in J_e. Conversely, suppose that x ∉ {0} ∪ ⋃_{e∈N} J_e. Then x is in some open interval (a, b) contained in the complement of C.
By shrinking this interval, we can find an m ∈ N \ {0} such that (x − 1/m, x + 1/m) is contained in the complement of C. Thus ∀n(|x − q_n| ≥ 1/m), so x is not in the closure of Q.

We remark that in Lemma 3.1, Q can even be taken to be a set of rationals, rather than a sequence of rationals. Let Q contain 0 and the set of rationals q such that there is an e less than q's code with q ∈ J_e. The next lemma prepares f's pre-domain {0} ∪ ⋃_{e,m∈N} I_{e,m}.

Lemma 3.2 (RCA_0 + ¬WKL_0). There are sequences ⟨I_{e,m} : e, m ∈ N⟩ of pairwise disjoint closed intervals with rational endpoints, ⟨q_{e,m} : e, m ∈ N⟩ of rationals, and ⟨(a^{e,m}_k, b^{e,m}_k) : e, m, k ∈ N⟩ of open intervals with rational endpoints such that (i) {0} ∪ ⋃_{e,m∈N} I_{e,m} is closed; (ii) for each e, m ∈ N, I_{e,m} ⊆ I_e and the length of I_{e,m} is at most 2^{−m}; (iii) q_{e,m} ∈ I_{e,m} for each e, m ∈ N; and (iv) for each e, m ∈ N, the sequence ⟨(a^{e,m}_k, b^{e,m}_k) : k ∈ N⟩ covers I_{e,m} but has no finite subcover.

Proof. By ¬WKL_0 and Theorem 2.6, fix a covering ⟨(a_k, b_k) : k ∈ N⟩ of [0, 1] by open intervals with rational endpoints that has no finite subcover. By adjusting the endpoints of the intervals as necessary, assume that (∀k ∈ N)(−2^{−2} < a_k < b_k < 1 + 2^{−2}). For each e ∈ N, transfer ⟨(a_k, b_k) : k ∈ N⟩ to I_e via the linear transformation x → (x + 1)/2^{2e+1}, and denote the transferred sequence of intervals by ⟨(a^e_k, b^e_k) : k ∈ N⟩. Notice that if e ≠ e′ then (a^e_k, b^e_k) and (a^{e′}_{k′}, b^{e′}_{k′}) are disjoint for all k and k′. The procedure described below is clearly uniform in e, so we think of fixing an e ∈ N and enumerating
• ⟨I_{e,m} : m ∈ N⟩;
• ⟨q_{e,m} : m ∈ N⟩; and
• a sequence ⟨U_{e,m} : m ∈ N⟩ of finite sets of open intervals.
At stage m + 1, choose an index k_{e,m+1}, an open interval (c_{e,m+1}, d_{e,m+1}) ⊆ (a^e_{k_{e,m+1}}, b^e_{k_{e,m+1}}) disjoint from ⋃_{n≤m} I_{e,n}, a closed interval I_{e,m+1} ⊆ (c_{e,m+1}, d_{e,m+1}) of length at most 2^{−(m+1)}, and a rational q_{e,m+1} ∈ I_{e,m+1}. Enumerate (the at most finitely many intervals coding) ⋃_{k≤k_{e,m+1}}(a^e_k, b^e_k) \ ⋃_{n≤m+1} I_{e,n} into U_{e,m+1}.

Immediately we see that (ii) and (iii) are satisfied. For (i), consider the closed set C described by the simultaneous enumeration of the open intervals ⟨(2^{−(2e+2)}, 2^{−(2e+1)}) : e ∈ N⟩ and the open intervals in ⋃_{e,m∈N} U_{e,m}. Suppose that x ∈ {0} ∪ ⋃_{e,m∈N} I_{e,m}. If x = 0, then it is clear that x ∈ C. If x ∈ I_{e,m}, then x is in no interval of the form (2^{−(2e+2)}, 2^{−(2e+1)}), and it is in no interval O ∈ ⋃_{n∈N} U_{e′,n} for an e′ ≠ e. Furthermore, x is in no interval O ∈ ⋃_{n∈N} U_{e,n} either. This is because when I_{e,m} is defined at stage m for e, I_{e,m} is chosen disjoint from the intervals in ⋃_{n<m} U_{e,n}, and at stages n ≥ m the intervals added to U_{e,n} are chosen to be disjoint from I_{e,m}. Hence x ∈ C. Conversely, suppose that x ∉ {0} ∪ ⋃_{e,m∈N} I_{e,m}. If x is not in any I_e for e ∈ N, then clearly x ∉ C. So suppose that x ∈ I_e. Let k be such that x ∈ (a^e_k, b^e_k), and let m be such that k < k_{e,m}. Then x ∈ ⋃_{k′≤k_{e,m}}(a^e_{k′}, b^e_{k′}) \ ⋃_{n≤m} I_{e,n}, so x belongs to some interval in ⋃_{n≤m} U_{e,n}. Thus C = {0} ∪ ⋃_{e,m∈N} I_{e,m}, establishing (i).

To establish (iv), for each e, m ∈ N, transfer ⟨(a_k, b_k) : k ∈ N⟩ to I_{e,m} via the linear transformation that maps 0 to the left endpoint of I_{e,m} and maps 1 to the right endpoint of I_{e,m}. Denote the transferred sequence of intervals by ⟨(a^{e,m}_k, b^{e,m}_k) : k ∈ N⟩.

Theorem 3.3. RCA_0 + sTET[0,1] ⊢ WKL_0.

Proof. We derive the contradiction RCA_0 + ¬WKL_0 + sTET[0,1] ⊢ WKL_0. Let g_0, g_1 : N → N be injections with disjoint ranges. Our goal is to separate the ranges of g_0 and g_1. For each e ∈ N, let I_e = [2^{−(2e+1)}, 2^{−2e}]. By ¬WKL_0, let ⟨I_{e,m} : e, m ∈ N⟩, ⟨q_{e,m} : e, m ∈ N⟩, and ⟨(a^{e,m}_k, b^{e,m}_k) : e, m, k ∈ N⟩ be as in Lemma 3.2, and let D denote the closed set {0} ∪ ⋃_{e,m∈N} I_{e,m}. The plan is to define a continuous function f with a modulus of uniform continuity on a closed and separably closed subset of D such that if F is a continuous extension of f to [0, 1] with a modulus of uniform continuity, then, for each e ∈ N, the value of F(q_{e,m}), for an m chosen according to e and F's modulus of uniform continuity, codes whether or not e should be in a separating set. Let E denote the closed set whose complement is coded by {(a^{e,m}_k, b^{e,m}_k) : e, m ∈ N ∧ (∀s ≤ k)(g_0(s) ≠ e ∧ g_1(s) ≠ e)}. Let C be the closed set D ∩ E.
Notice that for each e, m ∈ N, either I_{e,m} and E are disjoint (if ∀s(g_0(s) ≠ e ∧ g_1(s) ≠ e)) or I_{e,m} ∩ E is a finite union of closed intervals with rational endpoints (if ∃s(g_0(s) = e ∨ g_1(s) = e)). Thus C is of the form {0} ∪ ⋃_{e∈N} J_e for ⟨J_e : e ∈ N⟩ a sequence of pairwise disjoint closed intervals with rational endpoints. Therefore C is also separably closed by Lemma 3.1. We now define the continuous function f with modulus of uniform continuity h to which we apply sTET[0,1]. Let

f(x) = 0 if x = 0,
f(x) = 2^{−2e} if x ∈ I_e ∩ C ∧ ∃s(g_0(s) = e),
f(x) = −2^{−2e} if x ∈ I_e ∩ C ∧ ∃s(g_1(s) = e).

To do this, for each e, m ∈ N, wait while the intervals from ⟨(a^{e,m}_k, b^{e,m}_k) : k ∈ N⟩ covering I_{e,m} are being enumerated into the complement of C. If this enumeration never stops, then I_{e,m} is disjoint from C and f is not defined on I_{e,m}. If this enumeration stops at some stage s, then either g_0(s) = e or g_1(s) = e, and I_{e,m} ∩ C is determined at this stage. Thus the appropriate pairs of intervals can start being enumerated into the code for f to define f(x) = 2^{−2e} on I_{e,m} ∩ C if g_0(s) = e and f(x) = −2^{−2e} on I_{e,m} ∩ C if g_1(s) = e.

Let h : N → N be the function h(n) = 2n + 2. We show that h is a modulus of uniform continuity for f. Suppose that x < y are in C and satisfy |x − y| < 2^{−(2n+2)}. If y ∈ I_e for an e ≤ n, then |x − y| < 2^{−(2n+2)} implies that x must also be in I_e, which means that |f(x) − f(y)| = 0 < 2^{−n}. If y ∈ I_e for an e > n, then |f(y)| = 2^{−2e} and |f(x)| ≤ 2^{−2e}, so |f(x) − f(y)| ≤ 2^{−2e+1} < 2^{−n}. Thus h is a modulus of uniform continuity for f.

By sTET[0,1], let F be a continuous extension of f to [0, 1] with modulus of uniform continuity H. Define a set X as follows. Given e ∈ N, let m = H(2e + 2), and use F to approximate F(q_{e,m}) to within 2^{−(2e+2)} (i.e., find a rational q such that |F(q_{e,m}) − q| < 2^{−(2e+2)}). Define e ∈ X if and only if this approximation is ≥ 0. This X separates the ranges of g_0 and g_1. Suppose ∃s(g_0(s) = e).
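A sketch of how the verification concludes, using only the facts established above (our reconstruction, not the paper's text): if ∃s(g_0(s) = e), then I_{e,m} ∩ C ≠ ∅ for every m, so fix x ∈ I_{e,m} ∩ C for m = H(2e + 2). Then F(x) = f(x) = 2^{−2e} and |q_{e,m} − x| ≤ 2^{−m}, whence

$$F(q_{e,m}) \ge 2^{-2e} - 2^{-(2e+2)} \quad\text{and}\quad q > 2^{-2e} - 2 \cdot 2^{-(2e+2)} = 2^{-(2e+1)} > 0,$$

so e ∈ X. Symmetrically, if ∃s(g_1(s) = e), then q < −2^{−(2e+1)} < 0, so e ∉ X. Thus X separates the ranges of g_0 and g_1, completing the reversal.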
I hate you when I am anxious: Anxiety during the COVID‐19 epidemic and ideological hostility
I hate you when I am anxious: Anxiety during the COVID‐19 epidemic and ideological hostility Abstract Most previous studies that examined the effect of anxiety on hostility towards a distinct group have focused on cases in which we hate those we are afraid of. The current study, on the other hand, examines the relationship between anxiety in one domain and hostility towards a distinct group that is not the source of that anxiety. We focus here on symptoms of anxiety during the COVID‐19 pandemic, which have become increasingly frequent, and show that the implications of such mental difficulties are far‐reaching, posing a threat to relationships between ideological groups. In two studies conducted in both Israel and the United States, we found that high levels of anxiety during the COVID‐19 epidemic are associated with higher levels of hatred towards ordinary people from the respective political outgroups, lower levels of willingness to sustain interpersonal relations with these people (i.e., greater social distancing), and greater willingness to socially exclude them. This relationship was mediated by the perception of threat posed by the political outgroup. This study is the first to show that mental difficulty driven by an external threat can be a fundamental factor that explains levels of intergroup hostility. | INTRODUCTION The outbreak of the coronavirus (COVID-19) pandemic occurred in December 2019 in Wuhan, Hubei Province, China, and began to spread throughout China and to the rest of the world in early 2020 (Chen et al., 2020). By January 2021, the coronavirus had infected over 100 million people and claimed the lives of more than two million people worldwide (World Health Organization, 2021). Along with the health crisis, the pandemic has led to a widespread economic crisis as the global economy has acutely contracted and millions of people worldwide are expected to be pushed into extreme poverty (The World Bank, 2020). In times of such crises, societal unity and cohesion help to cope with the threats and preserve stability (Dovidio et al., 2020;Van Bavel et al., 2020a). The current pandemic has posed an even more significant challenge, as it struck at a time when many of the world's democratic countries were dealing with increasing animosity and hostility among political outgroups (Finkel et al., 2020;Reiljan, 2019). The increasing animosity across party lines (Iyengar et al., 2012) has been one of the world's leading challenges in the recent decade. Numerous studies have indicated that inter-party hostility is prevalent in many democratic countries (e.g., Gidron et al., 2018;Westwood et al., 2017), and that it has severe implications for both politics (e.g., Hetherington & Rudolph, 2015;Iyengar & Krupenkin, 2018;Ward & Tavits, 2019) and interpersonal relations between supporters of opposing parties (for a review see Iyengar et al., 2019). The growing enmity between ordinary citizens is reflected, among other things, in negative emotions toward supporters of the opposing party-predominantly dislike-and in a tendency to avoid close interactions with them while maintaining social distance (Druckman & Levendusky, 2019;Iyengar et al., 2019). This voluntary detachment impairs societies' resilience and often leads to social and political instability (McCoy et al., 2018). Therefore, democratic societies around the world are dealing simultaneously both with the COVID-19 pandemic and with increased hostility across ideological boundaries. 
The possible links between the COVID-19 pandemic and relations between different ideological groups have captured the attention of researchers since the onset of the contagion. Most studies published on these topics have focused on partisan differences in evaluations of the severity of the disease, as well as on differences across party lines in behavioral responses to the pandemic (e.g., Allcott et al., 2020; Druckman et al., 2020, 2021; Grossman et al., 2020; Painter & Qiu, 2020; Pennycook et al., 2020). Both these lines of research point to the politicization of the pandemic (Kerr et al., 2021). However, where politics is concerned, the COVID-19 pandemic is not merely another divisive issue in the arena; it also has severe psychological implications that may have further amplified inter-party animosity. Here we make a case that one such psychological implication of the pandemic, anxiety, could have contributed to and perpetuated the already existing hostility. Needless to say, the current study is not the first to examine the anxiety-hate association. Yet, most existing studies in political psychology examining the effect of anxiety on intergroup hostility have focused on a link within the same domain, such that the same group constituted both the source of the anxiety and the target of the hate. For example, individuals from a majority group may fear or be anxious about members of a distinct minority group, and these feelings may trigger hostility or hatred (e.g., Canetti-Nisim et al., 2008). In that regard, Canetti-Nisim et al. (2009) found that psychological distress caused by exposure to terrorism predicted perceived threat from Israeli Palestinians, which, in turn, predicted exclusionist attitudes toward this group. Simply put, we hate and wish to exclude those whom we are afraid of. In this study, however, we examine the relationship between anxiety triggered by COVID-19 and hostility towards a distinct political group which is not related to the source of that anxiety. We argue that one plausible mechanism behind this relationship is an increase in sensitivity to threats of different kinds that correlates with COVID-induced anxiety. In other words, it is possible that, compounded with already existing apprehensions, COVID-induced anxiety may be related to animosity towards outgroups. Previous studies have shown that anxiety leads to an overestimation of threats (Lerner & Keltner, 2000, 2001; Raghunathan & Pham, 1999), and that one of the most pervasive and powerful effects of threat is to increase intolerance and hatred. This relationship is not contingent on whether threat is defined as a widely acknowledged external force or as a subjective, perceived state (Gibson, 1998; Marcus et al., 1995; Sullivan, Pierson, & Marcus, 1982). For example, research has uncovered a link between periods of anxiety (economic hard times or work stoppages, for example) and rejection of different outgroups which had little connection with the source of the anxiety (Feldman & Stenner, 1997; Lahav, 2004). In other words, a person who experiences increased anxiety driven by external circumstances may perceive threats of different kinds and feel hatred towards different outgroups whose connection to these perceived threats is indirect at best.
Based on previous work (Feldman & Stenner, 1997; Lahav, 2004), we contend that anxiety triggered by the COVID-19 pandemic might heighten the sensitivity to a threat from the political outgroup, which in turn correlates with increased hostility towards members of that group.

| THE COVID-19 PANDEMIC AND ANXIETY
The coronavirus pandemic poses a threat to people's mental health due to increased and prolonged feelings of fear and uncertainty associated with the virus outbreak (Cao et al., 2020; Ozamiz-Etxebarria et al., 2020; Torales et al., 2020). A prolonged traumatic event of this kind can reduce people's feelings of security and have adverse effects on their mental health. In the current situation, this impact could be caused by questions related to the pandemic with no definite answers, such as when it will come to an end and what effective methods of treatment exist; constant exposure to a flow of information about the pandemic and its effects; decreased social interactions due to the pandemic; and recommendations, such as remaining at home as much as possible. Symptoms such as anxiety, fear, stress and sleep deprivation have become more frequent during the COVID-19 pandemic (Cao et al., 2020; Torales et al., 2020). Research on the social and political effects of anxiety indicates that anxious individuals tend to perceive higher levels of risk or threat compared to those experiencing low levels of anxiety (Butler & Mathews, 1983; Eysenck, 2013; Lerner & Keltner, 2000, 2001). Not surprisingly, research during COVID-19 points to a strong relationship between anxiety levels and perceived threat levels regarding the pandemic, i.e., heightened vulnerability or likelihood of contagion (Garfin et al., 2020; Killgore et al., 2020; Lima et al., 2020; Usher et al., 2020). However, as stated above, anxiety is also likely to increase perceived threat in regard to negative events which may not have anything to do with the source of the anxiety (Butler & Mathews, 1983, 1987; Huddy et al., 2005). According to Lerner and Keltner (2000, 2001), anxiety produces a sense of uncertainty and lack of control that raises assessments of different threats, whether immediate or remote. Therefore, anxiety during or due to COVID-19 can, potentially, lead people to anticipate and perceive a variety of threats. We test the hypothesis that such an increase in anxiety will be associated with increased levels of intergroup hostility. As already mentioned, such relationships, especially with regard to inter-party hostility, have thus far been tested mainly within the same domain (i.e., anxiety caused by X relates to hostility towards X). However, in a threatening situation, anxiety may traverse from one domain to another, spreading like contagion (see Lahav, 2004). The domains explored in this study are health and ideology. At the time the study was conducted, the pandemic and the ideological tension were both at their peak (e.g., Finkel et al., 2020), especially in the United States (this issue will be discussed in more detail below, in Study 2). Theoretically, given the increased centrality and saliency of the ideological tension (e.g., Finkel et al., 2020), it may not be too far-fetched to assume that anxiety triggered by the pandemic would translate into increased hostility and animosity directed at the political outgroup.
When ideological tensions are on the rise, the search for a scapegoat in times of anxiety can easily turn the spotlight towards the ideological opponent, even though this opponent is not connected to the source of anxiety. Thus, we argue that the mechanism behind the relationship between anxiety and inter-party hostility is the perceived threat from a political outgroup. That idea has not yet been tested, either in the context of inter-party hostility or during the COVID-19 pandemic. In what follows we present two correlational studies. The first one, conducted in Israel, provides an initial examination of the relationship between general mental difficulties (levels of anxiety and tension) during COVID-19 and expressions of (a) hatred towards ordinary people from the political outgroup; (b) willingness to engage in interpersonal relations with members of that political outgroup (i.e., lower social distancing); and (c) willingness to socially exclude those people. In Study 2 we replicate all the results of Study 1, but in the U.S. context and with a larger sample. Furthermore, we introduce in Study 2 a mechanism, perception of threat from the political outgroup, that can potentially explain the associations obtained in the analyses.

| STUDY 1
Study 1 was conducted as part of a large international project examining psychological factors that could be related to responses to the COVID-19 pandemic. As a part of this project, the team from each country involved was asked to collect data from at least 500 participants in their respective country or territory, representative with respect to gender and age. We should note here that, initially, this project was not designed to include questions about inter-party hostility. These questions were added only when the poll administered within the project was underway, after approximately 300 subjects had already been sampled. Importantly, the political reality in Israel seemed at that time even more extreme than previously, in the wake of the third round of elections and failed attempts to form a government. Accordingly, and as will be detailed below, the number of respondents for this study was quite small (but still, generally, representative in terms of age, gender, education and income¹). The small sample is, without a doubt, a limitation that will be discussed in more detail below. Therefore, Study 1 served as an initial examination of the relations between level of anxiety during COVID-19 and expressions of hostility: emotional (i.e., hatred), interpersonal (i.e., social distance) and socio-political (i.e., exclusionism). Based on the literature presented above, we hypothesized that higher levels of anxiety and tension would be associated with (a) higher levels of hatred towards ordinary people of the political outgroup, (b) lower levels of desire for interpersonal relations with people from the political outgroup, and (c) greater willingness to socially exclude those people.

| Participants
A total of 167 participants (49.1% female, 50.5% male, and no other categories were reported; mean age 46.19, SD = 14.59) were recruited using an online survey platform (The Midgam Panel Project) that offers monetary compensation in return for participation in surveys. Participants were all Jewish Israelis from the general population, and the survey was conducted in Hebrew. Based on a sensitivity analysis, we found that our sample of 167 participants afforded 90% power to detect an effect of size f² = 0.32. Education level was measured using 13 values ranging from 1 (up to 8 years of education) to 13 (doctoral degree) (M = 8.17, SD = 2.24).
Monthly income was measured using 5 values, from (1) below the average income to (5) above the average income (M = 3.07, SD = 1.36). Political orientation was measured using 7 values ranging from 1 (extremely right-wing) to 7 (extremely left-wing) (M = 4.08, SD = 1.67). The political outgroup for those who rated themselves as 1 (extremely right-wing), 2 (right-wing), or 3 (moderately right-wing) was set to be left-wingers, and the political outgroup for those who rated themselves as 5 (moderately left-wing), 6 (left-wing), or 7 (extremely left-wing) was set to be right-wingers. Those who rated themselves as 4 (center) were presented with the following question: "From which political side do you feel more distant?" The two options were: (1) Right and (2) Left; 57.1% stated they felt more distant from the Right and 42.9% from the Left. The answer to this question set the outgroup for those who defined themselves as "Center."

| Measures
Level of anxiety was measured based on four items. Participants were told: Here are some feelings that one might have due to the outbreak of the coronavirus (COVID-19) pandemic. For each, please indicate, on a scale of 1 (not at all) to 6 (to a very large extent), the degree to which you have experienced those feelings (α = .92): I have experienced feelings of fear and anxiety; I have experienced stress or tension due to the pandemic situation; I have felt despair and hopelessness; I have felt sadness or a desire to cry. It is important to emphasize that the feelings of tension and anxiety measured here were totally unrelated to the ideological outgroup; namely, we did not ask participants about their fear or anxiety related to the ideological outgroup, but rather, more broadly, about their levels of tension, fear and anxiety due to the COVID-19 pandemic.

Hatred towards political outgroup. Participants were asked to indicate, on a scale of 1 (not at all) to 6 (to a very large extent), to what extent they felt hatred towards ordinary right-wingers/left-wingers.

Desire for interpersonal relations (i.e., lower social distance) gauges the extent to which individuals are socially comfortable with those on the other political side (Druckman & Levendusky, 2019; Iyengar et al., 2012; Levendusky & Malhotra, 2016). We used a set of three questions to capture how comfortable people feel, respectively, having close friends from the other party; having neighbors from the other party; and having their children marry someone from the other party (α = .92). Respondents were asked to rate their responses on a scale of 1 (not at all) to 6 (to a very large extent).

Social exclusionism of the political outgroup. Respondents were asked to indicate, on a scale of 1 (totally disagree) to 6 (totally agree), to what extent they agreed with the following statement regarding their political outgroup: I would prefer to live in a society without right-wingers/left-wingers.

Covariates
Socio-demographic. We included various socio-demographic variables that can potentially be related to feelings of anxiety and/or to inter-party hostility, among them age, gender, income and level of education. Taylor (2019) noted that COVID-19 can affect people differently, based on certain sociodemographic factors. This conclusion was corroborated by research. Thus, for example, it was found that women were almost three times more likely than men to report feelings of anxiety due to COVID-19 (Caycho-Rodríguez et al., 2021).
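To make the scale construction concrete, here is a short Python sketch (ours, on simulated data; the item values and the alpha routine are illustrative, not the authors' analysis code) showing how four Likert items are averaged into a composite anxiety score and how an internal-consistency estimate such as the reported α = .92 is computed.

```python
# Illustrative sketch with simulated data -- not the authors' analysis code.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 167 respondents x 4 anxiety items on the 1-6 scale (hypothetical).
n, k = 167, 4
latent = rng.normal(3.5, 1.0, size=(n, 1))
items = np.clip(np.round(latent + rng.normal(0, 0.6, size=(n, k))), 1, 6)

def cronbach_alpha(x):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    n_items = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_var / total_var)

anxiety_score = items.mean(axis=1)  # composite used as the predictor
print(f"alpha = {cronbach_alpha(items):.2f}, M = {anxiety_score.mean():.2f}")
```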
Younger adults, people with higher education, and people with lower levels of income reported higher anxiety levels than their counterparts with the opposite characteristics (e.g., Lee et al., 2020a; Solomou & Constantinidou, 2020). Those sociodemographic variables were also found to be relevant for predicting intergroup hostility. For example, Amsalem et al. (2022) found that older adults, people with a higher level of education, and women demonstrated greater inter-party hostility.

Threat perception due to COVID-19 has been found to be closely related to feelings of anxiety during COVID-19. As stated above, researchers have pointed out a strong relationship between anxiety levels during COVID-19 and perceived threat levels regarding the pandemic, i.e., heightened vulnerability or likelihood of contagion (Garfin et al., 2020; Killgore et al., 2020; Lima et al., 2020; Usher et al., 2020). This variable was measured based on four indicators.

In addition, to rule out explanations based on factors other than socio-demographic ones that the literature mentions as potential predictors of intergroup hostility (Amsalem et al., 2022; Iyengar et al., 2012; Iyengar & Westwood, 2015), we controlled for three variables outlined below:

Political leaning. Respondents were asked to indicate their political leaning on a scale of 1 (extremely left-wing/liberal) to 7 (extremely right-wing/conservative).

Ideological identity strength. Previous research has shown that individuals with stronger ideological views and partisan attachments are likely to report higher levels of out-party animus (Amsalem et al., 2022). This variable was measured based on five indicators (Bankert et al., 2017). Respondents were asked to rate the following items on a scale of 1 (not at all) to 6 (to a very large extent): How important is your political identity? How well does the term [left-wing/right-wing] describe you? When talking about [left-wingers/right-wingers], how often do you use "we"?

| Results
Means, standard deviations, and correlations between the main variables are presented in Table 1. As can be seen, level of anxiety is significantly correlated with hatred and social exclusionism, but not with desire for interpersonal relations (lower social distance). All three dependent variables (i.e., hatred, desire for interpersonal relations and social exclusionism) are correlated with each other in the expected directions. Political identification is positively correlated with desire for interpersonal relations, indicating that, socially, left-wingers generally feel more comfortable with right-wingers than vice versa. Not surprisingly, threat perception due to COVID-19 is significantly and highly correlated with general mental difficulty, which suggests that those who felt more threatened by the consequences of the coronavirus (for both their health and financial resources) reported higher levels of anxiety.

| Anxiety during COVID-19 and intergroup hostility
We ran three separate regressions with hatred, desire for interpersonal relations (i.e., lower social distance) and exclusionism as dependent variables; level of anxiety as an independent variable; and control variables. In line with our initial hypotheses, the analysis is presented in Table 2.

[Table 2 note: Regression models controlling for demographic measures yielded similar results (see Table S1). *p ≤ .05; **p ≤ .01; ***p ≤ .001; ᵃp ≤ .09.]

| STUDY 2
| Measures
Level of anxiety. Whereas in Study 1 we measured anxiety as part of general mental difficulties, in this study we searched for a more comprehensive measure of anxiety.
To this end, we used the Generalized Anxiety Disorder (GAD-7) inventory (Spitzer et al., 2006). Hatred towards political outgroup was measured as specified in Study 1 (on a scale of 1 = not at all to 7 = very much). Desire for interpersonal relations (lower social distance) (α = .93) was measured as specified in Study 1 but on a different scale (running from 0 = not at all comfortable to 100 = extremely comfortable), and social exclusionism was measured as specified in Study 1 (on a scale of 1 = totally disagree to 7 = totally agree).

Political intolerance was included as an additional dependent variable in Study 2, as previous studies had shown that perceptions of outgroup threat can also affect people's inclinations to prevent the outgroup from expressing its positions publicly or from gaining political power and influence, which is tantamount to political intolerance (Gibson & Gouws, 2000). This variable was measured based on three indicators. Respondents were asked to indicate, on a scale of 1 (totally disagree) to 6 (totally agree), to what extent they agreed with each of the following statements (regarding their political outgroup): I would prefer that Democrats/Republicans be prevented from holding rallies and demonstrations; I would prefer that Democrats/Republicans be banned from television appearances or speeches; and I would prefer that Democrats/Republicans not be allowed to visit college campuses to register potential voters (α = .94). Since social exclusionism and political intolerance are highly correlated (r = .76), and moreover, both gauge a predilection for exclusionist policies, we combined them into one index: exclusionist policy.

Threat perception (from the political outgroup) was measured based on three indicators. Respondents were asked to indicate, on a scale of 1 (totally disagree) to 6 (totally agree), to what extent they agreed with each of the following statements regarding their respective political outgroup: Republicans/Democrats are a serious threat to the United States and its people; Republicans/Democrats endanger the future of the United States; and Republicans/Democrats act in ways that harm American democracy (α = .96).

Covariates
All covariates were identical to the ones used in Study 1: threat perception due to COVID-19 (α = .69) on a scale of 1 (not at all) to 5 (very much); ideological identity strength (α = .91) and moral conviction (α = .93) on a scale running from 0 (not at all) to 100 (extremely strong).

| Results
Means, standard deviations, and correlations between the main variables are presented in Table 3. As can be clearly seen, anxiety is significantly correlated not only with the three dependent variables (i.e., hatred, desire for interpersonal relations, and exclusionist policy support) in the expected directions, but also with the mediator (i.e., threat from the political outgroup) and the other control variables. Perception of the political outgroup as a threat is highly correlated with all the dependent variables. Additionally, all three dependent variables are correlated with each other. Political identification is correlated with hatred, desire for interpersonal relations, and exclusionist policy support, indicating that conservatives generally feel more hatred and express more intolerance towards liberals, and are, socially, less comfortable with liberals. Here too, threat perception due to COVID-19 is highly correlated with anxiety.

| Anxiety during COVID-19 and inter-party hostility in the U.S. context
We replicated the results of Study 1.
The analysis presented in Table 4 reveals the relationship between levels of anxiety during COVID-19 and a higher level of hatred towards people from the political outgroup (b = .57, SE = 0.10, p = .001; f² = 0.045), lower levels of willingness to have interpersonal relations with them (i.e., greater social distancing) (b = −3.38, SE = 1.39, p = .005; f² = 0.010), and higher levels of support for exclusionist policies (b = .35, SE = 0.08, p = .001; f² = 0.026). It should be noted that we found an interaction effect between level of anxiety and political identification on hatred (F(721) = 1.37, p = .01), which indicates that the relationship between anxiety and hatred is stronger among Republicans than among Democrats. However, no interaction effects were found between level of anxiety and political identification on desire for interpersonal relations (F(721) = 1.11, p = .11) or exclusionist policy support (F(721) = 1.19, p = .11).

[Table 4: Anxiety during COVID-19 and expressions of inter-party hostility. R² = .14***, .17***, .12*** for the three models. Note: Regression models controlling for demographic measures (age, gender, education, and income) yielded similar results (see Table S2). *p ≤ .05; **p ≤ .01; ***p ≤ .001; ᵃp ≤ .10.]

[Figure 1 (a-c): Perception of the political outgroup as a threat mediates the association between level of anxiety during COVID-19 and expressions of inter-party hostility. (a) Hatred, (b) desire for interpersonal relations, (c) exclusionist policy support. *p < .05; **p < .01; ***p < .001.]

| GENERAL DISCUSSION
An extensive body of research published around the world during the past several months of the COVID-19 pandemic attests to an increase in symptoms of anxiety and fear among the population at large (Cao et al., 2020; Ozamiz-Etxebarria et al., 2020; Torales et al., 2020). It goes without saying that this change does not bode well for people's mental health. However, its implications are more far-reaching, posing a threat to the entire social fabric, including relationships between political or ideological groups. In two studies conducted during COVID-19, one in Israel and the other in the United States, we found that high anxiety levels are associated with higher levels of hatred towards ordinary people from the respective political outgroup, lower levels of willingness to initiate or sustain interpersonal relations with those people (i.e., greater social distance) and greater support for exclusionist policies towards those people. We have also provided evidence that the mechanism behind these relationships is perception of threat posed by the political outgroup. Put differently, the perception of threat from the political outgroup strengthens with rising anxiety levels, leading to hatred, social distancing and exclusionist policy support. Theoretically, we know that anxiety can potentially lead to increased sensitivity to, or overestimation of, threats (Lerner & Keltner, 2000, 2001; Raghunathan & Pham, 1999). It is also known that threat perceptions can increase hatred, intolerance and exclusionist tendencies towards the source of threat (e.g., Canetti-Nisim et al., 2008; Gibson, 1998; Marcus et al., 1995; Shamir & Sagiv-Schifter, 2006). However, as already stated, most literature that focuses specifically on inter-party hostility has thus far explored this relationship within the same domain (i.e., anxiety on account of X relates to perceptions of threat from X and hostility towards the same X).
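The mediation logic summarized in Figure 1 can be illustrated with a product-of-coefficients sketch. The Python fragment below (ours, on simulated data; variable names are hypothetical, and a simple Sobel test stands in for whatever bootstrapping procedure was actually used) estimates the indirect path from anxiety through perceived outgroup threat to hatred.

```python
# Illustrative mediation sketch on simulated data -- not the authors' code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 730  # hypothetical sample size, roughly implied by the reported F tests

anxiety = rng.normal(2.0, 0.8, n)
threat = 0.5 * anxiety + rng.normal(0, 1, n)                 # a-path
hatred = 0.6 * threat + 0.1 * anxiety + rng.normal(0, 1, n)  # b- and c'-paths

# a-path: anxiety -> perceived outgroup threat
m_a = sm.OLS(threat, sm.add_constant(anxiety)).fit()
# b- and c'-paths: threat and anxiety -> hatred
X = sm.add_constant(np.column_stack([threat, anxiety]))
m_b = sm.OLS(hatred, X).fit()

a, sa = m_a.params[1], m_a.bse[1]
b, sb = m_b.params[1], m_b.bse[1]
indirect = a * b
sobel_se = np.sqrt(a**2 * sb**2 + b**2 * sa**2)
print(f"indirect effect = {indirect:.3f}, Sobel z = {indirect / sobel_se:.2f}")
```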
This study examined a mechanism pivoting on perceived threat from the political outgroup during the current pandemic, when the anxiety is driven by an external threat that is largely unrelated (or at least not directly related) to the context of intergroup relations. We show that anxiety of this kind can feed hostility and animosity that go beyond the groups that are perceived as relevant for the creation of the specific crisis (see Lahav, 2004). Societies and leaders should be aware of these potentially destructive implications, and take some preventive steps to moderate them. Even more broadly, this could mean that people who feel anxious for any reason, e.g., economic difficulties, a physical injury, loss of a loved one and more, tend to regard their political opponents as threatening. A possible path for change is suggested by Mernyk et al. (2022), in which correcting metaperceptions regarding the ideological outgroup's support for intergroup violence decreased participants' own support for aggressive actions towards that outgroup. These findings were recently replicated in a study conducted during a violent crisis in Israel (Nir et al., 2022), and therefore provide an interesting example of a path for change, partially aligning with the findings of the current study.

[Table 5: Perception of the political outgroup as a threat (Republicans/Democrats) mediates the association between level of anxiety during COVID-19 and expressions of inter-party hostility. Note: Regression models controlling for demographic measures (age, gender, education and income) yielded similar results (see Table S3).]

| Limitations
This study has several limitations. First, due to its correlational design, we cannot draw conclusions about the causal relationship between mental difficulties such as anxiety, on the one hand, and intergroup hostility, social exclusionism and political intolerance, on the other. Establishing the direction of the association demonstrated in the current work would require an experiment that manipulates individuals' anxiety levels, which obviously involves some ethical and moral challenges. Yet, future research can use longitudinal data, relying on enduring measures of mental difficulties or trait anxiety, which can provide additional, albeit not optimal, support to the causal direction intimated by the current study. Second, Study 1, which was conducted in Israel as an initial examination, is based on a community sample that comprises a relatively low number of respondents. Additionally, due to constraints of Study 1, we used a short and targeted scale of anxiety there. However, in Study 2 we used a more comprehensive measure: the Generalized Anxiety Disorder (GAD-7) inventory (Spitzer et al., 2006). Third, our dependent measures are limited to tapping short-term relationships; this issue can likewise be addressed and elucidated through further investigation. Last, the studies in the current paper focus on anxiety caused by a specific situational factor, which has so far been (fortunately) very rare: a global pandemic. However, the association between mental difficulties and social exclusionism or political intolerance is probably not limited to this particular context. Therefore, future studies can expand our work and explore this association in other, more common situations that are considered conducive to stress and anxiety, for example, poverty. Identifying such situational factors that increase hatred towards ordinary people from the political outgroup can help in future efforts to develop measures to mitigate this phenomenon.
| Conclusions
Notwithstanding these weaknesses, the present study provides evidence that the COVID-19 pandemic has had implications for political intergroup relations. Scholars have argued that one of the leading challenges countries worldwide have faced in the recent decade is the increasing animosity between ideological groups (Iyengar et al., 2019). Today, societies around the world need to deal simultaneously with an additional threat: the COVID-19 pandemic. While the current study shows that mental difficulties related to COVID-19 and the threat it poses can contribute to intergroup hostility, an open question that remains to be explored in future studies is whether an extreme challenge, such as a global pandemic, can also have the opposite effect. Can the presence of an external threat suspend the rivalry between ideological groups and encourage them to unite? Can it create the kind of shared goals and identities required to moderate animosity, and under which conditions would that happen? If such a reverse process is indeed possible, the struggle against the pandemic could be channeled into diminishing intergroup hostility and might promote tolerance and respect in the political sphere.

CONFLICTS OF INTEREST
The authors declare no conflicts of interest.

ETHICS STATEMENT
Research was conducted ethically, responsibly, and legally. Results are reported clearly, honestly, and without fabrication, falsification or inappropriate data manipulation. New findings are presented in the context of previous research, which is accurately represented. Researchers are willing to make their data available to the editor when requested. Methods are described clearly and unambiguously. Submitted work is original, not (self-)plagiarised, and has not been published elsewhere. Authorship accurately reflects individuals' contributions. Funding sources and conflicts of interest are disclosed.
Type-II 2HDM under the Precision Measurements at the $Z$-pole and a Higgs Factory
Type-II 2HDM under the Precision Measurements at the $Z$-pole and a Higgs Factory Future precision measurements of the Standard Model (SM) parameters at the proposed $Z$-factories and Higgs factories may have significant impacts on new physics beyond the Standard Model in the electroweak sector. We illustrate this by focusing on the Type-II two Higgs doublet model (Type-II 2HDM). The contributions from the heavy Higgs bosons at the tree-level and at the one-loop level are included in a full model parameter space. We perform a multiple variable global fit and study the extent to which the parameters of non-alignment and non-degenerate masses can be probed by the precision measurements. We find that the allowed parameter ranges are tightly constrained by the future Higgs precision measurements, especially for small and large values of $\tan\beta$. Indirect limits on the masses of heavy Higgs can be obtained, which can be complementary to the direct searches of the heavy Higgs bosons at hadron colliders. We also find that the expected accuracies at the $Z$-pole and at a Higgs factory are quite complementary in constraining mass splittings of heavy Higgs bosons. The typical results are $|\cos(\beta-\alpha)|<0.008, |\Delta m_\Phi |<200\ {\rm GeV}$, and $\tan\beta \sim 0.2 - 5$. The reaches from CEPC, FCC-ee and ILC are also compared, for both Higgs and $Z$-pole precision measurements. Introduction With the milestone discovery of the Higgs boson (h) at the CERN Large Hadron Collider (LHC) [1,2], particle physics has entered a new era. All the indications from the current measurements seem to confirm the validity of the Standard Model (SM) up to the electroweak (EW) scale of a few hundred GeV, and the observed Higgs boson is SM-like. Yet, there are compelling arguments, both from theoretical and observational points of view, in favor of the existence of new physics beyond the Standard Model (BSM) [3]. As such, searching for new Higgs bosons would be of high priority since they are present in many extensions of BSM theories. One of the most straightforward, but well-motivated extensions is the two Higgs doublet model (2HDM) [4], in which there are five massive spin-zero states in the spectrum (h, H, A, H ± ) after the electroweak symmetry breaking (EWSB). Extensive searches for BSM Higgs bosons have been actively carried out, especially in the LHC experiments [5][6][7][8][9][10][11][12][13][14][15][16][17][18]. Unfortunately, no signal observation has been reported thus far. This would imply either the non-SM Higgs bosons are much heavier and essentially decoupled from the SM, or their interactions are accidentally aligned with the SM configuration [19,20]. In either situation, it would be challenging to observe those states in experiments. Complementary to the direct searches, precision measurements of SM parameters, in particular, the Higgs boson properties could lead to relevant insights into new physics. There have been proposals to build a Higgs factory in the pursuit of precision Higgs measurements, including the Circular Electron Positron Collider (CEPC) in China [21,22], the electron-positron stage of the Future Circular Collider (FCC-ee) at CERN (previously known as TLEP [23][24][25]), and the International Linear Collider (ILC) in Japan [26]. With about 10 6 Higgs bosons produced at the Higgs factory, one would expect to reach sub-percentage precision determination of the Higgs properties, and thus to be sensitive to new physics associated with the Higgs boson. 
As an integrated part of the program, one would like to return to the $Z$-pole. With about $10^{10}-10^{12}$ $Z$ bosons, the achievable precisions on the SM parameters could be improved by a factor of $20-200$ over the Large Electron Positron (LEP) Collider results [27]. Such a high precision would hopefully shed light on new physics associated with the electroweak sector. In this paper, we set out to examine the impacts of the precision measurements of the SM parameters at the proposed $Z$-factories and Higgs factories on the extended Higgs sector. There is a plethora of articles in the literature studying the effects of the heavy Higgs states on the SM observables [4]. We illustrate this by focusing on the Type-II 2HDM.¹ In our analyses, we include the tree-level corrections to the SM-like Higgs couplings and one-loop level contributions from the heavy Higgs bosons. A global fit is performed in the full model-parameter space. In particular, we study the extent to which the parametric deviations from the alignment and degenerate mass limits can be probed by the precision measurements. We find that the expected accuracies at the $Z$-pole and at a Higgs factory are quite complementary in constraining mass splittings of heavy Higgs bosons. The reach in the heavy Higgs masses and couplings can be complementary to the direct searches of the heavy Higgs bosons at the LHC.

The rest of the paper is organized as follows. In Section 2, we summarize the anticipated accuracies on determining the EW observables at the $Z$-pole and Higgs factories. Those expectations serve as the inputs for the following studies of the BSM Higgs sector. We then present the Type-II 2HDM and the one-loop corrections, as well as the existing constraints on the model parameters, in Section 3. Section 4 shows our main results from the global fit, for the cases of mass degeneracy and non-degeneracy of heavy Higgs bosons. We summarize our results and draw conclusions in Section 5.

2 The EW and Higgs Precision Measurements at Future Lepton Colliders
The EW precision measurements are not only important in understanding the SM physics, but can also impose strong constraints on new physics models [30,31]. The benchmark scenarios of several proposed future $e^+e^-$ machines and the projected precisions of the $Z$-pole and Higgs measurements are summarized below. These expected results serve as the inputs for the later studies in constraining the BSM Higgs sector.

The electroweak precision measurements
The current best precision measurements for $Z$-pole physics came mostly from LEP-I, and partially from the Tevatron and the LHC [32,33]. These measurements could be significantly improved by a $Z$-pole run at future lepton colliders with a much larger data sample [21,23-25,34]. For example, the parameter $\sin^2\theta_{\rm eff}$ can be improved by more than one order of magnitude at a future $e^+e^-$ collider; the $Z$ mass can be measured four times more precisely at CEPC. Precisions of other observables, including $m_W$, $m_t$, $m_h$, $A^{b,c,\ell}_{FB}$, $R_b$, etc., can be improved as well, depending on different machine parameter choices. Given the complexity of a full $Z$-pole precision fit, we study the implications of $Z$-pole precision measurements on the 2HDM adopting the Peskin-Takeuchi oblique parameters $S$, $T$ and $U$ [35].

[Table 1: The anticipated precisions on the measurements of $\alpha_s$, $\Delta\alpha$ and the other electroweak observables [32,36-39], for various benchmark scenarios of future $Z$-factories with the indicated $Z$ data samples.]

The corresponding constrained $S$, $T$ and $U$ ranges and the error correlation matrices are listed in Tab. 2. The results listed as "current" are obtained directly from the Gfitter results, which use the current $Z$-pole precision measurements [32,33], with reference values of the SM Higgs boson mass $m_{h,\rm ref} = 125$ GeV and $m_{t,\rm ref} = 172.5$ GeV [33]. The predictions for future colliders are obtained by using the Gfitter package [32] with the corresponding precisions for the different machines, using the best-fit SM point with the current precision measurements as the central value. For the $Z$-pole observables with estimated precisions not yet available at future colliders, the current precisions
The corresponding constrained S, T and U ranges and the error correlation matrices are listed in Tab. 2. The results listed as "current" are obtained directly from the Gfitter results which use the current Z-pole precision measurements [32,33], with reference values of the SM Higgs boson mass of m h ,ref = 125 GeV and m t ,ref = 172.5 GeV [33]. The predictions for future colliders are obtained by using the Gfitter package [32] with corresponding precisions for different machines, using the best-fit SM point with the current precision measurements as the central value. For the Z-pole observables with estimated precisions not yet available at future colliders, the current precisions Current (1.7 × 10 7 Z's) CEPC (10 10 Z's) FCC-ee (7 × 10 11 Z's) ILC (10 9 Z's) Table 2. Estimated S, T , and U ranges and correlation matrices ρ ij from Z-pole precision measurements of the current results, mostly from LEP-I [27], and at future lepton colliders CEPC [21], FCC-ee [23] and ILC [34]. Gfitter package [32] is used in obtaining those constraints. are used instead. As seen from the at 1σ level. FCC-ee would further improve the accuracy. In our analyses as detailed in a later section, the 95% C.L. S, T and U contours are adopted to constrain the 2HDM parameter spaces, using the χ 2 -fit with error-correlation matrices . Higgs precision measurements At a future e + e − collider of the Higgs factory with the center-of-mass energy of 240−250 GeV, the dominant channel to measure the Higgs boson properties is the Higgsstrahlung process of Due to the clean experimental environment and well-determined kinematics at the lepton colliders, both the inclusive cross section σ(hZ) independent of the Higgs decays, and the exclusive ones of different Higgs decays in terms of σ(hZ)×BR, can be measured to remarkable precisions. The invisible decay width of the Higgs boson can also be very well constrained. In addition, the cross sections of W W, ZZ fusion processes for the Higgs boson production grow with the center-of-mass energy logarithmically. While their rates are still rather small and are not very useful at 240−250 GeV, at higher energies in particular for a linear collider, such fusion processes become significantly more important and can provide crucial complementary information. For √ s > 500 GeV, tth production can also be used as well. To set up the baseline of our study, we hereby list the running scenarios of various machines in terms of their center-of-mass energies and the corresponding integrated luminosities, as well as the estimated precisions of relevant Higgs boson measurements that are used in our global analyses in Tab. 3. The anticipated accuracies for CEPC and FCC-ee are comparable for most channels, except for h → γγ. There are several factors that contribute to the difference for this channel, which include the superior resolution of the CMS-like electromagnetic calorimeter that was used in FCC-ee analyses, and the absence of background from beamstrahlung photons [23]. In our global fit to the Higgs boson measurements, we only include the rate information for the Higgsstrahlung Zh and the W W fusion process. Some other measurements, such as the angular distributions, the diboson process e + e − → W W , can provide important information in addition to the rate measurements alone [41][42][43]. Table 3. 
[Table 3: Estimated statistical precisions for Higgs boson measurements obtained at the proposed CEPC program with 5 ab⁻¹ integrated luminosity [21], the FCC-ee program with 5 ab⁻¹ integrated luminosity [23], and the ILC with various center-of-mass energies [40].]

3 Type-II Two Higgs Doublet Model
The 2HDM Lagrangian for the Higgs sector can be written as
$$\mathcal{L} = \sum_{i=1,2} |D_\mu \Phi_i|^2 - V(\Phi_1, \Phi_2) + \mathcal{L}_{\rm Yuk},$$
with the Higgs potential
$$V(\Phi_1, \Phi_2) = m_{11}^2 \Phi_1^\dagger \Phi_1 + m_{22}^2 \Phi_2^\dagger \Phi_2 - m_{12}^2 \left(\Phi_1^\dagger \Phi_2 + {\rm h.c.}\right) + \frac{\lambda_1}{2} \left(\Phi_1^\dagger \Phi_1\right)^2 + \frac{\lambda_2}{2} \left(\Phi_2^\dagger \Phi_2\right)^2 + \lambda_3 \left(\Phi_1^\dagger \Phi_1\right)\left(\Phi_2^\dagger \Phi_2\right) + \lambda_4 \left(\Phi_1^\dagger \Phi_2\right)\left(\Phi_2^\dagger \Phi_1\right) + \frac{\lambda_5}{2} \left[\left(\Phi_1^\dagger \Phi_2\right)^2 + {\rm h.c.}\right],$$
assuming CP conservation and a soft $Z_2$ symmetry breaking term $m_{12}^2$. After EWSB, one of the four neutral components and two of the four charged components are eaten by the SM gauge bosons $Z$, $W^\pm$, providing their masses. The remaining physical mass eigenstates are two CP-even neutral Higgs bosons $h$ and $H$, with $m_h < m_H$, one CP-odd neutral Higgs boson $A$, as well as a pair of charged ones $H^\pm$. Instead of the eight parameters appearing in the Higgs potential, $m_{11}^2$, $m_{22}^2$, $m_{12}^2$ and $\lambda_{1,2,3,4,5}$, a more convenient choice of parameters is $v$, $\tan\beta$, $\alpha$, $m_h$, $m_H$, $m_A$, $m_{H^\pm}$ and $m_{12}^2$, where $\alpha$ is the rotation angle diagonalizing the CP-even Higgs mass matrix.²

The Type-II 2HDM is characterized by the choice of the Yukawa couplings to the SM fermions, with the up-type quarks coupling to $\Phi_2$ and the down-type quarks and charged leptons coupling to $\Phi_1$:
$$-\mathcal{L}_{\rm Yuk} = Y_u \bar{Q}_L \tilde{\Phi}_2 u_R + Y_d \bar{Q}_L \Phi_1 d_R + Y_\ell \bar{L}_L \Phi_1 e_R + {\rm h.c.}$$
After EWSB, the effective Lagrangian for the light CP-even Higgs couplings to the SM particles can be parameterized by rescaling each SM coupling with a factor $\kappa_i$ (Eq. (3.6)), where $i$ indicates the individual Higgs coupling. Their values at the tree level are
$$\kappa_Z = \kappa_W = \sin(\beta - \alpha), \qquad \kappa_t = \kappa_c = \frac{\cos\alpha}{\sin\beta}, \qquad \kappa_b = \kappa_\tau = -\frac{\sin\alpha}{\cos\beta}.$$
Our sign convention is $\beta \in (0, \frac{\pi}{2})$, $\beta - \alpha \in [0, \pi]$, so that $\sin(\beta - \alpha) \geq 0$. The CP-even Higgs couplings to the SM gauge bosons are $g_{hVV} \propto \sin(\beta - \alpha)$ and $g_{HVV} \propto \cos(\beta - \alpha)$.

The current measurements of the Higgs boson properties from the LHC are consistent with the SM Higgs boson interpretation. There are two well-known limits in the 2HDM that would lead to a SM-like Higgs sector. The first situation is the alignment limit [19,45] of $\cos(\beta - \alpha) = 0$, in which the light CP-even Higgs boson couplings are identical to the SM ones, regardless of the other scalar masses, potentially leading to rich BSM physics. For $\sin(\beta - \alpha) = 0$, the opposite situation occurs, with the heavy $H$ being identified as the SM Higgs boson. While it is still a viable option for the heavy Higgs boson to be the observed 125 GeV SM-like Higgs boson [46,47], the allowed parameter space is being squeezed by the tight direct and indirect experimental constraints. Therefore, in our analyses below, we identify the light CP-even Higgs $h$ as the SM-like Higgs with $m_h$ fixed to be 125 GeV. The other well-known case is the "decoupling limit", in which the heavy mass scales are all large, $m_{A,H,H^\pm} \gg 2m_Z$ [48], so that they decouple from the low energy spectrum. For masses of heavy Higgs bosons much larger than $\lambda_i v^2$, $\cos(\beta - \alpha) \sim \mathcal{O}(m_Z^2/m_A^2)$ under perturbativity and unitarity requirements. Therefore, the light CP-even Higgs boson $h$ is again SM-like. Although it is easier and natural to achieve the decoupling limit by sending all the other mass scales to be heavy, there would be little observable BSM effect given the nearly inaccessible heavy mass scales. We will thus primarily focus on the alignment limit.

Note that while $\kappa_g$, $\kappa_\gamma$ and $\kappa_{Z\gamma}$ are zero at the tree level for both the SM and the 2HDM, they are generated at the loop level. In the SM, $\kappa_g$, $\kappa_\gamma$ and $\kappa_{Z\gamma}$ all receive contributions from fermions (mostly the top quark) running in the loop, while $\kappa_\gamma$ and $\kappa_{Z\gamma}$ receive contributions from the $W$-loop in addition [49]. In the 2HDM, the corresponding $hff$ and $hWW$ couplings that enter the loop corrections need to be modified to the corresponding 2HDM values.
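For orientation, the tree-level scaling factors quoted above are easy to tabulate numerically. The short Python sketch below (our illustration, not code from the paper) evaluates $\kappa_V$, $\kappa_{t,c}$ and $\kappa_{b,\tau}$ for a few values of $\cos(\beta-\alpha)$ and $\tan\beta$ in the Type-II convention, and confirms that all of them reduce to 1 in the alignment limit.

```python
# Tree-level Type-II 2HDM coupling modifiers (illustrative sketch).
import math

def kappas(cos_bma, tan_beta):
    """Return (kappa_V, kappa_up, kappa_down) for cos(beta-alpha), tan(beta).

    Conventions as in the text: beta in (0, pi/2), beta - alpha in [0, pi],
    so sin(beta - alpha) >= 0.
    """
    beta = math.atan(tan_beta)
    bma = math.acos(cos_bma)                    # beta - alpha in [0, pi]
    alpha = beta - bma
    k_v = math.sin(bma)                         # kappa_Z = kappa_W
    k_up = math.cos(alpha) / math.sin(beta)     # kappa_t = kappa_c
    k_down = -math.sin(alpha) / math.cos(beta)  # kappa_b = kappa_tau
    return k_v, k_up, k_down

for cbma in (0.0, 0.008, -0.01):
    for tb in (0.5, 1.0, 30.0):
        kv, ku, kd = kappas(cbma, tb)
        print(f"cos(b-a)={cbma:+.3f} tan(b)={tb:5.1f} "
              f"kV={kv:.4f} k_up={ku:.4f} k_down={kd:.4f}")
```

Note how, away from alignment, $\kappa_{b,\tau}$ deviates from unity roughly in proportion to $\tan\beta \cdot \cos(\beta-\alpha)$, which is why $\kappa_b$ and $\kappa_\tau$ dominate the constraints at large $\tan\beta$ in the global fits discussed later.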
Expressions for the dependence of $\kappa_g$, $\kappa_\gamma$ and $\kappa_{Z\gamma}$ on $\kappa_V$ and $\kappa_f$ can be found in Ref. [50]. There are, in addition, loop corrections to $\kappa_g$, $\kappa_\gamma$ and $\kappa_{Z\gamma}$ from the extra Higgs bosons in the 2HDM. It is of particular importance to include a discussion of the triple couplings among the Higgs bosons themselves. In the alignment limit, the relevant combination is $\lambda v^2 \equiv m_\Phi^2 - m_{12}^2/(\sin\beta\cos\beta)$, which is the parameter that enters the Higgs self-couplings and is relevant for the loop corrections to the SM-like Higgs boson couplings. This parameter can be used interchangeably with $m_{12}^2$, as we will do for convenience. For the rest of our analysis, we fix $v = 246$ GeV and $m_h = 125$ GeV. The remaining free parameters are
$$\tan\beta, \quad \cos(\beta - \alpha), \quad m_H, \quad m_A, \quad m_{H^\pm} \quad {\rm and} \quad \lambda. \tag{3.10}$$
Note that while these six parameters are independent of each other, their allowed ranges under perturbativity, unitarity, and stability considerations are correlated. For simplicity, with important consequences, one often starts from the degenerate case where all heavy Higgs boson masses are set equal. We will explore both the degenerate case, $m_H = m_A = m_{H^\pm} \equiv m_\Phi$, and the non-degenerate case, parameterized by the mass splittings $\Delta m_A \equiv m_A - m_H$ and $\Delta m_C \equiv m_{H^\pm} - m_H$.

Given the current LHC Higgs boson measurements [51-54], deviations of the Higgs boson couplings from the decoupling and alignment limits are still allowed at about the 10% level. All the tree-level deviations from the SM Higgs boson couplings are parametrized by only two parameters: $\tan\beta$ and $\cos(\beta - \alpha)$. Once additional loop corrections are included, dependences on the heavy Higgs boson masses as well as $\lambda v^2$ also enter. In our analyses below, we study the combined contributions to the couplings of the SM-like Higgs boson with both tree-level and loop corrections.

Before concluding this section, a special remark is in order. The model parameters introduced in this section and henceforth are all at the electroweak scale, identified as on-shell parameters to compare directly with experimental measurements. We do not consider the running effects due to other new physics at a higher scale, such as in Supersymmetry or Grand Unified theories. This would become relevant if one asks whether the alignment behavior could be a natural result of some symmetry or other principle [20]. In such scenarios, the alignment may take place at a higher scale but could be modified at the electroweak scale. Our results here, on the other hand, could be viewed as the acceptable deviations from the exact alignment conditions in a more fundamental theory.

Loop corrections to the SM-like Higgs couplings
We define the normalized SM-like Higgs boson couplings including loop effects as the ratio of the loop-corrected 2HDM coupling to the loop-corrected SM coupling (Eq. (3.13)), where $\kappa_{\rm tree} \equiv g^{\rm 2HDM}_{\rm tree}/g^{\rm SM}_{\rm tree}$, and $g^{\rm 2HDM}_{\rm loop}(\Phi)$ and $g^{\rm 2HDM}_{\rm loop}({\rm SM})$ are the 2HDM Higgs boson couplings including loop corrections with heavy Higgs bosons or with SM particles only, respectively. To leading order in the one-loop corrections, Eq. (3.13) simplifies to Eq. (3.14). In the alignment limit of $\kappa_{\rm tree} = 1$, the term in the bracket of Eq. (3.14) is exactly zero, and $\kappa^{\rm 2HDM}_{\text{1-loop}}\big|_{\rm alignment} = 1 + \Delta\kappa^{\rm 2HDM}_{\text{1-loop}}$.

In our calculations, we adopt the on-shell renormalization scheme [55]. The conventions for the renormalization constants and the renormalization conditions mostly follow Refs. [55,56]. All related counterterms, renormalization constants and renormalization conditions are implemented according to the on-shell scheme and incorporated into model files of FeynArts [57].³ One-loop corrections are generated using FeynArts and FormCalc [63], including all possible one-loop diagrams. FeynCalc [64,65] is also used to simplify the analytical expressions.
LoopTools [66] is used to evaluate the numerical values of all the loop-induced amplitudes. The numerical results have been cross-checked with another numerical program, H-COUP [67], in some cases. For the couplings of the SM-like Higgs boson to a pair of gauge bosons and fermions, the general renormalized $hff$ and $hVV$ vertices take the forms $\hat\Gamma_{hff}$ and $\hat\Gamma_{hVV}$, where $q^\mu$, $p_1^\mu$, and $p_2^\mu$ are the momenta of the Higgs boson and the two other particles, respectively, and $q^2$ is the typical momentum transfer, of the order of $m_h^2$. The $\kappa_i$ for each vertex is given by $\hat\Gamma^S_{hff}$ and $\hat\Gamma^1_{hVV}$ for $hff$ and $hVV$, respectively, which include both the tree-level and one-loop corrections (Eq. (3.17)).

Loop corrections to Z-pole precision observables
The 2HDM contributions to the Peskin-Takeuchi oblique parameters [35] are given in Ref. [68], where the expressions are explicitly split into terms independent of or dependent on the alignment parameter $\cos(\beta - \alpha)$. The expressions for the various $B$- and $F$-functions can be found in Ref. [68]. The mass splittings among the heavy Higgs bosons $(m_H, m_A, m_{H^\pm})$ violate the SU(2) custodial symmetry and thus lead to contributions to the $T$ and $U$ parameters. In Fig. 1, we show the contributions to $\Delta S$ (left panel) and $\Delta T$ (right panel) in the 2HDM, varying $\Delta m_A \equiv m_A - m_H$ and $\Delta m_C \equiv m_{H^\pm} - m_H$ between $\pm 300$ GeV, for $\cos(\beta - \alpha) = 0$. While the contribution to $\Delta S$ is typically small, $|\Delta S| \lesssim 0.03$, the contribution to $\Delta T$ quickly increases when $m_{H^\pm}$ is non-degenerate with either $m_A$ or $m_H$. Therefore, an improved determination of $\Delta T$ from $Z$-pole precision measurements would severely constrain the mass splitting between the charged Higgs and its neutral partners. Furthermore, the non-alignment case also breaks the symmetric pattern between $\Delta m_A$ and $\Delta m_C$ in the $\Delta T$ contribution, preferring slightly negative values of the mass splittings.

Theoretical constraints and current experimental bounds
Heavy Higgs loop corrections involve the Higgs boson masses and self-couplings, which are constrained by various theoretical considerations and experimental measurements, such as vacuum stability, perturbativity and unitarity, as well as electroweak precision measurements, flavor physics constraints, and LHC direct searches. We briefly summarize below the theoretical considerations and experimental constraints.

• Vacuum stability
In order to have a stable vacuum, the following conditions on the quartic couplings need to be satisfied [69]:
$$\lambda_1 > 0, \quad \lambda_2 > 0, \quad \lambda_3 > -\sqrt{\lambda_1 \lambda_2}, \quad \lambda_3 + \lambda_4 - |\lambda_5| > -\sqrt{\lambda_1 \lambda_2}. \tag{3.21}$$

• Perturbativity and unitarity
We adopt a general perturbativity condition of $|\lambda_i| \leq 4\pi$ and the tree-level unitarity of the scattering matrix in the 2HDM scalar sector [70]. In Fig. 2, we show the constraints in the $\lambda v^2$-$\tan\beta$ plane once all the theoretical considerations are taken into account. For the upper panels, we work under the assumption of degenerate heavy Higgs boson masses $m_{H^\pm} = m_H = m_A \equiv m_\Phi$. The left panel is for $m_\Phi = 800$ GeV and the right one is for $m_\Phi = 2000$ GeV, with $\cos(\beta - \alpha) = 0.005$ (red curves), $0$ (alignment limit, blue curves), and $-0.005$ (green curves). Regions enclosed by the curves are theoretically preferred. For the lower mass $m_\Phi = 800$ GeV, the constraints vary very little with the value of $\cos(\beta - \alpha)$. The largest range of $\lambda v^2 \equiv m_\Phi^2 - m_{12}^2/(\sin\beta\cos\beta)$ occurs at $\tan\beta = 1$ [28], which gives $-0.29 < \lambda = -\lambda_4 = -\lambda_5 < 5.95$ and $0 < \lambda_3 < 6.21$. For the large value of $m_\Phi = 2000$ GeV, a slight shift of $\cos(\beta - \alpha)$ leads to a notable change in the constraints on $\lambda v^2$, as shown by the red and green curves in the top right panel of Fig. 2.
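Returning to the oblique corrections discussed around Fig. 1, the behavior of $\Delta T$ under mass splittings can be illustrated with the standard alignment-limit expression built from the symmetric function $F(x,y) = (x+y)/2 - xy\ln(x/y)/(x-y)$ of the squared heavy-Higgs masses, of the type collected in Ref. [68]. The Python sketch below is our own numerical illustration under that assumption; it omits the $\cos(\beta-\alpha)$-dependent pieces and is not the paper's code.

```python
# Alignment-limit 2HDM contribution to the T parameter (illustrative sketch).
import math

V = 246.0               # Higgs vev in GeV
ALPHA_EM = 1 / 137.036  # fine-structure constant (low-energy value, for illustration)

def F(x, y):
    """F(x, y) = (x + y)/2 - x*y*ln(x/y)/(x - y), with F(x, x) = 0."""
    if abs(x - y) < 1e-9:
        return 0.0
    return 0.5 * (x + y) - x * y * math.log(x / y) / (x - y)

def delta_T(mH, mA, mC):
    """Custodial-breaking contribution from (H, A, H+-) at cos(beta-alpha) = 0."""
    x_H, x_A, x_C = mH**2, mA**2, mC**2
    return (F(x_C, x_A) + F(x_C, x_H) - F(x_A, x_H)) / (16 * math.pi**2 * ALPHA_EM * V**2)

# Degenerate masses respect custodial symmetry: Delta T = 0.
print(delta_T(800.0, 800.0, 800.0))  # ~0
# Splitting the charged Higgs from both neutral states turns on Delta T.
print(delta_T(800.0, 800.0, 900.0))  # sizable
# If m_C coincides with one neutral mass, the F terms offset each other.
print(delta_T(800.0, 900.0, 900.0))  # ~0
```

This reproduces the qualitative pattern described in the text: $\Delta T$ vanishes when the charged Higgs is degenerate with either neutral state and grows quickly otherwise.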
The theoretically preferred region also depends on the individual heavy Higgs boson masses, as well as on the deviation from the degenerate condition; the lower panels of Fig. 2 show the corresponding constraints away from the degenerate case.

• LHC direct searches
The strongest bounds at large $\tan\beta$ come from the $A/H \to \tau^+\tau^-$ mode, which excludes $m_{A/H} \sim 300-500$ GeV for $\tan\beta \sim 10$, and about 1500 GeV for $\tan\beta \sim 50$. The strongest bounds at small $\tan\beta \lesssim 1$ come from the $A/H \to t\bar{t}$ mode. The latest ATLAS search in this channel utilized the lineshape of the $t\bar{t}$ invariant mass distribution, which exhibits a peak-dip structure due to the interference between the signal and the SM $t\bar{t}$ background [72,73]. A strong 95% C.L. bound of $m_{A/H}$ around 600 GeV can be reached for $\tan\beta = 1$ for the degenerate mass case $m_A = m_H$ under the alignment limit. The direct searches for heavy charged Higgs bosons have been conducted in the $H^\pm \to (\tau\nu, tb)$ channels [16-18], and the bounds are relatively weak given the rather small leading production cross section for $bg \to tH^\pm$, the large SM backgrounds for the dominant $H^\pm \to tb$ channel, and the relatively small branching fraction of $H^\pm \to \tau\nu$ [74]. Projections for future colliders have been studied under the alignment limit and mass-degenerate assumption. The strongest constraints for the large $\tan\beta$ region come from the $A/H \to \tau^+\tau^-$ searches: $m_{A/H}$ could be excluded to about 1000 GeV for $\tan\beta \sim 10$, and even larger masses for larger $\tan\beta$. $H^\pm \to tb$ offers better exclusion at low $\tan\beta$, which excludes $m_{H^\pm}$ to about 600 GeV for $\tan\beta \sim 1$. A possible $A/H \to t\bar{t}$ mode might help to extend the exclusion reach to about 2000 GeV for $\tan\beta \sim 1$ [73,76]. At a 100 TeV $pp$ collider with 3 ab⁻¹ luminosity, $A/H \to \tau^+\tau^-$ could extend the reach at large $\tan\beta$ to about 2000 GeV at $\tan\beta \sim 10$ and about 3 TeV for $\tan\beta \sim 50$. The coverage at low $\tan\beta$ could also be extended, to about $m_{H^\pm} \sim 1500$ GeV via $H^\pm \to tb$ and $m_A \sim 2500$ GeV via $A/H \to t\bar{t}$ for $\tan\beta \sim 1$ [75].

Since the branching fractions of the conventional search channels could be highly suppressed once other exotic decay channels of the non-SM Higgs bosons to light Higgs bosons and/or SM gauge bosons open up [77-79], it is important to note that the current exclusion limits could be relaxed. Current LHC limits on $m_{A,H}$ via searches for the exotic decay modes $A/H \to HZ/AZ$ are up to about $700-800$ GeV, depending on the spectrum of non-SM Higgs bosons [12,13]. $m_{A,H}$ could be excluded to about 1500 GeV at the HL-LHC and about 3000 GeV at a 100 TeV $pp$ collider [80]. While the exotic Higgs decay channel $A \to h(\to b\bar{b}, \tau^+\tau^-)Z$ is absent in the alignment limit, this channel could be used to constrain $\cos(\beta - \alpha)$ and $\tan\beta$ when a deviation from the alignment limit is allowed. The projected $A \to hZ$ search results in the $\cos(\beta - \alpha)$-$\tan\beta$ plane for the LHC at 13 TeV with an integrated luminosity of 36 fb⁻¹ (cyan) [11] and the future HL-LHC at 14 TeV with an integrated luminosity of 3 ab⁻¹ (green) [81], for $m_A = 800$ GeV (left panel) and $m_A = 2000$ GeV (right panel), are shown in Fig. 3 as the colored survival regions. For the case of $m_A = 800$ GeV, a narrow band within $|\cos(\beta - \alpha)| \lesssim 0.1$ or $|\cos(\beta - \alpha)| \lesssim 0.02$ is still allowed by the current LHC or the future HL-LHC data, respectively, as expected. Another branch from $\cos(\beta - \alpha) = 0$ to $\cos(\beta - \alpha) = 1.0$, with $\tan\beta$ decreasing from $5-10$ to $\sim 0.1$, is also allowed, which corresponds to the region with a suppressed ${\rm BR}(h \to b\bar{b})$. The constraint for the $m_A = 2000$ GeV case is far less stringent in the LHC 13 TeV case. Only the lower left region is excluded, in which both the production cross section $\sigma(gg \to A)$ and the decay branching fraction ${\rm BR}(A \to hZ) \times {\rm BR}(h \to b\bar{b})$ are enhanced.
For the HL-LHC case, the tan β ≲ 1 regions are largely excluded, leaving allowed a narrow band with |cos(β − α)| ≲ 0.1 and a branch stretching from cos(β − α) = 0 to cos(β − α) = 1.0 with tan β decreasing from ∼1 to ∼0.1. This is complementary to the SM-like Higgs boson signal-strength measurements, which with the current LHC data constrain cos(β − α) to be less than about 0.1 around tan β ∼ 1, and to even narrower ranges at small and large tan β in the Type-II 2HDM [28], except for a small wrong-sign Yukawa coupling region at tan β ≳ 2. Flavor physics considerations usually constrain the charged Higgs mass to be larger than about 600 GeV in the Type-II 2HDM [74]. However, given the uncertainties involved in those flavor measurements, and since they are in general less stringent than the direct collider limits, we will not pursue the flavor bounds further.

Study Strategy and Results

In an earlier work [28], constraints from the tree-level effects on cos(β − α) and tan β, as well as from loop contributions in the degenerate-mass case m_H = m_A = m_{H±} = m_Φ under the alignment limit, were analyzed. In this work, we extend the study to the more general cases of non-degenerate masses and non-alignment, including both the tree-level and one-loop contributions. We also incorporate the Z-pole precision results to show the complementarity between the Higgs and Z-pole precision measurements.

Global fit framework

To translate the anticipated accuracy of the experimental measurements into constraints on the model parameters, we perform a global fit, constructing the χ² with the profile likelihood method:

$\chi^2_{\rm Higgs} = \sum_i \frac{\left(\mu_i^{\rm BSM} - \mu_i^{\rm obs}\right)^2}{\sigma_{\mu_i}^2}\,. \qquad (4.1)$

Here, μ_i^BSM = (σ × BR)_BSM/(σ × BR)_SM for the various Higgs search channels. We note that the correlations among the different σ × BR are usually not provided and are thus assumed to be zero in the fits. μ_i^BSM is predicted in each specific model, depending on the model parameters. In our analyses, for the future colliders, the μ_i^obs are set to the SM value, μ_i^obs = 1, assuming no deviations from the SM observables. The corresponding σ_{μ_i} are the estimated errors for each process, as shown in Tab. 3 for the CEPC, FCC-ee, and ILC. For the ILC, with its three different center-of-mass energies, we sum the contributions from each individual channel.

We fit directly to the signal strengths μ_i instead of the effective couplings κ_i; the latter are what is usually presented in most experimental papers. While the κ-framework is easy to map onto specific models, the various κ_i, unlike the μ_i, are not independent experimental observables. Ultimately, fitting to either μ_i or κ_i should give the same results if the correlations between the κ_i are properly included. Those correlation matrices, however, are typically not provided by experiments; fitting to the κ_i alone, assuming no correlations, therefore usually leads to more relaxed constraints. For a comparison of μ-fit versus κ-fit results, see Ref. [28].

For the Z-pole precision measurements, we fit to the oblique parameters S, T, and U, including the correlations among them as given in Tab. 2. We define the χ² as

$\chi^2_{STU} = \sum_{i,j} \left(X_i - \hat X_i\right)\left(\sigma^2\right)^{-1}_{ij}\left(X_j - \hat X_j\right),$

with X_i = (∆S, ∆T, ∆U)_2HDM being the 2HDM predicted values, and X̂_i = (∆S, ∆T, ∆U) the best-fit central values for the current measurements (set to 0 for the future measurements). σ_ij is the error matrix, σ²_ij ≡ σ_i ρ_ij σ_j, with the σ_i and the correlation matrix ρ_ij given in Tab. 2.
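For concreteness, a minimal numerical sketch of the two χ² pieces used in the global fit is given below; all precisions, central values, and correlations are placeholders standing in for the actual entries of Tabs. 2 and 3.

```python
import numpy as np

def chi2_higgs(mu_bsm, mu_obs, sigma):
    """Uncorrelated signal-strength chi^2 of Eq. (4.1)."""
    mu_bsm, mu_obs, sigma = map(np.asarray, (mu_bsm, mu_obs, sigma))
    return float(np.sum(((mu_bsm - mu_obs) / sigma) ** 2))

def chi2_stu(x, x_hat, sig, rho):
    """Correlated (S, T, U) chi^2 with error matrix sigma^2_ij = sigma_i rho_ij sigma_j."""
    x, x_hat, sig, rho = map(np.asarray, (x, x_hat, sig, rho))
    cov = np.outer(sig, sig) * rho          # build the error matrix
    d = x - x_hat
    return float(d @ np.linalg.solve(cov, d))

# Placeholder inputs (NOT the actual CEPC/FCC-ee/ILC numbers)
mu_obs = np.ones(4)                                  # SM-like future "data"
sigma_mu = np.array([0.003, 0.017, 0.008, 0.010])    # per-channel precisions
mu_bsm = np.array([1.004, 0.980, 1.010, 0.995])      # model prediction
rho = np.array([[1.0, 0.5, -0.2],
                [0.5, 1.0, -0.1],
                [-0.2, -0.1, 1.0]])                  # illustrative correlations
print(chi2_higgs(mu_bsm, mu_obs, sigma_mu))
print(chi2_stu([0.01, 0.02, 0.0], [0.0, 0.0, 0.0], [0.01, 0.01, 0.01], rho))
```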
Case with degenerate heavy Higgs boson masses

We first consider the simple case of degenerate heavy Higgs boson masses, m_H = m_A = m_{H±} ≡ m_Φ, such that the Z-pole precision constraints are automatically satisfied. As shown in Ref. [28], in the Type-II 2HDM the current LHC Higgs precision has already constrained cos(β − α) to be less than about 0.1. To explore the impact of the anticipated precision Higgs measurements at the CEPC, we perform a two-parameter global fit including the loop contributions.

In Fig. 4, we show the 95% C.L. allowed regions in the two-parameter cos(β − α)-tan β plane from the individual couplings, given by the colored curves: blue (κ_b), orange (κ_c), purple (κ_τ), green (κ_Z), and cyan (κ_g), for the benchmark point m_Φ = 800 GeV, √(λv²) = 300 GeV. κ_γ does not have a notable effect and is therefore not shown. For large values of tan β, regions below the colored curves are allowed, while for small values of tan β, regions above the colored curves are allowed. The central red region is the global-fit result, with the best-fit point indicated by the black star. The two solid horizontal black lines represent the upper and lower limits on tan β from the theoretical constraints shown in Fig. 2 earlier. The region enclosed by the dashed black lines shows the tree-level-only result for comparison.

For the Type-II 2HDM, the allowed cos(β − α) region shrinks at both larger and smaller values of tan β. At large tan β, κ_b and κ_τ provide the strongest constraints, since they are enhanced by a universal tan β factor. For small values of tan β, κ_g (or, effectively, κ_t) rules out large values of cos(β − α), followed by κ_c for negative cos(β − α). Combining all channels, the 95% C.L. region of the global fit is 0.2 ≤ tan β ≤ 30 and −0.01 ≤ cos(β − α) ≤ 0.008 for this benchmark point. We note that the upper bound on tan β and the lower (negative) bound on cos(β − α) coming from κ_g are mainly due to the large contribution of the b-quark loop with an enhanced κ_b. The overall range is slightly smaller than the tree-level-only result, shown by the region enclosed by the dashed lines; the distorted shape of the global-fit region relative to the tree-level-only result is due to the interplay between the tree-level contribution and the loop corrections. Note that while κ_Z can be measured with better than 0.2% precision, it is less constraining than the other couplings, given the 1/tan β (tan β) enhanced sensitivities of κ_{t,c} (κ_{b,τ}) in the small (large) tan β region.

To illustrate the dependence on m_Φ and λv², which enter the loop corrections, we show in Fig. 5 the 95% C.L. allowed regions in the cos(β − α)-tan β plane given the CEPC Higgs precision, for m_Φ ranging from 800 GeV to 3 TeV. For m_Φ = 3 TeV, the one-loop effects almost decouple and the final allowed region is close to the tree-level result. Compared with the constraints in the cos(β − α)-tan β plane from the LHC A → hZ searches shown in Fig. 3, and with the current and HL-LHC Higgs coupling precision measurements [28], a future Higgs factory can constrain the allowed cos(β − α) range of the 2HDM parameter space at least an order of magnitude better.
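The qualitative pattern of Fig. 4 already follows from the tree-level Type-II coupling modifiers, κ_V = sin(β − α), κ_{t,c} = sin(β − α) + cos(β − α)/tan β, and κ_{b,τ} = sin(β − α) − cos(β − α) tan β. The short sketch below shows how tan β amplifies a fixed cos(β − α) deviation (tree level only; the loop pieces discussed above are not included):

```python
import numpy as np

def kappas_type2_tree(cba, tb):
    """Tree-level Type-II 2HDM coupling modifiers (positive sin(beta-alpha) branch)."""
    sba = np.sqrt(1.0 - cba ** 2)
    kV = sba                      # hVV
    ku = sba + cba / tb           # up-type quarks (t, c)
    kd = sba - cba * tb           # down-type quarks and charged leptons (b, tau)
    return kV, ku, kd

cba = 0.008                       # near the 95% C.L. boundary quoted above
for tb in (0.2, 1.0, 30.0):
    kV, ku, kd = kappas_type2_tree(cba, tb)
    print(f"tan(beta) = {tb:5.1f}: kV = {kV:.5f}, k_tc = {ku:.4f}, k_btau = {kd:.4f}")
```

At tan β = 30 the bottom Yukawa deviates by about 24% for cos(β − α) = 0.008, which is why κ_b and κ_τ dominate the large-tan β constraint, while at tan β = 0.2 the up-type deviation of 4% drives the κ_{g,c} sensitivity.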
High-precision Higgs coupling measurements can also be used to constrain the masses of the heavy Higgs bosons running in the loop. In Fig. 6, we show the 95% C.L. allowed regions in the m_Φ-tan β plane. We also show in Fig. 6 the regions allowed by the theoretical considerations, with different colors for different choices of cos(β − α). While all values of m_Φ and tan β are allowed in the alignment limit cos(β − α) = 0, once cos(β − α) deviates from 0, large m_Φ as well as the small and large tan β regions are ruled out by the theoretical considerations. Combining the theoretical constraints and the precision Higgs measurements, a bounded region in the m_Φ-tan β plane is obtained for the non-alignment cases.

For √(λv²) = 300 GeV, larger loop corrections further modify the allowed region in m_Φ and tan β. The tt threshold region m_Φ ≈ 350 GeV is inaccessible, and the range of tan β shrinks to 0.3−1.5 as cos(β − α) varies from 0 to 0.005. For negative cos(β − α) = −0.005, the allowed region splits into two parts: the part with m_Φ ≤ 1000 GeV has a wide range of tan β, while for m_Φ > 1000 GeV, 0.4 < tan β < 1.6. Theoretical considerations further limit the range of tan β to between 0.35 and 3, as shown by the shaded region. For cos(β − α) = ±0.005, m_Φ has an upper limit of about 2750 GeV from the theoretical considerations.

While λv² ≡ m_Φ² − m_{12}²/(s_β c_β) is a convenient parameter, since it is directly linked to the triple Higgs self-couplings, it is sometimes convenient to fix the soft Z₂-breaking parameter m_{12}² instead. The resulting 95% C.L. allowed regions in the m_Φ-tan β plane are shown in Fig. 7 for m_12 = 0 (left panel) and 300 GeV (right panel). The theoretical constraints discussed in the previous section are indicated by the shaded gray regions; they depend very little on the value of cos(β − α) when m_{12}² is kept fixed. For m_12 = 0, m_Φ = √(λv²) is constrained to be less than around 250 GeV. For larger values of m_12, the rather narrow allowed region seen in the right panel indicates a strong correlation between m_Φ and tan β at large tan β, approximately scaling as tan β ∼ (m_Φ/m_12)², which minimizes the corresponding λv² value and thus its loop effects. The indirect probe of m_Φ via Higgs precision measurements complements the direct search limits at the LHC, especially in the intermediate-tan β wedge region where the direct search limits are weakest.
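The large-tan β correlation noted above can be checked in one line from the definition λv² ≡ m_Φ² − m_{12}²/(sin β cos β): since sin β cos β ≈ 1/tan β at large tan β, setting tan β = (m_Φ/m_12)² drives λv² toward zero and minimizes the loop effects. A quick numerical check:

```python
import math

def lam_v2(m_phi, m12, tb):
    """lambda v^2 = m_Phi^2 - m12^2 / (sin(beta) * cos(beta)), in GeV^2."""
    beta = math.atan(tb)
    return m_phi ** 2 - m12 ** 2 / (math.sin(beta) * math.cos(beta))

m12 = 300.0                        # GeV, matching the right panel of Fig. 7
for m_phi in (800.0, 1500.0, 2000.0):
    tb = (m_phi / m12) ** 2        # the approximate ridge tan(beta) ~ (m_Phi/m12)^2
    print(f"m_Phi = {m_phi:6.0f} GeV, tan(beta) = {tb:6.1f}, "
          f"lambda v^2 = {lam_v2(m_phi, m12, tb):10.1f} GeV^2")
```

Along this ridge |λv²| stays small compared with m_Φ², which is why the surviving region in the right panel of Fig. 7 collapses onto the tan β ∼ (m_Φ/m_12)² band.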
Case with non-degenerate heavy Higgs boson masses

Going beyond the degenerate case, both the Higgs and Z-pole precision observables are sensitive to the mass splittings among the non-SM heavy Higgs bosons. In Fig. 8, we show the allowed deviation of the mass splitting ∆m_Φ given the expected CEPC precision. For the case of m_H = m_{H±} (lower panels), the allowed range of ∆m_Φ is larger: up to about 400 GeV for √(λv²) = 0 and up to about 500 GeV for √(λv²) = 300 GeV. Note that the point ∆m_Φ = 0 corresponds to the cos(β − α) = 0 situation of Fig. 6, which is much less restrictive than the non-degenerate case ∆m_Φ ≠ 0.

In Fig. 9, we show the constraints in the ∆m_A-∆m_C plane from the individual Higgs coupling measurements (colored curves), together with the 95% C.L. global-fit results (red shaded region), for tan β = 0.2 (left panel), 1 (middle panel), and 7 (right panel) in the alignment limit, with m_H = 800 GeV and √(λv²) = 300 GeV. For each individual coupling constraint with a "±" error bar, the dashed curve is the negative limit and the solid curve the positive limit; the region between the two curves is allowed. For κ_γ, the region above the curve is allowed. In the alignment limit, κ_Z is independent of tan β, as is apparent in the figure. In the Type-II 2HDM, generally speaking, κ_{b,τ} are tan β-enhanced while κ_c is cot β-enhanced. Thus for small tan β the main constraint on the mass splittings comes from κ_c, leading to a small region overlapping with κ_Z as the global-fit result, ∆m_A ∼ −40 to 0 GeV (left panel). For large tan β it comes from κ_{b,τ}, resulting in ∆m_A ∼ −50 to −250 GeV (right panel). For tan β ∼ 1, the constraints from both κ_{b,τ} and κ_c are relatively relaxed, leading to a larger allowed range of mass splittings, ∆m_A ∼ −250 to 400 GeV (middle panel), set mostly by κ_Z. The range of ∆m_C is typically between −200 and 100 GeV, constrained by κ_Z. κ_γ mainly involves the charged-Higgs loops and constrains only weakly; κ_g does not constrain the mass splittings significantly and is therefore not shown.

In Fig. 10, we present the 95% C.L. allowed regions in the ∆m_A-∆m_C plane for m_H = 800 GeV (left panels) and 2000 GeV (right panels), again in the alignment limit. The upper panels are for √(λv²) = 0; there, for m_H = 2000 GeV, the allowed ranges of the mass differences are much more relaxed and almost independent of tan β. For √(λv²) = 300 GeV (lower panels), however, the largest ranges of ∆m_{C,A} are obtained for tan β ∼ 2, for both benchmark choices of m_Φ, owing to the individual-coupling constraints illustrated in Fig. 9. For m_H = 2000 GeV, the allowed ranges of the mass differences vary little over 0.5 < tan β < 2, but shrink quickly for larger tan β.

In Fig. 11, we show the 95% C.L. contours in the ∆m_A-∆m_C plane, focusing on the cos(β − α) dependence (indicated by the color codes), for the Higgs (solid curves) and Z-pole (dashed curves) precision constraints individually (left panels) and combined (right panels); the upper rows are for m_H = 800 GeV, √(λv²) = 0, the middle rows for m_H = 800 GeV, √(λv²) = 300 GeV, and the bottom rows for m_H = 2000 GeV, √(λv²) = 0. tan β = 1 is assumed throughout. For the Higgs precision fit, the alignment limit cos(β − α) = 0 (blue curves) typically gives the largest allowed ranges. Even for small deviations from the alignment limit, cos(β − α) = ±0.007, ∆m_A is constrained to be positive for cos(β − α) = 0.007, and the allowed region splits into two branches for cos(β − α) = −0.007. The Z-pole precision measurements force the mass splittings to either ∆m_C ∼ 0 or ∆m_C ∼ ∆m_A, i.e., m_{H±} ∼ m_{H,A}. The dependence of the Z-pole constraints on cos(β − α) is almost unnoticeable, given the small range of cos(β − α) allowed by the current LHC Higgs precision measurements.
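The Z-pole preference for ∆m_C ∼ 0 or ∆m_C ∼ ∆m_A can be made explicit with the standard alignment-limit expression for the heavy-Higgs contribution to T, which vanishes whenever m_{H±} coincides with either m_A or m_H. A schematic sketch follows; overall normalization conventions vary between references, and the values of α and v below are illustrative:

```python
import math

V = 246.0                 # electroweak vev in GeV
ALPHA = 1.0 / 137.036     # fine-structure constant (illustrative scheme choice)

def F(x, y):
    """F(x, y) = (x + y)/2 - x*y/(x - y) * ln(x/y); note F(x, x) = 0."""
    if math.isclose(x, y, rel_tol=1e-12):
        return 0.0
    return 0.5 * (x + y) - x * y / (x - y) * math.log(x / y)

def delta_T(mH, mA, mC):
    """Alignment-limit heavy-Higgs contribution to the T parameter (schematic)."""
    pref = 1.0 / (16.0 * math.pi ** 2 * ALPHA * V ** 2)
    return pref * (F(mC**2, mA**2) + F(mC**2, mH**2) - F(mA**2, mH**2))

mH = 800.0
for dmA, dmC in [(200.0, 0.0), (200.0, 200.0), (200.0, 100.0)]:
    t = delta_T(mH, mH + dmA, mH + dmC)
    print(f"dm_A = {dmA:5.0f} GeV, dm_C = {dmC:5.0f} GeV: Delta T = {t:+.3f}")
```

The first two benchmark points (m_{H±} = m_H and m_{H±} = m_A) give ∆T = 0 exactly, while the intermediate splitting gives an O(0.1) contribution, well within reach of improved Z-pole precision.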
Combining the Higgs and Z-pole precisions (right panels), the ranges of ∆m_{C,A} are further constrained to less than about 200 GeV in the alignment limit for m_H = 800 GeV, √(λv²) = 0, with positive (negative) values of the mass splittings preferred for positive (negative) cos(β − α). For √(λv²) = 300 GeV, loop corrections play a more important role: for cos(β − α) = 0.007, only a thin strip with ∆m_C ∼ 0 and 0 ≲ ∆m_A ≲ 500 GeV is allowed, while for cos(β − α) = −0.007, −250 GeV ≲ ∆m_C ∼ ∆m_A ≲ −100 GeV, as well as a thin slice with ∆m_C ∼ 0 at negative ∆m_A, can be accommodated. For the larger m_H = 2000 GeV, while the ranges of the mass splittings are typically larger in the alignment limit, deviations from the alignment limit lead to tighter constraints because of the suppressed loop contributions.

The Higgs and Z-pole precision measurements at future lepton colliders thus provide complementary information. While the Z-pole precision is more sensitive to the mass splittings between the charged Higgs boson and the neutral ones (either m_H or m_A), the Higgs precision measurements can in addition impose an upper bound on the mass splitting between the neutral states. Furthermore, the Higgs precision measurements are more sensitive to the parameters cos(β − α), tan β, √(λv²), and the masses of the heavy Higgs bosons.

Comparison between different lepton colliders

In this section, we briefly compare the potential reach of the different machines, CEPC, FCC-ee, and ILC, using the Z-pole precisions of Tab. 2 and the Higgs precisions of Tab. 3. As shown in Fig. 12, for the Z-pole precision FCC-ee has the best performance, owing to its higher proposed luminosity at the Z-pole. For the combined fit, FCC-ee shows the strongest constraint, dominated by the Z-pole effects.

Summary and Conclusions

In this paper, we examined the impact of precision measurements of the SM parameters at the proposed Z-factories and Higgs factories on an extended Higgs sector. We first summarized, in Section 2, the anticipated accuracies in determining the EW observables at the Z-pole and at the Higgs factories; these expectations serve as general guidance and as inputs for the subsequent study of the BSM Higgs sector. We illustrated the approach with a detailed study of a well-motivated theory, the Type-II 2HDM. Previous works focused either on the tree-level deviations alone, or on loop corrections in the alignment limit with degenerate heavy Higgs boson masses. In our analysis, we extended the existing results by including the tree-level and one-loop effects with non-degenerate Higgs masses. The general formulation, theoretical considerations, and existing constraints on the model parameters were presented in Section 3; see Figs. 1−3.

The main results were presented in Section 4, where we performed a global fit to the expected precision measurements over the full model-parameter space. We first set up the global χ²-fitting framework and then illustrated the simple case of degenerate heavy Higgs masses in Fig. 4, using the expected CEPC precision. We found that in the cos(β − α)-tan β parameter space, the largest 95% C.L. range, |cos(β − α)| ≲ 0.008, is achieved for tan β around 1, with smaller and larger values of tan β tightly constrained by κ_{g,c} and κ_{b,τ}, respectively. Compared with the tree-level-only results [28], cos(β − α) shifts toward negative values for tan β > 1. Smaller heavy Higgs masses and larger λv² lead to larger loop corrections, as shown in Fig. 5.
The limits on the heavy Higgs masses also depend on tan β, λv², and cos(β − α), as shown in Fig. 6 and, varying m_{12}² instead, in Fig. 7. While the most relaxed limits are obtained in the alignment limit with small λv², deviations from the alignment limit lead to much tighter constraints, especially on the allowed range of tan β. The reach in the m_Φ-tan β plane is complementary to the direct non-SM Higgs search limits at the LHC and future pp colliders, especially in the intermediate-tan β region where the direct search limits are weakest.

It is important to explore the extent to which departures from the degenerate-mass case can be probed by the precision measurements. Fig. 8 showed the allowed deviation of ∆m_Φ with the expected CEPC precision, and Fig. 9 demonstrated the constraints from the individual decay channels of the SM-like Higgs boson. As shown in Fig. 10, the Higgs precision measurements alone constrain ∆m_{A,C} to less than about a few hundred GeV, with tighter constraints for small m_H, large λv², and small or large values of tan β. Z-pole measurements, on the other hand, constrain the deviation from m_{H±} ∼ m_{A,H}. We found that the expected accuracies at the Z-pole and at a Higgs factory are quite complementary in constraining the mass splittings: while the Z-pole precision is more sensitive to the splittings between the charged Higgs and the neutral states (either m_H or m_A), the Higgs precision measurements can in addition impose an upper bound on the splitting between the neutral states. Combining the Higgs and Z-pole precision measurements constrains the mass splittings even further, as shown in Fig. 11, especially away from the alignment limit. Furthermore, the Higgs precision measurements are more sensitive to parameters such as cos(β − α), tan β, √(λv²), and the heavy Higgs boson masses. We found that, except for cancellations in some correlated parameter regions, the allowed ranges are typically

tan β ∼ 0.2 − 5, |cos(β − α)| < 0.008, |∆m_Φ| < 200 GeV. (5.1)

For illustration, we mostly presented our results using the CEPC precisions for the Higgs and Z-pole measurements. Comparisons among the proposed Higgs factories CEPC, FCC-ee, and ILC are shown in Figs. 12 and 13: while the ILC, with its several center-of-mass energies, has a slightly better reach in the Higgs precision fit, FCC-ee has a slightly better reach in the Z-pole precisions. The precision measurements of the SM parameters at the proposed Z- and Higgs factories would significantly advance our understanding of electroweak physics, shed light on possible new physics beyond the SM, and complement the direct searches at the LHC and future hadron colliders.
Severe Hypoglycemia Caused by a Giant Borderline Phyllodes Tumor of the Breast: A Case Report and Literature Review
Severe Hypoglycemia Caused by a Giant Borderline Phyllodes Tumor of the Breast: A Case Report and Literature Review

We describe a case of hypoglycemic coma caused by a giant borderline phyllodes tumor of the breast. The patient, a 63-year-old woman, was admitted with recurrent unconsciousness. She had a giant breast tumor with decreased blood glucose, insulin, and C-peptide. The patient's hypoglycemia resolved rapidly after resection of the breast tumor. Pathological examination indicated a borderline phyllodes tumor of the breast, and immunohistochemistry suggested high expression of insulin-like growth factor-2 (IGF-2) in the tumor tissue. A literature review is included to summarize the clinical characteristics of such patients and to serve as a resource for the clinical diagnosis and treatment of similar cases.

INTRODUCTION

Hypoglycemia is an endocrine emergency, which in severe cases can manifest as impaired consciousness or even death. Hypoglycemia is most often caused by improper antidiabetic drug use or by insulin overproduction, as in islet cell tumors (1,2). Non-islet cell tumor hypoglycemia (NICTH) is very rare (3). The most common cause of this type of hypoglycemia is tumoral overproduction of incompletely processed insulin-like growth factor-2 (IGF-2), which stimulates insulin receptors and increases glucose utilization (3,4). Other potential but less common causes include the production of autoantibodies against insulin or the insulin receptor, and extensive tumor burden destroying the liver or adrenal glands (4). NICTH occurs most commonly in patients with mesenchymal tumors, fibromas, carcinoids, myelomas, lymphomas, and hepatocellular and colorectal carcinomas (3,4). Phyllodes tumors of the breast (PTBs) causing NICTH are extremely uncommon. PTBs are rare fibroepithelial tumors that make up about 0.5% of all breast tumors (5,6). Histologically, PTBs are classified as benign, borderline, or malignant, with borderline tumors accounting for only 12%−18% of cases (7). We present here a case of a giant borderline phyllodes tumor of the breast causing NICTH, in which the hypoglycemia disappeared after mastectomy, and immunohistochemistry confirmed that the tumor expressed high amounts of IGF-2.

CASE PRESENTATION

A 63-year-old woman was taken to the emergency room because of unconsciousness at around 6:00 a.m. in April 2016. A blood examination showed severe hypoglycemia (1.4 mmol/L). A total of 40 ml of 50% glucose solution and 500 ml of 10% glucose solution were administered intravenously, which restored her blood glucose level (9.5 mmol/L) and her consciousness, and she was discharged from the hospital the same day. The patient lost consciousness again 6 days later (at 6:00 a.m.) and was taken to the emergency department. She received another glucose infusion for hypoglycemia (1.9 mmol/L), which relieved her hypoglycemic symptoms. According to her medical history, a bean-sized hard mass had been found in the right breast 1 year earlier, which gradually increased to the size of a soccer ball, accompanied by redness of the skin, pinprick-like pain, and nipple ulceration. The patient had been diagnosed with nasopharyngeal carcinoma 20 years earlier, with no recurrence after radiotherapy. She denied any history of hypoglycemic drug use, poor appetite, or wasting.
Physical examination after admission revealed no enlargement of superficial lymph nodes and a BMI of 22.5 kg/m². The right breast was markedly enlarged; the mass was soft and uneven in texture, the overlying skin temperature was significantly elevated, and the nipple was cauliflower-like with erosion. Blood examination showed hypokalemia (3.08 mmol/L) and a normal glycosylated hemoglobin level (5.4%). Thyroid hormone values were as follows: FT3 = 2.53 pmol/L (reference 2.63−5.70 pmol/L), FT4 = 6.56 pmol/L (9.01−19.05 pmol/L), and TSH = 6.209 mIU/L (0.35−4.94 mIU/L). The blood count, liver and kidney function, and tumor markers were normal, and insulin autoantibody (IAA) was negative. The patient was given a continuous intravenous 10% glucose solution (40 ml/h), as well as thyroid hormone supplementation (levothyroxine sodium tablets, 25 µg qd) and potassium supplementation. She still appeared drowsy and unresponsive at 7:00 a.m. on the second day of admission, with a blood glucose of 2.2 mmol/L. She was immediately given 40 ml of 50% glucose solution intravenously, and the glucose drip rate was increased (60 ml/h); thereafter, the blood glucose fluctuated between 2.9 and 4.5 mmol/L. During the hypoglycemic episode, blood tests revealed severe hypoinsulinemia (<0.1 mU/ml), low C-peptide (0.09 ng/ml), a low insulin release index (<0.002), low GH (0.075 ng/ml), and normal IGF-1 levels. Other hormonal indicators suggested normal pituitary-adrenal and pituitary-gonadal axes. There was no abnormality of the nasopharynx on CT of the head and chest (Figures 1A, B). A large soft-tissue-density mass was seen in the right breast, measuring approximately 17.1 cm × 13.2 cm × 17.2 cm, with clear borders and regular morphology; no enlarged lymph nodes were seen in the bilateral axillae. Breast ultrasound showed a huge hypoechoic mass with an irregular fluid-filled area in the right breast and an enlarged right nipple. Ultrasound showed no enlargement of the cervical, axillary, or inguinal lymph nodes. Mammography showed a large mass occupying the right breast. A puncture biopsy of the breast tumor confirmed a fibroepithelial tumor. A right mastectomy with resection of the large mass of the right chest wall was performed 1 week after admission. Postoperative paraffin pathology confirmed a borderline phyllodes tumor of the right breast (Figures 2A, B), with a maximum diameter of approximately 19 cm; the nipple had no tumor involvement. Postoperative immunohistochemical staining was as follows: ER. There were no hypoglycemic episodes after the postoperative glucose infusion was stopped. Thyroid function was normal on recheck before discharge. Long-term follow-up to date (5 years) revealed no hypoglycemic episodes and no breast tumor recurrence.

[Figure 2E shows negativity for IGF-2 in the control tumor tissue (IGF-2 IHC ×400).]

DISCUSSION

The patient had recurrent nocturnal and early-morning hypoglycemia, and her blood insulin and C-peptide were markedly low during the hypoglycemic episodes (insulin < 0.1 mU/ml, C-peptide 0.09 ng/ml), indicating that the episodes were not insulin-mediated. The patient had no history of hypoglycemic drug use or alcohol abuse, had normal liver and kidney function, and had no cachexia. The etiology of her hypoglycemia was therefore investigated to determine whether it was due to a deficiency of glucose-raising hormones or to hypoglycemia caused by tumor secretion of insulin-like growth factor by a non-islet cell tumor.
Many hormones raise blood glucose, including glucagon, glucocorticoids, catecholamines, growth hormone, and prolactin; thus, apart from hypopituitarism or adrenal crisis, a single hormone deficiency rarely causes hypoglycemia (1,2). In this patient, pituitary-adrenal function, IGF-1, and PRL levels were normal, so hypoglycemia due to a deficiency of multiple counter-regulatory hormones was unlikely, even though the patient also had hypothyroidism. The patient had recurrent episodes of hypoglycemia despite continuous glucose infusion before removal of the breast tumor; after surgical removal there were no further episodes, and the glucose infusion was discontinued. The phyllodes tumor of the breast was therefore confirmed as the cause of the hypoglycemia. The hypoglycemia was thought to be caused by tumoral overproduction of IGF-2, because the phyllodes tumor tissue was immunopositive for the protein. However, serum IGF-2 levels were not measured before and after surgery, which limits the study's findings.

The most common cause of NICTH is the overproduction of incompletely processed IGF-2 by the tumor, which stimulates insulin receptors and increases glucose utilization (3,4). This incompletely processed IGF-2 is also called big-IGF-2. Big-IGF-2 is highly homologous to insulin, with its pro-fragments B, C, and A corresponding to the similar structures of the B chain, C-peptide, and A chain of proinsulin, respectively (3). It can bind to the insulin receptor and cause hypoglycemia, and it also mediates the intracellular shift of potassium, resulting in hypokalemia (3). Hypoglycemia caused by big-IGF-2 inhibits endogenous insulin secretion, producing the marked drop in blood insulin and C-peptide seen in this patient (3). Because of the high structural homology, big-IGF-2 can also bind to the IGF-1 receptor family (3). IGF-1 is an effector of growth hormone, secreted by the liver to exert the pro-growth developmental effects of GH. An increase in big-IGF-2 can therefore lead to acromegaly-like manifestations and can stimulate rapid growth and proliferation of the tumor itself; on the other hand, through negative feedback it inhibits IGF-1 and GH secretion, resulting in low IGF-1 and GH levels (3). The molecular weight of big-IGF-2 also differs from that of mature IGF-2: normal mature IGF-2 has a molecular weight of about 7.5 kDa, whereas tumors produce big-IGF-2 with a much larger molecular weight, between 10 and 20 kDa (3). As a result, distinct bands may appear when IGF-2 protein in serum or tumor tissue is examined by Western immunoblot (3).

This is a rare case of NICTH. NICTH occurs most commonly in patients with mesenchymal tumors, fibromas, carcinoids, myelomas, lymphomas, and hepatocellular and colorectal carcinomas (3,4); phyllodes tumors of the breast are rarely the cause. In all reviewed cases, continuous intravenous glucose infusion was used to treat the hypoglycemia, and two cases (10,11) added oral or intravenous glucocorticoids. The prognosis of NICTH caused by a phyllodes tumor of the breast is related to the malignancy of the tumor. In the majority of cases, remission of the hypoglycemia was achieved after mastectomy, except for one patient (15) with a low-grade PTB who eventually died of aspiration pneumonia and renal failure, and one patient with metastatic breast malignancy (13).
Treatment of hypoglycemia caused by a non-islet cell tumor should include immediate correction of the hypoglycemia as well as prompt tumor resection. In inoperable patients, NICTH is managed with increased caloric intake (sometimes through enteral or parenteral nutrition) and with intravenous glucose or dextrose administration as necessary (3,4). Glucocorticoids have been found to increase the clearance of big-IGF-2, and glucocorticoid therapy (prednisone 30−60 mg/day) is an option for patients with untreatable malignancies (4). If hypoglycemia persists, patients whose blood glucose responds to glucagon may be given a long-term intravenous glucagon infusion (0.06−0.30 mg/h), or growth hormone may be added (4). However, because of the risk of promoting tumor growth, growth hormone is generally not chosen except to relieve suffering in patients with end-stage cancer and NICTH (4). We conclude from this case and the literature that patients with hypoglycemia due to a phyllodes tumor of the breast require a continuous high-concentration glucose infusion preoperatively, to avoid recurrent hypoglycemia caused by big-IGF-2, together with aggressive surgical removal of the breast lesion.

CONCLUSION

This report describes a rare hypoglycemic coma due to a borderline phyllodes tumor of the breast. The patient's hypoglycemia resolved rapidly after removal of the breast tumor. Pathological examination confirmed a borderline phyllodes tumor of the breast, and immunohistochemical staining showed high expression of IGF-2 in the tumor tissue. Based on the literature, such patients frequently present with severe hypoglycemia, impaired consciousness, a giant breast tumor, and detectable big-IGF-2 in serum or tumor tissue. Hypoglycemia resolves rapidly after resection of the tumor, and the prognosis depends on the malignancy of the tumor.

[Footnotes to the literature-review table: † IGF-2 mRNA in tumor tissue detected by PCR; ** IGF-2 protein in tumor tissue detected by Western immunoblot; ‡ immunohistochemical staining of IGF-2 in tumor tissue; + positive; # serum IGF-2 concentration normal but IGF-II/IGF-I ratio elevated; & serum IGF-2 low but insulin elevated, owing to ectopic insulin secretion by the tumor; @ plasma insulin-like protein increased, but not determined whether it was IGF-2; N/A, not available.]

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.

ETHICS STATEMENT

The patient signed the informed consent and fully acknowledged the details of the examinations and inspection items.

AUTHOR CONTRIBUTIONS

YLiu was involved in the study concept, study design, and manuscript preparation. MZ (2nd author) carried out the definition of intellectual content and manuscript review. XY handled data analysis and statistical analysis. MZ (4th author) carried out data acquisition. ZF and YLi conducted the clinical studies. TW was involved in the
Determinants of Racial/Ethnic Disparities in Incidence of Diabetes in Postmenopausal Women in the U.S.
Determinants of Racial/Ethnic Disparities in Incidence of Diabetes in Postmenopausal Women in the U.S.

OBJECTIVE To examine determinants of racial/ethnic differences in diabetes incidence among postmenopausal women participating in the Women's Health Initiative.

RESEARCH DESIGN AND METHODS Data on race/ethnicity, baseline diabetes prevalence, and incident diabetes were obtained from 158,833 women recruited from 1993−1998 and followed through August 2009. The relationship between race/ethnicity, other potential risk factors, and the risk of incident diabetes was estimated using Cox proportional hazards models, from which hazard ratios (HRs) and 95% CIs were computed.

RESULTS Participants were aged 63 years on average at baseline. The racial/ethnic distribution was 84.1% non-Hispanic white, 9.2% non-Hispanic black, 4.1% Hispanic, and 2.6% Asian. After an average of 10.4 years of follow-up, compared with whites and adjusting for potential confounders, the HRs for incident diabetes were 1.55 for blacks (95% CI 1.47−1.63), 1.67 for Hispanics (1.54−1.81), and 1.86 for Asians (1.68−2.06). Whites, blacks, and Hispanics with all factors (i.e., weight, physical activity, dietary quality, and smoking) in the low-risk category had 60, 69, and 63% lower risk for incident diabetes. Although the contributions of different risk factors varied slightly by race/ethnicity, most findings were similar across groups, and women who had both a healthy weight and were in the highest tertile of physical activity had less than one-third the risk of diabetes compared with obese and inactive women.

CONCLUSIONS Despite large racial/ethnic differences in diabetes incidence, most of the variability could be attributed to lifestyle factors. Our findings show that the majority of diabetes cases are preventable, and risk-reduction strategies can be effectively applied to all racial/ethnic groups.

More than 25 million Americans have diabetes, and an estimated 300 million worldwide will be diagnosed with diabetes by the year 2025 (1,2). Diabetes is the seventh leading cause of death in the U.S. and is an underlying factor in cardiovascular and cancer mortality (1,3). Non-Hispanic blacks have been reported to be 1.4−2.2 times more likely to receive a diagnosis of diabetes than non-Hispanic whites in the U.S. population (4). U.S. women of Hispanic and Asian ancestry also have a higher prevalence of diabetes than non-Hispanic whites (5). Although racial/ethnic disparities in diabetes risk have been identified, the determinants of these differences have not been well studied. Previous studies have considered dietary and lifestyle factors individually, but few have considered these factors in aggregate in order to estimate the proportion of diabetes that might be avoided by adopting a pattern of low-risk behaviors (6,7). Moreover, few studies have been large or diverse enough to allow assessment of these relationships in individual racial/ethnic groups, particularly among women. The Women's Health Initiative (WHI) provides a unique opportunity to assess racial/ethnic disparities in both diabetes prevalence and incidence, and the factors contributing to disparities in diabetes incidence.

The WHI participants

The design and baseline characteristics of the WHI have been described in detail elsewhere (8). In brief, the WHI enrolled postmenopausal women 50−79 years of age who provided written informed consent and had an expected survival and local residency of ≥3 years.
Exclusion criteria included current alcoholism, drug dependency, dementia, or other conditions that would limit full participation in the study. A total of 161,808 women (68,132 in the clinical trials and 93,676 in an observational study) were enrolled in the WHI between 1993 and 1998. Data used in this report were obtained from women at baseline and then at periodic follow-up points over an average of 10.4 years. The protocol and consent forms were approved by the institutional review boards of all participating institutions.

Identification of diabetes

Prevalent diabetes was defined as a self-report at baseline of "ever having received a physician diagnosis of diabetes when not pregnant." Incident diabetes was based on data collected at annual follow-up visits, at which participants were asked, "Since the date given on the front of this form, has a doctor prescribed any of the following pills or treatments?" Choices included "pills for diabetes" and "insulin shots for diabetes." Cases of incident treated diabetes reported as of 31 August 2009 were included in the analysis. A study of the accuracy of self-reported diabetes conducted in a subset of WHI participants indicated that self-reported disease status was a reasonably valid indicator of diagnosed diabetes when compared with medication and laboratory criteria (9). However, self-report fails to identify undiagnosed diabetes and tends to underestimate diabetes prevalence and incidence in our study group (9).

Race/ethnicity

At baseline, WHI participants self-reported their race/ethnicity, choosing from non-Hispanic white (referred to hereafter as white), non-Hispanic black (referred to hereafter as black), Hispanic, American Indian/Alaska Native, Asian (ancestry Chinese, Indo-Chinese, Korean, Japanese, Pacific Islander, or Vietnamese), and other. We excluded 1,849 women who reported race/ethnicity as "other" and 413 women without race/ethnicity information. The sample size was limited among American Indians (n = 714); therefore, analyses were restricted to whites, blacks, Hispanics, and Asians.

Covariates

Body weight, height, and waist circumference were measured at baseline and year 3. BMI was computed as weight (kg) divided by height squared (m²). Demographic and health history data were self-reported at baseline and included age, years of education, cigarette smoking status, family history of diabetes, and hormone therapy use. The baseline physical activity questionnaire asked about the usual frequency and duration of several types of recreational and household activities, using a standardized classification of physical activity intensity (10). Total energy expenditure was summarized in metabolic equivalent hours per week (MET-h/week), computed as the summed product of frequency, duration, and intensity for the reported activities, as sketched below. Participants also completed a standardized food frequency questionnaire (FFQ), developed for the WHI to estimate average daily nutrient intake over the 3-month period prior to the baseline visit (11). Dietary quality, assessed by the Alternate Healthy Eating Index (AHEI) (12,13), was computed from food items and nutrients derived from the FFQ, including 1) fruit, 2) vegetables, 3) nuts and legumes, 4) ratio of white to red meat, 5) total dietary fiber, 6) trans fat, 7) ratio of polyunsaturated to saturated fat, 8) alcohol, and 9) multivitamin use. Higher AHEI scores indicate a better-quality diet.
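To make the construction of these exposure variables concrete, the sketch below computes MET-h/week as the summed product of frequency, duration, and intensity, together with a toy three-component AHEI-style score. The activity list, MET values, and component cut points are illustrative inventions, not the WHI instrument's actual items or the published AHEI scoring.

```python
# Each reported activity: (name, sessions/week, hours/session, MET intensity)
activities = [
    ("brisk walking", 5, 0.5, 4.0),     # MET values illustrative only
    ("gardening",     2, 1.0, 3.5),
]
met_h_week = sum(freq * hours * met for _, freq, hours, met in activities)
print(f"Total energy expenditure: {met_h_week:.1f} MET-h/week")

def component_score(value, worst, best):
    """Scale an intake linearly to 0-10 points between 'worst' and 'best'."""
    frac = (value - worst) / (best - worst)
    return 10.0 * min(max(frac, 0.0), 1.0)

# Toy diet with 3 of the 9 AHEI components (cut points hypothetical)
score = (component_score(3.0, 0.0, 5.0)      # vegetables, servings/day
         + component_score(18.0, 0.0, 25.0)  # dietary fiber, g/day
         + component_score(1.5, 4.0, 0.5))   # trans fat, % energy (lower is better)
print(f"Toy AHEI-style score: {score:.1f} of 30")
```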
The reliability and validity of the physical activity measurements were assessed among a random sample of 536 participants by repeating the physical activity measure approximately 10 weeks after the first. The test-retest reliability (weighted κ) for the physical activity variables ranged from 0.53 to 0.72, and the intraclass correlation for the total physical activity variable was 0.77 (8). For the FFQ, we assessed bias and precision by comparing the intake of 30 nutrients estimated from the FFQ with means from four 24-h dietary recalls and a 4-day food record from 113 women who participated in the WHI (11). For most nutrients, means estimated by the FFQ were within 10% of the records or recalls. Energy-adjusted correlation coefficients ranged from 0.2 (vitamin B12) to 0.7 (magnesium), with a mean of 0.5; the correlation for percentage of energy from fat was 0.6. We concluded that the FFQ produced nutrient estimates similar to those obtained in other studies comparing short-term dietary recall and recording methods. We acknowledge that the reliability and validity are not perfect; however, they provided reasonable measures in a large clinical trial, as published by several of our authors in high-impact articles (14−16).

Statistical analyses

Racial/ethnic differences in the prevalence of diabetes were assessed using logistic regression and are expressed as odds ratios (ORs) with associated 95% CIs, with whites as the reference group. We present four logistic regression models: 1) unadjusted, including only race/ethnicity as a covariate; 2) age-adjusted; 3) adjusted for multiple potential confounding factors, including study arm, baseline age, BMI, waist circumference, physical activity, dietary quality, and smoking status; and 4) adjusted for the same covariates as in model 3 plus educational attainment. Among women who were free of diabetes at baseline, the incidence rate was calculated as the number of newly reported postbaseline diabetes cases divided by the total follow-up time in person-years. The time to diabetes (i.e., time to event) was calculated as the interval between the enrollment date and the earliest of the following: 1) the date of the annual medical history update at which a new diagnosis of diabetes and initiation of treatment for diabetes were ascertained (observed event); 2) the date of the last annual medical update at which the participant was identified as being without diabetes (censored); or 3) the date of death from any cause (censored). Cox proportional hazards models were used to estimate the hazard ratios (HRs) and associated 95% CIs of incident diabetes, with whites as the reference group; four Cox regression models paralleling those described for the analysis of diabetes prevalence are presented (see the sketch following this section). To identify determinants of diabetes incidence that might vary by racial/ethnic group, the impact of several covariates was evaluated by assessing the extent to which the HR estimate for a specific racial/ethnic group changed when each covariate was added individually to the unadjusted model. The percentage change in HR between the models with and without the covariate was used to describe the contribution of each covariate, considered singly, to the diabetes risk estimated for each racial/ethnic group. To identify the most parsimonious prediction model, a stepwise Cox proportional hazards regression analysis was conducted (P value for entry = 0.25; for retention in the model = 0.05). Subgroup analyses also were conducted by race/ethnicity.
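A minimal sketch of this modeling step using the lifelines package is shown below, on simulated toy data; the variable names, effect sizes, and covariate set are hypothetical, and the actual analysis used the full WHI covariate list and model sequence described above. The last lines illustrate the percentage-change-in-HR diagnostic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "black": rng.integers(0, 2, n),                 # indicator, white = reference
    "bmi":   rng.normal(28.0, 5.0, n).round(1),
})
# Toy event times: higher BMI and minority status shorten time to diabetes
rate = 0.02 * np.exp(0.05 * (df["bmi"] - 28.0) + 0.4 * df["black"])
t = rng.exponential(1.0 / rate)
df["time_years"] = np.minimum(t, 10.4)              # administrative censoring
df["diabetes"] = (t <= 10.4).astype(int)            # 1 = incident treated diabetes

cph = CoxPHFitter().fit(df, duration_col="time_years", event_col="diabetes")
cph.print_summary()                                  # HR = exp(coef) with 95% CI

# Percentage change in HR(black) when the BMI covariate is added vs. omitted
hr_full = cph.hazard_ratios_["black"]
hr_crude = CoxPHFitter().fit(df.drop(columns="bmi"),
                             duration_col="time_years",
                             event_col="diabetes").hazard_ratios_["black"]
print(f"HR(black) changes by {100.0 * (hr_full - hr_crude) / hr_crude:+.1f}% "
      "when BMI is added")
```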
Weight change from baseline to 3 years was evaluated in relation to the risk of incident disease by race/ethnicity among women who reported being free of diabetes at the 3-year measurement point; a similar analysis was conducted for waist circumference. To assess how the combined role of healthy lifestyle habits in diabetes prevention may vary by race/ethnicity (6,7), categories of four modifiable lifestyle factors were defined as follows: physical activity (upper vs. lower two tertiles), dietary quality score (upper vs. lower two tertiles), smoking (nonsmokers vs. current and former smokers combined), and BMI (<25 vs. ≥25 kg/m²). In subgroup analyses by race/ethnicity, HRs and 95% CIs were calculated for diabetes incidence for each lifestyle risk factor individually and for several lifestyle factors in aggregate, adjusting for age, family history of diabetes, hormone therapy use, study arm, and the lifestyle risk factors not already included in the model.

Population characteristics

At baseline, the average age of the 158,833 women with evaluable data was 63 years. The racial/ethnic distribution was 84.1% white, 9.2% black, 4.1% Hispanic, and 2.6% Asian. Approximately one-third of participants had a family history of diabetes, and approximately two-thirds had completed at least some college education. The prevalence of current smoking was 7%. Compared with whites, blacks and Hispanics tended to have more risk factors, whereas Asians tended to have fewer (Table 1).

Racial/ethnic disparities in diabetes prevalence and incidence

At enrollment, diabetes prevalence was highest among blacks (12.2%), followed by Hispanics (7.2%), Asians (5.9%), and whites (3.3%) (Table 2). Compared with the unadjusted ORs, which were significantly higher in all three racial/ethnic groups relative to whites, the age-adjusted and multivariable-adjusted ORs were attenuated for blacks and Hispanics but strengthened for Asians. Additional adjustment for educational attainment further attenuated the ORs for blacks and Hispanics; however, the ORs for all three racial/ethnic minority groups remained significantly higher than for whites. During an average of 10.4 (SD 3.2) years of follow-up, 14,604 new cases of diabetes were reported (11,127 in whites, 2,181 in blacks, 879 in Hispanics, and 417 in Asians), with incidence highest in blacks and lowest in whites. Compared with whites, unadjusted analyses showed a significantly higher diabetes incidence: 136% higher in blacks, 114% in Hispanics, and 43% in Asians. After adjusting for age, study arm, BMI, physical activity, smoking status, and educational attainment, the HRs increased for Asians and decreased for blacks.

Determinants of racial/ethnic disparities in diabetes incidence

In each racial/ethnic group (Fig. 1), the highest cumulative incidence of diabetes was seen among women who had both a high BMI and low levels of physical activity. The cumulative incidence of diabetes was 23.4% among black women who were both obese and in the lowest tertile of physical activity, but decreased to 8.8% in those who had a healthy weight and exercised. White women who were overweight (BMI 25−29.9 kg/m²) and in the lowest tertile of physical activity had a cumulative incidence of 8.7%, whereas white women who were obese (BMI ≥30 kg/m²) and in the lowest tertile of physical activity had a cumulative incidence of 18.6%.
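The stratified cumulative-incidence curves of Fig. 1 correspond to 1 − S(t) from Kaplan–Meier fits within joint BMI-by-activity strata. Continuing with the toy data frame from the previous sketch (BMI strata only for brevity, and ignoring the competing risk of death, which the actual analysis handles by censoring):

```python
from lifelines import KaplanMeierFitter

df["stratum"] = np.where(df["bmi"] >= 30.0, "BMI >= 30", "BMI < 30")
kmf = KaplanMeierFitter()
for name, grp in df.groupby("stratum"):
    kmf.fit(grp["time_years"], grp["diabetes"], label=name)
    surv_10 = kmf.survival_function_at_times(10.0).iloc[0]
    print(f"{name}: 10-year cumulative incidence = {100.0 * (1.0 - surv_10):.1f}%")
```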
Across all racial/ethnic groups, women who were normal weight (BMI <25 kg/m²) and in the highest tertile of physical activity had between one-third and one-sixth the incidence of diabetes of women with BMI ≥30 kg/m² in the lowest tertile of physical activity. Analyses conducted to estimate the influence of individual factors that may account for the observed racial/ethnic variation in diabetes incidence, comparing HRs from unadjusted versus fully adjusted models (results not shown), revealed that if Asian women had the same waist circumference, BMI, and dietary quality as whites, their HR for diabetes would increase by 44, 29, and 3.5%, respectively. Among blacks, if they had the same BMI, waist circumference, family history of diabetes, dietary quality, and physical activity levels as whites, their HR for diabetes would decrease by 25, 19, 11, 7, and 6%, respectively. Among Hispanics, if they had the same educational attainment, BMI, family history of diabetes, dietary quality, and physical activity levels as whites, their HR for diabetes would decrease by 14, 10, 9, 7, and 6%, respectively.

Variables retained in the final model predicting diabetes incidence

Results from the stepwise Cox proportional hazards regression analysis conducted to identify the most parsimonious predictive model for diabetes incidence, fitting the candidate predictors plus race/ethnicity, showed that race/ethnicity, age, BMI, waist circumference, education, smoking status, physical activity, family history of diabetes, and dietary quality score entered the model. Subgroup analyses by race/ethnicity (results not shown), conducted to assess the effect of interval (i.e., baseline to 3 years) changes in the factors found to be significant in the Cox proportional hazards models, revealed a significant effect of approximately 5% increased risk of diabetes for each 5-cm increase in waist circumference (HR 1.05 [95% CI 1.04−1.07]); the observed effect was consistent across all racial/ethnic groups. An alternative model, fit to assess whether weight gain could explain some of the observed racial/ethnic variation in diabetes incidence, revealed an approximately 3% increase in subsequent risk for each 1-kg/m² increment in BMI (1.03 [1.02−1.04]).

HR of incident diabetes by single and specific combinations of lifestyle risk factors and race/ethnicity

Higher levels of physical activity, a better diet, and a healthy weight tended to be associated with a significantly lower risk of diabetes in each racial/ethnic group, with some exceptions. Whites, blacks, and Hispanics with all factors in the low-risk category (4.0, 1.1, and 2.2% of each group, respectively) had 60, 69, and 63% lower risk for incident diabetes (Table 3). Healthy weight (BMI <25 kg/m²) played the greatest role in reducing the risk of diabetes in each racial/ethnic group: 66% in whites, 55% in blacks, 64% in Hispanics, and 66% in Asians.

CONCLUSIONS

Previous reports, mainly focused on younger men and women, have indicated significant racial/ethnic disparities in diabetes in the U.S. (1,4,5). The 2007−2009 National Health Interview Survey found that 7.1% of whites, 8.4% of Asians, 11.8% of Hispanics, and 12.6% of blacks reported having a diagnosis of diabetes (1). The WHI provides a unique opportunity to deepen understanding of the patterns and determinants of diabetes in older women, including racial/ethnic minorities that represent growing segments of the U.S. population.
We found that the prevalence and incidence of diabetes were approximately two to three times higher in blacks and approximately two times higher in Hispanics and Asians, compared with whites. The observed racial/ethnic differences in diabetes incidence were explained, in large part, by modifiable lifestyle factors, including diet quality, physical activity, and smoking status, or by factors resulting from lifestyle behaviors, including BMI and waist circumference. Adjustment for differences in these variables indicates that Asians have the highest inherent risk, though women of all four racial/ethnic groups would experience a large reduction in diabetes risk by maintaining a healthy body weight, a healthy diet, and a physically active lifestyle. Maintaining a BMI <25 kg/m² appears to be particularly important, and interval changes in both BMI and waist circumference predicted newly incident disease.

In this study, both BMI and waist circumference were related to prevalent diabetes, and interval changes were associated with the risk of incident disease. Because waist circumference reflects centralized obesity, and the propensity toward a large waist circumference varies by race/ethnicity, its effect may differ from that of BMI (17,18). Although BMI and waist circumference among Asian postmenopausal women in the WHI were relatively lower than among whites, Asians were at higher risk of diabetes at lower levels of BMI and with smaller waist circumferences (i.e., by 44 and 29%, respectively) compared with whites, a result consistent with the Multi-Ethnic Study of Atherosclerosis (19). Our results on Asians suggest that additional risk factors, which might be biological, social, or a combination, drive the higher prevalence and incidence of diabetes in this population.

BMI and waist circumference also contributed to the disparities in diabetes for blacks and Hispanics. Diabetes risk in blacks with the same BMI and waist circumference as whites was lower by 25 and 17%, respectively; similarly, diabetes risk in Hispanics with the same BMI and waist circumference as whites was lower by 10 and 2%, respectively. Within each group, BMI was the most important determinant of diabetes incidence. Physical inactivity was also associated with an increased risk of diabetes in all groups. In a previous publication using WHI data, we reported the association between physical inactivity and the development of diabetes (20). The present study showed that, given the same level of physical activity as whites, the risk of diabetes for blacks and Hispanics would be 6% lower.

Nutritional factors may play a role in the development of diabetes (21−25), and several studies have found differences in dietary intake by race/ethnicity (26,27). Our analysis suggests that Asians appear to be particularly sensitive to poor dietary intake: although their diet quality was better than that of whites, if their overall dietary quality were to decrease to that of whites, their risk of diabetes would increase by 4% relative to whites. Poorer dietary intake among blacks and Hispanics put them at increased risk of diabetes; in fact, if blacks and Hispanics improved their diet to a quality similar to that of whites, their risk of diabetes would be 7% lower.
Although previous studies demonstrated that diabetes is preventable through relatively simple lifestyle modifications (6,7), our data suggest that some minority groups might obtain greater benefit from improving lifestyle factors (i.e., blacks and Hispanics) than others (i.e., Asians), owing to differences in the amount of change possible or in vulnerability to these factors. The divergent findings in Asian women are of interest, suggesting that additional approaches to prevention deserve attention. For example, Asians may need to achieve even greater weight loss to reach the same low risk of diabetes as non-overweight whites, and the World Health Organization already uses a lower BMI cut point for Asians. The fact that Asians in the WHI had much healthier body weights at baseline set the stage for a much lower overall diabetes risk. Further research in this population is warranted, however, because our statistical models with additional covariates may be unstable and less reliable due to small numbers.

We found that blacks and Hispanics are more sensitive to lifestyle modification and weight loss than whites, which is corroborated by our previous lifestyle intervention results in a Hispanic population (28). Hispanic sensitivity to developing an insulin-resistant state or diabetes with lower weight gain is well described (29); less is understood about the corresponding sensitivity to weight loss in lifestyle interventions. If confirmed in weight-loss studies, the sensitivity to modest weight loss that we observed in high-risk groups in the WHI will have important clinical and public health implications. It will also be important to explore possible social and genetic underpinnings of such population sensitivity.

Two recent studies indicate that socioeconomic status (SES) may account for some of the observed racial/ethnic disparities in diabetes prevalence (30,31). Similarly, in our study, adjustment for educational attainment resulted in the largest decrease in diabetes incidence for Hispanics; if Hispanics had achieved the same education levels as whites, their risk of diabetes would be 14% lower. However, education is only one component of SES, and adjustment for other SES parameters (e.g., income and occupation) would likely account for a greater proportion of the disparities observed in black and Hispanic women.

There are several interesting nuances to our findings worth noting. First, the observed prevalence of diabetes in the WHI was lower than expected, as relatively healthy postmenopausal women were enrolled in the study. Second, the reported education level among black women in the WHI was considerably higher than among black women in the U.S. overall. The education differential for blacks and Hispanics is much more extreme than that observed in whites, and this may underlie some of the other observations in the data (32,33); the observed patterns are, however, consistent with the population-based literature in terms of racial/ethnic patterns of diabetes prevalence and incidence. Third, Asian women appear to have the greatest "inherent" risk, in that their risk-factor profile was in fact better than that of whites, and large increases in diabetes risk would occur if those risk factors deteriorated. This is consistent with the report of Lutsey et al. (19), which showed that Asians had a higher diabetes risk per unit increase in BMI and waist circumference.
Recognizing that each of the racial/ethnic groups is heterogeneous due to differences by national origin and other relevant parameters, future studies should examine disparities in incident diabetes by national origin. Fourth, it is encouraging to note the effect of interval changes in weight, amounting to a 3% reduction in risk of diabetes for each unit decrease in BMI. Individual efforts to reduce weight and increase physical activity are difficult to sustain; however, current efforts are underway to tackle weight regulation through a variety of approaches that go beyond the individual. Lessons from the tobacco literature indicate that behavior change is influenced by our social and political structure; thus, multilevel approaches are likely needed. In the U.S., the race/ethnicity categories used most often in medical and public health research are from self-report, the same as the U.S. Census categories. Genetic race/ethnicity information known as admixture data (i.e., ancestry-informative markers) from another study indicates great diversity within all four of the groups that we examined (34). Yet, few studies obtain such data due to issues concerning cost, feasibility, practicality, and comparability. Thus, the differences observed and reported here reflect any inherent biological differences across the groups studied as well as differences in life experiences (e.g., exposure to specific environments and stressors), which may also contribute to acquired physiological differences in reactivity, and in turn diabetes risk (35). The challenges for interpreting the results associated with studying self-reported racial/ethnic groups will likely increase in future studies, as more people identify themselves as belonging to multiple racial/ethnic groups. This study has several limitations that are worth noting. First, the WHI participants are not a population-based random sample. Although geographically diverse, racial/ethnic groups vary in their representation of the general population. Data for whites show that many characteristics of the WHI participants are similar to white women participating in the National Health and Nutrition Examination Survey (11); however, ethnic groups are underrepresented in the WHI. Participants from each ethnic group were generally of higher SES than national averages. Women from parts of the country where we see large disparities in certain minorities (e.g., rural southern blacks) were not represented. Thus, both prevalence and racial differentials are smaller in the WHI than what we might expect to see in the U.S. as a whole. Second, only self-reported prevalence of diabetes and treated incident diabetes were ascertained; thus, prevalence and incidence of diabetes may be underestimated. We acknowledge that this is a limitation, and we did not account for nontreated diabetes. However, self-reported diabetes in the WHI was found to be reliable and sufficiently accurate to allow its use in epidemiologic studies (9). Third, there could be other factors for which we did not control that further contribute to racial/ethnic disparities, such as health care access (36). However, >90% of the WHI participants had insurance coverage, and diabetes prevalence and incidence were assessed at regular study visits. Fourth, although incident diabetes in older women is likely to be type 2 diabetes, the WHI question did not specify type of diabetes.
Other limitations include missing data; however, the rates of retention in the WHI were >95% during an average of 7 years of follow-up (37). Balancing the limitations, there are several major strengths. First, this study represents a racially diverse sample of well-characterized women. Second, the prospective design enables an examination of diabetes incidence. In addition, the WHI collected detailed information on a comprehensive range of diabetes risk factors relevant to this investigation, with a 10-year follow-up for the diabetes outcome. In conclusion, significant disparities exist between the major ethnic groups in diabetes prevalence and incidence in postmenopausal women; these differences withstand adjustment for a very comprehensive group of physiological and behavioral risk factors. Determinants of the disparities observed varied by race/ethnicity. Although these results highlight the potential benefits of tailored diabetes prevention strategies directed at those specific factors that are most likely to increase the risk of diabetes among each racial/ethnic group, it is prudent to recommend avoidance of weight gain, weight loss, a healthy diet, and adequate levels of physical activity to all postmenopausal women for the purpose of diabetes risk reduction. Acknowledgments. This study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases. … contributed to the discussion and reviewed and edited the manuscript. R.B. and Y.Q. performed data analyses and reviewed and edited the manuscript. Y.M. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Physiological and Transcriptional Responses to Saline Irrigation of Young ‘Tempranillo’ Vines Grafted Onto Different Rootstocks
Physiological and Transcriptional Responses to Saline Irrigation of Young 'Tempranillo' Vines Grafted Onto Different Rootstocks
Abstract
The use of more salt stress-tolerant vine rootstocks can be a sustainable strategy for adapting traditional grapevine cultivars to future conditions. However, how the new M1 and M4 rootstocks perform against salinity compared to conventional ones, such as the 1103-Paulsen, had not been previously assessed under real field conditions. Therefore, a field trial was carried out in a young 'Tempranillo' (Vitis vinifera L.) vineyard grafted onto all three rootstocks under a semi-arid and hot-summer Mediterranean climate. The vines were irrigated with two kinds of water: a non-saline Control with an EC of 0.8 dS m⁻¹ and a Saline treatment with 3.5 dS m⁻¹. Then, various physiological parameters were assessed in the scion and, additionally, gene expression was studied by high-throughput sequencing in leaf and berry tissues. Plant water relations evidenced the osmotic effect of water quality, but not that of the rootstock. Accordingly, leaf-level gas exchange rates were also reduced in all three rootstocks, with M1 inducing significantly lower net photosynthesis rates than 1103-Paulsen. Nevertheless, the expression of groups of genes involved in photosynthesis and amino acid metabolism pathways was not significantly and differentially affected. The irrigation with saline water significantly increased leaf chloride contents in the scion onto the M-rootstocks, but not onto the 1103P. The limitation of leaf Cl⁻ and Na⁺ accumulation in the scion was conferred by the rootstock. Few processes were differentially regulated in the scion in response to the saline treatment, mainly in the groups of genes involved in the flavonoid and phenylpropanoid metabolic pathways. However, these transcriptomic effects were not fully reflected in grape phenolic ripeness, with M4 being the only one that did not cause reductions in these compounds in response to salinity, and 1103-Paulsen having the highest overall concentrations. These results suggest that all three rootstocks confer short-term salinity tolerance to the scion. The lower transcriptomic changes and the lower accumulation of potentially phytotoxic ions in the scion grafted onto 1103-Paulsen compared to the M-rootstocks point to the former being able to maintain this physiological response in the longer term. Further agronomic trials should be conducted to confirm these effects on vine physiology and transcriptomics in mature vineyards.
INTRODUCTION
Changes in the Mediterranean and related semi-arid climates are expected in the near future, leading to temperature increases and more frequent and longer drought periods (Döll, 2002). These will increase crop water demand, while simultaneously reducing the availability of quality water (Schultz, 2017). Since in most grapevine-growing regions freshwater is a scarce resource (Medrano et al., 2015), the use of alternative waters, such as wastewaters often high in salts, will be more and more needed to mitigate drought stress (Mirás-Avalos and Intrigliolo, 2017). Besides, conventional waters, such as underground water, can indeed be of low quality due to excessive concentrations of soluble salts (Cl⁻ and/or Na⁺), with an electrical conductivity over 3 dS m⁻¹ (Pérez-Pérez et al., 2015). This lack of water quality poses a challenge to the sustainability of deficit irrigation in viticulture, as this irrigation strategy could aggravate the effects of salinity. Excessive soil salinity can cause water loss, nutrient deficiency, oxidative stress, photoinhibition, and growth inhibition, and can induce many metabolic and transcriptomic changes leading to physiological damage (Walker et al., 1997; Kumari et al., 2015; Saha et al., 2015; Upadhyay et al., 2018; Zhou-Tsang et al., 2021). Previous studies have demonstrated that, among plant responses to salinity, mechanisms that control ion uptake, transport, and balance, as well as hydric regulation, photosynthesis, cell division, osmotic adjustment, enzymatic activities, antioxidant production, stress signaling, and regulation of root barriers, play critical roles in plant tolerance to salinity (Gong et al., 2011; Shahid et al., 2020; Zhou-Tsang et al., 2021). Vitis vinifera L. is a crop classified as moderately sensitive to salinity (Maas and Hoffman, 1977; Cramer et al., 2007), with a soil saturation extract electrical conductivity at 25 °C yield threshold (ECt) of 2.6 dS m⁻¹. The tolerance of grapevines to salinity depends on multiple factors and, particularly, on plant genetics, soil and climate characteristics, and the rate and length of the stress to which vines are subjected (Maas and Hoffman, 1977; Zhang et al., 2002; Cramer et al., 2007; Chaves et al., 2009; Mirás-Avalos and Intrigliolo, 2017).
Understanding the physiological and transcriptomic responses of grapevine to saline water is essential to prevent and mitigate potential negative effects on vine performance and grape composition (Ollat et al., 2016). Moreover, the contradictory effects of irrigation with saline or wastewater on vine performance and grape composition (Walker et al., 2004, 2007; Stevens et al., 2011; Mirás-Avalos and Intrigliolo, 2017) point toward the existence of important knowledge gaps regarding the effects of salinity and the salt tolerance mechanisms in Vitis spp. (Zhou-Tsang et al., 2021). Microarray studies of pot-grown own-rooted vines of cvs. 'Cabernet Sauvignon', 'Razegui', and 'Shiraz' revealed that salinity stress impaired photosynthesis and increased the expression of some transcription factors and genes related to ROS scavenging, abscisic acid, and osmoprotectants such as various sugars and proline (Cramer et al., 2007; Daldoul et al., 2010). High-throughput sequencing studies of potted cv. 'Thompson Seedless' and cv. 'Summer Black' under greenhouse conditions implicated the activity of genes involved in cell wall modulation, various cation and ABC transporters, signal transduction genes, HSPs, and biotic stress-related genes (Guan et al., 2018; Das and Majumder, 2019). The 'Tempranillo' cultivar has been specifically classified as moderately salt-sensitive as well, showing growth decreases attributable to osmotic effects rather than to ion-specific toxicities (Urdanoz and Aragüés, 2009). Nonetheless, since grapevine yield potential under saline conditions is related to the root-zone salinity, the plant portion that primarily deals with soil salinity is not the scion, but the rootstock. Among the characteristics of the different rootstocks that contribute to enhancing grapevine tolerance to salinity is the ability to exclude and not transport salt to the shoots; besides, there is also the vigor conferred to the scion (Munns et al., 2020). Additionally, the rootstock can have a great influence on stomatal regulation in response to water and salinity stress, even more than the scion itself (Lavoie-Lamoureux et al., 2017). For instance, the rootstock can affect the osmotic adjustment response, which is one of the main physiological processes whereby the vine responds to salinity (Keller, 2010; Haider et al., 2019). This consists of the active accumulation of solutes, thus increasing leaf relative water content and turgor (Barrios-Masias et al., 2018). Regarding this, several studies report that the rootstocks with lower osmotic adjustment capacity are those with greater capacity to restrict the leaf accumulation of Na⁺ and Cl⁻, thus preventing their possible phytotoxic effects (Zhang et al., 2002), and minimizing their accumulation in the grape juice and wine in the long term (Walker et al., 2004; Teakle and Tyerman, 2010). American Vitis species, especially V. rupestris, V. riparia, and V. berlandieri, are tolerant of saline and limestone soils (Williams et al., 1994; Ferlito et al., 2020). Some rootstocks derived from these species, such as Ramsey (V. champini), 1103 Paulsen (1103P), 110 Richter, 140 Ruggeri, and 101-14 Mgt, can exclude much salt (chiefly Na⁺ and Cl⁻) from root uptake and root-to-shoot transport (Walker et al., 2004, 2010; Gong et al., 2011). For instance, some of the most salinity-tolerant rootstocks, such as 140 Ruggeri and 1103 Paulsen, have an ECt value of up to 3.3 dS m⁻¹ (Zhang et al., 2002; Tregeagle et al., 2006).
Conversely, rootstocks such as SO4 and 3309C are characterized by being very sensitive to salinity, with an ECt value below 1.8 dS m⁻¹ (Walker et al., 2010). Given the relatively narrow genetic pool within the commercial grapevine rootstocks and the significant genetic diversity of the genus Vitis, identifying salinity-tolerant grapevine rootstocks is a great opportunity to enhance viticulture sustainability (Schultz and Stoll, 2010). For instance, differential gene expression has been observed in potted Vitis vinifera L. ssp. sylvestris with different short-term salinity tolerance under greenhouse conditions (Askri et al., 2012). Therefore, a better understanding of the rootstock physiological, metabolomic, and transcriptomic mechanisms underlying salt stress tolerance is essential to improve breeding programs aimed at adapting to climate change (Ollat et al., 2016). In this sense, new information about the salinity tolerance conferred by rootstocks is needed (Keller, 2010; Marín et al., 2021). Grapevine rootstock breeding programs, such as the one carried out by the University of Milan (Italy) with the M-series, are very promising for coping with water salinity (Meggio et al., 2014) and can benefit greatly from the results of field trials. Therefore, the objective of the present research was to evaluate the physiology and transcriptomics underlying the performance against salinity of two new rootstocks, M1 and M4, compared to the well-known salinity-tolerant 1103P (Walker et al., 2010; Bianchi et al., 2020). In this work, the experimental hypothesis was that the M-rootstocks may confer better salinity tolerance to the scion than the 1103P: through the enhanced uptake of salt-stress-counteracting ions such as calcium, as well as a vigor-reducing ability, in the case of the M1 (Porro et al., 2013; Vannozzi et al., 2017), and because of the leaf build-up of inorganic osmolytes and sodium antagonists, such as potassium, in the case of the M4 (Meggio et al., 2014). In comparison to the M-rootstocks, the 1103P stands out for its ability to exclude Cl⁻ from uptake. Aiming at mimicking commercial conditions, the experiment was performed under field conditions and tried to isolate the salinity effect by fully irrigating the vines. Although the vineyard was under establishment, to the best of our knowledge, these grapevine rootstocks had not been previously tested against salinity under conditions so close to real practice. Besides, in contrast to previous comparative studies of these grapevine rootstocks, in this work all determinations were carried out directly in the scion. This was done considering that the scion is an integrator of rootstock-induced effects (Gambetta et al., 2012; Cookson et al., 2013). Finally, by assessing a young vineyard, i.e., one with a non-extensive root system, the physiological response to salinity could be studied ensuring that most of the roots were effectively under the intended salinity.

Vineyard Site and Experimental Design
The experiment was undertaken in 2019 in a 'Tempranillo' (Vitis vinifera L.) vineyard located at the IVIA's experimental station in Moncada, Valencia, Spain (39°35′12″ N, 0°24′1″ W, 55 m a.s.l.). In 2017, the vines were grafted onto three rootstocks in a nursery. The rootstocks were the M1 clone 1 (106/8 × V. berlandieri), the M4 clone 1 (41B × V. berlandieri), and the 1103 Paulsen clone VCR119 (V. berlandieri cv. 'Resseguier' nr. 2 × V. rupestris cv. 'Du Lot') (Marín et al., 2021).
Vines were planted in 2018 at a spacing of 0.88 × 2.50 m and guided by a vertical trellis system in a simple "guyot" cordon. As it was a vineyard under establishment, it was decided to constrain the crop load to four clusters per vine to avoid overcropping. Thus, the experimental vines had an average yield of 1.75 kg, i.e., 7.9 t/ha. There were no differences in initial shoot fruitfulness or yield at harvest among treatments. The vineyard was drip irrigated at 100% of crop evapotranspiration (ETc), based on the crop coefficients reported for 'Tempranillo' vines by López-Urrea et al. (2012), and the ETo calculated with the Penman-Monteith equation (Allen et al., 1998). Weather conditions were recorded at an automated agro-meteorological station 400 m away from the plot. Importantly, no leaching fraction was adopted. Irrigation was applied through 2 L h⁻¹ pressure-compensated emitters spaced at 0.88 m along a single drip line, and it began 50 days after budburst, i.e., on the day of the year (DOY) 133. This time was selected because that was when midday Ψstem values reached −0.8 MPa. As a result, the vine water requirements were met by irrigation events 2-to-3 h long, 3-to-5 days a week. Mineral nutrients were provided along the season by fertigation up to cumulated rates of 30, 20, and 60 kg ha⁻¹ of, respectively, N, P₂O₅, and K₂O. Two irrigation waters were generated by dissolving adequate amounts of reagent-grade calcium and sodium chlorides in partially desalinated water. Each irrigation water featured a different electrical conductivity at 25 °C (EC₂₅), but a common sodium adsorption ratio (SAR) of 5–7 (mmol L⁻¹)¹/². This way a sodification effect was avoided, which would have shown up as differences in soil structural stability and nutrient availability between the control and saline water, thus interfering with the salinity treatment. The Control water featured an EC₂₅ of 0.8 dS m⁻¹ with 2.7, 0.3, and 3.3 mmol L⁻¹ of, respectively, Na⁺, Ca²⁺, and Cl⁻, whereas the Saline water featured an EC₂₅ of 3.5 dS m⁻¹ with 12.7, 6.5, and 25.7 mmol L⁻¹ of, respectively, Na⁺, Ca²⁺, and Cl⁻.
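As a consistency check, the reported SAR can be recomputed from the stated ion concentrations. A minimal sketch in R (Mg²⁺ was not reported for these waters and is assumed negligible here, so the values are approximate lower bounds):

```r
# Sodium adsorption ratio: SAR = Na / sqrt((Ca + Mg) / 2),
# with all concentrations in mmol of charge per litre (mmolc/L).
sar <- function(na_mmol, ca_mmol, mg_mmol = 0) {
  na_c <- na_mmol       # Na+ is monovalent: mmol/L equals mmolc/L
  ca_c <- 2 * ca_mmol   # Ca2+ is divalent
  mg_c <- 2 * mg_mmol   # Mg2+ is divalent (assumed 0 here)
  na_c / sqrt((ca_c + mg_c) / 2)
}

sar(na_mmol = 2.7,  ca_mmol = 0.3)   # Control water: ~4.9
sar(na_mmol = 12.7, ca_mmol = 6.5)   # Saline water:  ~5.0
```

Both waters come out at roughly 5, consistent with the common SAR of 5–7 stated above once the unreported Mg²⁺ is accounted for.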
During the experiment, the soil on the alleyways was tilled, and spontaneous weeds in the vine row were controlled by glyphosate herbicide applications. The experiment followed a complete factorial design to assess the performance of the three rootstocks under the two water quality levels (Control and Saline). All treatments, i.e., each combination of rootstock and water quality, had three replicates, thus resulting in 18 subplots of 10 vines each. The subplots were randomly distributed throughout the vineyard. For the determination of water relations and the measurement of gas exchange parameters, as well as for the transcriptomics, the experimental unit (biological replicate) was the 8th vine of each subplot. For the determination of the leaf nutritional status, leaf area index, and grape quality, the experimental unit consisted of the 8 vines from the 2nd to the 9th in each subplot, thus leaving the 1st and 10th as guards.

Field Measurements and Laboratory Determinations
All field measurements and samplings were performed more than 100 days after the treatments had begun (after 259 ± 2 mm of cumulated irrigation had been applied). Specifically, the vine water relations, the gas exchange measurements, and the leaf and berry samplings were performed on DOY 233. According to the phenological growth stages in the BBCH scale (Lorenz et al., 1995), the vines on DOY 233 were at stage code 89, which means berries are ripe for harvesting. Total leaf area determinations and harvest were performed, respectively, on DOY 234 and 237. Each laboratory sample was analyzed in duplicate. Vine water relations were determined in each biological replicate using a pressure chamber (Model 600, PMS Instruments Company, Albany, OR, United States) at pre-dawn (Ψpre-dawn) and midday. At midday, both well-exposed-to-sunlight adult leaves (Ψleaf) and bag-covered leaves (Ψstem) were measured (Santesteban et al., 2019). After the Ψleaf measurement, this leaf was frozen and stored at −20 °C for determination of the leaf osmotic potential (Ψπ). Another leaf from the same shoot was collected and re-hydrated for determination of the leaf osmotic potential at full turgor (Ψπ100). Both Ψπ and Ψπ100 were measured with a digital osmometer (Wescor, Logan, UT, United States). The leaf turgor potential (Ψp) was calculated as the difference between Ψleaf and Ψπ. The gas exchange measurements were carried out on two fully exposed and expanded young leaves of each biological replicate using an infrared open gas exchange analyzer system (Li-6400xt, Li-COR, Lincoln, NE, United States). The stomatal conductance (gs), net photosynthesis (AN), and intrinsic water use efficiency (WUEi = AN/gs) were measured between 8:00 and 9:30 solar time. The CO₂ concentration inside the chamber was 400 µmol CO₂ mol⁻¹, and an airflow of 500 µmol min⁻¹ was applied. The chamber had an area of 6 cm² exposed to environmental light radiation, with a PAR always of 1,500 ± 2 µmol m⁻² s⁻¹. The relative humidity and vapor pressure deficit inside the chamber were 30 ± 2% and 2.25 ± 0.3 kPa. Leaf nutritional status was determined from samples of 20 fully expanded mature leaves per subplot. Leaves were thoroughly washed with tap water, rinsed with deionized water, and oven-dried at 65 °C for 48 h. Next, they were ground with a disk mill to pass a 200-µm mesh sieve and analyzed for the determination of various macro- and micronutrients. The concentrations of K, Ca, Mg, and Na were determined in the extracts obtained by digestion with HNO₃:HClO₄ (2:1) using inductively coupled plasma atomic emission spectrometry (ICP-AES) in an iCAP series 6500 (Thermo Fisher Scientific, Franklin, MA, United States). The total N and C contents were determined by dry combustion with final N₂ and CO₂ measurements, respectively (Horneck and Miller, 1998), using a TruSpec CHNS elemental analyzer (LECO TruSpec Micro Series, St. Joseph, MI, United States). The chloride content was determined in the aqueous extracts obtained by shaking the dried leaf material with deionized water (EC₂₅ < 1 µS/cm) for 2 h, by ion chromatography (IC) using an 850 Professional IC (Metrohm, Herisau, Switzerland). The total leaf area per vine was estimated for each biological replicate from allometric relations between shoot length (x, cm) and leaf area per shoot (y, cm²), measured with an LI-3100 area meter (LI-COR Biosciences, Lincoln, NE, United States), separating main and lateral shoots (y = 17.647x, R² = 0.98*** and y = 14.952x, R² = 0.99***, respectively). The leaf area index (LAI) was calculated as the total leaf area per unit of ground surface area. The berry weight and must composition were determined from 200 randomly taken berries per subplot.
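A minimal sketch of the derived quantities just described: turgor from the two potentials, WUEi from gas exchange, and leaf area/LAI from the allometric relations. The shoot lengths and potentials fed in below are made-up inputs; the allometric coefficients and vine spacing come from the text.

```r
# Turgor potential from pressure-chamber and osmometer readings (MPa)
turgor <- function(psi_leaf, psi_pi) psi_leaf - psi_pi

# Intrinsic water use efficiency (umol CO2 per mol H2O)
wue_i <- function(a_n, g_s) a_n / g_s

# Leaf area per vine from the reported allometry:
# 17.647 cm2 per cm of main shoot, 14.952 cm2 per cm of lateral shoot
leaf_area_cm2 <- function(main_cm, lateral_cm) {
  17.647 * sum(main_cm) + 14.952 * sum(lateral_cm)
}

# LAI = total leaf area per unit ground area; spacing 0.88 m x 2.50 m
lai <- function(area_cm2, spacing_m2 = 0.88 * 2.50) {
  (area_cm2 / 1e4) / spacing_m2
}

# Hypothetical vine: two 120 cm main shoots, three 40 cm laterals
area <- leaf_area_cm2(main_cm = c(120, 120), lateral_cm = c(40, 40, 40))
lai(area)                                 # ~0.27 m2 leaf per m2 ground
turgor(psi_leaf = -1.1, psi_pi = -1.4)    # 0.3 MPa (made-up readings)
wue_i(a_n = 14.3, g_s = 0.362)            # ~39.5, from the reported means
```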
The berries were crushed and hand-pressed through a metal screen filter, and the must characteristics, including total soluble solids content (TSS), pH, total titratable acidity (TA), and anthocyanin and polyphenol contents, were determined according to reference analysis methods (OIV, 1990).

Common Data Analyses
Two-way analysis of variance (ANOVA) was used to assess the effects of both factors, rootstock (R) and water quality (WQ), along with their interaction (R × WQ), on the vine water relations, leaf gas exchange, leaf nutrient contents, vine performance, and berry composition. A significant interaction between factors in a two-way ANOVA means that the effects of the factors significantly change in magnitude or direction depending on the levels of the other factor (Snedecor and Cochran, 1989). Therefore, following the two-way ANOVAs, if significant main effects were obtained (p < 0.05), but significant interactions between R and WQ were not, the group means were compared using the post hoc Duncan test. The ANOVAs and post hoc tests were carried out using the Statgraphics Centurion XVI package (version 16.0.07) (Statgraphics Technologies, The Plains, VA, United States). Additionally, regressions were calculated using SigmaPlot (version 11.0) (Systat Software, San Jose, CA, United States).
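The same two-way ANOVA and Duncan workflow can be sketched in R with the agricolae package rather than Statgraphics; the simulated data frame and its column names below are hypothetical, not the authors' script.

```r
library(agricolae)  # provides duncan.test()

set.seed(1)
# Hypothetical long-format data: one row per experimental unit
dat <- data.frame(
  AN = rnorm(18, mean = 16, sd = 2),                  # e.g., net photosynthesis
  R  = factor(rep(c("M1", "M4", "P1103"), each = 6)), # rootstock
  WQ = factor(rep(c("Control", "Saline"), times = 9)) # water quality
)

# Two-way ANOVA with the R x WQ interaction
fit <- aov(AN ~ R * WQ, data = dat)
summary(fit)

# If a main effect is significant (p < 0.05) and the interaction is not,
# compare group means with Duncan's multiple range test
duncan.test(fit, trt = "R")$groups  # means with letter codes, as in the tables
```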
RNA Extraction and Sequencing
On DOY 233, immediately after the water relations and gas exchange measurements, one sample of leaves and another one of berries were collected from each biological replicate, thus making 18 samples in total from each plant organ. Three fully expanded young leaves per plant, from the secondary shoots, and twenty berries were cleaned with a cloth and distilled water before being cut. Leaf samples were wrapped in aluminum foil after removing the petiole. Both leaf and berry samples were immediately frozen in liquid nitrogen in the field. Afterward, samples were stored at −80 °C until preparation. Total RNA was extracted from the samples using an optimized cetyltrimethylammonium bromide (CTAB) method (adapted from Carra et al., 2007), combined with RNA purification on Zymo-Spin Columns (Direct-zol RNA MiniPrep Plus kit, Zymo Research, Irvine, CA, United States). About 50 mg of frozen and powdered plant material was further homogenized with steel beads for 10 min at maximum speed in 800 µL CTAB buffer [Tris-HCl 100 mM, NaCl 2 M, EDTA 25 mM, CTAB 2.0% (w/v), PVP40 2.5% (w/v), and β-mercaptoethanol 2% (v/v), pH = 8] using a TissueLyser (Qiagen, Hilden, Germany). After the addition of an equal volume of chloroform-isoamyl alcohol 24:1, the sample was vortexed and centrifuged for 10 min at 10,000 g and 4 °C. The upper aqueous phase was recovered, to which 1.5 volumes of pure ethanol were added. After a 30 min precipitation at 4 °C, the mixture was transferred into Zymo-Spin Columns. The RNA was further purified according to the manufacturer's instructions, with an additional washing step and a second prewashing step added to the beginning of the purification process. To elute the RNA, 30 µL of preheated (80 °C) DNase/RNase-free water was added to the column and incubated for 5 min at room temperature, before a 1 min centrifugation at 14,000 g. The elution step was repeated. Isolated RNA was subjected to DNase digestion (DNase I Set, Zymo Research, Irvine, CA, United States) and cleaned up using the RNA Clean & Concentrator kit (Zymo Research, Irvine, CA, United States). RNA concentration, integrity, and purity were assessed using a 2100 Bioanalyzer and the RNA 6000 Nano Kit (Agilent Technologies, Santa Clara, CA, United States). At this point, one leaf sample from the M4 salinity-treated group was excluded from further analysis due to insufficient quality. Library preparation for mRNA Illumina HiSeq 4000 sequencing, as well as preprocessing to remove adapter sequences and low-quality reads, were provided by Novogene (Hong Kong).

RNA-Seq Data Analysis
The obtained 150 bp paired-end reads were trimmed to remove low-quality bases (Phred < 20), clipped to remove remaining adapter sequences, and mapped to the 12X.2 version of the PN40024 grapevine reference genome (Canaguier et al., 2017) using CLC Genomics Workbench 12.0 (Qiagen, Hilden, Germany), with the following parameters: mismatch cost 2, insertion or deletion cost 3, length fraction 1, similarity fraction 0.95, and a maximum number of hits for a read of 1. The reads were annotated using the VCost.v2 annotation. Raw counts of transcripts were exported and deposited to the ENA (European Nucleotide Archive) under project accession number PRJEB44658. Normalization of the raw counts and differential expression analysis were performed in R v3.6.3 (R Core Team, 2017), using the limma package v3.42.2 (Ritchie et al., 2015) with the method previously described by Dermastia et al. (2021). In short, mRNA counts with a baseline expression level of at least 50 reads mapped in at least three samples were TMM-normalized in edgeR v3.28.1 (Robinson et al., 2009) and transformed using voom (Law et al., 2014). Principal component analysis (PCA) and hierarchical clustering analysis were performed on the resulting normalized counts. PCA was performed with the pc package, and hierarchical clustering analysis was performed using the pheatmap package v1.0.12, applying 1 − Pearson correlation as the distance measure and complete linkage as the linkage method. Differential expression was obtained by contrasts. Gene Set Enrichment Analysis (GSEA) was performed as described by Subramanian et al. (2005) on normalized log-transformed expression data. Results with a false discovery rate FDR q < 0.25 were considered statistically significant.

Targeted Gene Expression Analysis by qPCR
Differential expression of three genes, NCED1 (Vitvi19g01356), MAPK2 (Vitvi16g01160), and LOX (Vitvi06g00158), with UBI_CF (Vitvi19g00744) as a reference gene, was confirmed by qPCR. The primers and probes used are listed in Supplementary Table 1. Reverse transcription was performed with the High-Capacity RNA-to-cDNA kit (Applied Biosystems, Waltham, MA, United States). Power SYBR Green PCR Master Mix was used for all assays. The following thermal cycle conditions were applied for PCR: 95 °C for 10 min; 40 cycles of 95 °C for 15 s and 60 °C for 1 min; and a climb in increments of 0.05 °C from 60 to 95 °C for the high-resolution melting curve. The Cq values were used for relative calculation of the initial target number from a serial dilution curve using quantGenius (Baebler et al., 2017). Then, the normalized logFC values were correlated to the values obtained from the RNA-Seq analysis by the Pearson correlation coefficient.
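A minimal sketch of the analysis steps just described: the at-least-50-reads-in-at-least-3-samples filter, TMM normalization in edgeR, the voom transform, limma contrasts, the 1 − Pearson clustering, and the qPCR standard-curve interpolation. The `counts` matrix, `group` factor, and all numeric qPCR values are placeholders, not the authors' data.

```r
library(edgeR)   # DGEList, calcNormFactors
library(limma)   # voom, lmFit, makeContrasts, eBayes

# counts: genes x samples matrix of raw counts; group: a factor such as
# "M4_Control"/"M4_Saline" per sample (both are placeholder objects)
keep <- rowSums(counts >= 50) >= 3             # >=50 reads in >=3 samples
dge  <- calcNormFactors(DGEList(counts = counts[keep, ]), method = "TMM")

design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

v   <- voom(dge, design)                       # log2-CPM with precision weights
fit <- lmFit(v, design)
cm  <- makeContrasts(M4_Saline - M4_Control, levels = design)
top <- topTable(eBayes(contrasts.fit(fit, cm)), adjust.method = "fdr")

# Sample-level overview on the normalized values, as described above
pc <- prcomp(t(v$E), scale. = TRUE)                        # PCA on samples
hc <- hclust(as.dist(1 - cor(v$E)), method = "complete")   # 1 - Pearson

# qPCR: starting quantity from a dilution-series curve,
# Cq = intercept + slope * log10(quantity); all numbers are hypothetical
quantity  <- function(cq, slope, intercept) 10^((cq - intercept) / slope)
norm_expr <- quantity(24.1, -3.32, 38.0) /     # target gene
             quantity(20.3, -3.35, 36.5)       # UBI_CF reference
```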
Vine Physiology and Nutritional Status
The experimental season was warmer and drier than average. From DOY 1 to 233, the ETo and rainfall were 901 and 126 mm, respectively. All rainfall events greater than 10 mm occurred before the start of irrigation (DOY 133). On DOY 233, when vine water relations and leaf gas exchange were measured and the berry and leaf samples were collected, the average air temperature was 23.6 °C and the relative humidity was 70%. On that day, an ETo of 5 mm was recorded. In general, the water relations of grapevine cv. 'Tempranillo' were significantly affected only by water quality (WQ) (Table 1), so water potential values are plotted by water quality treatment (Figure 1).
[TABLE 1 abbreviations: Ψpre-dawn, pre-dawn leaf water potential; Ψstem, midday stem water potential; Ψleaf, midday leaf water potential; Ψπ, leaf osmotic potential; Ψp, leaf turgor potential; Ψπ100, leaf osmotic potential at full turgor; AN, net photosynthesis; gs, stomatal conductance; WUEi, intrinsic water use efficiency. Significance of effects in bold denotes statistically significant differences at p < 0.05.]
According to the Ψpre-dawn and Ψstem measurements, the WQ exerted a significant effect on the vine water status at both maximum hydration and maximum water demand, with no differences among rootstocks (Figure 1). Specifically, the vines from the Saline treatments exhibited more negative values than the Controls. These differences were −0.12 and −0.17 MPa on average for, respectively, Ψpre-dawn and Ψstem. Therefore, the effects of WQ on the water status at the time of maximum hydration (Ψpre-dawn) were fairly maintained at the time of maximum evaporative demand (Ψstem). According to the Ψπ and Ψπ100 measurements, neither the R nor the R × WQ had significant effects on the osmotic potential (Figure 1). Despite this, the vines from the Saline treatments exhibited significantly more negative values than the Controls. These differences were −0.16 MPa on average for both Ψπ and Ψπ100.
[FIGURE 1 | Average values of vine water relations in a Tempranillo vineyard grafted onto M1, M4, and 1103-Paulsen (1P) rootstocks subjected to different water quality (C, control and S, saline irrigation) on DOY 233 of 2019 in Valencia, Spain. Data are averages and standard errors of 9 measurements per water quality. Within each parameter, an asterisk denotes significant differences between treatments at p < 0.05 (Duncan test).]
Both Ψleaf and Ψp were unaffected by either WQ, R, or R × WQ. Regarding the gas exchange parameters, both the net photosynthesis rate (AN) and the leaf stomatal conductance (gs) were significantly affected by WQ, and AN also by R (Table 1), whereas the R × WQ interactions were non-significant. Specifically, the vines from the Saline treatments presented lower values than the Controls for both parameters, with average AN values of 14.3 and 17.2 µmol CO₂ m⁻² s⁻¹, respectively, and with average gs values of 0.362 and 0.493 mol H₂O m⁻² s⁻¹. Despite these differences in carbon assimilation and stomatal conductance rates, no significant differences in intrinsic water use efficiency (WUEi) in response to WQ were observed. Moreover, the net photosynthetic rates of vines on 1103P were significantly higher than those on M1 (Figure 2). The LAI was significantly affected by WQ (Table 2) due to reductions in the leaf area of lateral shoots (data not shown).
Overall, the Saline treatments reduced the LAI per vine by 15% compared to the Controls. This decreasing effect of WQ on the LAI was observed on the vines grafted onto the M-series rootstocks, mainly onto the M1.
[FIGURE 2 | Average values of gas exchange parameters in a Tempranillo vineyard grafted onto M1, M4, and 1103-Paulsen (1P) rootstocks subjected to different water quality (C, control and S, saline irrigation) on DOY 233 of 2019 in Valencia, Spain. AN, net photosynthesis; gs, stomatal conductance; WUEi, intrinsic water use efficiency. Data are averages and standard errors of 18 and 12 measurements per water quality and rootstock, respectively. Within each parameter, asterisks or letters denote significant differences between water quality treatments or rootstocks at p < 0.05 (Duncan test), respectively.]
[TABLE 2 | Leaf area index (LAI) and leaf nutritional status in leaf blades from Vitis vinifera (L.) cv. Tempranillo grafted onto M1, M4, and 1103-Paulsen (1P) rootstocks subjected to different water quality (C, control and S, saline irrigation) on DOY 233 of 2019 in Valencia, Spain. Data are averages of 6, 9, and 3 determinations per rootstock, water quality, and rootstock per water quality, respectively. For each parameter, letters denote significant differences between treatments at p < 0.05 (Duncan test). The statistical significance of the effects of rootstock (R), water quality (WQ), and their interaction is also indicated by means of the p-values from the ANOVAs. Significance of effects in bold denotes statistically significant differences at p < 0.05.]
The concentrations of the macro- and micronutrients in the vine leaves were, overall, significantly affected by both WQ and R, and even by the R × WQ interaction (Table 2), which points toward an interesting rootstock salt-stress-modulating effect. On the one hand, the leaf concentrations of Cl⁻, Ca²⁺, K⁺, and Mg²⁺ depended on WQ, while N and Na⁺ did not. On the other hand, the leaf concentrations of N, Cl⁻, Ca²⁺, Na⁺, and Mg²⁺ depended on R, while K⁺ did not. Nitrogen was significantly higher in the vines grafted onto the 1103P than in those grafted onto the M4 (Table 2). Specifically, the Cl⁻ concentration in the leaves increased 2.3-fold on average from the Controls to the Saline treatments. Interestingly, this increase in leaf Cl⁻ concentration from the Controls to the Saline treatments was significant in the M-series rootstocks, but not in the 1103P. The Ca²⁺ concentration in the leaves also increased significantly from the Controls to the Saline treatments and, similarly to Cl⁻, more markedly onto the M-series than onto the 1103P (Table 2). Regarding the leaf K⁺ concentrations, the effect of WQ was also significant, leading to lower K⁺ concentrations from the Controls to the Saline treatments. Regarding leaf Na⁺, there were no significant differences in the concentrations in response to WQ, but there were depending on the rootstock and, interestingly enough, depending on the R × WQ interaction. Specifically, the M1 tended to accumulate Na⁺ in the leaves in response to the Saline treatments, an effect not observed for 1103P or M4 (Table 2). Thus, the M1 showed the lowest K⁺/Ca²⁺ and K⁺/Na⁺ ratios. Finally, there were differences in leaf Mg²⁺ concentrations in response to both WQ and R, which were statistically, but maybe not practically, significant (Table 2).
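Ion ratios such as K⁺/Na⁺ are usually expressed on a molar basis, so leaf concentrations reported per unit dry weight must first be converted. A small sketch (the input concentrations are made-up; the molar masses are standard values):

```r
# Molar ion ratios from leaf concentrations on a dry-weight mass basis
# (mg per g DW); molar masses: K 39.10, Na 22.99, Ca 40.08 g/mol
ion_ratios <- function(k_mg, na_mg, ca_mg) {
  k  <- k_mg  / 39.10
  na <- na_mg / 22.99
  ca <- ca_mg / 40.08
  c(K_Na = k / na, K_Ca = k / ca)
}

ion_ratios(k_mg = 8.0, na_mg = 0.5, ca_mg = 25.0)  # hypothetical leaf sample
```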
Grape Composition
The grape composition was less affected by WQ than by R; nevertheless, some statistically significant interactions between both factors were observed (Table 3). The TSS was affected by WQ and R and, in addition, the effect of WQ significantly changed in magnitude from one rootstock to the others, i.e., the interaction R × WQ was also significant. Specifically, grape TSS tended to increase from the Controls to the Saline treatments, with a greater increment in the vines onto the M1 rootstock (Table 3). Contrary to TSS, the other grape technological composition parameters (pH, TA) were affected neither by R nor by WQ nor by R × WQ (Table 3). Regarding the phenolic composition, i.e., the anthocyanin and polyphenol contents, it was not significantly affected by WQ, but heavily depended on R. Besides, a significant R × WQ interaction was also revealed in the polyphenols, which points toward an interesting change in the effect of WQ depending on the rootstock (Table 3). Specifically, both the polyphenol and the anthocyanin contents tended to decrease from the Controls to the Saline treatments onto the 1103P and the M1, with no changes onto the M4 (Table 3). Regardless of the effect of WQ on the phenolic composition in grapes, the 1103P tended to have higher anthocyanins and polyphenols than the other two rootstocks.

Differential Gene Expression
High-throughput mRNA sequencing was performed on whole leaf and berry skin samples from cv. 'Tempranillo' grafted onto the three different rootstocks and exposed to salinity stress. On average, 41,326,458 reads were mapped in pairs to the grapevine genome. Of the 42,413 genes annotated in grapevine, 16,790 were expressed in sufficient quantities for statistical analysis. Although hierarchical clustering analysis and PCA of leaf and berry skin samples showed no apparent correlation in gene expression regarding either the WQ or R, and no clear clustering was observed on PCA for either tissue (Supplementary Figures 1, 2), GSEA identified several processes (bins) that were statistically significantly (FDR q < 0.25) differentially expressed due to WQ in leaves and berries of scions grafted on the three rootstocks (Figure 3). The number of significantly differentially expressed bins was higher in leaves and berries of scions grafted on the M4 and M1 rootstocks as compared to 1103P. The strongest enrichment was detected for flavonoid synthesis bins in berry skins for all three R. In them, the contribution of chalcone synthases prevailed (Supplementary Table 2). When examining the expression of individual genes involved in this pathway, large differences in average values were observed, with up to a fourfold difference in a uniform dominant upregulation pattern, although no statistically significant differences in gene expression were found between the Control and Saline treatments (Supplementary Table 3). Specifically, the differences in average values between salt-stressed and control vines were the highest in the expression of chalcone synthase (CHS) and phenylalanine ammonia-lyase (PAL) genes. This was most apparent in berry skins, where most of the PAL and CHS genes showed an upregulation pattern due to WQ (Figure 4). Moreover, the differences were higher in vines grafted onto 1103P than onto M4 and M1. However, multiple flavanone 3-hydroxylases showed a downregulation pattern in these samples.
On the other hand, leaf samples showed smaller differences, which were found in CHS genes in samples grafted onto M1 and in some flavanone 3-hydroxylase genes in samples grafted onto M4 (Supplementary Figure 3). Although no statistically significant differences in the expression of individual genes were observed due to WQ in either the leaves or the berries, some statistically significant differences due to R were observed (Supplementary Table 3). There were 15 differentially expressed genes found between the leaves of control plants grafted onto 1103P and M4. Most of them were more expressed in 1103P than in M4, but no specific pathway predominated among them. The technical validity of RNA-Seq and the data analysis pipeline was corroborated by the targeted analysis of three genes by qPCR. The qPCR results highly correlated with RNA-Seq (r² = 0.83) (Supplementary Figure 4).

DISCUSSION
The effects of WQ and R on the physiology and transcriptomics of cv. 'Tempranillo' vines were assessed indirectly, because all determinations were carried out on the scion, not in the rootstock, which is the barrier against soil salinity. However, the scion cultivar is the genotype that ultimately bears fruit and ripens it and, therefore, confers economic value on the crop (Marguerit et al., 2012). Thus, in this approach, the scion is considered an integrator of the effects induced by the rootstock. It is important to bear this in mind when interpreting the results, especially the transcriptome analyses, because a combination of two Vitis spp. genotypes is studied by evaluating only one of them, i.e., Vitis vinifera L. In comparison, most of the grapevine transcriptomic responses reported in the literature have been assessed on a single genotype, i.e., directly in the own-rooted Vitis vinifera (Cramer et al., 2007; Guan et al., 2018; Das and Majumder, 2019; Lehr et al., 2022) or on the rootstock without grafting (Gong et al., 2011; Henderson et al., 2014; Meggio et al., 2014; Corso et al., 2015; Vannozzi et al., 2017; Fu et al., 2019; Çakır Aydemir et al., 2020), and, if carried out in both the scion and the rootstock, they have been under highly controlled conditions (Upadhyay et al., 2018; Bianchi et al., 2020; Franck et al., 2020; Baggett et al., 2021), i.e., not under real field-grown conditions. In the present trial, the water requirements of the grapevines were fully met, trying to isolate the effect of WQ on the physiological and transcriptomic responses. When plant measurements and samplings were carried out, the water status experienced by the control vines grafted onto any of the rootstocks was indicative of very mild water stress according to Williams and Baeza (2007; Figure 1). This implies that irrigation largely met the evapotranspiration demand of the plants. However, it was not excessive, which would have resulted in irrigation water percolation and thus the washout of salts from the rooting depth. In fact, the ion concentrations in the soil solution of the Saline treatments caused vine water stress. This was observed in the general decrease of both Ψpre-dawn and Ψstem in the vines grafted onto all rootstocks under irrigation with saline water, which means a worsening of the plant water status (Figure 1). This physiological response is likely due to a reduction of the soil water potential by an osmotic effect, i.e., the so-called osmotic drought (Chaves et al., 2009).
As expected, Ψpre-dawn was in line with Ψstem (Suter et al., 2019), although plants onto M4 tended to show less negative Ψstem values than those onto 1103P, with no difference in Ψpre-dawn (Table 1). These slight differences in Ψstem between M4 and 1103P agreed with what Frioni et al. (2020) observed in M4 under water shortage. Plants react to salt stress and control their subsequent physiological responses using signals, which can be ionic, osmotic, hormonal, and/or reactive oxygen species regulation (Shahid et al., 2020; Zhou-Tsang et al., 2021). Concerning the ionic signals, in this work the leaf ion concentrations have been observed to differ among rootstocks, notably Cl⁻, Ca²⁺, Na⁺, and Mg²⁺ (Table 2). Regarding Cl⁻, it usually builds up in the leaves of woody crops, and the plant's ability to avoid accumulating Cl⁻ in leaves is considered directly proportional to its salinity tolerance. In this work, the M-series rootstocks increased the leaf Cl⁻ twofold in the Saline treatment compared to the Control. In contrast, in the 1103P the leaf Cl⁻ increase in the Saline treatment compared to the Control was not significant. These results are in agreement, on the one hand, with Meggio et al. (2014), who also reported higher leaf Cl⁻ in vines onto M4 in comparison to the good salt excluder 101-14 Mgt (Walker et al., 2004, 2010) and, on the other hand, with Urdanoz and Aragüés (2009), who reported that the 'Tempranillo' cultivar grafted onto 1103P was able to exclude Cl⁻ from the leaves more efficiently than other cultivar-rootstock combinations. The leaf Cl⁻ non-accumulation ability conferred by the 1103P could be due to (i) limited salt uptake, i.e., ion exclusion, and (ii) limited salt translocation from the root to the shoot. Abbaspour et al. (2013) suggested that 1103P contributes to reducing shoot Cl⁻ concentration by root efflux and vacuolar internalization. Besides, Henderson et al. (2014) suggested that the transcriptional events contributing to the Cl⁻ exclusion mechanism in grapevine are not stress-inducible, but constitutively different between contrasting genotypes. Anyway, Cl⁻ exclusion factors are yet to be identified at the transcriptomic level, and they are multigenic, including transport proteins (Gong et al., 2011; Das and Majumder, 2019; Zhou-Tsang et al., 2021). These genotype-dependent, though fuzzy, transcriptomic effects agree with our GSEA results, which identified far fewer statistically significantly (FDR q < 0.25) differentially expressed bins due to WQ in 'Tempranillo' grafted onto 1103P as compared to M4 and M1 (Figure 3). Baggett et al. (2021) also similarly observed that salinity affected transcript abundance more in salt-sensitive genotypes than in salt-tolerant ones. Importantly, the leaf Cl⁻ concentrations in our trial are higher than the ones reported by Urdanoz and Aragüés (2009) and Baggett et al. (2021), even though they are in the range of the ones found in 'Cabernet Sauvignon' onto 1103P by Dag et al. (2015) using similar WQ. The capacity of rootstocks to restrict leaf salt build-up should not be the only parameter for rootstock selection (Zhou-Tsang et al., 2021). Regarding other criteria, several authors indicated a better M4 performance compared to other rootstocks because of an improved antioxidant capability (Meggio et al., 2014; Corso et al., 2015; Lucini et al., 2020; Prinsi et al., 2020).
Furthermore, it is important to consider the likely accumulation of Cl⁻ and Na⁺ in the permanent instead of the short-lived organs of the vine (Stevens and Partington, 2013; Netzer et al., 2014), which may lead to salinity carry-over effects in the medium-to-long term. Based on our results, this would be a concern for rootstocks M1 and M4 and less for 1103P (Table 1), because of its possible detrimental effects on future bud fruitfulness. In fact, Dag et al. (2015) reported that irrigating the 'Cabernet-Sauvignon' scion grafted onto 1103P with water similar in salinity to the Saline treatment in this work did not significantly affect vine performance in the first two seasons, but that Na⁺ and Cl⁻ accumulation in the wood eventually led to vine death in the third one. Regarding Na⁺, it is less prone to build up in the leaves of grapevines than Cl⁻ (Henderson et al., 2018), which, given the Na⁺/Cl⁻ ratio of the waters applied in this work, was also observed here (Table 2). However, there were differences in salt-stress-modulating ability among rootstocks, with the M1 more liable to leaf Na⁺ accumulation as salinity increased than 1103P or M4. Regarding leaf Ca²⁺, it increased in the Saline treatments compared to the Controls (Table 2). That Ca²⁺ increased in the 'Tempranillo' leaves as salinity grew regardless of the rootstock suggests that all three rootstocks can maintain high Ca²⁺/Na⁺ ratios and thus efficiently exclude Na⁺ (Shahid et al., 2020). More interestingly, however, there were differences in leaf Ca²⁺ among the vines depending on the rootstock. Particularly, the M1 built up significantly more leaf Ca²⁺ than the 1103P and M4 (Table 2). Since Ca²⁺ can regulate plant signaling, enzyme activity, ion channel performance, and gene expression (Golldack et al., 2014), the higher leaf Ca²⁺ onto the M1 may be a positive plant adaptation, as previously reported by Porro et al. (2013). Likewise, K⁺ is also key in maintaining the osmotic balance and thus the ionic homeostasis in plant cells (Kumari et al., 2015; Guan et al., 2018). However, in our work, leaf K⁺ decreased because of salinity, without differences among rootstocks (Table 2). Similarly, Guan et al. (2018) also found a decreasing trend in leaf K⁺ in cv. 'Summer Black' in response to NaCl irrigation, and Munns and Tester (2008) indicated that a strong relationship between leaf K⁺ and salt tolerance had not yet been reported. In our work, both the leaf K⁺/Ca²⁺ and K⁺/Na⁺ ratios were lower on M1 than on 1103P. This suggests that the 1103P conferred a greater salinity tolerance to the scion than the M1. Concerning the osmolyte regulation signals, a tendency toward a slight osmotic adjustment was observed in the leaves on all three rootstocks. This is because, independently of the leaf water status, i.e., Ψπ100, the values of the Saline treatments were significantly more negative (−0.16 MPa on average) than those of the Controls (Figure 1). Through osmotic adjustment, plants cope with declining soil water potential mainly because increasing osmolyte concentrations decrease the water potential within plant cells, thus increasing the leaf relative water content and turgor for a given soil water potential (Barrios-Masias et al., 2018). These osmolytes can be inorganic, which are actively and passively taken up from the soil solution, or organic, which are obtained by biosynthesis of proline, glycine-betaine, etc.
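The size of this osmotic adjustment can be translated into osmolyte terms with the van 't Hoff relation, Ψπ = −cRT. A small sketch (it treats osmolality as approximately equal to molarity; the −0.16 MPa value comes from the text above, the 400 mOsm/kg input is made up):

```r
# van 't Hoff: osmotic potential (MPa) from solute concentration (mol/kg)
# R = 0.008314 MPa kg mol^-1 K^-1; 25 degC = 298.15 K
psi_pi <- function(c_osm, temp_k = 298.15) -c_osm * 0.008314 * temp_k

psi_pi(0.40)                 # 400 mOsm/kg -> about -0.99 MPa
0.16 / (0.008314 * 298.15)   # ~0.065 mol/kg (~65 mOsm/kg) would account
                             # for the -0.16 MPa shift reported above
```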
However, in our work, the expression of genes involved in amino acid metabolism was not altered in leaves in response to WQ (Figure 3), whereas the concentrations of Cl⁻, K⁺, and Ca²⁺ did change in the leaves (Table 2). Accordingly, the slight observed osmotic adjustment was achieved through the build-up of inorganic osmolytes, and this was controlled by the rootstock, because the root is the organ that regulates the entry of the soil solution ions into the plant. The mechanisms of ion exclusion and/or upward movement along the xylem should be genetically regulated at the root level, i.e., over-expression of the HKT cation transporter genes (Deinlein et al., 2014; Fu et al., 2019; Zhou-Tsang et al., 2021), and not at the scion level. However, despite occurring at the root level, the mechanisms may be genetically regulated in a scion-induced manner (Franck et al., 2020) and then, maybe, detected in the scion. Remarkably, among the 15 differentially expressed genes between the 1103P and M4, a lactoylglutathione lyase (Vitvi04g01424) and a Dof family transcription factor (Vitvi18g00858) were found. These genes have previously been implicated in the response to abiotic stress in grapevine (Shangguan et al., 2020) and in redox homeostasis in heat-stressed 'Muscat Hamburg' berries (Carbonell-Bejerano et al., 2013). The generalized reduction found in net photosynthesis (AN) under saline conditions, regardless of the rootstock (Figure 2), is related to stomatal and mesophyll conductance limitation, as there were no major differences in WUEi beyond those expected given the differences in water status (Flexas et al., 2004). The reductions are in line with those found by Flexas et al. (1999) in 'Tempranillo' and by Baeza et al. (2007) and Baggett et al. (2021) in 'Cabernet Sauvignon'. Moreover, no differences were detected in the ratio of internal to atmospheric CO₂ concentration (Ci/Ca) between treatments (0.76 and 0.75 in the Control and Saline treatments, respectively; data not shown). This suggests that in this work salinity was not high enough to induce either toxic effects on the photosynthetic apparatus or cellular damage in the leaves, as confirmed using the leaf transcriptomic analysis (Figure 3), but rather that it simply increased water stress by lowering the soil water potential, which eventually showed up as gs and, thus, AN reductions (Figure 2). Interestingly, according to Bianchi et al. (2020), water shortage stress decreases stomatal conductance due to lower water potential, but photosynthetic activity remains high, with barely any differences among 1103P, M1, and M4. In contrast, in our trial, M1 performed differently from the other rootstocks by inducing an overall reduction in AN. Moreover, Bianchi et al. (2020) did detect changes in the transcript abundances of key genes related to abscisic acid biosynthesis, but in the root, not in the leaves, and studying only the wider Vitis spp. genotype. The overall effects caused by salinity on decreasing leaf photosynthesis as well as LAI (Figure 2 and Table 2) should have led to reduced berry ripening (Cramer et al., 2007; Chaves et al., 2009; Liu et al., 2020; Zhou-Tsang et al., 2021). However, the opposite was observed. The Saline treatments increased TSS compared to the Control grapes. These results point toward the ability of all rootstocks to keep allocating energy resources to fruit ripening regardless of salt stress. Interestingly, Meggio et al.
(2014) also highlighted the salt tolerance of these rootstocks regardless of their ability to limit specific ion accumulation in the scion, which was associated with a smaller decrease in AN and leaf area on M4 compared to 101-14 Mgt. This was not observed under salinity in this work, just as it was not under water shortage (Bianchi et al., 2020). The effects of WQ and R on grape composition are usually not very conclusive according to studies where both factors are combined (Walker et al., 2007;Stevens et al., 2011;Hirzel et al., 2017;Mirás-Avalos and Intrigliolo, 2017). This is because a multitude of environmental factors interact with the rootstock response, most notably soil type (Ferlito et al., 2020). Specifically, the three rootstocks studied here perform well on soils high in calcium carbonate, like the one used in this investigation, because all three come from crossings with Vitis berlandieri, a species that evolved on calcareous soils (Harry, 1996). In this work, there was a salt-stress modulating effect of the rootstock on grape composition, primarily on TSS and, secondarily, on the phenolic composition, as revealed by the R × WQ interactions (Table 3). In contrast, hardly any effect was observed on T.A. and, specifically, pH, which did not change following the decrease in leaf K+ concentration due to salinity (Table 2), in accordance with Marín et al. (2021). Contrary to T.A. and pH, TSS increased on the M1 rootstock as salinity grew, whereas the other rootstocks did not respond in the same way. Moreover, the phenolic substances were also subject to rootstock-specific modulating effects (Table 3). Despite this, the expected changes in gene expression of the CHS and PAL pathways were not observed (Figure 4). That is, the significant reduction in anthocyanin content found in 1103P vines, and in polyphenols found in 1103P and M1 vines, in response to salinity (Table 3) could not be related to the transcriptomic changes observed, nor to differences in berry size (Table 3). Several studies have linked ultraviolet light to the induction of phenolic compound synthesis, specifically the expression of the CHS gene, which encodes a key enzyme in flavonoid biosynthesis (Merkle et al., 1994;Hernández et al., 2009;Wang et al., 2016;Reshef et al., 2018). However, these putative changes, which are related to berry exposure to sunlight in response to the saline effect on vine leaf area (Zarrouk et al., 2016;Torres et al., 2020), would have been offset by the slight increase in the leaf area-to-production ratio (Walker et al., 2000;Bobeica et al., 2015). Moreover, flavonoid synthase is also involved in drought and osmotic stress tolerance and is controlled by rootstocks (Dal Santo et al., 2018;Bianchi et al., 2020;Zombardo et al., 2020). For instance, Zombardo et al. (2020), also in grape skin during ripening, reported some differentially expressed genes mainly involved in the synthesis and transport of phenylpropanoids (e.g., flavonoids) in response to rootstock effects. Besides, the most prominent differences in gene expression of the anthocyanin pathway usually occur during veraison, together with the differences in anthocyanin content and profile in the berry, and begin to fade as the berry reaches final maturity (Castellarin et al., 2006;Castellarin and Di Gaspero, 2007). All of this highlights the complexity of relating phenotypic observations to changes in gene expression (Fu et al., 2019;Haider et al., 2019).
In this regard, the next generation of omics is expected to help identify gene function, speeding up rootstock breeding programs for enhancing resilience to climate change in future viticulture (Marín et al., 2021).

CONCLUSION

The results of this work have shown, for the first time, how the physiological and transcriptomic responses of the grapevine M rootstocks integrate at the scion level in response to irrigation with saline water under real field-grown conditions. The determinations carried out in the scion (i.e., cv. 'Tempranillo') permitted us to obtain some insight into the possible mechanisms developed by the rootstocks in response to water salinity, and into the differences between the three that were tested in this work. In the short period of this trial, in a vineyard under establishment, all three rootstocks similarly adjusted osmotic potential to cope with osmotic stress, yet vine water status declined in response to irrigation with saline water compared to non-saline water. Regarding the differential response among rootstocks, based on grapevine physiology and grape must composition on the one hand, and on salt accumulation in leaves and transcriptomic changes on the other, there were differences worth highlighting. First, the M1 rootstock was the one that responded the most to salinity by reducing AN and LAI, whereas the M4 rootstock was the one that best buffered the effects of salinity on TSS and grape phenolic composition. Second, the 1103P rootstock was the one able to reduce leaf Cl− and Na+ build-up the most while showing the fewest transcriptomic changes, which might have positive effects on long-term vine performance and grape composition. Longer-term studies are needed to unravel the molecular responses occurring in mature vineyards at both the scion and rootstock levels.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ebi.ac.uk/ena, PRJEB44658.

AUTHOR CONTRIBUTIONS

IB, JP-P, FV, DI, LB, MP-N, and JP contributed to the conception and design of the study. IB, JP-P, and RS acquired the data. IB, JP-P, FV, DI, KG, and MP-N performed the data analysis and interpretation. IB and RS prepared the first draft. IB, JP-P, FV, RS, DI, MP-N, and JP reviewed and edited the manuscript. DI, LB, and JP supervised the work. DI, LB, MP-N, and JP acquired the funding. All authors read and approved the submitted version.

FUNDING

This work was mainly supported by the European Union, the Slovenian Ministry of Education, Science and Sport (ARRS project number P4-0165), and the Spanish Ministry of Economy and Competitiveness, co-financing the Arimnet 2 project EnViRoS (grant agreement no. 618127), but also by AEI-FEDER AGL2017-83738-C3-3-R.

ACKNOWLEDGMENTS

IB and JP-P gratefully acknowledge their postdoctoral contracts from the 'Juan de la Cierva' (FJC2019-042122-I) and 'Ramón y Cajal' (RYC-2015-17726) programs, respectively, funded by the Spanish Ministry of Economy and Competitiveness (MINECO). Thanks are also due to Mr. F. Sanz, D. Guerra, A. Yeves, M. Tasa, and P. Romero for their technical help with the fieldwork.
PWP2, a member of the WD-repeat family of proteins, is an essential Saccharomyces cerevisiae gene involved in cell separation
PWP2, a member of the WD-repeat family of proteins, is an essential Saccharomyces cerevisiae gene involved in cell separation

WD-repeat proteins contain four to eight copies of a conserved motif that usually ends with a tryptophan-aspartate (WD) dipeptide. The Saccharomyces cerevisiae PWP2 gene, identified by sequencing of chromosome III, is predicted to contain eight so-called WD-repeats, flanked by nonhomologous extensions. This gene is expressed as a 3.2-kb mRNA in all cell types and encodes a protein of 104 kDa. The PWP2 gene is essential for growth because spores carrying the pwp2Δ1::HIS3 disruption germinate before arresting growth with one or two large buds. The growth defect of pwp2Δ1::HIS3 cells was rescued by expression of PWP2 or epitope-tagged HA-PWP2 using the galactose-inducible GAL1 promoter. In the absence of galactose, depletion of Pwp2p resulted in multibudded cells with defects in bud site selection, cytokinesis, and hydrolysis of the septal junction between mother and daughter cells. In cell fractionation studies, HA-Pwp2p was localized in the particulate component of cell lysates, from which it could be solubilized by high salt and alkaline buffer but not by nonionic detergents or urea. Indirect immunofluorescence microscopy indicated that HA-Pwp2p was clustered at multiple points in the cytoplasm. These results suggest that Pwp2p exists in a proteinaceous complex, possibly associated with the cytoskeleton, where it functions in control of cell growth and separation.

Introduction

A conserved amino acid motif is repeated several times in members of an ancient and diverse family of proteins. First named the β-transducin repeat (Fong et al. 1986), it has also been designated the periodic tryptophan protein (PWP) repeat (Duronio et al. 1992), the GH-WD repeat (Neer et al. 1993), and the WD-repeat (van der Voorn and Ploegh 1992). It is a loosely conserved sequence of approximately forty amino acids, bracketed by glycine-histidine and tryptophan-aspartate (WD) dipeptides, repeated four to eight times within each polypeptide (van der Voorn and Ploegh 1992). More than three dozen WD-repeat proteins are now known. Some are composed almost entirely of WD segments whereas others contain nonhomologous extensions at the N- and C-termini as well as insertions between repeats. The function of the motif is undefined, but the observation that proteins containing WD-repeats often exist in multiprotein complexes suggests they may have a general regulatory role, either in facilitating macromolecular assembly or in controlling protein-protein interactions (Neer et al. 1994). The proteins are found in various cellular locations, including the plasma membrane, nucleus, cytoskeleton, and peroxisomes, and they have diverse cellular functions including cell division, signal transduction, gene transcription, RNA processing, vesicle fusion and cell-fate determination. PWP2, the periodic tryptophan protein described here, is one of the few WD-repeat proteins with an essential role in S. cerevisiae. We have identified and characterized the PWP2 gene and constructed gene disruptions to study its function. Expression of PWP2 under control of the regulated GAL1 promoter was used to characterize defects in morphology resulting from depletion of the Pwp2 protein. Information about its cellular distribution was obtained by indirect immunofluorescence and subcellular fractionation.

Strains, media and microbiological techniques

The S. cerevisiae strains used in this study are described in Table 1.
Yeast strains were grown on YPD rich medium (1% yeast extract, 2% peptone, 2% glucose), YPGal (1% yeast extract, 2% peptone, 2% galactose) or SD (0.67% yeast nitrogen base without amino acids, 2% glucose) (Guthrie and Fink 1991). Growth medium was supplemented with amino acids as required, and solid medium contained 2% agar. Standard methods of yeast genetics, sporulation of diploids and dissection of tetrads were performed as described (Guthrie and Fink 1991). Yeast transformations were performed using the lithium acetate method (Gietz et al. 1992).

Biochemistry and molecular biology

Standard methods of molecular biology were performed as described (Sambrook et al. 1989), except where indicated. For Southern analysis, total yeast DNA prepared from saturated cultures was digested with restriction enzymes for 10-12 h and electrophoresed on 0.8% agarose gels. DNA was transferred to a Hybond membrane and hybridized with radiolabelled probes for 18-24 h at 42°C in 6× SSC, 5× Denhardt's solution, 0.1% SDS, and 200 μg of denatured salmon sperm DNA per ml (Sambrook et al. 1989). After hybridization, filters were washed once for 30 min in 1× SSC, 0.1% SDS at 23°C and then twice for 30 min at 55°C. After washing, filters were subjected to autoradiography. For Northern analysis, total yeast RNA was isolated and poly(A)+ mRNA, selected on Dynabeads Oligo (dT), was fractionated by electrophoresis and transferred to filters. Random-primer labeling of probes with [α-32P]dCTP (3000 Ci/mmol) was carried out using Klenow polymerase. To map the 5′ end of PWP2 mRNA, a synthetic oligonucleotide primer (5′-CGGTGAGAGTAGTTGCTTGCC-3′) was employed in a primer extension reaction involving reverse transcriptase. The oligonucleotide was labeled with [γ-32P]ATP using polynucleotide kinase, annealed to poly(A)+ RNA, and extended using reverse transcriptase from avian myeloblastosis virus. The cDNA products were fractionated on an 8% polyacrylamide gel in the presence of 7 M urea and compared with the products of a standard nucleotide sequencing reaction. For in vitro translation, plasmids pRS26 and pRS5 were transcribed and translated in the presence of [35S]methionine using the coupled transcription/translation system from Promega. The resulting 35S-labeled polypeptides were analyzed by SDS-polyacrylamide gel (12%) electrophoresis and visualized by autoradiography. For in vivo labeling, 10 ml of cells grown in YPGal were harvested at an optical density (OD) at 600 nm of 1, washed with water and resuspended in an equal volume of Wickerham's minimal medium (WiMP) supplemented with appropriate amino acids and galactose. After incubation for 30 min at 30°C, the cells were harvested, resuspended in 1 ml of supplemented WiMP containing 150 μCi Trans-[35S]-label and incubated for 15 min at 30°C. The reaction was terminated by adding trichloroacetic acid to a final concentration of 5%. For immunoprecipitation from yeast extracts (Paravicini et al. 1992), the Pwp2 protein, tagged at the N-terminus with an epitope from the influenza virus hemagglutinin protein (HA), was selected with the 12CA5 antibody (BABCO).

Disruption of the PWP2 gene

The plasmids pRS6 and pRS8 were constructed for one-step gene disruption. The 1.4-kb fragment containing the predicted YCR57c open reading frame flanked by XbaI and KpnI sites was obtained by PCR of S. cerevisiae genomic DNA using the following pair of primers: 5′-CTTAAGCTCTAGAATATGGTCCGTAGATTCAGAGG-3′ and 5′-ATCAAATCTAAGGTACCTGCTTCAGTCATTTTC-3′.
The resulting fragment was subsequently cloned into XbaI- and KpnI-digested pBluescript II KS to obtain pRS5. Plasmid pRS5 was digested with BamHI and ligated with the 1.8-kb BamHI fragment of HIS3, from a plasmid obtained from E. Phizicky, to yield pRS6. In parallel, pRS5 was digested with EcoRI, blunt-ended, digested with BamHI and finally ligated with the XhoI-BamHI HIS3 fragment, in which the XhoI site had been blunt-ended, to generate pRS8. PvuII-cleaved pRS6 or pRS8 was introduced into a diploid strain (JRY182) by transformation followed by selection for His+ prototrophy. Restriction mapping and Southern hybridization analysis of genomic DNA from the resulting transformants was conducted to confirm that transplacement had occurred at the PWP2 locus. The diploid transformants, designated RSY12 and RSY15, carried the insertion (pwp2-1::HIS3/PWP2) and the deletion (pwp2Δ1::HIS3/PWP2), respectively.

Plasmid constructions

YCplac22 (CEN4, ARS1, TRP1), YCplac111 (CEN4, ARS1, LEU2), YEplac181 (2 μm, LEU2) and YIplac211 (URA3) were obtained from R. Gietz (Gietz and Sugino 1988). Plasmids used in complementation analysis were derived by subcloning the 4.6-kb XbaI fragment containing PWP2 from pUC19-6.7 into YCplac22 to create pRS10. The vector pGT5 (GAL1/10, CEN4, ARS1, URA3), obtained from I. Miyajima, was used to generate pRS9, pRS12, pRS18 and pRS25. Plasmid pRS9 was constructed by subcloning the 1.4-kb XbaI-KpnI fragment containing YCR57c from pRS5 in pGT5. The 2.6-kb AflII-XbaI fragment from pRS10 (indicated in Fig. 2A) was blunt-ended and cloned into pGT5 to form pRS12. The 3.2-kb XbaI fragment containing the entire PWP2 gene from pRS10 was cloned in XbaI-digested pGT5 to obtain pRS18. Construction of pRS25 was achieved by digesting plasmid pRS10 with XbaI-NcoI to obtain the 2.6-kb fragment, blunting the ends with Klenow enzyme, and inserting it into pGT5. The construction of HA-PWP2 first involved site-directed mutagenesis to introduce a unique NotI site after the first ATG of PWP2 in pRS18. Next, the NotI fragment from pSM491 (from B. Futcher, Cold Spring Harbor), which encodes three copies of the HA epitope, was introduced in-frame at the NotI site of PWP2. Finally, the HA-PWP2 XbaI fragment was cloned in YCplac111, into which the GAL1/10 promoter had been inserted, to generate pRS35. For expression of HA-PWP2 using the PWP2 promoter, the PstI-XbaI fragment containing the 5′ region of PWP2 and the XbaI fragment containing HA-PWP2 from pRS35 were cloned in YEplac181 and YCplac111 to generate pRS41 and pRS42.

Expression of PWP2 under control of the GAL1 promoter

Site-directed mutagenesis was performed on the 4.6-kb XbaI fragment from pUC19-6.7, subcloned in the pSelect (Promega) derivative pRS15, to create an XbaI recognition site 4 bp upstream of the ATG initiation codon of PWP2. The mutagenic oligonucleotide was 5′-GAACCGCATCTAGATGAAATCCG-3′. Nucleotides that were changed from the wild-type sequence are underlined. The resulting 3.2-kb XbaI fragment containing the PWP2 gene was subcloned in pGT5. The resulting plasmid, pRS18, was transformed into yeast strain RSY15, and Ura+ transformants were sporulated and dissected. Segregants were germinated on YPGal and tested for His+ (pwp2Δ1::HIS3) and Ura+ (pRS18) prototrophy. Haploid PWP2/pRS18 and pwp2Δ1::HIS3/pRS18 segregants were grown at 30°C on media containing galactose (YPGal).
To test the phenotype associated with Pwp2 protein depletion, expression of the plasmid-borne PWP2 gene was reduced by transferring the cells to YPD or SD (glucose-containing) solid medium and incubating at 30°C; cells were monitored several times over 3 days after transfer. Growth of segregants containing pwp2Δ1::HIS3 (RSY24) arrested after transfer to glucose medium, whereas cells containing a wild-type PWP2 gene continued to grow.

Analysis of morphology and bud site selection

Methods for interference-contrast and fluorescence microscopy were performed as described. Microscopy of single cells was done with a Zeiss Axiophot microscope using DIC optics or fluorescence and a 100× objective. Fluorescent staining with Calcofluor, DAPI and rhodamine phalloidin was performed as described. The assay for completion of cytokinesis (Healy et al. 1991) involved formaldehyde fixation and removal of the cell wall with Zymolyase 100-T (Seikagaku). For chitinase treatment, RSY24 cells and control cells cultured in YPD or YPGal for 12-18 h were washed and resuspended at an OD of 1 in 10 mM phosphate buffer (pH 6.3), 0.1% sodium azide containing either 1 unit of Streptomyces griseus chitinase (Sigma C1525) or an equal volume of buffer. After incubation at 30°C for 4 h, followed by vigorous vortexing, the cells were counted and chains or clumps of 3 or more cells were scored as clusters. Sites of bud formation were quantitated by the method described by Flescher et al. (1993), which involves staining fixed cells with Calcofluor and grouping cells with one bud and a single bud scar into three classes: those with a bud adjacent to the bud scar (axial pattern), those with a bud at the opposite pole to the bud scar (polar pattern) and those with a bud at an intermediate distance from the bud scar (central pattern).

Fractionation and Western blot analysis of HA-Pwp2p

The pwp2Δ1::HIS3 strains RSY54 and RSY55, carrying the HA-PWP2 gene on low- and high-copy-number plasmids respectively, were used for the fractionation analysis. Lysis of exponentially growing cells and fractionation were by the method of Espenshade et al. (1995) except where indicated. Briefly, approximately 1×10 cells were harvested by centrifugation, washed once in breakage buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 15 mM MgCl2) and resuspended in 500 μl of the same ice-cold buffer. An equal volume of acid-washed glass beads was added to the cell suspension, and lysis was achieved by vigorous vortex mixing for 2 min, four times, with 1-min intervals on ice. The resulting homogenates were collected, and unbroken cells, glass beads, and large debris were removed twice by centrifugation for 5 min at 450×g. Aliquots (0.4 ml) of this fraction were adjusted to 0.5 ml with breakage buffer or with one of the following reagents, to the indicated final concentrations: 2 M urea, 1% Triton X-100, 0.1 M Na2CO3 (pH 11), 1 M NaCl. The samples were incubated on ice for 20 min and subsequently centrifuged at 10,000×g for 40 min. The resulting soluble fraction (S10) was further centrifuged at 100,000×g for 60 min at 4°C. The particulate fractions were rinsed with breakage buffer and resuspended in the same buffer. Samples from the total cell lysate (T), the soluble fractions (S10, S100) and insoluble fractions (P10, P100) were subjected to SDS-PAGE, and the separated proteins were electrophoretically transferred to a nitrocellulose membrane. The filter was processed to detect HA-Pwp2p with 10 ng/ml of 12CA5 mAb as primary antibody, using the ECL Western blotting system (Amersham).

Fig. 1A-D A Poly(A)+ RNA (5 μg) from strain SP1 was fractionated on agarose, hybridized with the radioactive YCR57c probe, and visualized by autoradiography. Numbers indicate marker sizes in kb. Markers were from the 0.24-9.5 kb RNA ladder (BRL). B Transcription initiation sites of PWP2. The position of the 5′ end of the PWP2 RNA was mapped by primer extension with reverse transcriptase and a synthetic oligonucleotide complementary to a region near the ATG start codon of YCR58c. Lanes 1 and 2 contain total RNA and poly(A)+ RNA, respectively, from strain JRY182. The same primer was used in standard dideoxy sequencing reactions (G, A, T, and C) with pRS10, and the products were subjected to electrophoresis in the same gel as the cDNA extension products to permit direct comparison. The DNA sequence presented is that corresponding to the transcript and hence is complementary to the sequencing ladder. The sites of transcription initiation are indicated by the arrows. C In vitro translation. PWP2 (pRS26, lane 1) and YCR57c (pRS5, lane 2) were transcribed and translated in vitro in the presence of [35S]methionine. The translation products were analyzed by SDS-polyacrylamide gel electrophoresis (12%) and visualized by autoradiography. Numbers indicate the sizes (Da) of marker proteins. D Immunoprecipitation of HA-tagged Pwp2 protein. Immunoprecipitates of lysed cells from 35S-labeled cultures of RSY38 (HA-PWP2, lane 1) and RSY24 (PWP2, lane 2) were analyzed by SDS-polyacrylamide gel electrophoresis (12%) and visualized by autoradiography. Numbers indicate the sizes (Da) of marker proteins.

Indirect immunofluorescence

Strains RSY55 and RSY54 were processed for immunofluorescence microscopy by the method of Pringle et al. (1991). Cells from early log phase cultures were fixed in 4% formaldehyde at 25°C for 30 min. Spheroplasts were prepared by using Glusulase (DuPont) and Zymolyase 20T (Seikagaku) at a final concentration of 0.1 mg/ml. HA-Pwp2 was detected with affinity-purified 12CA5 mAb and FITC-labeled goat-anti-mouse IgG. The cells were co-stained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI) (Sigma) at a final concentration of 0.1 mg/ml for 15 min after washing three times with PBS-1% BSA.

Results

The YCR57c mRNA is 3.2 kb long and includes YCR58c and YCR55c

The sequence of S. cerevisiae chromosome III revealed three open reading frames with homology to the G protein β-transducin: YCR84c, previously identified as TUP1 (GenBank accession number P16649), YCR57c, and YCR72c (Bork et al. 1992;Oliver et al. 1992). In order to investigate the function of YCR57c, we first examined the pattern of its mRNA expression in vivo. Northern analysis of poly(A)+ RNA isolated from exponentially growing wild-type cells using a YCR57c-specific probe revealed a single 3.2-kb mRNA band (Fig. 1A). The size of this band was consistent with a previous report that the primary transcript from this region was 3.1 kb long (Yoshikawa and Isono 1990). As judged by the relative intensity of the signal obtained after hybridization of the same membrane with an ACT1 probe, the expression level of the 3.2-kb mRNA was approximately 10% that of actin. Analysis of total RNA from isogenic MATa, MATα and MATa/α strains also showed a single band of 3.2 kb which hybridized with the YCR57c probe (data not shown), indicating that the 3.2-kb species was of the same length and equally abundant in all cell types.
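The mismatch that motivates the remapping below can be checked with standard coding-capacity arithmetic (assuming 3 nt per codon and a mean residue mass of about 110 Da, both textbook approximations rather than figures from this study):

\[ 1317\ \mathrm{bp} \Rightarrow 1317/3 = 439\ \mathrm{codons} \approx 48\ \mathrm{kDa}, \qquad 104\,000\ \mathrm{Da}/110 \approx 945\ \mathrm{residues} \approx 2.8\ \mathrm{kb} \]

That is, the annotated YCR57c ORF could encode at most a ~48-kDa product, whereas a 104-kDa protein requires roughly 2.8 kb of coding sequence, which fits the 3.2-kb mRNA.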
YCR57c is an open reading frame of 1317 bp (Oliver et al. 1992). Since the YCR57c mRNA was surprisingly large, additional mapping of the 3.2-kb transcript was performed using probes specific for sequences flanking YCR57c (Fig. 2). Probes corresponding to YCR58c and YCR55c also hybridized to the 3.2-kb mRNA. The 5′ end of the transcript was mapped by primer extension analysis to bases located at positions −567 and −576 distal to the predicted initiator ATG of YCR57c but only −54 and −63 bases upstream of YCR58c (Fig. 1B). These results indicated that the 3.2-kb RNA transcript extended for several hundred bases on either side of the predicted YCR57c coding sequence and included sequences from YCR55c and YCR58c.

Sequences from YCR58c, YCR57c and YCR55c are part of a single open reading frame, PWP2

Resequencing of chromosome III between positions 218778 and 222139 revealed four deviations from the previously published sequence (Oliver et al. 1992): insertions of G, C and A, respectively, at positions 221662, 221416 and 220247, as well as a single substitution of C for T at position 219175. These changes altered the predicted reading frame between YCR58c, YCR57c, and YCR55c, resulting in a single, continuous ORF flanked by typical control elements (Fig. 3). This ORF will be referred to as PWP2.

Fig. 4A, B The WD-repeats revealed by matrix analysis of the deduced amino acid sequence of PWP2 were aligned manually for comparison with the consensus WD-repeat. n1 indicates the number of residues between the repeats. The Pwp2 consensus sequence represents the consensus for the 8 internal repeats. The WD-40 consensus sequence, shown at the bottom, represents the consensus for fourteen proteins (van der Voorn and Ploegh 1992); symbols denote hydrophobic amino acids, noncharged side chains, and any amino acid (x).

A putative TATA element (5′-TATAAT-3′), resembling the consensus sequence, was found at position −80 from the predicted ATG start codon. Immediately downstream of the TATA box, the sequence 5′-AATAATAGTA-3′ is present as a tandem repeat. Transcription start sites were detected at the penultimate T in each element (Fig. 1B). The ATG at position 1 is the first ATG sequence after the TATA box, and the flanking residues resemble the consensus sequence for the initiation of translation by eukaryotic ribosomes (Kozak 1986). At the opposite end of the ORF, potential pre-mRNA polyadenylation sequences are present as 5′-TATTTAT-3′, 5′-TAG . . . TTTGTTTT-3′, and 5′-TATATA-3′ (Heidmann et al. 1992;Russo et al. 1991).

Identification of the PWP2 gene product

To characterize the polypeptides encoded by the 3.2-kb mRNA, the fragment of chromosome III from position 218778 to 221957 was subcloned in an appropriate vector (pRS26) and subjected to coupled in vitro transcription and translation. As shown in Fig. 1C, the largest of the translation products had an estimated mass of 104 kDa, which would correspond to an ORF of approximately 3 kb. To generate an HA-tagged Pwp2 polypeptide, oligonucleotides encoding the HA epitope of influenza hemagglutinin were fused in frame to the 5′ end of the PWP2 ORF and subcloned in YCplac111-GAL (pRS35) to allow expression under control of the inducible GAL1 promoter. A strain carrying pRS35 was grown on galactose medium and subjected to in vivo 35S labeling and immunoprecipitation. As shown in Fig. 1D, the anti-HA monoclonal antibody (12CA5) immunoprecipitated a single polypeptide with an apparent molecular weight of 104 kDa.
No such band was detected in extracts from a control strain that expressed the untagged version of PWP2. This result demonstrates that the PWP2 gene encodes a polypeptide of 104 kDa, consistent with the size of the product of the 3.2-kb mRNA detected by in vitro translation.

Structural features of PWP2

The predicted amino acid sequence of PWP2 was used to search the database of nonredundant sequences using the BLAST and FASTA algorithms. This revealed a weak similarity at the C-terminus to neurofilament proteins and to Asp/Glu-rich proteins related to nucleolin. Significant homology was found with G protein β-subunits, including human GBB2 (Accession No. P11016), which shared 32% identity and 77% similarity with the sequence of Pwp2p between residues 140 and 500. This central region of Pwp2p was strongly related (highly significant BLAST scores) to other subtypes of mammalian β-subunits, GBB1 (P04901), GBB3 (P16520) and GBB4 (P29387), as well as the homologous β-subunits from Drosophila (P26308), Caenorhabditis elegans (P17343), Loligo forbesi (P23232), Dictyostelium discoideum (P36408) and the S. cerevisiae Ste4 protein (A30102). Also related, but with lower identity scores, were TUP1 (P16649), Esp1 (P16371), LIS1 (P43034), AAC3 (P14197), CDC4 (P07834), COP (P35605), MET30 (P39014), and CDC20 (P26309). Graphical self-comparison analysis (Maizel and Lenk 1981) revealed that Pwp2p contains internal repeats in the central region and at the C-terminus. The central repeats (Figs. 3, 4A) correspond to eight copies of the WD motif that was first identified as a repeating unit in the β-subunit of transducin (Fong et al. 1986). Five of these are full-length WD-repeats and three are "incomplete" half-repeats. As shown in Fig. 4B, the eight sequences are conserved with respect to each other as well as to the consensus WD motif (van der Voorn and Ploegh 1992). A statistical search for additional features of the predicted sequence using the SAPS algorithm (Brendel et al. 1992) revealed a high proportion of Asp residues (8.1% overall), particularly between amino acids 225-235 and near the C-terminus (Fig. 4A). Such acidic clusters occur in less than 4% of proteins from yeast and humans (Sapolsky et al. 1993) and occur in several other WD-repeat proteins (Duronio et al. 1992).

Fig. 5A, B PWP2 disruption. A Diploid yeast strains were transformed with restriction fragments containing the indicated pwp2Δ1::HIS3 deletion or pwp2-1::HIS3 insertion allele, and His+ transformants were selected. Sporulation and dissection of strains heterozygous for the PWP2 locus yielded a 2:2 segregation of viability to non-viability. Shown are 18 tetrads from the diploid strain JRY182. The four spores of each tetrad were positioned vertically. All viable spores were His−, indicating that PWP2 is an essential gene. B For Southern blotting analysis of the disrupted PWP2 gene, total cellular DNA was prepared from the wild-type diploid strain JRY182 (lane 1), the heterozygous insertion diploid strain RSY12 (PWP2/pwp2-1::HIS3) (lane 3), the heterozygous disruption diploid strain RSY15 (PWP2/pwp2Δ1::HIS3) (lanes 4 and 5), one of its haploid progeny (lane 2) and the haploid disruption strain RSY18 (pwp2Δ1::HIS3 carrying pRS10 (YCplac22[PWP2])) (lane 6). DNA was digested with EcoRI, transferred to a nylon membrane and hybridized with the radiolabelled 1.4-kb fragment of YCR57c.
The resulting autoradiogram shows patterns of hybridization consistent with the DNA sequence: bands of 2.3 kb and 4.4 kb are derived from the intact PWP2 gene (lanes 1 and 2); insertion of the HIS3 gene in PWP2 produced an additional band of 8 kb (lanes 4, 5 and 6) and eliminated the 4.4-kb band (lane 6).

Although an N-terminal signal sequence was not detected, a possible subcellular localization to the mitochondrial matrix was predicted by the PSORT program (Goffeau et al. 1993;Nakai and Kanehisa 1992). Since the C-terminal region of Pwp2p had a weakly repetitive character, and a number of proteins related to neurofilaments have coiled-coil domains, the Pwp2p sequence was analyzed with an algorithm to identify regions with heptad repeats (Lupas et al. 1991). This analysis predicted two regions near the C-terminus with 70-80% probability of forming coiled-coil structures.

PWP2 is essential for growth

To investigate the effect of loss of PWP2 function, the chromosomal copy of the gene was inactivated using the one-step gene disruption method (Rothstein 1991). Two disrupted alleles of PWP2 were constructed (see Fig. 5A and Materials and methods). A deletion-disruption mutation (pwp2Δ1::HIS3) was created by replacing the 467-bp BamHI-EcoRI fragment with the HIS3 gene. The other mutation (pwp2-1::HIS3) was created by insertion of the HIS3 gene at the BamHI site in the coding sequence. A restriction fragment containing each of these constructions was purified and introduced into a diploid strain (JRY182) by transformation, followed by selection for histidine prototrophy. Transformants (His+) in which one copy of the PWP2 locus had been disrupted were identified by Southern analysis (Fig. 5B, lanes 3, 4 and 5). Heterozygous pwp2Δ1::HIS3/PWP2 diploids (RSY15) and pwp2-1::HIS3/PWP2 diploids (RSY12) were sporulated, dissected on non-selective (YPD) medium and incubated at 15, 25 or 37°C. At each temperature and in all tetrads examined (45), only two out of four spores produced colonies (Fig. 5A). Viable cells were auxotrophic for histidine and contained the wild-type allele of PWP2, as shown by Southern analysis (Fig. 5B, lane 2). This indicated that spore progeny carrying the disrupted pwp2 gene were inviable; PWP2, therefore, is an essential gene. Similar results were obtained following disruption and sporulation of a diploid with a different genetic background (RAY3A). Microscopic examination of the non-growing progeny revealed swollen spores (20%), enlarged spores which had germinated with one large bud (60%), or spores with two or more large buds which could not be separated by micromanipulation (20%). Most spores lacking PWP2 underwent one or two mitotic duplications before arresting growth with one or two large buds. Plasmids containing fragments of the PWP2 gene were introduced into the diploid strain RSY15 (heterozygous for the PWP2 locus) and tested for complementation (Fig. 2B). Tetrads carrying either pRS9 (YCR57c), pRS12 (YCR57c, YCR55c) or pRS25 (a C-terminal deletion derivative of PWP2) produced only two viable spores, both wild type for PWP2. In contrast, tetrads derived from pRS10 (PWP2) and pRS18 (GAL1-PWP2) produced four viable progeny, two of which were disrupted in the chromosomal copy (Fig. 5B, lane 6). As judged by retention of the auxotrophic markers, pRS10 and pRS18 were not lost from these haploid cells even after prolonged growth in rich medium.
These results confirm that PWP2 is an essential gene and demonstrate that the entire ORF of PWP2 is required to rescue the pwp2Δ1::HIS3 mutation.

Reduced expression of PWP2 results in formation of cell chains and multibudded clusters

To further characterize the effects of Pwp2 protein depletion, the PWP2 gene was placed under control of the GAL1 promoter in plasmid pRS18. On solid media containing glucose, RSY24 cells (pwp2Δ1::HIS3/pRS18) were markedly inhibited, with no colony formation visible, whereas colonies on galactose medium grew to normal size (Fig. 6). In YPGal liquid broth, RSY24 grew normally. After an exponentially growing culture was shifted from galactose to glucose (YPD), the optical density increased over a 24-h period, but cell division, as measured by the number of individual cells, was dramatically reduced. Microscopic examination of the same cells grown for 8 h after transfer to YPD revealed chains of 3 to 8 connected cells (Fig. 7) that were absent in control cultures (SP1 or RSY18). The proportion of connected cells and the number of cells in each chain increased dramatically over a period of 36 h. While only 8% of the cells had one or more buds attached before the shift, 18 h after the shift to YPD more than 60% of the cells remained connected in chains of 3 or more. Connected cells formed chains or branched chains which could not be separated by sonication. The average number of cells in a cluster was 4 (300 clusters counted). In most clusters, some buds appeared elongated rather than round (35% of cells in 75 clusters). These results suggested that depletion of Pwp2p resulted in defects in bud morphology and mother-daughter cell separation.

Pwp2p-depleted cells exhibit defects in cytokinesis

To test whether the cells connected in chains and clusters had completed cytokinesis, they were fixed with formaldehyde and treated with Zymolyase to remove the cell wall (Healy et al. 1991). This treatment reduced the number of cells in the chain and produced individual spheroplasts or pairs of spheroplasts (20% pairs in 150 cells treated) joined by internal connections. Inspection of spheroplasts after treatment with DAPI (to visualize the nucleus) revealed that most pairs had individual nuclei (64%), while some had only one nucleus (32%) or two nuclei in one cytoplasm (4%) (data not shown). This indicated that cytokinesis was defective in some cells, whereas nuclear division was relatively normal in most cells and continued even in the absence of cytokinesis. Although most clusters could not be separated by micromanipulation, a few detached cells placed on solid medium containing galactose and incubated at 30°C gave rise to colonies (in 9 out of 12 cases), which suggested that at least some of the cells in the clusters contained a viable nucleus. The distribution of nuclei was examined by DAPI staining 18 h after shifting RSY24 cells to glucose medium. Most cells in clusters contained a single nucleus (50%); some cells contained two nuclei (33%) and, less frequently, large buds were anucleate (17%) (Fig. 7D). Together these results confirmed that nuclear division and bud emergence continued in spite of defects in cytokinesis and cell separation. Rings of cell wall chitin that form at the nascent site of bud emergence and the septal junction can be visualized by Calcofluor staining. Chains of glucose-cultured RSY24 cells stained with Calcofluor exhibited normal staining at the sites of budding.
Mother cells, identified by the presence of multiple bud scars, stained relatively intensely, suggesting that chitin deposition may be somewhat delocalized, whereas chitin deposition at the septal junction between cells appeared normal (Fig. 7F). Treatment of chains of glucose-cultured RSY24 cells with chitinase resulted in a dramatic reduction in the proportion of chains and clusters of cells (Fig. 8). Twenty-four percent (n = 3) of cells incubated with buffer (control) were in clusters of 3-15 cells. These data indicate that the formation of cells in chains and clusters results from incomplete hydrolysis of the chitinous septum between mother and daughter cells. In wild-type cells the actin cytoskeleton functions in directing polarized cell-surface growth. To analyze the pattern of actin distribution, RSY24 cells were stained with rhodamine phalloidin. Compared to galactose-grown RSY24 cells, glucose-cultured cells exhibited a normal staining pattern, with intensely staining cortical patches in buds and long fibers that extended from mother cells into buds (Fig. 7B).

PWP2 depletion alters bud site selection

Since mutations in genes involved in cytokinesis have been reported to affect bud-site selection (Flescher et al. 1993), we examined the budding pattern of Pwp2p-depleted cells. The abnormal cells that resulted from culturing strain RSY24 in glucose medium were attached either in linear chains or (less commonly) in branched chains.

Fig. 8 Separation of multibudded clusters by chitinase. RSY24 cells were cultured in YPD for 10-12 h prior to incubation with chitinase (+) or buffer (−) as described in Materials and methods. The results from three independent experiments are shown. Numbers indicate the total number of single cells, chains, and clusters in each experiment. A group of 3 or more cells was counted as a cluster. The average number of cells per cluster was 8 in these experiments.

Fig. 9 Abnormal bud position in Pwp2p-depleted haploid cells. The budding pattern of mother and daughter cells stained with Calcofluor was determined as described in Materials and methods and designated as "axial", "polar" or "central". Bud position was counted as the number of mothers with one bud scar and one bud that exhibited each pattern. The percentages of buds in the axial position in the combined results of three experiments are shown.

This arrangement suggested that many cells grew buds at their poles, a pattern suggestive of the bipolar bud site selection exhibited by diploid cells. As judged by its ability to mate and to respond to α-mating factor, however, RSY24 is a MATa haploid strain. To analyze the pattern of bud site selection in RSY24 cells, the location of buds in mother cells containing one bud and one bud scar (4-12% of the population) was determined after staining with Calcofluor to identify the bud scar. Such cells were classified as exhibiting either an axial, central, or polar budding pattern. In galactose-grown RSY24 cells, most mother cells (70%) had budded adjacent to the scar (axial), 19% budded near the scar (central) and 11% budded at the opposite pole (polar). Control cells (SP1) wild-type for PWP2, as well as SP1 cells carrying pRS18 (RSY39), which overexpress PWP2 mRNA when grown in galactose medium, exhibited a similar, predominantly axial budding pattern.
In contrast, 18 h after shifting RSY24 cells to glucose medium, the majority of such mothers had formed buds at the opposite pole (51%), and relatively fewer had budded at sites either axial (23%) or central (26%) to the bud scar (Fig. 9). In agreement with these observations, in glucose-cultured RSY24 cells with two large buds, 14% of the buds were adjacent, 12% were central, and 74% were at opposite poles. Thus, mother cells wild-type for PWP2 exhibited a predominantly axial pattern of bud site selection, whereas the majority of mother cells with reduced expression of PWP2 had budded at the opposite pole. Since SP1 control cells cultured in glucose medium showed a surprisingly high frequency of polar budding (23%), the analysis of bud site selection was repeated in isogenic strains from a different genetic background (RAY3A). In glucose-grown RAY3A haploid cells, nearly all mothers with one bud and a single bud scar had budded adjacent to the scar (97%), whereas 12 h after shifting RSY41 cells to glucose medium such mothers exhibited a random pattern and formed buds at either polar (31%), axial (40%), or central (29%) sites (Fig. 9). These results confirmed the observation that the pattern of bud site selection was abnormal (random) in Pwp2p-depleted haploid cells. The homozygous pwp2Δ1/pwp2Δ1 diploid strain (RSY50) carrying pRS18 also produced unseparated cells after shifting to glucose medium, but these cells maintained the normal bipolar budding pattern observed for the diploid wild-type cells RAY3A-D and JRY182.

Subcellular location of Pwp2p

The HA-PWP2 gene with the intact promoter region of PWP2 was subcloned in high- and low-copy-number vectors containing the LEU2 marker to generate, respectively, pRS41 and pRS42 (Materials and methods). To test whether HA-PWP2 was biologically active, strain RSY41 (pwp2Δ1::HIS3) carrying the GAL1-PWP2 expression plasmid with the URA3 marker (pRS18) was transformed with pRS41 or pRS42. Transformants that grew on galactose medium lacking both uracil and leucine were subsequently plated on medium containing 5-fluoroorotic acid (5-FOA) to identify cells that had lost the URA3 plasmid. This generated strain RSY54, carrying pRS42, and strain RSY55, carrying pRS41. Both RSY54 and RSY55 cells were morphologically indistinguishable from wild-type control cells. Therefore, expression of HA-PWP2 can complement the pwp2Δ1::HIS3 mutation. The intracellular location of Pwp2p was examined by indirect immunofluorescence microscopy. Cells of RSY54 and RSY55 showed staining at multiple points dispersed throughout the cytoplasm (Fig. 10b, d). Cells carrying HA-PWP2 on the high-copy-number plasmid (RSY55) exhibited more intense staining and a greater number of cytoplasmic dots. Wild-type cells, and RSY41 cells expressing an untagged version of PWP2, treated with the 12CA5 anti-HA antibody gave very weak staining. Cells were also co-stained with DAPI to visualize DNA (Fig. 10a, c, e). As judged by the lack of correlation between the pattern of DAPI staining and the pattern of antibody staining, HA-Pwp2p did not colocalize with the nucleus or mitochondria. Cells with large buds exhibited the same staining pattern as unbudded cells, suggesting that the intracellular location of Pwp2p did not change during the cell cycle. The subcellular distribution of HA-Pwp2p was further examined by fractionation of RSY54 cells (low-copy-number HA-PWP2) using differential centrifugation (Espenshade et al. 1995;Singer and Riezman 1990).
Logarithmically growing cells were lysed by shaking with glass beads and centrifuged at 500×g to remove unbroken cells. The cleared supernatants were then spun at 10,000×g, yielding the S10 supernatant and the P10 pellet enriched in nuclei, mitochondria, vacuoles and large structures of both the endoplasmic reticulum and the cytoskeleton. Finally, the S10 was centrifuged at 100,000×g to produce the S100 fraction and the P100 pellet containing Golgi particles, small vesicles and small cytoskeletal elements. Each subcellular fraction was resolved by SDS-PAGE and immunoblotted with the 12CA5 antibody. As illustrated in Fig. 11A, HA-Pwp2p was enriched in the insoluble fractions P10 and P100. In contrast, HA-Pwp2p was barely detectable in the S100 cytosolic fraction, even when overproduced in strain RSY55. As a control for cell lysis and fractionation, glucose-6-phosphate dehydrogenase was found enriched in the S100 fraction in these experiments. To determine the nature of the association of HA-Pwp2p with the particulate cell fraction, cell lysates were pretreated with various reagents for 20 min prior to centrifugation at 100,000×g and subsequent Western blot analysis. The results of this experiment (Fig. 11B) indicate that Pwp2p was partially solubilized by pretreatment with 0.1 M Na2CO3 (pH 11) or 0.5 M NaCl at room temperature, but was unaffected by treatment with lysis buffer, 1% Triton X-100 or 2 M urea. These results indicate that the strong associations between Pwp2p and insoluble components involve electrostatic and pH-sensitive forces rather than simple hydrophobic interactions.

Fig. 11A, B Subcellular fractionation of HA-PWP2. A Extracts from RSY54 cells were prepared and fractionated by differential centrifugation as described in Materials and methods. Equivalent volumes of total lysate, P10, P100 and the corresponding supernatant (S100), prepared from approximately 10 cells, were resolved by SDS-PAGE and probed with the 12CA5 antibody. The 104-kDa HA-Pwp2p band from each fraction is shown. B Cell lysates were separated into pellet (P) and supernatant (S) fractions by centrifugation at 100,000×g after treatment with lysis buffer, 1% Triton X-100, 2 M urea, 0.5 M NaCl or 0.1 M sodium carbonate (pH 11). The distribution of HA-Pwp2p after extraction is shown.

Together with the immunolocalization results, these data suggest that Pwp2p is associated with a large proteinaceous complex, possibly involving the yeast cytoskeleton, rather than with membrane structures such as the plasma membrane or organelles.

Discussion

In this report, we have described a new gene, PWP2, that was cloned after identifying and correcting errors in the coding sequence of YCR57c and in the flanking sequences predicted to encode YCR55c and YCR58c (Oliver et al. 1992). This protein (Pwp2p) is predicted to have an N-terminal extension preceding a central region with eight WD-repeats that share homology with the G protein β-transducin, followed by possible coiled-coil structures in the C-terminal region. The present work shows that Pwp2p is part of an insoluble complex located at multiple points in the cytoplasm, where it is involved in the regulation of processes that are essential for growth, cytokinesis, and cell separation. Previously known WD proteins have been found in a variety of cellular locations, where they are involved in diverse processes including cell cycle regulation, transcription regulation, signal transduction and RNA splicing.
Most of them seem to have a regulatory function and many are known to exist in large protein complexes (Neer et al. 1993). Another structure predicted to be present in Pwp2p, the coiled-coil domain, is also found in proteins that exist in complexes. Coiled-coil domains have been found in filament-forming proteins, G protein β-subunits, and in the dimerization domains of several transcriptional regulatory proteins (Lupas et al. 1991). These observations raise the possibility that Pwp2p may exist as part of a multiprotein complex, possibly interacting with other proteins through the coiled-coil and WD-repeat regions. This idea is supported by the results of the subcellular localization studies. Indirect immunofluorescence using biologically active HA-tagged Pwp2p revealed a number of small, brightly staining structures throughout the cytoplasm which did not coincide with the nucleus, mitochondria or plasma membrane. The multidot pattern did not change significantly during the cell cycle or upon overproduction of PWP2. Despite its hydrophilic, charged character, Pwp2p was found to be strongly associated with the insoluble fraction of the cell. Even after repeated differential centrifugation steps, the particles were found equally in the P10 and the P100 fractions, indicating that they do not have a uniform size or density. Based on several observations, we conclude that these particles are unlikely to be associated with large organelles like mitochondria, nuclei, or vacuoles, or with large membrane structures of the plasma membrane and endoplasmic reticulum. The fact that reagents known to perturb protein-protein interactions, such as alkaline buffer and high salt, were able to partially solubilize Pwp2p suggests that Pwp2p is part of a protein complex. The finding that the association between Pwp2p and the insoluble fraction is completely resistant to treatment with Triton X-100 is suggestive of cytoskeletal associations, which, in yeast as in other eukaryotic cells, remain intact following extraction with nonionic detergents (Branton et al. 1981;Herman and Emr 1990). Our localization and fractionation evidence indicates, therefore, that Pwp2p is not associated with membrane vesicles but rather forms part of a proteinaceous complex, potentially including cytoskeletal elements of the cell. Further biochemical experiments are necessary to determine the precise nature of the association of Pwp2p with this insoluble complex. In an effort to identify the specific components that interact with Pwp2p in vivo, we have initiated a search for genes that suppress pwp2 ts mutants in multiple copies, and we have identified peptides that interact with Pwp2p in the two-hybrid system. The combined results of these genetic and biochemical studies should lead to a better understanding of the role of PWP2 in the control of growth and morphology.

What is the function of PWP2?

The PWP2 gene is expressed in all cell types and is essential for growth. Spores carrying a PWP2 gene disruption can germinate, but after one or two rounds of replication they arrest growth with one or more large buds. A similar phenotype was observed in detail following down-regulation of PWP2 gene expression with the GAL1 promoter. Cell separation is severely defective and cannot easily be accomplished by micromanipulation or sonication. Bud site selection is also abnormal and follows a random pattern, in contrast to the axial pattern typical of haploid wild-type cells.
Although some of the buds are elongated, which suggests that bud growth is overpolarized, the formation of actin cytoskeletal elements and of the chitin ring at the base of the bud appears to be unaffected. Thus, continued DNA synthesis, nuclear division, and bud emergence, in combination with defective cell separation and abnormal bud site selection, result in the formation of chains and clusters of cells connected at the bud neck. The cell separation defect in cells depleted of Pwp2 protein could reflect either the formation of an abnormal septum or a lack of hydrolysis of the junction between the mother and bud. An endochitinase is involved in hydrolysis of the primary septum, and cells that lack the chitinase gene, CTS1, are unable to separate (Kuranda and Robbins 1991). CTS1 transcription is regulated by ACE2, and cells that carry an ace2 mutation also display a clumpy phenotype (Dohrmann et al. 1992). The ability of exogenous chitinase to release Pwp2p-depleted cells from clusters is consistent with the idea that hydrolysis of the septum is delayed in the absence of PWP2. In contrast to PWP2, however, strains that lack the CTS1 gene do not have an abnormal budding pattern (Kuranda and Robbins 1991), and neither CTS1 nor ACE2 is an essential gene. Thus, although PWP2 may not participate directly in chitin hydrolysis, it may possibly mediate the localization or the activity of factors involved in cleavage of the septum. The morphology of Pwp2p-depleted cells also resembles phenotypes that have been observed in cells with defects in the cell division cycle genes CDC3, CDC10, CDC11, and CDC12 (Hartwell 1971). These genes encode proteins of the ring of 10-nm filaments that appears as a cortical ring at the bud site before bud emergence; cells carrying temperature-sensitive mutations in any of these four genes display essentially identical pleiotropic phenotypes (Flescher et al. 1993). With respect to organization of the cell wall at the base of the bud, they seem to have more severe defects in cytokinesis, bud elongation and formation of the chitin ring. However, in the formation of unseparated cells and the abnormal pattern of bud site selection, they resemble cells depleted of Pwp2p. Flescher et al. (1993) have proposed that CDC10 and other neck filament proteins required for cytokinesis are involved in determining the next site of bud emergence. The finding that Pwp2p-depleted cells also show defects in both cytokinesis and bud site selection is consistent with the idea that these processes are related. Another instance in which cells fail to separate occurs in strains that have undergone a dimorphic transition and form pseudohyphae that penetrate the growth medium (Gimeno et al. 1992). Phenotypically related defects in cell separation and abnormally elongated buds were reported for haploid cells carrying mutant alleles of CDC55 (Healy et al. 1991), YCK1 and YCK2 (Robinson et al. 1993), and ELM1, ELM2, and ELM3 (Blacketer et al. 1993). However, RSY24 cells that formed microcolonies on YPD plates did not show the distinctive pattern of growth below the surface of the agar medium, referred to as foraging, that is a property of the pseudohyphal form, nor do they follow the normal budding pattern which is required for the dimorphic transition. The finding of numerous sequences related to β-transducin indicates that the family of WD-repeat proteins is both ancient and diverse. In S.
cerevisiae, where many WD-repeat sequences have been identified by systematic sequencing, the relative ease of genetics provides an attractive system for functional analysis. In addition, when homologous genes have been identified in other organisms, cross-species complementation can offer a paradigm for further experimentation. Thus, after completion of this manuscript we were interested to discover human ESTs (T16114, T75342, R20872, F13143) in the XREF database (Tugendreich et al. 1994) with significant identity to segments of PWP2. It will be important to obtain the full sequences of the human cDNAs to verify that, as seems likely, homologs of PWP2 exist in more complex eukaryotes.
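To make the loose WD-repeat signature discussed in this paper concrete, the following minimal Python sketch, an illustration of ours and not the authors' matrix analysis or the SAPS algorithm, flags GH...WD spacings compatible with the ~40-residue repeat and reports the overall Asp fraction (reported above as 8.1% for Pwp2p):

import re

def wd_repeat_spans(seq, min_gap=25, max_gap=45):
    # Non-overlapping GH...WD windows whose spacing is compatible with the
    # ~40-residue WD-repeat; a crude heuristic, not a profile search.
    return [(m.start(), m.end())
            for m in re.finditer(rf"GH.{{{min_gap},{max_gap}}}WD", seq)]

def asp_fraction(seq):
    # Overall aspartate (D) content of the sequence.
    return seq.count("D") / len(seq)

# Toy usage with a hypothetical sequence fragment:
seq = "MSGH" + "A" * 30 + "WD" + "GH" + "L" * 35 + "WD" + "DDDDE"
print(wd_repeat_spans(seq))           # [(2, 36), (36, 75)]
print(round(asp_fraction(seq), 3))    # 0.075

On a real WD-repeat protein, the gap bounds would need tuning, since the motif tolerates considerable spacing variation between repeats.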
Health-related behaviors and associated factors among swimming pool users in Kombolcha Town, Northeastern Ethiopia
Health-related behaviors and associated factors among swimming pool users in Kombolcha Town, Northeastern Ethiopia

Objective Unhealthy behaviors during swimming expose users to the risk of recreational water-associated diseases. Swimming pool users are a high-risk group for acquiring and transmitting these diseases. Thus, conducting a study on swimming pool users' health-related behaviors is crucial to prevent the transmission of recreational water-associated diseases. Methods This cross-sectional study was conducted among 140 randomly selected swimming pool users from April 1st to 30th, 2021 in Kombolcha Town. Data were collected using an interviewer-administered questionnaire and an on-the-spot observational checklist. The collected data were entered into EpiData version 4.6 and exported to SPSS version 25 for data cleaning and analysis. Determinants of health-related behaviors were identified by using a multivariable logistic regression model at a p-value < 0.05. Results The overall good health-related behavior among swimming pool users was 41.4% (95% CI: 33.6-49.3). Out of the total 140 swimming pool users, 55% (95% CI: 46.4-62.9) had good knowledge about health risks during swimming. Good knowledge about health risks during swimming (AOR = 9.64; 95% CI: 3.14-29.61), educational status of college or above (AOR = 6.52; 95% CI: 1.76-24.10) and age > 28 years (AOR = 6.49; 95% CI: 2.34-18) were the factors significantly associated with good health-related behaviors. Conclusion The findings of the study showed that the majority of the swimming pool users had poor health-related behaviors. Thus, the Kombolcha Town Health Bureau and swimming pool managers should give attention to this population to enhance health-related behaviors by addressing the significant predictors.

Introduction

A swimming pool is a water body used for recreational, medical, or sporting purposes (1). Beyond their uses, swimming pools expose people to a variety of health risks related to microbial and chemical contamination, or to drowning and injury. Of these health risks, viral gastroenteritis, hepatitis A, diarrhea, and Legionellosis are the most common (2). The health risks associated with attending swimming pools are public health challenges in both the developing and the developed world. Of the 381 outbreaks attributed to waterborne infections, nearly half (49%) occurred in New Zealand, 41% in North America and 9% in Europe (3). In the United States, of the 81 outbreaks in 2009-2010, 57 involved treated recreational water and 24 involved untreated recreational water (4). At least 1,030 cases and 40 hospitalizations were attributed to treated recreational water in 2011-2012 (5). Pool water can be contaminated with a variety of microbial pathogens, the most common being Staphylococcus aureus. Staphylococcus aureus causes infections such as osteomyelitis, pneumonia, conjunctivitis, and urinary tract infections in humans (6,7). Besides microbiological contamination, pools can also be contaminated with chemicals. These chemicals enter the pool through sweat, urine, dirt, lotions, and sanitizer byproducts. Pharmaceuticals and personal care products (PPCPs) can also be introduced into swimming water from the body surface or swimwear (8). The risk of contracting recreational water-associated diseases depends on ingestion rate, age, and sex. Swimming pool users ingest about 32 ml of water per hour, and children swallow about four times more water than adults. By sex, men tend to swallow more water than women (9).
Men swallowed on average 27-34 ml per swimming event, women 18-23 ml, and children 31-51 ml (10). Participants in sports with a lot of water contact are also at high risk of recreational water-associated diseases (11). These risk factors create opportunities for pathogenic microorganisms and chemicals to be ingested with the water, creating health risks for pool users. Pathogenic microorganisms in swimming pool water can be unintentionally ingested during swimming, posing a risk of acute gastrointestinal illness (AGI) (11). Meta-analysis reports have also shown that swimming pool users exposed to recreational water have a higher risk of respiratory illness than non-swimmers (12). It has been estimated that every year more than 50 million cases of severe respiratory disease occur as a result of swimming in polluted swimming pool water (13). Skin infections can also be caused by polluted swimming pool water (14,15).

The risk of illness and injury can be reduced by practicing various health-related behaviors. Among these behaviors, a pre-swim shower helps to remove traces of sweat, urine, fecal matter, cosmetics and other potential water contaminants. Another health-related behavior is using the toilet before swimming, which helps to minimize urination in the pool and accidental fecal releases. WHO guidelines for safe recreational water environments also recommend a pre-swim footbath to minimize the transfer of dirt into the pool water, and goggles to prevent the entrance of microorganisms into the eyes during swimming (16). Moreover, appropriate treatment of pool water and raising the awareness of swimming pool operators about health-related behaviors are important in reducing health risks in pool water (16,17).

Although health-related behaviors are essential in reducing health risks in pool water, a study in the USA found that only 57% of swimming pool users showered before entering the pool (18). A study in Italy also found that only 65% of swimming pool users always practice a pre-swim shower (19). A similar result was reported by another Italian study, in which the pre-swim shower rate was 69%; in that study, swimming caps and proper footwear were the dominant practices among Italian swimming pool users (20). Moreover, a study in Canada reported 78.2% showering before swimming (21). Without a pre-swim shower, the chlorine compounds used to disinfect pool water react with organic matter such as sweat, urine, and personal care products brought in by swimmers; one study reported that pre-swim showering limited the concentration of dissolved organic carbon (DOC) formed by this reaction by 27% (22). In addition to reducing such chemical by-products, showering before swimming can reduce microbial contamination in pool water (23).

Several studies have been conducted on swimming pool water quality in Ethiopia as well as in other countries, but they gave little emphasis to healthy swimming behaviors (24-30). Moreover, the previous studies on health-related behaviors were concentrated in developed countries; studies in developing countries, particularly in Ethiopia, on swimming pool users' health-related behaviors are lacking. Yet swimming pool users' health-related behavior is important in reducing biological and chemical contamination of swimming pools.
Furthermore, swimming pool users are the high-risk group for contracting and transmitting swimming pool water-associated diseases (31). Thus, this study was designed to address the information gap by determining health-related behaviors and associated factors among swimming pool users in Kombolcha Town, Northeastern Ethiopia.

Study setting
The study was conducted in three swimming pools in Kombolcha Town. Kombolcha is a town located 376 km northeast of Addis Ababa and 25 km from Dessie City, at an altitude of 1,857 meters above sea level. The estimated total population of Kombolcha Town was 126,144 (32).

Study design, period, and population
This cross-sectional study was conducted from April 1st to 30th, 2021 among swimming pool users in Kombolcha Town, Northeastern Ethiopia. The source population was all swimming pool users swimming in the pools of Kombolcha Town; the study population was all selected swimming pool users swimming in these pools.

Sample size determination and sampling techniques
The sample size was determined using the single population proportion formula, assuming a proportion of good health-related behaviors of 50% (as no similar studies had been conducted), a 95% CI and a 5% margin of error. After adding a 10% non-response rate, the final sample size was corrected to 145. The total sample size was first distributed proportionally across the three swimming pools, which had 75, 57, and 62 regular swimming pool users, respectively. For each swimming pool, the proportionate number of study participants was determined using n = (nf/N) × ni, where ni is the number of swimming pool users in each swimming pool, nf the total sample size, and N the total number of swimming pool users in Kombolcha Town. By proportional allocation, 56 participants were drawn from the first swimming pool, 43 from the second, and 46 from the third. After the proportional allocation, the first swimming pool user was selected by the lottery method, and subsequent swimming pool users were selected from the respective swimming pools using a simple random sampling technique (a worked sketch of this allocation is given below).
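The proportional-to-size allocation just described can be checked with a few lines of arithmetic. The pool sizes and the corrected sample size come from the text; the rounding rule is an assumption, chosen because it reproduces the reported allocations:

```python
# Proportional-to-size allocation of the sample across the three pools.
# Pool sizes (ni) and the corrected sample size (nf) are from the text;
# rounding to the nearest integer is an assumption that matches the study.
pool_sizes = [75, 57, 62]             # regular users per swimming pool (ni)
nf = 145                              # final corrected sample size
N = sum(pool_sizes)                   # total swimming pool users (N = 194)

allocations = [round(nf * ni / N) for ni in pool_sizes]
print(allocations, sum(allocations))  # -> [56, 43, 46] 145, as reported
```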
Health-related behaviors
Swimming pool users who scored at or above the mean on the 8 health-related behavior questions were classified as having good health-related behaviors, whereas those who scored below the mean were classified as having poor health-related behaviors.

Knowledge about health risks during swimming
Swimming pool users who scored at or above the mean on the 7 knowledge questions were classified as having good knowledge about health risks during swimming, whereas those who scored below the mean were classified as having poor knowledge.

Data collection tools and quality assurance
The data were collected using an interviewer-administered questionnaire and an on-the-spot observational checklist adapted from WHO guidelines and published articles (16, 20, 34). The tool consisted of three sections. Part I covered socio-demographic factors; Part II, knowledge about health risks during swimming; and Part III, health-related behaviors while attending swimming pools. The questionnaire was prepared in English, translated into the local language Amharic for data collection, and then retranslated back to English to ensure consistency. A pre-test with 5% of the selected swimming pool users was done in Dessie City. The reliability of the questionnaire was checked based on the pre-test results, and the questionnaire was modified based on the reliability test results and feedback from experts.

Two data collectors and one supervisor, all with previous data collection experience, were recruited. A 1-day training was given to the data collectors and supervisor on the method of extracting the needed information, how to record the information on the structured questionnaire and checklist, the ethical aspects of approaching participants, the aim of the study, the contents of the questionnaire, and COVID-19 precautions during data collection. Supervision was conducted daily by a degree-holder in Environmental Health.

Data management and statistical analysis
The collected data were entered into EpiData version 4.6 and exported to the Statistical Package for Social Sciences (SPSS) version 25.0 for data cleaning and analysis. Descriptive statistics such as frequencies and percentages were determined for categorical variables, while means with standard deviations were determined for continuous variables. Binary logistic regression was used to assess the crude association of each independent variable with the dependent variable. Variables with a p-value < 0.25 in the bivariable analysis [COR (crude odds ratio)] were entered into the multivariable logistic regression analysis [AOR (adjusted odds ratio)]. In turn, variables with p-values < 0.05 were considered significantly associated with health-related behaviors at a 95% CI. Multicollinearity among independent variables was checked using the standard error at a cut-off value of 2, and no multicollinearity was found. Model fitness was checked using the Hosmer-Lemeshow test, and the model was fit.

Socio-demographic characteristics
A total of 140 swimming pool users completed the survey, a response rate of 97%. Of all swimming pool users, 103 (73.6%) were male and 37 (26.4%) were female. Regarding age, 71 (50.7%) were ≤ 28 years and 69 (49.3%) were > 28 years, with a mean age of 28 years. Overall, 28 (20%) of the swimming pool users had a primary education and 69 (49.3%) had a college education or above (Table 1).

Knowledge about health risks during swimming
To determine participants' knowledge about health risks during swimming, 7 items with "correct" or "incorrect" response options were used. A correct response to an item was assigned 1 point and an incorrect one 0 points, so the total score ranged from 0 to 7. Of the 140 swimming pool users, 55% (95% CI: 47.1-63.6) had good knowledge about health risks during swimming while 45% (95% CI: 36.4-52.9) had poor knowledge (Table 2).

Compliance with health-related behaviors
To determine health-related behaviors, participants were asked 8 questions with "always," "sometimes," and "never" response options. "Always" was scored 2 points, "sometimes" 1 point, and "never" 0 points, so the total health-related behaviors score ranged from 0 to 16.
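The scoring and mean-split classification described above can be expressed compactly. This is a minimal sketch only: the two respondents' answers below are invented for illustration, while the point values and the mean cut-off follow the text:

```python
# Mean-split classification of knowledge (0-7) and behavior (0-16) scores.
# The two respondents' answers are illustrative, not study data.
def mean_split(scores, label):
    cutoff = sum(scores) / len(scores)          # sample mean as cut-off
    return [("good" if s >= cutoff else "poor") + " " + label for s in scores]

# Knowledge: 7 items, correct = 1, incorrect = 0.
knowledge = [sum(a) for a in ([1, 1, 0, 1, 1, 0, 1], [0, 1, 0, 0, 1, 0, 0])]

# Behaviors: 8 items, always = 2, sometimes = 1, never = 0.
POINTS = {"always": 2, "sometimes": 1, "never": 0}
answers = (["always"] * 5 + ["sometimes"] * 3, ["never"] * 6 + ["sometimes"] * 2)
behavior = [sum(POINTS[r] for r in resp) for resp in answers]

print(mean_split(knowledge, "knowledge"))   # ['good knowledge', 'poor knowledge']
print(mean_split(behavior, "behavior"))     # ['good behavior', 'poor behavior']
```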
The proportion of good health-related behaviors among swimming pool users was 41.4% (95% CI: 32.9-49.3), while more than half (58.6%) of the swimming pool users had poor health-related behaviors. Only 35.7% of the swimming pool users always used the toilet before swimming, and nearly half (46.4%) practiced a pre-swim shower (Table 3).

Factors associated with health-related behaviors
In the multivariable analysis, good knowledge about the health risks during swimming (AOR = 9.64; 95% CI: 3.14-29.61), educational status of college or above (AOR = 6.52; 95% CI: 1.76-24.10) and age > 28 years (AOR = 6.49; 95% CI: 2.34-18) were significantly associated with good compliance with health-related behaviors among swimming pool users (Table 4).

Discussion
This study focused on the health-related behaviors of swimming pool users, on the premise that swimming pool water is very likely to be contaminated as a result of unhealthy behaviors of swimming pool users. The study revealed that the overall good health-related behavior score of the swimming pool users was 41.4%, and that good knowledge about health risks during swimming, educational status of college or above and age > 28 years were significantly associated with good health-related behaviors.

Urination in the pool water and accidental fecal releases can be minimized by using the toilet before swimming (16). However, in this study, only 35.7% of the swimming pool users always used the toilet before swimming. A pre-swim shower is essential to reduce the risk of biological and chemical contamination of swimming pool water; it reduces the micro-organisms, sweat, and chemicals that swimming pool users transfer to the water, making the water easier to disinfect (16, 35). Despite this, only 46.4% of the study participants reported always taking a pre-swim shower, which is lower than the rates reported in two Italian studies (65 and 69%) (19,20), in Canada (78.2%) (21) and in the United States (57%) (18). The difference might be due to differences in study settings, study periods, socio-demographic characteristics, and regulatory factors.

The WHO guidelines recommend a footbath before entering the swimming pool and the use of goggles during swimming (16). In this study, only 12.1% of participants reported a footbath before swimming, and the use of goggles was reported by 7.9% of swimming pool users. In contrast, 69% of Italian swimming pool users practice a footbath before swimming and 47.5% of Indian swimming pool users use goggles during swimming (19,36). In the present study, 48.6% of the swimming pool users reported that they always avoid using cosmetics in the swimming pool water and 45.7% avoid swimming when ill with sickness or diarrhea. Furthermore, only 10% and 9.3% of the swimming pool users always used a swimming cap and proper footwear, respectively. In contrast, a study from Italy revealed that swimming caps and proper footwear were the dominant practices (20). This deviation may be due to differences in the socio-economic characteristics and way of life of the study populations.

Good knowledge about health risks during swimming is a key factor in achieving good health-related behaviors, yet this study showed that only 55% of the swimming pool users had good knowledge.
The current study found that swimming pool users with good knowledge about health risks were 9.64 times more likely to have good healthy swimming behaviors than those with poor knowledge. Educational level was also associated with good compliance with healthy swimming behaviors: swimming pool users with an educational status of college or above were 6.52 times more likely to have good healthy swimming behaviors than those with primary-level education. Knowledge and educational level may be associated with compliance because educated and knowledgeable people are in a better position to access information about healthy swimming behaviors. Thus, swimming pool users need to be made aware of health-related behaviors and should be encouraged to adopt healthy swimming behavior to protect themselves and other swimming pool users from the risks associated with swimming pools.

Moreover, age > 28 years was significantly associated with good healthy swimming behaviors among swimming pool users, which agrees with previous studies in other countries showing that older individuals exhibit better healthy swimming behaviors (20,37). Swimming pool users aged > 28 years were 6.49 times more likely to have healthy swimming behaviors than those in the lower age group. This could be because adherence to rules is related to age (19), indicating that older people are more likely to apply better healthy swimming behaviors, which may prevent health risks during swimming. Behavioral intervention programs should therefore give particular consideration to younger swimming pool users.

This study has certain limitations. First, participants may have given socially desirable responses, as the study used self-reported data (38). The difficulty of comparing the results with those from other study areas is a further limitation. In addition, symptoms and morbidities suffered by swimming pool users in relation to pool attendance were not studied. Moreover, this study included only outdoor swimming pool users, which may limit the conclusions and the generalizability of the findings to indoor swimming pool users. Furthermore, the findings may not represent the situation at the national level, as the study was conducted only in Kombolcha Town. Despite these limitations, to the best of my knowledge no other study has investigated the extent of healthy swimming behaviors and associated factors among swimming pool users in Ethiopia, including in Kombolcha Town. Understanding the determining factors can help improve healthy swimming behaviors among swimming pool users in Kombolcha Town.

Conclusion
In this study, only 41.4% of the swimming pool users had good health-related behaviors. Factors significantly associated with good health-related behaviors were good knowledge about the health risks during swimming, educational status of college or above, and age > 28 years. Overall, health-related behaviors were relatively poor and require further improvement. Thus, Kombolcha Town Health Bureau and swimming pool managers should give attention to this population to enhance health-related behaviors by addressing these significant predictors through continuous supervision and awareness creation.
Swimming pool managers should also encourage healthy swimming behaviors through obligatory measures. Since the health-related behavior of indoor swimming pool users was not examined in the current study, the conclusions apply only to the health-related behaviors of outdoor swimming pool users. Future studies should include the health-related behaviors of indoor swimming pool users, and should also examine the symptoms and morbidities suffered by swimming pool users in relation to pool attendance.

Data availability statement
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Ethics statement
The Helsinki Declaration was followed during the study's execution. The Ethical Review Committee of the College of Medicine and Health Sciences at Wollo University provided ethical clearance. Following a request for assistance from the Kombolcha Town Health Bureau, the managers of the pools at the study sites gave their approval to carry out the study. Each participant chosen for the study was informed of its goal beforehand, and written consent was obtained. Swimming pool users who volunteered to take part were also informed of their right to withdraw at any time during the interview. The confidentiality of responses was maintained throughout the research process.

Author contributions
The author confirms being the sole contributor to this work and has approved it for publication.

Funding
This research was supported by Wollo University under grant number WU/20968/13.
Arab ESL Secondary School Students’ Spelling Errors
English spelling has always been described by many language researchers and teachers as a daunting task, especially for learners whose first language is not English. Accordingly, Arab ESL learners commit serious errors when they spell English words. The primary objective of this paper is to determine the types as well as the causes of spelling errors made by Arab ESL secondary school students. To collect the data, a fifty-word standardised spelling test was administered to seventy Arab student participants. The students' spelling errors were detected, analysed and then categorised according to Cook's (1999) classification of errors, namely substitution, omission, insertion and transposition. In total, 2,873 spelling errors of various categories were identified. The findings revealed that errors of substitution constituted the highest percentage of the students' errors. In addition, the study indicated that the main causes of the students' spelling errors were possibly attributable to the anomalous nature of the English spelling system, the Arab students' lack of awareness of English spelling rules, and L1 interference. Despite being conducted in an ESL context, the study was largely consistent with the findings of other studies carried out in Arabic EFL contexts. The findings suggest that spelling instruction should be emphasised while teaching English and should be integrated with the skills and subskills of reading, writing, pronunciation and vocabulary in order to develop students' spelling accuracy.

INTRODUCTION
Spelling is a complex written language skill, which requires a learner to possess a number of language abilities, including phonological, morphological and visual memory skills, knowledge of semantic relationships, and adequate knowledge of spelling rules (Staden, 2010). As such, learning to spell words correctly is an important activity for various reasons. One is that accurate spelling makes a written message clear to the reader; a writer should have good competency in spelling in order to convey a written message without distraction. Okyere (1990) emphasised that spelling is an essential skill to master because it allows for the clear expression of thought in any written text. Accordingly, spelling is considered one of the indispensable skills in written communication and a principal component of a total language arts curriculum. Warda (2005) stated that spelling also affects students' written performance: students with low spelling confidence and skills are expected to write less, and more plainly, than confident spellers do. Despite its importance, English spelling presents a considerable challenge to most Arab learners (Al-Jarf, 2010; Bowen, 2011), and the large body of research on spelling error analysis in the Arab countries documents this difficulty. With this in mind, the primary goal of the current study is to investigate the spelling errors committed by the Arab secondary school students at the Saudi School in Malaysia, where English has a strong presence in everyday spoken and written communication, and where Arab ESL school students are anticipated to possess a moderately higher level of writing skills, especially in spelling, than many Arab EFL students living in the Arab countries. Accordingly, this study proposes to accomplish two objectives:
1. to identify the major types of spelling errors made by Arab ESL secondary school students;
2. to explore the main causes underlying the Arab ESL secondary school students' spelling errors.

LITERATURE REVIEW
Researchers in the field of applied linguistics have exerted considerable effort to explore effective and practical procedures for resolving pedagogical as well as learning difficulties. Their scholarly research has led to the emergence of three widely known approaches to learners' performance: Contrastive Analysis (CA), Error Analysis (EA) and Interlanguage (IL). To begin with, the approach of CA compares and contrasts two linguistic systems, e.g. phonetic or syntactic, to surmount the L2 difficulties confronting language learners and teachers. The central premise of CA is that language is a set of habit formations, rather than rule formations, which can be learnt through imitation, practice and reinforcement (Ellis, 1985; Richards & Schmidt, 2010). This approach considers the learner's first language a primary cause of the difficulties encountered while learning L2. Thus, errors are expected to occur when a learner negatively transfers linguistic elements, e.g. sounds or structures, from his first language to the target language (Selinker and Gass, 2008). However, empirical research carried out by scholars such as Corder (1981), James (1998), and Al-Jarf (2010) indicated that learners' errors cannot merely result from mother tongue interference. In fact, most of the errors which second language learners make suggest that they are gradually developing an L2 rule system, i.e. L2 learners pass through stages of learning whose errors vary from one developmental stage to another (Dulay et al., 1982).

Discontent with the CA approach shifted scholars' focus to a more justifiable and effective procedure for analysing learners' errors, i.e. Error Analysis. Corder (1981), the proponent of EA, and his followers view learners' errors as evidence of how a learner's language evolves. According to Benati and VanPatten (2010), EA is a tool that incorporates a set of procedures for identifying, describing and explaining the errors of second language learners. Compared with CA, the approach of EA enables language researchers to understand the nature of the L2 learning process more deeply. Researchers such as Keshavarz (2003), Nzama (2010) and Zawahreh (2012) assert that conducting EA studies is highly significant because it provides researchers with information about the language learning process and how it develops. It also helps teachers identify the difficulties which students encounter while learning L2. In addition, EA is helpful in preparing tests, classroom activities and suitable teaching materials (Sridhar, 1980).

In language learning, scholars have offered many definitions of errors. For example, Corder (1981) describes an error as a systematic defect caused by a learner's lack of linguistic competence. Along the same lines, Ferris (2011: 3) defines errors as 'morphological, syntactic, and lexical forms that deviate from rules of the target language, violating the expectations of literate adult native speakers.' Thus, a learner's errors result from a lack of language knowledge and awareness rather than from performance. In this paper, a spelling error refers to any inaccuracy in English words resulting from the student's lack of knowledge of phonology, morphology, orthography and semantics.
Whilst EA focuses solely on the erroneous forms made by the learner in relation to their mother tongue and target language, interlanguage (IL) is described as 'an autonomous linguistic system in its own right that evolved according to innate and probably universal processes' (Han & Tarone, 2014: 8). The major feature which makes the IL hypothesis essentially distinct from CA and EA is that 'it is wholly descriptive and avoids comparison' (James, 1998: 6). Saville-Troike (2006) states that an interlanguage has four salient characteristics: it is systematic, dynamic, variable and reduced in both form and function.

With regard to learners' spelling errors, previous studies carried out in the Arabic EFL context (Al-Jarf, 2008; Al-Karaki, 2005; Bahloul, 2007, and others) revealed that the spelling errors of Arab EFL learners could result from a number of possible causes. For example, researchers such as Al-Karaki (2005), Al Jayousi (2011) and Ahmad (2013) claim that the irregularity of the English spelling system could be the primary cause of EFL Arab learners' spelling errors. Al-Karaki (2005) identifies six possible causes of Arab EFL learners' errors, namely: pronunciation (i.e. the non-phonetic nature of English); differences between the sound systems of English and Arabic; overgeneralization; the inconsistent nature of English word derivation; incomplete application of English spelling rules, or lack of knowledge of their exceptions; and performance errors. Al Jayousi (2011) divided the spelling errors of Arab EFL learners into four main categories:
1. Irregularity of English, covering errors resulting from the lack of connection between sounds and letters, such as omitting silent letters in words like knew and light.
2. Mother tongue interference, covering errors caused by the linguistic differences between the English and Arabic systems, such as substituting the sound /p/ for /b/, e.g. bark for park, or /v/ for /f/, as in fan instead of van.
3. Lack of knowledge of spelling rules and their exceptions, seen in the incorrect application of rules such as plural formation, e.g. (*halfs - *partys) instead of (halves and parties).
4. Performance errors, which occur due to tiredness or haste, such as writing (*fo rather than of).

Despite the global status which English enjoys as the language of communication, science and business in today's world, the review of the spelling literature revealed that most learners of English throughout the world face considerable difficulties in English spelling, especially in EFL contexts, i.e. the Arabic-speaking countries, where exposure to English is limited and it is merely used as a school subject. Benyo (2014) explored the English spelling errors committed by first-year students at Dongola University to discover the factors behind these problems. To collect the data, two spelling tests (pre- and post-intervention) were administered to 200 Sudanese EFL students in two different faculties. The pre-intervention test was given to the students during their first semester, whereas the post-intervention spelling test was administered after two months of the second semester. The study revealed that students face difficulties with English vowel sounds as well as some English sounds which do not exist in Arabic. The study also indicated that the students' unawareness and overgeneralization of English spelling rules might be another primary cause of their spelling errors. Likewise, Alhaisoni et al.
(2015) scrutinized the English spelling errors of 122 male and female EFL Saudi students at Ha'il University, aged from 18 to 20 years. The students were asked to choose one of four suggested topics and write a well-organised and coherent essay. The researchers categorised the learners' spelling errors into omission, substitution, insertion and transposition. The data revealed that the students committed 1,189 spelling errors; errors of omission represented the highest percentage at 39.6% (462 errors), followed by substitution errors at 34.9% (429 errors). Most of the errors were attributed to the wrong use of vowels and to pronunciation. The researchers state that the main reasons for the errors are the irregularity of English spelling, clearly visible in the lack of phoneme-grapheme correspondence (and vice versa), as well as the students' mother tongue interference.

Similarly, Albalawi (2016) investigated the common spelling errors committed by 80 Saudi female EFL students studying English as an essential requirement for beginning their academic study at Prince Fahad Bin Sultan University. The data were collected through a writing task as well as an English spelling test. The researcher classified the students' spelling errors into four categories: substitution, omission, insertion, and transposition. The analysis established that errors of omission (59%) constituted the highest proportion, followed by substitution errors (28.9%), whereas the transposition category had the lowest frequency of errors (4.3%). The students' spelling errors were attributed to a number of causes, including the wrong use of English vowels, mispronunciation, the irregularity of the English orthographic system, and mother tongue interference.

In another study, Hameed (2016) investigated the spelling errors which Saudi students make in English while writing. The subjects were 26 Saudi EFL university students, and the data were collected via a fifty-word dictation. The analysis of the students' responses showed a concentration of errors around vowel sounds, diphthongs and words containing silent letters; about 93% of the responses turned out to be incorrect. In addition, learners applied their knowledge of the mother tongue (a phonetic-based language) to their English learning experience. As far as error type is concerned, the findings revealed that the students' substitution errors were the most frequent, followed by omission, transposition and then insertion. In the same way, another study examined and categorised the spelling errors of introductory-year students at Tabuk University in Saudi Arabia. That study included 45 EFL Saudi participants, and the students' spelling errors were classified into three categories: omission, substitution and addition. The findings showed that spelling errors may be related to the non-phonetic nature of English spelling as well as the differences between the sound systems of English and Arabic.

All the aforementioned studies explored spelling errors committed by Arab students in different Arab countries (e.g. Saudi Arabia and Sudan) where English is a foreign language, i.e. not used in everyday written and spoken interactions. The studies revealed that errors of omission and substitution constitute the highest percentage of Arab students' spelling errors.
They also indicated that Arab students had grave difficulty in representing English vowels correctly. This could be due to the inconsistency between English phonemes and graphemes as well as the students' mother tongue interference. All the studies cited above are related to the current study in that they pursued similar objectives, i.e. to examine Arab EFL spelling errors in writing. However, this study differs from them in that it examines the types and causes of the spelling errors made by Arab ESL secondary school students at the Saudi School in Kuala Lumpur. Unlike in the EFL context, English is used widely in Malaysia and is considered an official second language, used on a daily basis in communication and business contexts after Bahasa Malaysia, the official language of the country (Thirusanku & Yunus, 2014).

METHODS
Since the main objective of the current study was to investigate the types and causes of spelling errors made by Arab ESL secondary school students at the Saudi School in Kuala Lumpur, the four-stage procedure proposed by Corder (1974, as cited in Ellis, 1994) for data collection and analysis was adopted, as follows: 1. collection of a sample of learner language, i.e. the spelling errors made by the participants in this study; 2. identification of learners' errors; 3. description of learners' errors; and 4. explanation of learners' errors.

Data of the study
The data comprise spelling errors collected via a 50-word spelling test administered to 70 male students attending the Saudi School in Kuala Lumpur. The male students were identified and selected via purposive sampling. The students come from different Arab (e.g. Saudi Arabia, Syria and Iraq) and non-Arab countries (e.g. Malaysia and Singapore), and their ages range from 16 to 18 years. To control for variation in the sampling, several selection criteria were applied. Firstly, the students are Arab secondary school students who have studied at the Saudi School in Kuala Lumpur, Malaysia for at least two years. Secondly, the students must be able to communicate well in English; this was gauged by a short speaking test, which showed that the students' speaking ability enabled them to communicate well with others. Female students were not included in the study due to the gender segregation policy implemented in the school; accordingly, no access was given to female students.

Data Collection and Administration
In this study, a 50-word standardised spelling test developed by Sacre and Masterson (2000) was administered orally in a sixty-minute session. During the administration of the test, each target word was read out by the examiner, followed by a meaningful sentence containing the word in context to avoid confusion in recognising the words. The students were only required to write the fifty target words on a blank answer sheet. Each word was repeated three times to allow the students sufficient time to revise and check their responses. Once the answer sheets were submitted, the students' responses were scored as either correct or incorrect, and the students' spelling errors in the corpus were then identified. It is worth mentioning that Corder (1981) made a clear distinction between the two notions 'errors' and 'mistakes': 'errors' are failures in competence, whereas 'mistakes' are failures in performance.
He also adds that 'errors' are important because they reflect underlying knowledge, whereas 'mistakes' are not, as they occur due to the learner's memory lapses and physical states such as tiredness or nervousness; 'mistakes' thus do not exhibit the learner's internal linguistic knowledge. Another essential difference is that 'errors' are not self-rectifiable, i.e. they cannot be corrected by the learner himself, whereas 'mistakes' are self-correctable (James, 1998). During the spelling test of this study, each target word was accompanied by a meaningful sentence and was dictated three times, and the students were given sufficient time for correction and revision. Despite the time allotted for word repetition and self-correction, many students were unable to correct the misspelt words. As a result, the students' incorrect spellings were considered 'errors', not 'mistakes'. In fact, the students' inability to self-rectify their errors also reflects their lack of competence in English.

After the students' errors in the corpus were collected and identified, they were classified into different categories according to Cook's classification (1999), which divides students' errors into four main categories: substitution, omission, insertion and transposition. Subsequently, the possible causes of the errors were explained, which is also one of the objectives of the study. In EA, explaining the causes of learners' errors is an exacting task because errors can be attributed to different internal and external factors (Dulay et al., 1982). Corder (1973) states that L1 negative transfer and a learner's false hypotheses are clear indications that explain a learner's errors. In relation to this, James (1998) proposed four primary causes of errors: 1. interlingual errors, which result from the learner's first language; 2. intralingual errors, which are attributable to the target language, e.g. overgeneralisation and false analogy; 3. communication strategy-based errors, which result from using too many words to describe the target word (this happens when a learner cannot recall a specific word and attempts to explain it in his own words); and 4. induced errors, which result from classroom situations such as teacher talk, materials and exercises.

Data Analysis
The students' spelling errors were categorised according to Cook's classification of errors, which includes: 1. substitution, which occurs when the learner replaces the right form with an incorrect one, as in sboon for spoon; 2. omission, the absence of a letter that must appear in a well-formed utterance, as in lit for light; 3. insertion, which takes place when an item is incorrectly inserted, as in firist instead of first; and 4. transposition, which is caused by reversing the order of two or more letters, as in fromation for formation. (A minimal sketch of how such a categorisation can be automated is given below.)
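Cook's four categories map naturally onto edit operations between the target word and the student's attempt. The following is a minimal sketch of such an automated categorisation, not the procedure used in the study; it uses Python's standard difflib plus a simple adjacent-swap test for transposition:

```python
from difflib import SequenceMatcher

def classify_error(target: str, attempt: str) -> list[str]:
    """Label a misspelling with Cook's (1999) categories (sketch only)."""
    # Transposition: exactly two adjacent letters swapped, rest identical.
    if len(target) == len(attempt):
        diffs = [i for i, (t, a) in enumerate(zip(target, attempt)) if t != a]
        if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and target[diffs[0]] == attempt[diffs[1]]
                and target[diffs[1]] == attempt[diffs[0]]):
            return ["transposition"]
    labels = []
    for tag, *_ in SequenceMatcher(None, target, attempt).get_opcodes():
        if tag == "replace":
            labels.append("substitution")  # wrong letter(s) in place of right ones
        elif tag == "delete":
            labels.append("omission")      # letter in target missing from attempt
        elif tag == "insert":
            labels.append("insertion")     # extra letter added in the attempt
    return labels or ["correct"]

print(classify_error("spoon", "sboon"))             # ['substitution']
print(classify_error("light", "lit"))               # ['omission']
print(classify_error("first", "firist"))            # ['insertion']
print(classify_error("formation", "fromation"))     # ['transposition']
print(classify_error("environment", "inviroment"))  # ['substitution', 'omission']
```

As in the paper's own analysis, a single misspelt word can receive more than one label, as the last example (substitution plus omission in *inviroment) shows.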
As far as the causes of the Arab EFL students' spelling errors are concerned, the literature reviewed (Ahmad, 2013; Alhaisoni et al., 2015; Al-Jabri, 2003; Al-Jarf, 2008; Al Jayousi, 2011; Al-Karaki, 2005; Al-Mezeini, 2009; Alzuoud, 2013; Bahloul, 2007; Benyo, 2014; Hameed, 2016) indicates that they could result from the following likely causes:
1. The irregular orthographic system of English, clearly apparent in the lack of correspondence between English phonemes and graphemes and vice versa. For example, the phoneme /k/ can be represented by different graphemes or digraphs such as <k> kit, <c> car, <ck> back, <cc> account, <ch> school, <q> quiet and so forth. This category also includes the omission of silent letters, as in know, night, writing and so on.
2. A lack of awareness of spelling rules, which could be attributed to the students' limited knowledge of English inflectional morphology, such as the inflectional suffixes -s, -ed and -ing, as in worries, stopped and planning.
3. First language negative transfer, which occurs as a result of linguistic interference between L1 and L2. For example, Arab EFL learners may incorrectly spell the words (vast - push) as (fast - bush). This substitution happens because the phonemes /v/ and /p/ hardly exist in Arabic.

RESULTS AND DISCUSSION
The results of the study are discussed below in the order of the study's objectives.

The Arab ESL Students' Types of Spelling Errors
The students' spelling errors were identified, counted and categorised into four major types: substitution, omission, insertion and transposition, as shown in Figure 1. The overall number of spelling errors of all types was 2,873. Among the errors identified, the substitution category was the most frequent, with a percentage of 43.2%, followed by errors of omission, which made up 39.8%. This confirms the findings of several studies (Al-Jabri, 2003; Al-Mezeini, 2009; Alzuoud, 2013; Hameed, 2016), which indicated that most Arab learners of English commit spelling errors by substituting or omitting a linguistic element, i.e. a letter or sound. The categories of insertion and transposition constituted the least common errors found in the corpus, with percentages of 10.5% and 6.2%, respectively.

The analysis of the types of spelling errors also enabled the researcher to identify the top three misspelt words in each type. It is worth noting that many misspelt words identified in the corpus contain multi-category errors, i.e. one word containing two or three types of spelling error. For example, the word environment was incorrectly written as *inviroment, whereby the grapheme <e> was wrongly substituted by <i> and the letter <n> was omitted. Accordingly, this word was included in two categories, substitution and omission, as shown in Table 1.

Table 1 lists the top three misspelt words by type. As far as substitution is concerned, the words circumference, entertained and environment were the most frequently misspelt. Most substitution errors appear to be vowel-based, which could be attributed to the students' incorrect pronunciation of English vowel sounds; consequently, the students were unable to spell the words correctly. This can clearly be seen in the misspelt word *sircomfrans, in which the graphemes <u - c> were incorrectly replaced by <o - s>. Similarly, the grapheme <e> was wrongly substituted by <i> in *intertaind and *inviroment. In the category of omission, the words environment, halves and stopped were the most frequently misspelt, as letters such as the silent <n, l> were left out; accordingly, the students incorrectly wrote *inviroment and *haves.
In fact, the phenomenon of silent letters, which is common in English spelling, hardly exists in Arabic. It is therefore to be expected that Arab learners of English omit silent letters while writing, as these letters are not pronounced. The word stopped was also misspelt as *stoped due to its double consonant: double consonants are represented by two consecutive letters but pronounced as one sound, as in apple, call, dress and so forth. Unlike English, Arabic has a phonetic-based spelling system in which words are written as they are pronounced. Thus, an Arab learner may misrepresent words containing double consonants and write, e.g., *aple, *cal and *dres instead of apple, call and dress. These two phenomena, i.e. silent letters and double consonants, are perplexing, as they increase the possibility of committing spelling errors. Previous studies (e.g. Alhaisoni et al., 2015) revealed that the highest percentage of Arab EFL students' spelling errors fell into the omission category, which could be attributed to the students' mispronunciation of English words as well as the inconsistent nature of English spelling, in which there is no one-to-one correspondence between graphemes and phonemes. Consequently, Arab students tended to omit silent letters and to misrepresent double consonants. The results of this study are consistent with the studies reviewed above, which identified similar types of spelling errors in students' writing.

With respect to insertion, the words altogether, misused and misunderstanding were wrongly written as *alltogether, *missused and *missunderstanding. Although the target words were presented with example sentences during the dictation to avoid confusion, such errors seem to stem from confusing homophonous words with which the students are familiar, i.e. all and miss. Consequently, the students falsely spelt the word altogether as if it were all together, inserting the grapheme <l>, and likewise incorrectly added the grapheme <s> to the derivational prefix mis-. Researchers (Al-Jabri, 2003; Al Jayousi, 2011, and many others) claim that English homophones pose a challenge for EFL students; accordingly, teachers should provide students with a meaningful context to help them spell the target words more easily.

The errors of transposition made up the fewest errors identified in the study. Such errors resulted from misplacing letters, as in *queitly, *traeuser and *advenuter rather than quietly, treasure and adventure. This could be attributed to the lack of correspondence between sounds and letters in English, i.e. one phoneme can have different representations, which appears confusing to Arab students, whose first language is highly phonetic. In this regard, the errors could have resulted from reversing the order of vowel letters when spelling the words.

The Likely Causes of the Arab ESL Students' Spelling Errors
In light of the literature reviewed (Ahmad, 2013; Alhaisoni et al., 2015; Al-Jarf, 2008; Al Jayousi, 2011; Al-Karaki, 2005; Alzuoud, 2013; Bahloul, 2007; Benyo, 2014; Hameed, 2016), the identified spelling errors of the Arab students could be due to intralingual and interlingual causes. The analysis attributed 1,304 of the spelling errors to three inferred possible causes (Figure 2), namely: 1. the anomalous nature of English spelling; 2. the students' lack of awareness of spelling rules; and 3. the students' L1 interference.
The anomalous nature of English spelling
The data indicated that most of the spelling errors made by the Arab ESL students in the Saudi School in Kuala Lumpur could result from the anomalous nature of English spelling, which accounted for 62.2% of the students' spelling errors. This cause comprises four sub-causes: (a) the mismatch between English phonemes and graphemes (24.4%), (b) misleading homophones (17.5%), (c) silent letters (14%) and (d) double consonants (6.2%). Firstly, the mismatch between phonemes and graphemes makes English spelling unpredictable and seemingly illogical; the phoneme /f/, for instance, can be written with the graphemes <f, ff, ph, ough>, as in fat, stuff, phone and tough, respectively. Secondly, misleading homophonous linguistic units, i.e. words and syllables having similar sounds but different spellings, are also perplexing due to the inconsistency between English sounds and letters. In this study, many students incorrectly wrote miss and full when spelling *missused and *respectfull. In addition, this mismatch is also found in words containing silent letters, such as night, knew, environment, halves and writing. Many studies on spelling errors (e.g. Al Jayousi, 2011; Hameed, 2016) revealed that Arab learners struggle with words containing silent letters, which could be attributed to the differences between the Arabic and English writing systems. Finally, the fewest errors in this category were caused by words containing double consonants, such as glasses, worried and surrounded.

The students' lack of awareness of spelling rules
Secondly, it was inferred that the Arab ESL students' lack of awareness of English spelling rules, especially the inflectional suffixes -es, -ed and -ing, could have led to incorrect spelling. Such errors constituted 19.7% of the total spelling errors detected in the corpus. Words such as *galssis, *chrchis, *tomatos, *halfs and *partys were misspelt due to the incorrect application of the -es inflectional suffix. Likewise, *stoped, *replyed and *worryed were wrongly written, perhaps due to some students' limited knowledge of the inflectional suffix -ed. Similarly, some students failed to add the inflectional suffix -ing accurately and hence wrote *planing, *writeng and *damageing. These results are consistent with Al Jayousi (2011) and Al-Karaki (2005), who revealed that Arab students' inadequate knowledge of spelling rules and morphological changes negatively affects their spelling accuracy. Indeed, awareness of English word structure decreases the possibility of committing spelling errors and makes for a proficient speller, which may in turn positively affect the students' writing quality.

The students' L1 interference
Previous studies on error analysis (Alhaisoni et al., 2015; Benyo, 2014; Hameed, 2016) revealed that Arab students' L1 interference might be a possible cause of their spelling errors. In this study, L1 interference accounted for the fewest errors identified in the corpus (17.9%). It was observed that some students tended to replace the graphemes <p, v, ch> with <b, f, sh>. Thus, spelling errors such as *reblied, *resbectful, *adfansher, *discofered, *shair and *shershes seem to be due to the incorrect replacement of English phonemes with their nearest Arabic counterparts.

CONCLUSION
The present study has attempted to identify the major types and causes of the spelling errors which the Arab ESL secondary school students made (see Figure 2 for the possible causes of the spelling errors).
The study revealed that the linguistic differences between English and Arabic could be one of the primary causes of the students' spelling errors. One of these differences is the representation of vowel sounds in English, with which Arab ESL students may not be familiar due to the different nature of the writing systems of the two languages. Moreover, the inconsistency between sounds and spelling can cause confusion that leads to spelling errors; such inconsistency between phonemes and graphemes may have negatively affected the students' ability to spell English words correctly. Consequently, the highest percentage of the students' spelling errors fell into the substitution category. Differences in the spelling systems are not restricted to vowel sounds, but also include consonants, especially those which hardly exist in Arabic, such as <p, v>, which can be spelt incorrectly due to their mispronunciation as /b, f/. In fact, such errors could also have occurred due to the students' L1 interference, whereby they negatively transfer similar linguistic elements from their L1 to the target language. The study also revealed that English words containing silent letters are perplexing because those letters are not pronounced and are therefore prone to omission in writing. As such, Arab ESL students should give such words considerable attention and extra practice while learning spelling, and teachers of English should concentrate on such words to minimise the possibility of these spelling errors. In addition, the study revealed that some Arab ESL students were unaware of English spelling rules, especially the -s, -ed and -ing inflectional endings, which accounted for the second largest number of the students' spelling errors. Owing to this unawareness of spelling rules, some Arab students may incorrectly substitute, insert, omit, or transpose letters while spelling English words. Though spelling errors may be regarded as trivial, for Arab learners they may lead to bigger problems in writing, and thus need to be given greater attention in order to help learners acquire the basics of writing in English. In this light, it is strongly recommended that formal spelling instruction be integrated with reading and writing lessons in the Arab school English curriculum in order to overcome the students' spelling deficiency at an early stage, which in turn would facilitate the enhancement of both young and adult Arab ESL/EFL learners' writing.
Velocity Distributions of Granular Gases with Drag and with Long-Range Interactions
We study velocity statistics of electrostatically driven granular gases. For two different experiments, (i) non-magnetic particles in a viscous fluid and (ii) magnetic particles in air, the velocity distribution is non-Maxwellian, and its high-energy tail is exponential, P(v) ∼ exp(−|v|). This behavior is consistent with kinetic theory of driven dissipative particles. For particles immersed in a fluid, viscous damping is responsible for the exponential tail, while for magnetic particles, long-range interactions cause the exponential tail. We conclude that velocity statistics of dissipative gases are sensitive to the fluid environment and to the form of the particle interaction.

Despite extensive recent studies, a fundamental understanding of the dynamics of granular materials still poses a challenge for physicists and engineers [1,2,3]. Remarkably, even dilute granular gases differ substantially from molecular gases. A series of recent experiments on granular gases, driven either mechanically [4,5,6,7,8,9,10] or electrostatically [11], reveals that the particle velocity distribution deviates significantly from the Maxwell-Boltzmann distribution law. In particular, the high-energy tail of the velocity distribution P(v) is a stretched exponential,

P(v) ∼ exp[−(v/v₀)^ξ],   (1)

with v₀ the typical velocity. The exponent ξ = 3/2 is observed in certain vigorous driving experiments [4,11]. Non-Maxwellian velocity distributions were also observed in experiments with a variety of geometries and driving conditions [5,6,7,8,9,10] and in numerical simulations [12,13,14,15,16,17,18]. Energy dissipation is responsible for this behavior, and this can be understood using a simple model: a thermally driven gas of inelastic hard spheres. For high-energy particles, there is a balance between loss due to inelastic collisions and gain due to the thermal driving. For hard-core interactions, kinetic theory predicts (1) with ξ = 3/2, in agreement with vigorous shaking experiments [19]. However, interactions between particles often do not reduce to simple hard-core exclusion. In this Letter, we study the effects of fluid environment and particle interactions on electrostatically driven granular gases. We perform experiments with particles immersed in a viscous fluid and with magnetic particles in air subjected to an external magnetic field. We find that the high-energy tail of the velocity distribution is characterized by (1) but with the exponent ξ = 1. We generalize the kinetic theory to situations with viscous damping and with long-range interactions and find that the experimental results are in line with the kinetic theory predictions. We conclude that velocity statistics in granular gases depend sensitively on the environment and on the form of the particle interaction.
Our experimental setup is similar to that in Refs. [20,21,22]; see the inset to Fig. 1. The particles are placed between the plates of a large capacitor that is energized by a constant (dc) or alternating (ac) electric field E = E_0 cos(2πft). To provide optical access to the cell, the capacitor plates were made of glass with a clear conductive coating. We used 11 × 11 cm capacitor plates with a spacing of 1.5 mm (big cell) or a 4 cm diameter by 1.5 mm cell (small cell). The particles are 165 µm diameter non-magnetic bronze spheres or 90 µm magnetic nickel spheres. The field amplitude E_0 varied from 0 to 10 kV/cm and the frequencies f were between 0 and 120 Hz. The total number of particles in the cell is on the order of 10^6. To control the magnetic interactions, the cell was placed inside a large 30 cm electromagnetic coil capable of creating a dc/ac magnetic field H of up to 80 Oe. The cell can be filled with a non-polar dielectric fluid (toluene) to introduce viscous damping. The electro-cell works as follows: conducting particles acquire a surface charge when they are in contact with a capacitor plate. If the electric force on a charged particle exceeds gravity, the particle travels upwards, recharges upon contact with the upper plate, and then falls down. This process repeats in a cyclic manner. By applying an ac electric field and adjusting its frequency f, one controls the vertical extent of the particles' motion by effectively turning them back before they collide with the upper plate, making the system effectively two-dimensional. We extracted horizontal particle velocities using high-speed video microscopy. Images were obtained in transmitted light at a rate of up to 2,000 frames per second from a camera mounted on a long-focal-distance microscope. Particle positions were determined to sub-pixel resolution. Inter-particle and particle-boundary collisions that introduce sudden changes in momenta were filtered out in a manner similar to Ref. [5]. An ensemble average for each of the velocity distributions was obtained from about 5 × 10^6 data points. We performed two sets of experiments: (i) electrostatically driven non-magnetic particles in a viscous fluid; (ii) electrostatically driven magnetic particles in air subjected to an external magnetic field. Some experiments were also performed with magnetic particles in fluid. Although the origin of the particle interaction is very different, both systems happen to show somewhat similar behavior: exponential asymptotic velocity statistics. For the fluid system, the exponential behavior results mostly from the dominant viscous drag. However, the effects of the hydrodynamic dipole-dipole interaction between particles moving in the fluid are of some importance: the hydrodynamic interaction between particles becomes comparable with the viscous drag if the particles are close enough or in contact [23]. This interaction has consequences for the high-velocity tail; see the discussion below. The ratio of the viscous drag force F_d to the gravity force F_g at the rms velocity is about 0.2–0.3 in toluene and less than 0.007 in air. Thus, viscous drag effects are clearly dominant in toluene. For the magnetic system, the exponential behavior is attributed to the dominant long-range dipole interaction, since the air drag is negligible. Simple estimates show that the magnetic dipole forces between particles dominate gravity if the inter-particle distance is smaller than three particle diameters. Thus, due to the remnant magnetization of the particles, the magnetic interaction is dominant even for H = 0.
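As an illustration of this analysis pipeline, the following sketch shows how tracked positions can be turned into an rms-normalized velocity PDF and how the tail exponent ξ of Eq. (1) can be fitted. The array names, units, and fit window are hypothetical choices, not details taken from the experiment:

```python
# Sketch: build a normalized velocity PDF from tracked particle positions
# and fit the high-energy tail with Eq. (1), P(v) ~ exp[-(v/v0)^xi].
import numpy as np
from scipy.optimize import curve_fit

def velocity_pdf(x, dt, bins=200):
    """x: (n_frames, n_particles) horizontal positions; dt: frame interval."""
    v = np.diff(x, axis=0).ravel() / dt       # finite-difference velocities
    v /= np.sqrt(np.mean(v**2))               # rms normalization, <v^2> = 1
    hist, edges = np.histogram(v, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist

def fit_tail(centers, pdf, vmin=2.0):
    """Fit ln P(v) = a - (v/v0)^xi on the tail |v| > vmin."""
    mask = (np.abs(centers) > vmin) & (pdf > 0)
    v, lnp = np.abs(centers[mask]), np.log(pdf[mask])
    model = lambda v, a, v0, xi: a - (v / v0)**xi
    popt, _ = curve_fit(model, v, lnp, p0=(0.0, 1.0, 1.0))
    return popt   # a, v0, xi: xi ~ 1 (viscous/magnetic), ~ 3/2 (hard spheres)
```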
Representative results for the fluid system are shown in Fig. 1. Pure toluene was used in most experiments. Further experiments were performed using a toluene/polystyrene mixture in order to control the viscosity of the solution, but no qualitative differences were found. Throughout this Letter, we analyze the distribution of the horizontal velocity components. The velocity is normalized such that the root-mean-square (rms) velocity equals one, ⟨v²⟩ = 1, and of course, the velocity distributions are symmetric, P(v) = P(−v). As shown in Fig. 1, the velocity distributions are all notably different from the Maxwellian distribution. Moreover, the best fit to Eq. (1) gives the value ξ = 1 in a wide range of parameters (driving amplitude and frequency). Remarkably, the velocity distributions for fluid-filled cells are different from those obtained for air-filled cells (where viscous drag is negligible), for which ξ = 3/2 with all other parameters the same [11]. Experiments with magnetic interactions were performed using nickel magnetic microparticles with an average size of about 90 µm (Alfa Aesar Company). The magnetic moment per particle at 80 Oe is 1 × 10^−5 emu; the saturated magnetic moment is 2 × 10^−4 emu per particle and the saturation field is about 4 kOe. A vertically oriented external magnetic field was applied in order to control the magnetic interactions: since the particles are multi-domain, and nickel is a soft magnetic material, the applied field can effectively increase the particles' magnetic moment. The corresponding velocity distributions are shown in Fig. 2. The magnetic field systematically widens the velocity distribution and enhances the exponential asymptotic decay of P(v) at high velocities. This observation is consistent with the fact that the applied magnetic field enhances the dipole-dipole magnetic interaction due to the magnetization of the particles. Comparing the fluid and the magnetic systems, we note that the velocity distributions have different cores, but the tails are exponential in both cases. In the course of the measurements we noticed that finite-size effects have a strong influence on the tail of P(v). We performed a number of measurements using a 4 cm diameter cell and about 3,000 particles. Under the same conditions, the velocity distribution in the larger cell has a more pronounced exponential tail (see Fig. 2). We also carried out several experiments combining features of the two systems, using magnetic particles in toluene solution, and obtained results consistent with the rest of our observations: non-Maxwellian velocity distributions with an exponential tail. Compared with the results for purely magnetic interactions, the range of exponential behavior of the tail becomes even more extended. We now compare the experimental results with the predictions of kinetic theory, the standard framework for describing granular gases. This requires a generalization of (1) to situations with long-range interactions. For particles interacting via the potential U(r) ∼ r^−σ, where r is the interparticle distance, the collision rate K grows algebraically with the normal component of the velocity difference ∆v, K ∝ (∆v)^λ with λ = 1 − 2(d−1)/σ, where d is the dimension of space [24]. Hard spheres, λ = 1, model granular particles with hard-core (σ ≡ ∞) interactions, while Maxwell molecules [25], λ = 0, model granular particles with a specific dipole interaction.
In two dimensions, relevant to our experiments, the collision rate effectively becomes independent of the relative velocity when σ = 2. First consider magnetic particles. To analyze the high-energy tail, we make the standard assumption that the forcing is thermal. For energetic particles, gains due to collisions are negligible, and losses due to collisions are balanced by the forcing. The high-energy tail of the velocity distribution is governed by this balance, D d²P/dv² ≃ v^λ P [27]. Consequently, the velocity distribution decays as a stretched exponential (1) with ξ = 1 + λ/2. In general, the velocity statistics are non-Maxwellian and the tails are over-populated with respect to a Maxwellian distribution. The upper limit, ξ = 3/2, is realized for hard spheres, and pure exponential behavior, ξ = 1, occurs for Maxwell molecules. For magnetic dipole interactions one has σ = 2; in this case, in two dimensions, the kinetic theory predicts a simple exponential tail, ξ = 1. In the following, we consider a representative data set for the fluid system (120 Hz) and for the magnetic system (44 Oe, 90 Hz). We display a single set because variations in P(v) among the different experimental conditions are relatively small: the kurtosis κ ≡ ⟨v⁴⟩/⟨v²⟩² varies by less than 5% among the different data sets. For the fluid system, the velocity distribution is very close to a pure exponential, as shown in Fig. 1; furthermore, the kurtosis κ = 6.2 ± 0.2 is within 3% of the value κ = 6 corresponding to a pure exponential distribution. Viscous damping is responsible for this behavior, and the nearly exponential distribution is consistent with the damping process v → ηv suggested by van Zon et al. [16]. This mimics viscous damping because v_n = v_0 η^n [16], with n the number of damping events, and n grows linearly with time. The damping rate is set by the frequency of collisions with the plates, but in the theory it can be set to one without loss of generality. When the viscous dissipation dominates over the collisional dissipation, the kinetic theory is modified as follows:

∂P(v)/∂t = D ∂²P/∂v² + η^−1 P(v/η) − P(v),   (2)

where the last two terms represent gain and loss due to drag and D is the diffusion coefficient. At large velocities, the gain term is negligible and, consequently, the tail is exponential, P(v) ∼ exp(−v/√D). Eq. (2) can be solved analytically, and there is a family of velocity distributions characterized by the single parameter η. When η → 0, the gain term in (2) is negligible and the distribution is purely exponential. This is reflected by the kurtosis, κ = 6/(1 + η²). For strong damping, η → 0, the value corresponding to a pure exponential distribution is realized, κ → 6. A Direct Simulation Monte Carlo (DSMC) method was used to solve the Boltzmann equation for Maxwell molecules. In the simulations, pairs of randomly chosen particles collide according to the inelastic collision rule v_{1,2} → v_{1,2} ∓ (1+α)/2 [(v_1 − v_2)·n̂] n̂, with α the restitution coefficient and n̂ the impact direction. In addition, particles are thermally forced, dv/dt = ζ, with ζ a white noise. Also, damping v → ηv with unit rate models the fluid effect. The simulation results represent an average over 10² runs in a system with N = 10⁷ particles.
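For concreteness, a minimal sketch of this DSMC procedure (thermally forced inelastic Maxwell molecules with damping events) might look as follows. The particle number, rates, and noise strength are illustrative choices, not the values used in this Letter:

```python
# Toy DSMC: forced inelastic Maxwell molecules in 2D with damping v -> eta*v.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000        # particles (the Letter used 10^7)
alpha = 0.0        # restitution coefficient
eta = 0.5          # damping factor; eta -> 0 gives a pure exponential
D = 1.0            # strength of the white-noise forcing
dt = 0.01
v = rng.normal(size=(N, 2))

for _ in range(10_000):
    # Maxwell molecules: partners chosen independently of relative velocity.
    m = int(0.5 * N * dt)
    i, j = rng.integers(N, size=m), rng.integers(N, size=m)
    ok = i != j
    i, j = i[ok], j[ok]
    theta = rng.uniform(0.0, 2.0 * np.pi, i.size)
    n = np.stack((np.cos(theta), np.sin(theta)), axis=1)   # impact directions
    dv = np.sum((v[i] - v[j]) * n, axis=1, keepdims=True)  # (v_i - v_j) . n
    kick = 0.5 * (1.0 + alpha) * dv * n
    np.subtract.at(v, i, kick)        # v_i -> v_i - kick (accumulates safely)
    np.add.at(v, j, kick)             # v_j -> v_j + kick
    # Thermal forcing dv/dt = zeta (white noise).
    v += np.sqrt(2.0 * D * dt) * rng.normal(size=(N, 2))
    # Damping events v -> eta*v at unit rate per particle.
    hit = rng.random(N) < dt
    v[hit] *= eta

vx = v[:, 0] / np.sqrt(np.mean(v[:, 0] ** 2))
print("kurtosis:", np.mean(vx**4) / np.mean(vx**2) ** 2)   # -> 6 as eta -> 0
```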
When viscous damping dominates over collisional dissipation, the distribution is nearly exponential (see Fig. 1), in very good agreement with the experiments. We note that even though the drag term dominates over the collision terms, the experimental results suggest that the collision rate is velocity independent, at least at large velocities, λ = 0. If this were not the case, the collisional loss term would dominate at some very large velocity and there would be a crossover to ξ > 1 [26]. But no such crossover is observed experimentally. We conclude that the results of the kinetic theory of forced Maxwell molecules with strong viscous damping are consistent with the experimental results for all velocities. For magnetic interactions, there is excellent agreement between the experiment and the theory of thermally forced Maxwell molecules, for which the collision rate is completely independent of the relative velocity (see Fig. 3). The kurtosis, κ = 3.6 ± 0.1, falls within 2% of the analytically known value κ = 3 + 18/33 ≅ 3.55 obtained in [27]. We note that there are no fitting parameters. The restitution coefficient α was set to zero because particle collisions in the experiments are strongly inelastic and because, as long as the dissipation is strong, there is only a weak dependence on α. Even though the tail of the distribution is close to a simple exponential, its core is approximately Maxwellian, as reflected by the kurtosis, which is much closer to the pure Maxwellian value of 3 than to the pure exponential value of 6. We conclude that for magnetic particles the collision rate becomes practically independent of the relative velocity and, consequently, that they are accurately modeled by Maxwell molecules. In summary, our main result is that the velocity statistics of forced granular gases depend sensitively on the fluid environment and on the nature of the inter-particle interactions. The two sets of experiments can be universally described by a specific version of the kinetic theory, Maxwell molecules, with a velocity-independent collision rate. Whereas dipole interactions are ubiquitous for the magnetic system, our studies indicate that the Maxwell model is also the only consistent way to interpret the fluid experiments: if hard spheres were used, the collision rate would grow linearly with the velocity and a ξ = 3/2 stretched-exponential tail would prevail at large velocities. No such crossover is observed, and this is strong evidence that the collision rate is velocity independent. Thus, although the core behavior is dominated by the damping, the tail behavior indicates that long-range interactions play a role. In view of this, the two experiments are complementary. We comment that it is difficult to validate experimentally that the driving is thermal in nature. However, the consistent agreement between the experiments and the theory supports this widely used modeling assumption. In addition, the excellent quantitative agreement between the magnetic-particle experiments and the Maxwell-molecule kinetic theory suggests that magnetic particles are an ideal experimental probe for the predictions of this analytically tractable theory, including, in particular, the transport coefficients [28]. We also propose that stretched exponential velocity distributions may be generic for dissipative gases with competing interactions, and may possibly be relevant for vastly different systems, such as dusty plasmas, colloids, and even star clusters, where long-range interactions (e.g., due to gravity) are mediated by short-range collisions.
Prenatal diagnosis of 2193 pregnant women with invasive indications for chromosomal abnormalities in southern China: a retrospective study
Background: Although a variety of non-invasive techniques are used for prenatal genetic screening and diagnosis, our knowledge remains limited regarding the relationship between high-risk prenatal indications and fetal chromosomal abnormalities. Methods: We retrospectively investigated the prenatal genetic screening and karyotype analysis results of pregnant women who had undergone invasive prenatal testing in the Prenatal Diagnosis Department of Meizhou People's Hospital between Jan. 1, 2015 and Dec. 31, 2019. We analyzed the frequencies of chromosome abnormalities in women with high-risk indications. Results: A total of 2,193 pregnant women who had undergone invasive prenatal testing were included in our analysis. Chromosomal abnormalities occurred in 10.3% of these women, and the rate increased with maternal age (P < 0.001). The frequencies of chromosome abnormalities varied for women with different high-risk indications: 10.5% (94/898) for abnormal ultrasound results, 3.3% (31/938) for positive serum screening test results, 61.4% (78/127) for positive NIPT results, 9.3% (13/140) for AMA, and 11.1% (10/90) for obstetric/family history. Follow-up data showed that 380 pregnant women opted for termination of pregnancy, including 211 (55.5%) due to karyotype abnormalities and 169 (44.5%) due to abnormal ultrasound findings. Conclusion: Our data suggest that the prenatal screening methods have high false-positive rates. NIPT is the most accurate non-invasive prenatal screening method. Apart from karyotype abnormalities, abnormal ultrasound results alone accounted for a large proportion of pregnancy terminations.

Introduction Invasive procedures, which mainly include chorionic villus sampling (CVS) and amniocentesis, are currently recommended for prenatal diagnosis of fetal chromosomal abnormalities (1). However, according to guidelines, CVS is performed after 10 weeks of gestation and amniocentesis at approximately 16 weeks of gestation (2,3). Meanwhile, karyotype analysis takes about 10 days, a period which places great psychological stress on pregnant women. Nowadays, a variety of non-invasive technologies, such as ultrasound, serum screening tests and non-invasive prenatal genetic testing (NIPT), are used in prenatal genetic screening and diagnosis (4). Ultrasound screening is recognized as a safe and convenient non-invasive procedure to detect fetal anomalies, e.g., heart abnormalities, bone abnormalities, brain abnormalities, and multi-system malformations (5). Previous studies suggested that increased fetal nuchal translucency (NT) values and nasal bone growth are closely associated with the occurrence of Down's syndrome (DS, trisomy 21) (6). Ultrasound examination combined with maternal serological screening using double markers, namely free beta human chorionic gonadotropin (free β-hCG) and pregnancy-associated plasma protein A (PAPP-A), or triple markers, namely free β-hCG, alpha fetoprotein (AFP) and unconjugated estriol (uE3), is commonly used to determine the risk of fetal aneuploidy of chromosomes 21 and 18 (7). Non-invasive prenatal testing (NIPT), which analyzes fetal genetic traits in maternal blood samples using DNA sequencing techniques, is considered a safe and accurate method for fetal aneuploidy detection, and is becoming increasingly accepted in clinical practice (8).
However, clinical guidelines also state that NIPT is a screening method rather than a diagnostic one (9). Still, for elderly pregnant women who have no intention of undergoing an invasive procedure, NIPT is recommended as a proper alternative. With the two-child policy fully implemented from 2016 in China, the number of pregnant women with advanced maternal age (AMA) has risen sharply (10). AMA is considered an important risk factor for fetal chromosomal abnormalities (11). Meizhou city, located in southern China, is known as the world capital of the Hakkas, with a unique culture, dietary habits and physiological characteristics (12). Due to its underdeveloped economy and traffic difficulties, the population in this area remains stable and suffers badly from various diseases, such as thalassemia, cancers and infectious diseases. The two-child policy has sharply increased the number of pregnant women in recent years, especially women with AMA, and thus highlights the importance of prenatal diagnosis. Although many studies have focused on evaluating the efficiency and accuracy of prenatal screening techniques, our knowledge remains limited regarding the relationship between high-risk prenatal indications and fetal chromosomal abnormalities. The present study investigated the frequencies of fetal chromosomal abnormalities in Hakka pregnant women with high-risk indications. We aimed to determine the relationship between high-risk indications and chromosomal abnormalities. The findings would provide useful information for pregnant women with high-risk indications. Invasive prenatal tests were performed by either amniocentesis or CVS. Invasive prenatal testing was conducted for the following indications: abnormal ultrasound, positive serum screening result, positive NIPT outcome, AMA (age ≥ 35 years), and obstetric/family history (record of a fetus or child with aneuploidy, or parental carriers of balanced chromosomal translocations or inversions). Abnormal ultrasound results included increased NT, heart abnormality, choroid plexus cyst, neck lymphatic hydrocele, bone abnormality, brain abnormality, increased NF, kidney abnormality, and multiple abnormalities. A positive serum screen was defined as a risk value ≥ 1/270 for DS or ≥ 1/350 for trisomy 18 (Edwards syndrome, ES). A positive NIPT outcome was defined as an absolute z-score ≥ 3. Cytogenetic analysis Chorionic villus samples or amniotic fluid were collected from pregnant women by a professional gynecologist. Fetal cells derived from chorionic villus samples or amniotic fluid were cultured in Amniotic Cell Medium (Dahui Bio, Guangzhou, China) for 8 to 13 days at 37 °C with 5% CO2. Three hours before the end of culture, cells were treated with colcemid solution. Karyotype analysis was performed following GTG banding (13). A Zeiss Axio Imager Z2 system (Zeiss, Wetzlar, Germany) was used to capture microscopic images of metaphase cells for karyotype analysis. For each sample, at least 20 GTG-banded metaphases were counted and at least five metaphases were analyzed. Karyotypes were classified according to the International System for Human Cytogenetic Nomenclature 2013 (13). Prenatal serological analysis At 11 to 13+6 weeks of gestation, the risk calculation for first-trimester combined screening (FTS) was performed using maternal age, fetal NT thickness, and maternal serum levels of free β-hCG and PAPP-A. At 15 to 20+6 weeks of gestation, the risk calculation for second-trimester triple screening was performed using maternal age and maternal serum levels of AFP, free β-hCG, and uE3.
Gestational week was determined by crown-rump length or biparietal diameter. Ultrasound testing and blood collection for FTS were performed on the same day. The levels of FTS serum markers were determined with a Cobas e601 analyzer (Roche, Basel, Switzerland). Multiples of the median were derived from the marker levels and NT thickness and used to calculate the risk of chromosomal abnormalities according to gestational age. Maternal weight, maternal age, and history of smoking were also considered in calculating the Down syndrome risk of the pregnancy. A risk cut-off value ≥ 1/270 was recognized as positive for trisomy 21 (Down's syndrome, DS) and a risk value ≥ 1/350 was considered positive for trisomy 18 (Edwards syndrome, ES). NIPT analysis 5–10 ml of maternal peripheral blood was collected and placed in EDTA-containing tubes (BD Biosciences, Franklin Lakes, NJ, USA). The blood sample was centrifuged at 1,600 × g for 10 minutes at 4 °C to separate the plasma from the peripheral blood cells. The plasma was then carefully transferred into a polypropylene tube and centrifuged at 16,000 × g for 10 minutes at 4 °C to deposit the remaining cells. Briefly, cell-free DNA was extracted from 600 μL of plasma using a nucleic acid extraction kit (CapitalBio Genomics, Beijing, China) according to the manufacturer's protocol. The DNA was used for library construction and semiconductor sequencing, using a fetal aneuploidy (trisomies 21, 18, and 13) detection kit following the manufacturer's instructions (CapitalBio Genomics). An absolute z-score ≥ 3 for a target chromosome was considered positive (14). Statistical analysis Data were analyzed using the chi-squared test in SPSS 20.0 software (IBM Corp., Armonk, NY, USA). P < 0.05 was considered statistically significant. Results A total of 2,193 women who received an invasive test for prenatal diagnosis in Meizhou People's Hospital during 2015–2019 were included in our analysis. The average age of all pregnant women was 29.9 ± 5.9 years (range 16 to 47 years). Participants had at least one of the following high-risk indications: abnormal ultrasound, positive serology, positive NIPT, AMA and obstetric/family history. In total, 226 participants (10.3%) were diagnosed with chromosomal abnormalities, and the frequency of abnormality increased with maternal age (P < 0.001, Table 1). Among the 898 pregnant women with abnormal ultrasound findings, 94 (10.5%) were found to have abnormal karyotypes. There were 938 participants with positive serum screening results, and abnormal karyotypes were found in 31 (3.3%) of them. Of the 127 women with positive NIPT outcomes, 78 (61.4%) were confirmed to have abnormal karyotypes. Among the 140 participants with AMA, 13 (9.3%) had abnormal chromosome karyotypes. In addition, 10 out of 90 (11.1%) participants with a record of obstetric/family history were diagnosed with a chromosome abnormality (Figure 1). Our data indicated that the true positive rate for NIPT was 61.4% (78/127), and the false positive rate was 38.6% (49/127). However, the accuracy of NIPT for different types of chromosomal abnormalities varied widely. As shown in Table 3, NIPT was particularly effective in diagnosing trisomy 21, with a true positive rate of 83.6%. NIPT was also helpful in diagnosing abnormalities of trisomy 18 and the sex chromosomes, with true positive rates of 60.0% and 56.3%, respectively. For abnormalities occurring at other sites, the method showed essentially no accuracy.
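The z-score criterion used above can be made concrete with a short sketch. The function below uses hypothetical variable names and an illustrative data layout; it is not the CapitalBio analysis pipeline, only the generic "compare a sample's per-chromosome read fraction with a euploid reference panel" idea:

```python
# Sketch of a NIPT z-score call: |z| >= 3 for a target chromosome is positive.
import numpy as np

def nipt_z(sample_counts, ref_fractions, chrom="chr21"):
    """sample_counts: dict chromosome -> unique read count for one sample;
    ref_fractions: 1-D array of chr fractions from euploid reference samples."""
    total = sum(sample_counts.values())
    frac = sample_counts[chrom] / total                  # sample's chr fraction
    z = (frac - np.mean(ref_fractions)) / np.std(ref_fractions, ddof=1)
    return z, abs(z) >= 3                                # positivity threshold

# Hypothetical usage:
# z, positive = nipt_z(counts, ref_chr21_fractions, "chr21")
```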
Of the 938 participants with positive serum screening tests, an abnormal fetal karyotype was confirmed in 31 (3.3%). As shown in Table 4, the positive serum screening test results were classified into four groups. The true positive rate of karyotype abnormality was 10% for pregnant women with both DS ≥ 1/270 and ES ≥ 1/350, while it dropped to 2.8% with only DS ≥ 1/270, or 7% with only ES ≥ 1/350. In addition, none of the 60 pregnant women with 1/270 > DS ≥ 1/500 or 1/350 > ES ≥ 1/500 was further confirmed to have an abnormal karyotype. Discussion The present study investigated chromosomal abnormalities in pregnant women with different high-risk prenatal indications, and comprehensively characterized the relationship between high-risk prenatal indications and fetal chromosomal abnormalities in southern China. It is well known that several different techniques are used for prenatal screening and diagnosis. Ultrasonography has been recognized as a safe and essential method for pregnancy imaging, although it does not detect genetic defects. Ultrasonography can detect abnormalities and even minor structural changes in the fetus that are related to chromosomal abnormalities (15). The 2013 ISUOG practice guidelines stated that fetal NT thickness can predict chromosomal abnormalities in the first trimester of pregnancy (16), and a wide range of other abnormalities have been reported in association with increased NT (17). Our findings suggested that increased NT (42.6%) was the most frequent indication. NIPT has been applied in clinical practice since 2011, and has become increasingly widely used in prenatal screening for trisomy 21 and other aneuploidies (18). NIPT is considered an unparalleled screening test for fetal aneuploidy because of its high accuracy. In the present study, 61.4% of cases with positive NIPT results were confirmed to have abnormal fetal karyotypes. Notably, NIPT performed well in detecting trisomy 21, with a true positive rate of 83.6%, similar to that reported by Rulin et al. (19). However, NIPT was less effective in detecting trisomy 18, and even less so for other chromosome abnormalities. The American College of Medical Genetics and Genomics suggested in 2016 that NIPT be used only as a screening test rather than as a diagnostic technology for aneuploidy (9). Serum screening is a conventional method to determine the risk of fetal aneuploidy. However, the true positive rate for serum screening was poor in our data. A recent study found that serum screening combined with ultrasonography could improve the detection rate of fetal chromosomal aneuploidy (20). For pregnant women with high-risk indications from prenatal screening procedures, prenatal diagnosis should be implemented. The frequencies of chromosome abnormalities varied for women with different high-risk indications: 10.5% for abnormal ultrasound results, 3.3% for positive serum screening test results, 61.4% for positive NIPT results, 9.3% for AMA and 11.1% for obstetric/family history. These data suggest that a positive NIPT result was the strongest risk indicator of chromosomal abnormalities. However, NIPT by no means can replace invasive diagnostic techniques in clinical practice. Besides, follow-up data showed that 169 (44.5%) women terminated pregnancy based on ultrasound findings, which highlights the importance of ultrasonography in prenatal screening. Previous studies exploring the effect of maternal age on spontaneous abortion found that AMA was a crucial factor related to fetal chromosome aneuploidy (21,22).
Consistently, we confirmed AMA as an important risk factor for chromosome abnormalities. There are some limitations of our study that need to be clarified. First, the retrospective design is a limitation of the present study. Second, the uptake of NIPT was low in our study, because the hospital only started NIPT testing in 2018 and the test costs far exceed those of the other screening tests. Conclusion We observed a relatively high frequency of chromosomal abnormalities in prenatal samples from Hakka pregnant women. Our data suggest that the prenatal screening methods have high false-positive rates. NIPT is the most accurate non-invasive prenatal screening method. Apart from karyotype abnormalities, abnormal ultrasound results account for a large proportion of pregnancy terminations. Consent for publication Not applicable. Availability of data and materials The data used to support the findings of this study are included within the article. Competing interests The authors declare no conflict of interest. Figure caption: Pregnant women with invasive indications and outcomes of prenatal diagnosis. NIPT: positive non-invasive prenatal testing; LB: liveborn; TOP: termination of pregnancy.
ESPResSo++ 2.0: Advanced methods for multiscale molecular simulation
Molecular simulation is a scientific tool for dealing with challenges in materials science and biology. This is reflected in the permanent development and enhancement of algorithms within scientific simulation packages. Here, we present computational tools for multiscale modeling developed and implemented within the ESPResSo++ package. These include the latest applications of the adaptive resolution scheme, hydrodynamic interactions through a lattice Boltzmann solvent coupled to particle-based molecular dynamics, the implementation of the hierarchical strategy for equilibrating long-chained polymer melts, and a heterogeneous spatial domain decomposition. The software design of ESPResSo++ retains its highly modular C++ kernel with a Python user interface. Moreover, it has been enhanced by automatic scripts for parsing configurations from other established packages, providing scientists with a rapid setup possibility for their simulations.

Introduction Molecular simulation methods [1,2,3,4,5,6,7,8,9] have facilitated the study, exploration and co-design of diverse materials. One way to access the functional and dynamic properties of biological and non-biological materials is to tackle the associated length and time scales with sequential coarse-graining methods [1,10] or with concurrent coarse-graining and atomistic methods [11,12]. A common characteristic of such multiscale methods is the coarse-graining of atomistic degrees of freedom into effective degrees of freedom representing a collection of atoms, entire monomers or even molecules [1]. An important benefit of multiscale methods is the computational speed-up achieved, due both to the coarse-graining itself and to algorithmic optimization [13]. Within the past decades, numerous researchers have contributed to different simulation packages, such as GROMACS [14], LAMMPS [15], NAMD [16], ESPResSo [17] and ESPResSo++ [6], among many others, which have been devoted to the development of atomistic and coarse-grained Molecular Dynamics (MD) simulations, making MD packages a combination of highly parallelizable and flexible codes. The latter, flexibility, is the main design goal of ESPResSo++, in the sense that new simulation methods can easily be added by users to meet theoretically and experimentally driven demands, such as those pursued by multiscale simulations [7]. In addition, ESPResSo++ combines flexibility and extensibility with the computational requirements of high-performance computing platforms via an MPI-based parallelization. One proof of this flexibility for multiscale simulations is the fluent implementation and extension of AdResS to H-AdResS and its latest features, such as the flexible spatial atomistic resolution regions approach [18]. ESPResSo++ can easily be used as a molecular dynamics engine and combined with other algorithms, for example, to study complex chemical reactions at the coarse-grained scale [19,20], or to perform reverse mapping from a coarse-grained scale to an atomistic one using an adaptive resolution approach [21,22]. In this article we have selected two multiscale simulation methods that have been implemented in ESPResSo++, namely, concurrent multiple-resolution simulations using Adaptive Resolution Schemes (AdResS) [23,24,25,26] and the lattice Boltzmann technique, which can be coupled to particle-based simulations [27].
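Since the Python interface is central to this design, a minimal driver script is sketched below. The names follow the espressopp module's documented convenience API (standard_system, analysis); exact signatures may differ between versions, so treat this as an illustrative sketch rather than a verbatim recipe:

```python
# Minimal sketch: ESPResSo++ is imported and driven like any Python module.
import espressopp

# Convenience constructor for a thermostatted Lennard-Jones fluid
# (assumed signature; adapt to the installed version if it differs).
system, integrator = espressopp.standard_system.LennardJones(
    1000, box=(20.0, 20.0, 20.0), temperature=1.0)

for block in range(10):
    integrator.run(1000)                                   # 1000 MD steps
    T = espressopp.analysis.Temperature(system).compute()  # on-the-fly analysis
    print("block %d: T = %.3f" % (block, T))
```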
The Adaptive Resolution Scheme has been utilized for the concurrent simulation of diverse physically inspired systems interfacing different levels of simulation techniques, ranging from concurrent simulations of classical atomistic and coarse-grained models [28,29,30,11,31,32,33,34], to interfacing classical atomistic with path-integral formulations of quantum models [35,36], as well as interfacing particle-based simulations with continuum mechanics [37,38]. Several systems have been simulated with the Adaptive Resolution Scheme implementation in the ESPResSo++ package, such as homogeneous fluids [28,29,30], biomolecules in solution [39,31,32] and DNA molecules in salt solution [34]. In the present ESPResSo++ release, we come closer to the requirements of adaptive resolution schemes in terms of scalability. We have also included the Heterogeneous Spatial Domain Decomposition Algorithm (HeSpaDDA), a density-aware spatial domain decomposition with moving domain boundaries, for AdResS and H-AdResS simulations as well as for other heterogeneous systems, such as nucleation, evaporation and crystal growth. The second simulation technique introduced in the present release of ESPResSo++ is the lattice Boltzmann (LB) method, which accounts for hydrodynamic interactions in fluids and can be applied to many problems, ranging from studies of turbulence on the macroscale to soft matter investigations on the microscale. The latter include hybrid simulations of particle-based systems, e.g. colloids or polymers, in a solvent. The virtue of the LB method with respect to explicit solvent treatment lies in its methodological locality and, as a consequence, its computational efficiency. The LB module of ESPResSo++ can be used (i) as a stand-alone method for studies of turbulence and of liquids driven by body forces, or (ii) in combination with molecular dynamics, providing correct hydrodynamics (in contrast to, e.g., a Langevin thermostat). On top of the highlighted methods for multiscale molecular simulations available in ESPResSo++, this new release also introduces the implementation details of the hierarchical strategy for the equilibration of dense polymer melts. The hierarchical equilibration strategy comprises a recursive coarse-graining algorithm with its corresponding sequential back-mapping [40]. The contents of this publication focus on the introduction of the new or updated methods and algorithms within the second release of ESPResSo++. The adaptive simulation schemes are described in Sec. 2, while the lattice Boltzmann method is presented in Sec. 3. The hierarchical strategy for the equilibration of polymer melts is described in Sec. 4, while in Sec. 5 we present the deployment of the HeSpaDDA algorithm. Sec. 6 provides information on how to contribute to the development of ESPResSo++. Finally, Sec. 7 reports on the integration of ESPResSo++ with other useful packages. Regarding the development of ESPResSo++, we want to highlight its user-friendly environment due to the Python interface used for the simulation scripts, and thus the great freedom to interact with other scientific software from the Python community, e.g. NumPy [41], SciPy [42], scikit-learn [43], Pandas [44] and PyEMMA [45]. Those wishing to get started with the package should visit our webpage [46] or go directly to our GitHub repository [47]. Directions for downloading and building ESPResSo++ are given in both references.
Finally, for more details of the methods, algorithms or general code of ESPResSo++, please make use of our documentation [48] and previous publication [6]. Introduction Heterogeneous systems containing a wide range of length and time scales can be challenging to model using molecular simulation. This is because high-resolution, chemically detailed models are needed to describe certain processes or regions of interest; however, such models are also computationally expensive, and using them to model the entire system would be prohibitive. One approach to tackle this problem is the use of multi-resolution simulation techniques, in which more expensive, typically atomistic, and cheaper, typically coarse-grained, models are used within the same simulation box, allowing one to reach larger overall length and time scales [23,49,50,51,52]. In such techniques, a region in space is defined in which molecules are modeled using atomistic detail, while coarse-grained models are used elsewhere (see examples in Figure 1). The Adaptive Resolution Simulation (AdResS) methodology deals with the coupling between atomistic (AT) and coarse-grained (CG) models [53,23,11,39,54,33,52]. In this methodology, solvent particles can freely diffuse between AT and CG regions, smoothly changing their resolution as they cross a hybrid or transition region. The AdResS approach can be useful for modeling a wide variety of different systems, such as simple solutes in dilute solution and complex biomolecular systems [55,56,57,58,59,60,34,31,32,18,61,62,63,64]. ESPResSo++ has full support for the AdResS methodology, and the ESPResSo++ AdResS implementation has been used to simulate systems ranging from homogeneous fluids to biomolecules in solution [39,31,32,33,18,64,36,65]. In the AdResS approach, the coupling of AT and CG models can take place via an interpolation on the level of either forces or energies. (Figure 1: AdResS simulation of an atomistic protein and its atomistic hydration shell, coupled to a coarse-grained particle reservoir via a transition region [31].) Force-interpolation AdResS was included in release 1.0 of ESPResSo++. Energy interpolation (known as Hamiltonian- or H-AdResS), as well as the latest features, are presented in the course of the article. Force interpolation In AdResS, a typically small part of the system, the AT region, is described on the AT level and coupled, via a hybrid (HY) transition region, to the CG region, where a coarser, computationally more efficient model is used. The interpolation is achieved via a resolution function λ(R_α), a smooth function of the center-of-mass position R_α of molecule α. For each molecule, its instantaneous resolution value λ_α = λ(R_α) is calculated based on the distance of the molecule from the center of the AT region. It is 1 if the molecule resides within the AT region and smoothly changes via the HY region to 0 in the CG region (see, for example, Ref. [33]). In the force interpolation scheme, the original AdResS technique [53,23], two different non-bonded force fields are coupled as

F_α|β = λ_α λ_β F^AT_α|β + (1 − λ_α λ_β) F^CG_α|β,   (1)

where F_α|β is the total force between the molecules α and β, and F^AT_α|β denotes the atomistic force field, decomposed into atomistic forces between the individual atoms of molecules α and β. Finally, F^CG_α|β is the CG force between the molecules, typically evaluated between their centers of mass. Note that, in addition to the non-bonded interactions, intramolecular bond and angle potentials are usually also present. As these are computationally significantly easier to evaluate, they are typically not subject to any interpolation and are therefore not discussed further here.
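To illustrate the weighting in Eq. (1), the sketch below implements a smooth resolution function and the pairwise AT/CG force mixing. The cosine-squared switching form is one common choice in the AdResS literature and is assumed here for illustration; it is not prescribed by Eq. (1) itself:

```python
# Sketch of the AdResS weighting: resolution function lambda(R) and the
# pairwise mixing of atomistic (AT) and coarse-grained (CG) forces, Eq. (1).
import numpy as np

def resolution(dist, r_at, d_hy):
    """lambda = 1 inside the AT region (dist < r_at), 0 in the CG region,
    and a smooth cosine-squared crossover in the hybrid layer of width d_hy."""
    if dist < r_at:
        return 1.0
    if dist > r_at + d_hy:
        return 0.0
    return np.cos(np.pi * (dist - r_at) / (2.0 * d_hy)) ** 2

def adress_force(F_at, F_cg, lam_a, lam_b):
    """Eq. (1): F = w * F_AT + (1 - w) * F_CG with w = lambda_a * lambda_b."""
    w = lam_a * lam_b
    return w * np.asarray(F_at) + (1.0 - w) * np.asarray(F_cg)
```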
Potential energy interpolation Alternatively, the atomistic and the CG models can also be interpolated on the level of potential energies. In the Hamiltonian adaptive resolution simulation approach (H-AdResS) [11,66,67], the Hamiltonian of the overall system is defined as

H = Σ_α Σ_i p²_αi/(2m_αi) + Σ_α [λ_α V^AT_α + (1 − λ_α) V^CG_α],   (2)

where m_αi and p_αi are, respectively, the mass and the momentum of atom i of molecule α. The single-molecule potentials V^AT_α and V^CG_α are the sums of all non-bonded intermolecular interaction potentials corresponding to the AT and the CG model acting on molecule α. We again omit additional intramolecular interactions. The crucial difference between the force-based and the potential-energy-based adaptive resolution schemes is an additional force term, dubbed the drift force, arising in the forces corresponding to the Hamiltonian in Eq. 2 (for details, see [11]). Both schemes have characteristic advantages and disadvantages, and which approach is better suited for implementing the adaptive coupling depends on the application. On the one hand, the force interpolation technique exactly preserves Newton's third law, but it does not allow a Hamiltonian formulation, does not conserve energy [68], and therefore requires thermostatting for stable simulations [69,68,70,71,72,73]. H-AdResS, on the other hand, also allows microcanonical simulations and other approaches that require the existence of a well-defined Hamiltonian. However, it violates Newton's third law in the hybrid region and features an additional undesired force that must be explicitly taken care of. Free energy corrections and the thermodynamic force Typical CG models have significantly different pressures with respect to the AT reference system [74,75,76,77]. In adaptive resolution simulations, this leads to a pressure gradient between the AT and the CG subsystems which, in addition to the drift force in H-AdResS, pushes particles across the HY transition region. To enforce a flat density profile along the direction of resolution change, one must apply a correction field in the HY region which counteracts the pressure gradient and, in H-AdResS, also the drift force. An appropriate correction can be derived, for example, via Kirkwood thermodynamic integration [11,78]. This is known as the free energy correction and is particularly useful in H-AdResS. An alternative approach, frequently used in the force interpolation method, is to construct a correction force directly from the distorted density profile obtained without any correction and then refine it in an iterative fashion. This approach is known as the thermodynamic force [79]. ESPResSo++ allows the straightforward inclusion of such correction forces and also includes routines that can be used to calculate the density profiles, pressures and energies required for deriving these corrections. Self-adjusting adaptive resolution simulations Many complex systems, such as proteins, membranes and interfaces, do not feature regular spherical or planar geometries. Furthermore, they undergo large-scale conformational changes during simulation. Therefore, a scheme was recently derived within the force interpolation approach that allows AT regions of any arbitrary shape. Additionally, the AT region can change its geometry during the simulation to follow, for example, a folding peptide [18].
This is established by associating several spherical AT regions with many atoms of a macromolecule, such that their overlap defines an envelope around the extended object. When the object deforms, this shell adapts accordingly. This scheme is available in ESPResSo++ within the force interpolation approach in combination with the thermodynamic force. Multiple time stepping in adaptive resolution simulations Since CG potentials are typically significantly softer than AT force fields, the corresponding equations of motion can be solved using a significantly larger time step. This suggests the use of multiple time stepping (MTS) techniques in adaptive resolution simulations, in which both AT and CG potentials are present simultaneously. A RESPA-based MTS approach [80,81] is now available in ESPResSo++, which enables different time steps for updates of the CG and the AT forces. Thermodynamic integration As explained above in Sec. 2.3, no global Hamiltonian can be defined in the force-interpolation version of AdResS. Nevertheless, the potential-energy-based Thermodynamic Integration (TI) approach to free energy calculations can be combined with force-interpolation AdResS, as recently shown using simulations of amino acid solvation in ESPResSo++ [65]. This is because AdResS allows the sampling of local configurations which are equivalent to those of fully atomistic simulations. The TI implementation in ESPResSo++ can also be used to perform standard fully atomistic free energy calculations, for example to calculate solvation free energies or ligand binding affinities. Path-integral-based adaptive resolution simulations The path integral (PI) formalism can be used in molecular simulations to account for the quantum mechanical delocalization of light nuclei [82,81]. It is frequently used, for example, when modeling hydrogen-rich chemical and biological systems, such as proteins or DNA [83,84,85,86,87]. In the PI methodology, quantum particles are mapped onto classical ring polymers, which represent delocalized wave functions. This renders the PI approach computationally highly expensive (for a detailed introduction see, for example, [81]). In practice, the quantum mechanical description is often only necessary in a small subregion of the overall simulation. Recently, a PI-based adaptive resolution scheme was developed that includes the PI description only locally and uses efficient classical Newtonian mechanics in the rest of the system [35,36]. In this approach the ring polymers are forced to collapse to classical, point-like particles in the classical region. The method is based on an overall Hamiltonian description and is consistent with a bottom-up PI quantization procedure. It allows the calculation of both quantum statistical as well as approximate quantum dynamical quantities in the quantum subregion using ring-polymer or centroid molecular dynamics. The methodology is implemented in the ESPResSo++ package and also makes use of multiple time stepping. Introduction The Lattice Boltzmann (LB) method in ESPResSo++ was designed for efficient simulations of phase-separating semidilute polymer solutions. These solutions are characterized by (i) a low volume fraction φ < 5% of polymeric material with respect to the system's volume V and (ii) long polymer chains that start to overlap. These requirements are satisfied for spatially large systems with only a few very long chains.
Since it is not computationally feasible to treat the solvent as explicit particles (their number would be much greater than several million), we rely on the lattice-based LB methodology for the solvent treatment [88,89,90,91,92]. The polymer chains are modeled by molecular dynamics (MD). The hybrid LB/MD method is used to study the phase separation of the polymer solution upon a change of the solvent quality. Under good solvent conditions the chains are extended coils, as the interactions between their monomers and the solvent are favorable. This situation is shown in Fig. 2a. A quench into the poor solvent regime (Fig. 2b) initiates the collapse of the polymers and sets off a slow coarsening (Fig. 2c), i.e. the agglomeration of individually collapsed chains into multichain polymeric droplets. The quenched system evolves on multiple time and length scales and demonstrates rich dynamical properties. Implementation details The LB technique can be viewed as a version of coarse-graining of the solvent fluid on a lattice. At every lattice site r and time t the fluid is modeled by a set of single-particle distribution functions, or populations, f_i(r, t). The sites are connected by a set of discrete velocity vectors c_i. The LB step is divided into collision and streaming parts. First, the populations collide according to the kinetic rules given by a collision operator. As the operator we use the multiple-relaxation-times scheme [94], which allows a straightforward introduction of the thermal fluctuations [12] relevant to soft matter research. In the streaming phase, the post-collisional populations are propagated to the neighboring sites according to the velocity vectors c_i, and the LB step is finished. The coupling between the LB fluid and the MD particles is done in a dissipative fashion [27], as sketched in Fig. 3a. The force F exerted by the solvent onto an MD particle located at position R and moving with velocity v is given by

F = −ζ [v − u(R)] + F_rand,

where F_rand is the random force due to thermal motion, and the first term is the viscous friction with amplitude ζ. This term accounts for the velocity v of the MD particle with respect to the velocity of the fluid at the position of the particle, u(R). The latter is interpolated from the fluid velocities u_i at the neighboring lattice sites. To conserve the total momentum of the LB/MD system, a counterforce −F should act from the MD particle onto the LB fluid. We recast this force in terms of the momentum change −F = ∆j/∆t, where ∆t is the MD timestep. The momentum change ∆j of the LB fluid is distributed to the neighboring lattice sites. A time-costly LB step is performed only after several MD steps [27], so we use ∆t_LB/∆t = 5 or 10. In this approach, the forces F on the MD particles are calculated in every MD step, while the concomitant momentum changes ∆j at the LB sites are accumulated in memory. First, they update the fluid velocities in every MD step: u_i → u_i + ∆j_i/ρ_i, where ρ_i and j_i are the mass and momentum density of the LB fluid at site i. Second, the accumulated momentum changes are applied at the LB collision step via a correction of the collision operator. This algorithm is shown in Fig. 3b. For a detailed description of the method we refer the reader to Ref. [93].
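The dissipative coupling step can be sketched in a few lines. The function below is a simplified illustration, not the ESPResSo++ implementation: it assumes the fluid velocity has already been interpolated to the particle position, and the fluctuation-dissipation amplitude √(2ζk_BT/∆t) is one standard discrete-time choice:

```python
# Sketch of the point coupling between an MD particle and the LB fluid:
# viscous drag against the interpolated fluid velocity plus a random force,
# with the opposite momentum change handed back to the fluid.
import numpy as np

def couple_particle(v, u_interp, zeta, kT, dt, rng):
    """v: particle velocity; u_interp: fluid velocity at the particle.
    Returns the force on the particle and the momentum dj given to the fluid."""
    f_rand = rng.normal(size=3) * np.sqrt(2.0 * zeta * kT / dt)
    force = -zeta * (v - u_interp) + f_rand   # drag + thermal noise
    dj = -force * dt                          # momentum change of the fluid
    return force, dj
```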
Efficiency The LB method of ESPResSo++ employs a regular lattice. Along with the extreme locality of the algorithm (only neighboring sites are connected), it profits from a straightforward but efficient parallelization strategy realized by MPI (message-passing interface). The hybrid LB/MD approach conserves hydrodynamics and is more feasible for large simulations than an explicit solvent treatment. Moreover, the timestep separation between MD and LB realized in ESPResSo++ facilitates a further speed-up, as the time-intensive LB update is done only every several MD steps. Introduction To study the properties of polymer melts by numerical simulations, we have to prepare equilibrated configurations. However, the relaxation time for polymer melts increases, according to reptation theory, with the third power of the molecular weight [95,96,97]. In fact, equilibrated configurations of high-molecular-weight polymer melts cannot be obtained by brute-force calculation in a realistic time; e.g., the CPU time for 1000 polymers consisting of 2000 monomers each is roughly estimated at about 4.0 × 10⁶ hours on a single processor (2.2 GHz), on the basis of reptation theory [95,96,97] and the actual measured CPU time per particle. Hence, an effective method for decreasing the equilibration time is required. The hierarchical equilibration strategy pioneered in Refs. [40,98] is a particularly suitable way to do this. The hierarchical equilibration strategy consists of recursive coarse-graining and sequential back-mapping [40]. At first, a polymer chain originally consisting of M monomers is replaced by a coarse-grained (CG) chain consisting of M/N_b soft blobs, each mapped from a subchain with N_b monomers, represented by the model developed by Vettorel [99]. In this model, the relaxation time does not increase according to reptation theory but according to Rouse theory, since the CG chains can pass through each other. Moreover, the number of degrees of freedom of the system is N_b times smaller than that of the microscopic model. Hence, the relaxation time of the CG chain configuration is drastically decreased. After equilibrating a configuration at very coarse resolution, each CG polymer chain is replaced with a more fine-grained (FG) chain. In this back-mapping procedure, a CG blob is divided into several FG blobs. The center of mass (COM) of the FG blobs coincides with the position of the CG blob's center and is kept fixed during the relaxation of the local conformation of the FG monomers within the CG blob. Consequently, a microscopic equilibrated configuration can be reproduced by sequential back-mapping. The required functions of this strategy have been implemented in ESPResSo++. Efficiency The efficiency of the hierarchical strategy for polymer melts can be estimated by a comparison with the brute-force calculation. The CPU time for brute-force calculations, τ_brute, is described as τ_brute ∼ N × M³ × (M/N_e) τ_mon, where N_e stands for the number of monomers between entanglements. This value is obtained as the product of the number of monomers, N × M, and the reptation time, M² × (M/N_e) τ_mon [96]. Thus, τ_push should be multiplied by 50 when we estimate the computational effort. Additionally, the CPU time for a step per particle per processor (2.2 GHz) is about 4.0 × 10⁻⁷ for the microscopic model and about 4.0 × 10⁻⁵ for soft-blob models. Thus, τ_blob should be multiplied by 100 for estimating the efficiency. Hence, the computational time τ_hier is estimated as τ_hier ∼ 100 × τ_100 + 100 × τ_50 + 100 × τ_25 + 50 × τ_push + τ_micro. As a consequence, we can estimate the efficiency of the hierarchical strategy by the ratio of τ_hier to τ_brute. For example, after substituting N = 1000, M = 2000 and N_e = 80 into τ_hier and τ_brute, we obtain the concrete value of the ratio

τ_hier/τ_brute ≈ 1/(1.9 × 10³).   (3)
Please note that the efficiency of the hierarchical strategy increases with increasing M. Introduction Simulating heterogeneous molecular systems on supercomputers requires the conception and development of efficient parallelization techniques, or Domain Decomposition (DD) schemes [15,14,101]. In the first release of ESPResSo++, the domain decomposition scheme was a combination of the Linked-Cell-List algorithm (LCL) with a homogeneous spatial domain decomposition. Such schemes are applied to traditional molecular simulations, for instance dense homogeneous polymer melt systems [102,100,40]. While traditional molecular simulations are performed with the same resolution for all molecules in the simulation box, in heterogeneous systems [13] we tackle different resolutions (densities). Spatially, the simulation box is typically composed of subregions with different resolutions; for multiscale simulations these are the coarse-grained and the atomistic/hybrid subregions (see Figure 6(a)). In terms of computational cost, the most expensive regions are the ones holding atomistic details (higher resolution), followed by the regions using coarse-grained models [29,30] or an ideal gas [32], which are significantly cheaper. The challenges in terms of domain decomposition algorithms arise mainly from two constraints, namely, the interactions per subdomain and the inter-subdomain communication. An example of the inter-domain communication constraint is the imbalanced amount of data communicated between the fully atomistic and hybrid regions in comparison with the CG regions (see Figure 6(c)); likewise, the interactions per domain will be imbalanced if a homogeneous grid decomposes the systems shown in Figure 6 into equal domains. The resolution ratio is

R^res_SH = N^res_HR / N^res_LR,   (4)

where N^res_HR is the number of entities in the high-resolution region that corresponds to one entity, N^res_LR, in the low-resolution one. For example, mapping the atomistic water molecule to the coarse-grained model usually results in R^res_SH = 3 [13] (see also Figure 7(a)). (Figure 6: AdResS simulation of an atomistic protein and its atomistic hydration shell, coupled to a coarse-grained particle reservoir via a transition region [31]. (a) The multiscale system subregions: the low-resolution region in gray, the high-resolution region marked by its radius and a white circle, and the transition or hybrid region between the white and orange circles. (b) Scheme of the (computationally exhaustive) interaction load for a homogeneous subdomain distribution of the protein system described above, and (c) the communication schemes derived from an imbalanced load distribution.) To tackle the aforementioned limitations (illustrated in Figures 6(b) and 6(c)), the updated release of ESPResSo++ includes an implementation of the Heterogeneous Spatial Domain Decomposition Algorithm, HeSpaDDA for short [13]. In a nutshell, HeSpaDDA makes use of a priori knowledge of the system setup, namely which region is computationally less expensive. This inherent load imbalance can come from different resolutions or different densities. The algorithm then proposes a non-uniform domain layout, i.e. domains of different size, and their distribution amongst compute instances. This can lead to significant speedups for systems of the aforementioned type over standard algorithms, e.g. spatial domain decomposition [15] or spatial and force-based DD [101].
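The guiding idea can be illustrated with a toy allocation routine: give each spatial region a share of processors proportional to its estimated computational weight (region extent times the relative cost of its resolution) instead of a uniform split. The weights and rounding policy below are illustrative only and are not the HeSpaDDA algorithm of Ref. [13]:

```python
# Toy sketch of load-aware processor allocation across resolution regions.
def allocate_processors(region_lengths, rel_costs, n_procs):
    """region_lengths: extents along the decomposed axis;
    rel_costs: relative per-particle cost (e.g. 3 for AT vs 1 for CG)."""
    weights = [L * c for L, c in zip(region_lengths, rel_costs)]
    total = sum(weights)
    shares = [max(1, round(n_procs * w / total)) for w in weights]
    # Fix rounding so the shares add up to exactly n_procs.
    while sum(shares) > n_procs:
        shares[shares.index(max(shares))] -= 1
    while sum(shares) < n_procs:
        shares[shares.index(min(shares))] += 1
    return shares

# e.g. AT|HY|CG slabs of lengths 4:2:10 with costs 3:2:1 on 8 processors:
# allocate_processors([4, 2, 10], [3, 2, 1], 8) -> [4, 1, 3]
```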
Algorithm description
The proper allocation of processors in heterogeneous molecular simulations is vital for the computational scaling and performance of the production run. Moreover, the overall simulation performance is constrained by the initial domain decomposition and hence by the assignment of processors to the different resolution regions of the given initial configuration. An example of such a heterogeneous initial domain decomposition is provided by a multiscale simulation of water, where the system is decomposed along the x-axis over 8 processors; the homogeneous and HeSpaDDA cases are depicted next to each other (see Figures 7(b) and 7(c)). The algorithm flowchart can be found in Figure 7(d). Once the processor allocation has been built, an initial cell distribution per subdomain is created to find the maximum number of cells to be used per region. Such a distribution can be done symmetrically or non-symmetrically. The symmetric cell distribution is triggered if the heterogeneous regions can be decomposed as mirror images within half of the box; the non-symmetric one occurs if the heterogeneous regions cannot be mirrored within the simulation box. Within those functions, control statements check whether the number of processors to be used, as well as the number of cells in each dimension, is even or odd. In case there are still undistributed cells, the symmetric and non-symmetric functions call a pseudo-random weighted cell distribution for the remaining ones. As a final step, the algorithm verifies whether the resulting DD is scalable and suggests a possible number of cores for the heterogeneous simulation (function cherrypickTotalProcs). A detailed description of the processor allocation and cell distribution algorithms is provided in a previous article [13], and all Python scripts can be found in the referenced code [47].

Implementation in ESPResSo++
The implementation of HeSpaDDA in ESPResSo++ has involved the creation of new data structures for the number of cells distributed inhomogeneously over each subdomain, as depicted in Figure 7(c). These data structures are linked to an iterative algorithm that allocates processors to the simulation box according to the resolution of the regions, i.e. fully atomistic, coarse-grained, among others. The processor allocation algorithm flowchart is described in Figure 7(d).

Development workflow
Since the last release [6] we have moved to GitHub hosting and hence from Mercurial to Git (https://git-scm.com/) as the version control system. Along with this change, we have also adopted a new development workflow, fork-and-branch, that is commonly used on the GitHub platform [103]. This approach requires two things from developers: forking the repository and using pull requests to propose changes to the code base. Every new feature is developed on a branch of the developer's fork. Once a feature reaches completion, a merge request is sent to the default branch via GitHub's pull request mechanism. We use the master branch as the default development branch. The pull request is then reviewed by one of the ESPResSo++ core developers. Usually, minor improvements, e.g. adding tests or documentation, are requested from the feature developer. Once all the newly added and existing tests pass, the feature is merged into the master branch.
This whole workflow is supported by continuous integration (CI) tests [104]: before a pull request is accepted, it has to fulfill three conditions: build properly, pass all unit tests, and not decrease code coverage. The build process is run under three Linux distributions: Ubuntu (latest and long-term support), Fedora, and OpenSUSE. Moreover, every change is checked against two different compilers, gcc [105] (versions 4.7, 4.8, 4.9) and clang [106], as well as against the internal and the external Boost library [107]. As for the tests, we use two types: unit tests, which test particular features, and regression tests, which run against existing reference data. CI gives developers the advantage that, even before the actual code review within the pull request, it is easy to see whether any change broke existing tests. The last condition is code coverage, which describes the percentage of code lines exercised by the unit tests; for this we use an external tool (https://codecov.io/). If all conditions are met, the pull request can be accepted. In addition, we use CI to build the different pieces of documentation, including the website, the Doxygen [108] documentation of the code, and a PDF of the user's guide. The newly generated documentation is automatically deployed to http://espressopp.github.io/ and always reflects the most recent development version, so developers do not have to rely on the possibly outdated documentation of the last release. Moreover, the latest master version is deployed and released to Docker Hub for users who prefer to test ESPResSo++ without building it themselves. The release versions of ESPResSo++ follow the idea of semantic versioning [109], and the release workflow ties into it naturally. In a nutshell, fixes and small new features that do not change the application interface go into "stable" and hence only trigger a minor release, while big refactorings that introduce a new feature or break backward compatibility of the application interface go into a major release. After each major release, a stable branch is created, and git tags on that branch mark the individual minor releases. Bug fixes and hotfixes are merged (via the same pull request workflow) into the stable branch directly. If necessary, the stable branch is occasionally merged into the master branch so that the latter also includes all fixes.

Integration with other packages
ESPResSo++ cooperates well with other software packages, primarily because of its internal design as nothing more than a Python module. In this way, ESPResSo++ can be called from any other Python code, even from Jupyter [110] during an interactive session, which also allows using ESPResSo++ as an educational tool in hands-on sessions. The recent release of ESPResSo++ brings support for the new H5MD file format [111], which uses HDF5 [112] storage. The H5MD file format is suitable for holding information about the simulation in a self-descriptive, binary, and portable form. Along with the particle positions, it can store the box size, particle types, masses, partial charges, velocities, forces, and further properties such as the software version, the integrator timestep, and the seed of the random number generator. Together with the per-particle information, H5MD can store the connectivity, in either a static or a dynamic manner. With the former, the bonds, angles, and dihedrals are stored at the beginning of the simulation and are not updated during the run; with the latter, changes in the bonds, angles, and dihedrals can be tracked during the simulation. This can be very important, e.g., when ESPResSo++ is used to perform chemical reactions [19,20]. In this way it is possible to share not only the results of a simulation but also the details needed to reproduce them. In addition, HDF5 storage natively supports parallel input/output (I/O) operations, which allows performing efficient parallel simulations.
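As a rough illustration, the following minimal h5py sketch writes an H5MD-style position trajectory. The group layout follows our reading of the H5MD specification; the file name and the data are placeholders, and this is not the ESPResSo++ writer itself:

import h5py
import numpy as np

n_frames, n_particles = 10, 100
positions = np.random.rand(n_frames, n_particles, 3)

with h5py.File("trajectory.h5md", "w") as f:
    f.create_group("h5md").attrs["version"] = [1, 1]   # H5MD metadata block
    pos = f.create_group("particles/atoms/position")   # per-particle element
    pos.create_dataset("step", data=np.arange(n_frames))
    pos.create_dataset("time", data=np.arange(n_frames) * 0.005)  # MD time
    pos.create_dataset("value", data=positions)        # shape (frame, particle, xyz)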
Apart from the H5MD file format, ESPResSo++ has trajectory writers for the GROMACS [113], XTC, XYZ, and PDB file formats. The simulation details can also be stored in the LAMMPS [15] file format. When it comes to reading, the GROMACS topology and trajectory file formats are supported, so a simulation can be run directly from those input files; the same holds for LAMMPS input files. Because of the variety of file formats supported by ESPResSo++, integration with existing packages is very easy; for example, the VOTCA 1.3 [10] package cooperates with ESPResSo++ out-of-the-box.

Examples and documentation
All features and their implementation are described in detail in the ESPResSo++ documentation. Furthermore, ESPResSo++ comes with many example scripts and tutorials that cover all methodologies and demonstrate to the user how to set up various types of simulation systems in practice.

Conclusions
In this article we have presented two multiscale molecular simulation methods, a hierarchical strategy for equilibrating polymer melts and an accompanying domain decomposition scheme, which have recently been implemented in the ESPResSo++ package. The deployment of these methods and algorithms shows the flexibility and extensibility offered by the software package for the development of advanced molecular simulation techniques. From the software development viewpoint, we are providing scientists from the materials and biomolecular communities a computational tool that can be used out-of-the-box from simple Python scripts and allows exploring diverse molecular systems in polymer research, membranes, proteins, crystallization processes, evaporation, among others. This update is also very useful for prototyping new theoretical concepts and developing molecular simulation methods, since it includes proven functionalities such as: (i) the extension of new simulation methods on top of the presented ones, as shown in Section 2, (ii) the development of new algorithms, as in Section 4 and Section 5, or (iii) the combination of multiple scales and multiple methods, as in the development shown in Section 3. We have covered many possible applications of the recent methods and algorithms included in the updated ESPResSo++, in particular in the areas of soft matter science, as illustrated within each section of the article. Selected applications of the new ESPResSo++ release have been published and are referenced within this release communication. We highlight in particular that it makes simulations of advanced materials more feasible, from short proteins to huge polymer melts, passing through advanced path-integral multiscale systems. On top of the aspects described above, the ESPResSo++ package is open source and hence offers the molecular simulation community the possibility of extending the package and/or adapting its methods to their research interests.
Moreover, it offers a friendly developer environment, including modern development tools, mailing lists, repository management, improved documentation, and even parsing of input files from other MD packages, aiming at a smooth transition from such packages to ESPResSo++.
Optimizing Differentially-Maintained Recursive Queries on Dynamic Graphs
Optimizing Differentially-Maintained Recursive Queries on Dynamic Graphs
Differential computation (DC) is a highly general incremental computation/view maintenance technique that can maintain the output of an arbitrary and possibly recursive dataflow computation upon changes to its base inputs. As such, it is a promising technique for graph database management systems (GDBMSs) that support continuous recursive queries over dynamic graphs. Although differential computation can be highly efficient for maintaining these queries, it can require a prohibitively large amount of memory. This paper studies how to reduce the memory overhead of DC with the goal of increasing the scalability of systems that adopt it. We propose a suite of optimizations that are based on dropping the differences of operators, either completely or partially, and recomputing these differences when necessary. We propose deterministic and probabilistic data structures to keep track of the dropped differences. Extensive experiments demonstrate that our optimizations can improve the scalability of a DC-based continuous query processor.

INTRODUCTION
Graph queries that are recursive in nature, such as single pair shortest path (SPSP), single source shortest path (SSSP), variable-length join queries, or regular path queries (RPQ), are prevalent across applications that are developed on graph database management systems (GDBMSs). Many of these applications require maintaining query results incrementally, as the graphs stored in GDBMSs are dynamic and evolve over time. For example, millions of travellers use navigation systems to find the fastest route between two points on a map. To keep the route information fresh, these systems need to continuously update their SPSP query results as road conditions change. Similarly, several knowledge graphs, such as RefinitivGraph [1], contain billions of connections between real-world entities, such as companies, banks, stocks, and managers. A Refinitiv product, World-Check Risk Intelligence (https://www.refinitiv.com/en/products/world-check-kyc-screening), searches for direct and indirect connections between entities to help companies and banks comply with mandatory regulations. Since these graphs are frequently updated with new facts, these applications require the queries to be continuously evaluated. Many GDBMSs have the capability to evaluate one-time versions of recursive queries over static graphs, but generally do not support incrementally maintaining them. As such, on dynamic graphs, existing systems require rerunning these queries from scratch at the application layer. A GDBMS that can incrementally maintain recursive queries inside the system would lead to easier and more efficient application development. In this paper we investigate the use of differential computation (DC) [23], a new incremental maintenance technique, to maintain the results of recursive queries in GDBMSs. DC is designed to maintain arbitrarily cyclic (thus recursive) dataflow programs [23,24]. Instead of using specialized incremental derivation rules, DC starts from a dataflow program that evaluates the one-time version of the query. By keeping track of the differences to the inputs and outputs of the operators across different iterations, called timestamps in DC terminology, DC maintains and propagates the changes between operators as the original inputs to the dataflow are updated. This makes DC more general than other techniques, as it is agnostic to the underlying dataflow computation.
However, DC can have a significant memory overhead [18], as it may need to keep track of a large number of input and output differences between operators. For example, Table 1 shows the performance and memory overhead of the DC implementation of the standard Bellman-Ford algorithm for maintaining the results of SSSP queries on the Skitter internet topology dataset [19]. In the experiment, we modify the graph with 100 batches of 1 random edge insertion each and provide the system with 10GB of memory to store the generated differences. The table also shows the performance of a baseline that re-executes the Bellman-Ford algorithm from scratch after each update, thus not requiring any memory for maintaining these queries. Although the differential version of the algorithm is about five orders of magnitude faster, it cannot maintain more than 10 concurrent queries due to its large memory requirement. This limits the scalability of systems that adopt DC.

Table 1: Execution time (in seconds) for an SPSP workload on the Skitter dataset, using a scratch algorithm, which re-executes a standard non-incremental Bellman-Ford algorithm, vs. a differential computation version, which keeps track of changes. Differential computation is more than five orders of magnitude faster, but fails with out-of-memory errors as the number of queries increases.

In this paper, we study how to reduce the memory overheads of DC to increase its scalability when maintaining the popular classes of recursive queries mentioned above. Our optimizations are broadly based on dropping differences and recomputing them when necessary. We focus on optimizing the differential version of a common subroutine in graph algorithms in which vertices iteratively aggregate their neighbours' values and propagate their own values to their neighbours until a stopping condition, such as a fixed point, is reached. Variants of this subroutine with different aggregation, propagation, and stopping conditions can be used to evaluate all of the recursive queries we focus on in this paper. This routine consists of a Join operator and an aggregation operator, e.g., a Min, and has been given different names in the literature, such as propagateAndAggregate [31] or iterative matrix-vector multiplication [15]. We refer to it as iterative frontier expansion (IFE). In this work, we start with the base implementation of DC as in differential dataflow (DD) [23] and its precursor, the Naiad system [24]. We propose two main optimizations: (1) Join-On-Demand (JOD) (Section 4), which completely drops the output differences of the Join operator of the IFE dataflow and only computes these differences when DC needs to inspect them; and (2) two partial difference dropping optimizations (Section 5), which drop some of the differences in the output of the aggregation operator in IFE. Our partial difference dropping optimizations offer users a knob to drop a certain percentage of the system's differences. We begin by describing a baseline deterministic optimization, Det-Drop, which explicitly keeps track of the vertex and timestamp of each dropped difference. We show that although Det-Drop reduces the memory consumption of a system, it also has inherent limitations in terms of scalability improvements, as the additional state it keeps is proportional to the amount of differences that it drops. We then propose a probabilistic approach, Prob-Drop, that addresses this shortcoming by leveraging a probabilistic data structure, specifically a Bloom filter.
Prob-Drop may attempt to reconstruct a non-existing difference due to false positives, but it reduces memory consumption more effectively, so a system using Prob-Drop needs to drop fewer differences than Det-Drop to meet the same memory budget. Finally, we describe an optimization that uses the degree information of each vertex to choose which differences to drop, as opposed to dropping them randomly. We demonstrate that JOD reduces the number of differences by up to 8.2× in comparison to vanilla DC implementations. We also show that exploiting degree information to select the differences to drop can improve the performance of the partial dropping optimizations (Det-Drop or Prob-Drop) by several orders of magnitude. We further show that Prob-Drop achieves up to 1.5× better scalability than Det-Drop when selecting the differences to drop based on degrees. Overall, our optimizations can increase the scalability of our differential algorithms by up to 20× in comparison to DD, while still outperforming a baseline that reruns computations from scratch by several orders of magnitude.

RELATED WORK
Broadly, there are two approaches to maintaining the results of a computation over a dynamic graph: (i) using a computation-specific specialized solution; or (ii) using a generic incremental computation/view maintenance solution that is oblivious to the actual computation, at least for some class of computations. DC falls under the second category. Below, we review both approaches.

Specialized Techniques and Systems
There is extensive literature dating back to the 1960s on developing specialized incremental versions of (aka dynamic) graph algorithms that maintain their outputs as an input graph changes. Much of the earlier work focuses on versions of shortest path algorithms, in particular all pairs shortest paths computation [5-8, 20, 27, 28]. These works aim at developing fast algorithms whose worst-case time upon a single update is lower than that of recomputing shortest paths from scratch, e.g., when the edge weights are integer values. Fan et al. [11] present theoretical results on the foundations of such algorithms. Specifically, they show that the cost of performing six specific incremental graph computations, such as regular path queries and strongly connected components, cannot be bounded by only the size of the changes in the input and output. They then develop algorithms that have bounded guarantees in terms of the work performed to maintain the computation. On the systems side, there are several graph analytics systems that enable users to develop incremental versions of graph algorithms. GraphBolt [21] is a recent shared-memory parallel streaming system that can maintain dynamic versions of graph algorithms. GraphBolt requires users to write explicit maintenance code in functions such as retract or propagateDelta, which generic systems such as DD do not require. As graph updates arrive, the system executes these functions, and if a user has provided a dynamic algorithm with provable convergence guarantees, the system will correctly maintain the results. iTurboGraph [18] focuses on incremental neighbour-centric graph analytics with the objective of reducing the overhead of the large in-memory intermediate results in systems like GraphBolt and DD. iTurboGraph keeps graph data on disk as streams and models graph traversals as enumerations of walks to avoid maintaining large intermediate results in memory. It avoids expensive random disk accesses by adopting the nested graph windows approach [17].
Our proposed solutions instead keep the intermediate results in memory and drop some of these differences to reduce the memory overhead. Broadly, programming specialized algorithms or GraphBolt-like systems can be more efficient than generic solutions; for example, several references have demonstrated this difference between DD and GraphBolt [21,30]. In contrast, generic solutions such as DD, which we focus on in this work, are fundamentally different and have the advantage that users can program arbitrary static versions of their algorithms, which are then automatically maintained. They are therefore suitable as core incremental view maintenance techniques to integrate into general data management systems.

Generic Techniques and Systems
When an input graph is modeled as a set of relations and a graph algorithm is modeled as a query over these relations, maintaining a graph computation can be modeled as incremental view maintenance, where the view is the final output of the query. Traditional incremental view maintenance (IVM) techniques for recursive SQL and Datalog queries have focused on variants of incremental maintenance approaches [13] such as Delete-and-Rederive, which consists of a set of delta rules that produce the changes in the outputs of queries upon changes to the base relations. These rules can be highly inefficient, as they first delete all derivations of updated/removed facts and then rederive them using the updated facts, only to finally detect whether any of the deletions and/or additions affect the final result. This contrasts with DC, since Delete-and-Rederive does not store intermediate computations to speed up processing. Interestingly, the only incremental open-source Datalog implementation we are aware of does not use the Delete-and-Rederive maintenance algorithm but uses DC [29]. This work compiles Datalog programs into DD programs, so it ultimately uses vanilla DD, which we optimize and use as a baseline in our work. Tegra [14] is a system developed on top of Apache Spark [33] that is designed to perform ad-hoc window-based analytics on a dynamic graph. Tegra allows the creation of arbitrary snapshots of graphs and executes computations on these snapshots. The system has a technique for sharing arbitrary computation across snapshots through computation maintenance logic similar to DC. However, the system is optimized for retrieving arbitrary snapshots quickly rather than for sharing computation across snapshots efficiently. There have also been several systems works that use the generic incremental maintenance capabilities of DC. GraphSurge [30] is a distributed graph analytics system that lets users create multiple arbitrary views of a graph, organized into a view collection, using a declarative view definition language. Users can then run arbitrary computations on these views using a general programming API that uses DD as its execution engine, which allows GraphSurge to automatically share computation when running across multiple views. Reference [32] implements a DC-based Software Defined Network controller that incrementally updates the routing logic as the underlying physical layer changes. Similarly, RealConfig [34] is a network configuration verifier that uses DD to incrementally verify updates to a network configuration without having to restart from scratch after every change.

PRELIMINARIES
In this section we first review the graph and query models used in the paper. Then we summarize the IFE recursive algorithmic subroutine and differential computation.
Table 2 shows the notations and abbreviations used throughout the paper; among them, Det-Drop denotes our partial difference dropping optimization using a deterministic data structure, and Prob-Drop the one using a probabilistic data structure.

Graph and Query Model
We consider property graphs, so vertices and edges can have attributes. Formally, a graph G = (V, E, P_V, P_E), where V is the set of vertices, E is the set of directed edges, P_V is the set of properties over vertices, and P_E is the set of properties over edges. Our continuous queries compute properties of vertices, which we refer to as their states. We will not explicitly model states, but these can be thought of as temporary properties in P_V. For an edge e, we maintain two properties: weight(e) and label(e). If G is unweighted, the weight of each edge is set to 1. We focus on three recursive queries in this paper: SPSP, K-hop, and RPQ. K-hop is the query in which we are given a source vertex s and output all vertices reachable from s that are at a distance (in terms of hops) of at most k, for a given k. Each of these queries can interact with different parts of our graph model; the edge properties that a recursive query needs to access and the vertex states for the computation will be clear from context. In a dynamic graph setting, an initial input graph G_0 may receive several batches of updates. Each batch is defined as a list of edge insertions or deletions B_i = [(u, v, weight, label, +/−)], where each entry includes an edge, its weight/label, and a + or − to indicate, respectively, an insertion or a deletion (updates appear as one deletion and one insertion). We do not consider vertex insertions or deletions, because these occur implicitly in our algorithms through explicit edge insertions and deletions. G_i refers to the actual set of edges in the graph after G receives its i'th batch of updates (so the union of G_0 and the first i batches of updates). The problem of incremental maintenance of a recursive query is to report the changes to the output vertex states of the query after every batch of updates. These changes can be thought of as output in the form of (v, state(v), +/−), for a vertex v and a new vertex state state(v), with +/− indicating the addition or removal of a state.

Iterative Frontier Expansion as a Dataflow
Iterative Frontier Expansion (IFE) is a standard subroutine for implementing graph algorithms that solve many computational problems, including graph traversal queries like SPSP, SSSP, and RPQs. At a high level, the computation takes as input the edges (possibly with properties) of a graph and an initial set of vertex states, and, iteratively, aggregates for each vertex the states of its neighbours to compute a new vertex state and propagates this state to its neighbours. These iterations continue until some stopping criterion is met, e.g., a fixed point is reached and the vertex states converge. Figure 1a shows the template IFE dataflow, which consists of two operators: ExpandFrontier, which expands the frontiers, and Stop, which determines when to stop the query execution. We use and optimize variants of this basic IFE dataflow to evaluate the queries we consider. As an example, Figure 1b shows a specific instance of the IFE dataflow implementing the standard Bellman-Ford algorithm for evaluating an SSSP query, where vertex states are the latest distances from a source vertex s. The ExpandFrontier operator is implemented with two operators, Join and Min. For each vertex u in the frontier, Join sends possible new distances to u's outgoing neighbours (considering u's latest distance and the weights on the edges). For each outgoing neighbour v of u, the new value is computed with a Min operator that takes the smallest distance received for v, also considering v's latest known distance.
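The following is a minimal, non-incremental sketch of this IFE instance in Python; it is our own illustration of the subroutine, not the dataflow implementation used by the systems discussed here:

def ife_sssp(adj, source):
    # adj: {u: [(v, weight), ...]}; returns shortest distances from source.
    dist = {source: 0}
    frontier = {source}
    while frontier:                                  # Stop: fixed point reached
        next_frontier = set()
        for u in frontier:
            for v, w in adj.get(u, []):              # Join: candidate distances
                cand = dist[u] + w
                if cand < dist.get(v, float("inf")): # Min: keep the smallest
                    dist[v] = cand
                    next_frontier.add(v)
        frontier = next_frontier
    return dist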
For the different variants of shortest-path queries, RPQs, and variable-length join queries, we use the IFE template dataflow with always the same Join operator but with possibly different aggregator implementations and different Stop conditions.

Differential Computation Overview
DC [23] is a general technique to maintain the outputs of arbitrarily nested dataflow programs as the base input collections change. Dataflow programs consist of operators, such as Join or Min in Figure 1b, that take input and produce output data collections, which are tables storing tuples. For example, in the IFE dataflow, the edges of an input graph are stored as (src, dst) tuples in the Edges (E) data collection. We refer to collections, such as E, that are inputs to the dataflow as base collections, and to collections that are outputs of an operator as intermediate collections. We review DC through an example. Consider the IFE instance from Figure 1b implementing the Bellman-Ford algorithm, running on the input graph shown in Figure 2. Given this iterative dataflow computation, DC represents the input and output data collections of each operator as partially ordered, timestamped difference sets and maintains these difference sets as the original input collections to the entire dataflow (in this case Edges (E) and Distances (D)) change. Timestamps can be multi-dimensional. In the above computation, the timestamps are two-dimensional: the first dimension is the graph version and the second is the Bellman-Ford iteration, which we refer to as the IFE iteration, represented as a ⟨g, i⟩ pair. Collections, e.g., D, can change for two separate reasons: (1) changes in the graph (E), such as inserting an edge, or (2) changes in the distances (D) during the computation of the IFE iterations. More generally, for each data collection A, let A_t represent the contents of A at a particular timestamp t, and let δA_t be the difference set that stores the "difference tuples" (differences for short) for A at t. Differences are tuples extended with + or − multiplicities. For base data collections, such as E, + and − indicate external insertions and deletions. For intermediate data collections, the multiplicities may not have as clear an interpretation; instead, the +'s and −'s are assigned to tuples so that summing all the δA_s up to a particular timestamp gives exactly A_t. The sum of two difference sets adds the multiplicities of the differences with the same tuple values; if a sum equals 0, the tuple is removed from the collection. Consider an operator Op with one input collection A and one output collection O. DC ensures that for each collection and operator the following equations hold: A_t = Σ_{s ≤ t} δA_s (1) and O_t = Op(A_t) = Σ_{s ≤ t} δO_s (2). DC uses Equations 1 and 2 to compute the differences to store in δA_t and δO_t for each timestamp. Then, DC uses these difference sets to reassemble the correct contents of A and O at each timestamp when needed during its maintenance procedure (explained momentarily).
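The bookkeeping behind Equations 1 and 2 can be sketched as follows; this is a toy model with totally ordered integer timestamps for readability, whereas DC's actual timestamps are partially ordered pairs:

def reassemble(diffs, t):
    # Equation 1: A_t is the sum of all difference sets at timestamps s <= t.
    # diffs: {timestamp: {tuple: multiplicity}}
    acc = {}
    for s, dset in diffs.items():
        if s <= t:
            for tup, mult in dset.items():
                acc[tup] = acc.get(tup, 0) + mult
    return {tup for tup, mult in acc.items() if mult != 0}

def record_difference(diffs, t, new_contents):
    # delta A_t = A_t minus the sum of all earlier difference sets; assumes
    # diffs has no entry at t yet, so reassemble(diffs, t) sums over s < t.
    prior = reassemble(diffs, t)
    delta = {tup: +1 for tup in new_contents - prior}
    delta.update({tup: -1 for tup in prior - new_contents})
    diffs[t] = delta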
Suppose a system has maintained the Bellman-Ford dataflow differentially for many updates to its base collection E; that is, the system has computed the differences of each base and intermediate collection for timestamps ⟨g_0, 0⟩, . . . , ⟨g_k, n⟩, where n is the maximum number of iterations that the dataflow ran on any of G_0, ..., G_k. Given a new, (k+1)'st set of updates to the base collections, DC maintains the dataflow's computation by computing a new set of differences for the collections at some of the timestamps t = ⟨g_{k+1}, i⟩, i ∈ {0...n}, by rerunning some of the operators at these timestamps. If, on G_{k+1}, the Bellman-Ford dataflow computation requires more than n iterations to converge, the system also generates difference sets for timestamps ⟨g_{k+1}, i⟩ with i > n. We next explain DC's maintenance procedure. Suppose that the operators work on partitions of collections; in our example, the collections are partitioned by vertex IDs and each operator performs its computation per vertex ID. Let A_t[v] indicate the contents of A_t's partition for key v. DC reruns an operator Op at different timestamps according to two rules: • Direct rerunning rule: if Op's input has a difference at t for a particular key v, i.e., δA_t[v] is non-empty, DC reruns Op (on key v) at timestamp t. That is, DC reassembles A_t[v] = Σ_{s ≤ t} δA_s[v] and executes Op on A_t[v], which computes a new O_t[v]. Then, DC computes the difference set δO_t[v] = O_t[v] − Σ_{s < t} δO_s[v]. • Upper bound rule: For correctness, Op may need to be executed at timestamps later than t even if there is no immediate difference in its input at those timestamps. Specifically, DC finds every timestamp t' ≮ t at which Op's input has differences for key v and reruns Op at the timestamps that are least upper bounds of such t' and t. Importantly, if no difference is detected in vertex v's partitions of the inputs of an operator for the timestamps from ⟨g_{k+1}, 0⟩ to ⟨g_{k+1}, n⟩, no operator needs to rerun on v. For many dataflow computations, the effects of an update to the graph can be localized to a small neighbourhood, and DC automatically detects the vertices in this neighbourhood on which operators need to rerun. As an example, Table 3 shows the full difference trace of each collection in the IFE dataflow implementing the Bellman-Ford algorithm on the example dynamic graph in Figure 2, which receives two updates: (i) an update of edge (a, d) from weight 20 to 100 at timestamp ⟨g_1, 0⟩; and (ii) an update of edge (b, c) from weight 10 to 100 at timestamp ⟨g_2, 0⟩. These updates are modeled as differences in collection E at those timestamps. Reference [2] formally proves that applying these simple rules to decide which operators to rerun correctly maintains any dataflow computation.

COMPLETE DIFFERENCE DROPPING: JOIN-ON-DEMAND
When maintaining IFE with DC, the memory overhead of storing the difference sets for the output of the Join operator (J) is generally much larger than that for the output of the following aggregation operator (D). Consider the IFE implementation of SPSP, where edges have weights and vertex states represent shortest distances to a source vertex. Suppose that at a particular iteration i of the IFE at a specific graph version g, a vertex u's state is (u, d) and u has deg(u) many outgoing edges. Then, to simulate propagating possible new shortest distances to its outgoing neighbours, J would contain deg(u) many tuples at timestamp ⟨g, i⟩. Similarly, the partition J[v] of J contains one tuple for each of v's incoming neighbours. When maintaining IFE differentially, J's size is therefore commensurate with the number of edges in G, which can be much larger than that of D, whose size is commensurate with the number of vertices in G. Example 1. Observe in Table 3 that D has two differences for a vertex at timestamp ⟨g_1, 1⟩: one with − removing distance 20, and one with + adding distance 100. These changes lead to four differences in J because the vertex has two outgoing edges.
The goal of JOD is to avoid storing any difference sets for J, i.e., to completely drop the δJ's, and to regenerate J_t[v] on demand whenever DC requires running the aggregation operator (in our example Min) on it at a particular timestamp. We first describe an unoptimized version of JOD, then describe an optimization called eager merging that reduces the number of timestamps at which J must be regenerated; the latter is the optimized JOD we have implemented.

JOD
Recall that DC reruns Min on a vertex v at timestamp t = ⟨g_{k+1}, i⟩ if (1) δJ_t[v] or δD_t[v] is non-empty (direct rule); or (2) t is an upper bound of some t_1 and t_2 that satisfy the following conditions (upper bound rule): (i) t_1 ∈ T_1 = {⟨g_{k+1}, i'⟩ | i' < i} and δJ_{t_1}[v] and/or δD_{t_1}[v] is non-empty; and (ii) t_2 ∈ T_2 = {⟨g_{k'}, i⟩ | k' < k+1} and δJ_{t_2}[v] and/or δD_{t_2}[v] is non-empty. If the δJ's are dropped, how can we correctly decide when to rerun Min, and recompute the dropped J_t[v]'s needed for these reruns, so that IFE is still maintained correctly? DC_JOD is our modified version of DC's maintenance subroutine that has this guarantee, and it works as follows. In the description below, when Min is rerun on v at timestamp t, J_t[v] is constructed by inspecting D_t[u] for each incoming neighbour u of v, together with E_t, and performing the join. Note that we do not drop the differences related to D and E. DC_JOD: • Direct Rule (base): For each (u, v, weight, label, +/−) ∈ B_{k+1}, since there is a difference in δE at ⟨g_{k+1}, 0⟩, there is also a difference in δJ_{⟨g_{k+1}, 0⟩}[v]. So we rerun Min on v at ⟨g_{k+1}, 0⟩ (direct rule). • Direct Rule (propagation): Each time Min reruns on a vertex u at a timestamp ⟨g_{k+1}, i⟩, we check whether it generates a difference for δD_{⟨g_{k+1}, i+1⟩}[u]. If so, this implies that there is a difference in δJ_{⟨g_{k+1}, i+1⟩}[w] for each outgoing neighbour w of u. Therefore, we schedule Min on each such w at timestamp ⟨g_{k+1}, i+1⟩ (direct rule). • Upper Bound Rule: Each time we schedule a rerun of Min on a vertex v at timestamp ⟨g_{k+1}, i+1⟩ by either Direct Rule, we also, by the upper bound rule, schedule Min on v at every timestamp ⟨g_{k+1}, j⟩ with j > i+1 for which either of these two conditions is satisfied: (i) there is a non-empty δD_{⟨g_h, j⟩}[v] with h < k+1; or (ii) there is an incoming neighbour u of v with a non-empty δD_{⟨g_h, j⟩}[u] with h < k+1. Our next theorem proves that DC_JOD correctly maintains the IFE dataflow. Theorem 4.1. The set of timestamps at which DC_JOD recomputes Min on any key/vertex ID subsumes the set of timestamps at which DC recomputes Min, and DC_JOD generates the same set of differences for D. Proof. The proof is technical and proceeds by induction on timestamps. We assume for simplicity that there is a global iteration counter on which the IFE computation runs. We take as base cases the timestamps ⟨g_0, 0⟩ to ⟨g_0, n⟩; in these cases the behaviour of DC_JOD simply follows DC, which in turn directly follows the computation performed when running the static version of IFE on the input graph: Min runs on all vertices at ⟨g_0, 0⟩ and then, at each later ⟨g_0, i⟩, on each vertex for which one of the incoming neighbours' states changed. As induction hypothesis, we assume that for each vertex v, DC_JOD runs Min on a superset of the timestamps and correctly generates the same set of differences for the data collection D (and E, whose maintenance is independent of the JOD optimization) up to ⟨g_k, n⟩. We prove that for each key v, if DC runs Min at timestamp ⟨g_{k+1}, i⟩, so does DC_JOD. Note that for t_0 = ⟨g_{k+1}, 0⟩ this is true, because if Min is recomputed by DC at t_0, it is because there is a change in δJ_{t_0}[v], which can only occur if one of v's incoming edges had an edge update in B_{k+1} that triggered DC to rerun Join, which in turn must trigger a difference in δJ_{t_0}[v].
The first (base) rule of DC_JOD ensures that for each edge (u, v) in B_{k+1}, Min is recomputed for v at t_0. Using this as a base case, we do another induction, this time on the second component of the timestamps ⟨g_{k+1}, i⟩: we assume the claim holds from ⟨g_{k+1}, 0⟩ to ⟨g_{k+1}, i⟩ and prove it for ⟨g_{k+1}, i+1⟩. Consider t_{i+1} = ⟨g_{k+1}, i+1⟩. We prove the contrapositive of the claim: if DC_JOD does not recompute Min on v at t_{i+1}, then neither does DC. Note that DC_JOD does not rerun on v only if two conditions hold: (1) none of v's incoming neighbours had a non-empty δD at ⟨g_{k+1}, i⟩; if one had, we would have scheduled v to recompute. This means DC cannot execute Min on v as an application of the direct rule. (2) none of v's incoming neighbours u_1, ..., u_m has a non-empty δD_{⟨g_h, i+1⟩} with h < k+1. We show next that if this condition holds, then all δJ_{⟨g_h, i+1⟩}[v] with h < k+1 are empty, so DC cannot have triggered the upper bound rule either. To prove this we use a third induction, over the graph versions. For the base case, observe that δJ_{⟨g_0, i+1⟩}[v] is empty: this is the one-dimensional case, and J_{⟨g_0, i+1⟩}[v] and J_{⟨g_0, i⟩}[v] are computed by the Join operator using the same incoming neighbour states (recall that we assume all the corresponding δD's are empty). Suppose now that δJ_{⟨g_0, i+1⟩}[v] up to δJ_{⟨g_h, i+1⟩}[v] are empty. We prove that δJ_{⟨g_{h+1}, i+1⟩}[v] is empty: by the induction hypothesis, the summation of the earlier difference sets is empty, and J_{⟨g_{h+1}, i+1⟩}[v] and J_{⟨g_{h+1}, i⟩}[v] are the same set, because they are computed by the Join operator using the same incoming neighbour states for v, as we assume that all δD_{⟨g_h, i+1⟩}[u] are empty for each incoming neighbour u of v. This completes the proof. □ Note that there can be timestamps at which DC_JOD unnecessarily recomputes Min; however, by the correctness argument for DC in reference [2], any timestamp at which DC avoids rerunning a computation is guaranteed to produce empty differences, so these spurious recomputations cannot affect the correctness of DC_JOD. As a corollary of Theorem 4.1, we can therefore conclude that DC_JOD correctly maintains the IFE dataflow. A simple example of a spurious rerun in the SPSP setting is a vertex v with two incoming edges from vertices u_1 and u_2 where, for the purpose of demonstration, u_1 and u_2 start with states 0 initially (at timestamp ⟨g_0, 0⟩), edge (u_1, v) has weight 10, and (u_2, v) has weight 20, so δJ_{⟨g_0, 1⟩}[v] contains (v, 10, +) and (v, 20, +). Suppose that in G_1 the weights of these two edges are swapped. The original DC would not recompute Min on v at timestamp ⟨g_1, 1⟩, because there is no difference directly in v's input (nor through an upper bound rule), whereas DC_JOD would: DC_JOD immediately schedules Min to execute on v because u_1's (or u_2's) state changed at ⟨g_1, 0⟩ and v is an outgoing neighbour of u_1. Example 2. We next demonstrate applications of JOD's rerunning rules on our running example. Consider the first update, at timestamp ⟨g_1, 0⟩, which updates the weight of edge (a, d) from 20 to 100. By the Direct Rule of JOD, we rerun Min on d at timestamp ⟨g_1, 0⟩. Further, by JOD's Upper Bound Rule, we also schedule d to run at timestamp ⟨g_1, 2⟩, because a δD at ⟨g_0, 2⟩ is non-empty for an incoming neighbour of d (condition (ii)). Note that rerunning Min on d at timestamp ⟨g_1, 0⟩ creates a difference for δD_{⟨g_1, 1⟩}[d]. By the Direct Rule, we further schedule Min to rerun on d's outgoing neighbours at timestamp ⟨g_1, 1⟩.
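The essence of JOD, reconstructing a dropped Join partition on demand from the stored D and E collections, can be sketched as follows; this is our own illustration, where dist_at stands for the reassembly of a neighbour's state at a timestamp, which under JOD never reads stored Join differences:

def regenerate_join(v, t, in_nbrs, weight, dist_at):
    # Rebuild J_t[v]: one candidate distance per incoming neighbour u of v,
    # joining u's state at timestamp t with the weight of the edge (u, v).
    return {(v, dist_at(u, t) + weight[(u, v)]) for u in in_nbrs[v]}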
Eager-Merging
The naive implementation of JOD can be expensive, because the number of possible timestamps ⟨g_h, j⟩ with h < k+1 to inspect can grow unboundedly as batches of edge updates continue to arrive. The eager merging optimization we describe next, which extends a periodic merging optimization of the DD system (explained momentarily), reduces the number of these timestamps. Consider the point at which a new set of updates for graph version g_{k+1} has arrived and the system has finished maintaining the computation for g_k. There are then k × n different timestamps in the computation so far. Let us think of these timestamps as a 2D grid with the graph version indices as columns and the IFE iterations as rows, as in Table 3. As more updates arrive, the timestamps grow in the graph version dimension to g_{k+1}, g_{k+2}, etc., so more columns are added to this grid. Consider reassembling the contents of some collection D at timestamp ⟨g_{k+1}, 0⟩. To do so, DD has to sum the differences in R_0 = {δD_{⟨g_0,0⟩}, ..., δD_{⟨g_k,0⟩}}. To reassemble D at timestamp ⟨g_{k+1}, 1⟩, DD has to sum the difference sets in R_0 and R_1 = {δD_{⟨g_0,1⟩}, ..., δD_{⟨g_k,1⟩}}, and so on. Observe that once the (k+1)'st batch of graph updates has arrived, the system will never again re-execute an operator at a timestamp ⟨g_h, i⟩ with h < k+1. Therefore, instead of summing the sets in each R_i multiple times for the possible timestamps ⟨g_{k+1}, j⟩ with j ≥ i, the original DD periodically unions the individual difference sets in R_i into a single difference set. This allows DD to reassemble collections faster and to store the difference sets more compactly. Instead of periodic merging, we merge the differences eagerly along the graph-version dimension as we run DC's maintenance procedure for ⟨g_{k+1}, 0⟩ to ⟨g_{k+1}, n⟩: as soon as DC finishes maintaining ⟨g_{k+1}, i⟩, we merge the difference sets of D for the timestamps ⟨g_k, i⟩ and ⟨g_{k+1}, i⟩. This guarantees that, for any vertex, we only need to keep one-dimensional timestamps, i.e., only the IFE iteration. Table 4 shows the state of the differences stored in the system with eager merging while the DC algorithm is in the process of maintaining the computation at timestamp ⟨g_2, 2⟩; the differences in grey cells have been merged into the rightmost cell of their row. In the presence of eager merging, whenever JOD needs to check whether a δJ_{⟨g_h, j⟩} with h < k+1 is non-empty for a vertex, it only needs to inspect the timestamps with h = k. We end this subsection with a discussion of another benefit of eager merging: it allows dropping all differences with negative multiplicities from the difference sets of D. This is because, in the algorithms we consider, vertices take one unique state at each iteration of IFE. Therefore, with one-dimensional timestamps, the change of a vertex's state from d to d' at iteration i is always represented by two differences: (i) one with positive multiplicity for d'; and (ii) one with negative multiplicity for d. For example, readers can check that once DC with eager merging finishes maintaining the computation for all timestamps of graph version g_2, the distances stored for one of the vertices in the running example are {(1, 100, +), (3, 100, −), (3, 50, +)}, where the first value in each tuple is the timestamp, now represented only by the IFE iteration number. These differences can be stored as {(1, 100), (3, 50)}, with (3, 100, −) implied. In the absence of negative multiplicities, we can also avoid doing any summations when computing the state of a vertex at a timestamp i, i.e., D_i[v]. Instead, we can simply find the latest iteration i* ≤ i at which the vertex has a (positive) difference and return it.
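With eager merging, this lookup reduces to a scan over a vertex's single-dimensional timestamps; a sketch, assuming the differences of a vertex are kept as a list of (iteration, state) pairs sorted by iteration:

def state_at(diffs_v, i):
    # diffs_v: [(iteration, state), ...], ascending, positive differences only.
    # Return the state at the latest iteration i* <= i, if any.
    latest = None
    for it, state in diffs_v:
        if it > i:
            break
        latest = state
    return latest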
PARTIAL DIFFERENCE DROPPING
We next investigate optimizations that partially drop the differences in D. When we apply JOD, D is the only data collection for which we store differences, apart from the original edges of the graph. Partially dropping the differences in D allows trading scalability off against query performance: the memory overhead of storing D decreases, but performance also decreases, because when DC needs to reassemble the contents of D at a timestamp, the dropped differences have to be recomputed. In this section, we describe optimizations with different scalability/performance tradeoffs. Throughout this section we assume that DC runs with eager merging and use single-dimensional timestamps to refer to data collections, i.e., D_i instead of D_{⟨g, i⟩}. A partial dropping optimization has two key components: • Dropped Difference Maintenance: When DC accesses D_i, the system needs to identify whether a difference with key/vertex ID v was dropped at timestamp i. The system therefore needs to maintain the dropped vertex ID-timestamp pair information. • Selecting the Differences to Drop: The system also needs to decide which differences to drop and which ones to keep. We describe alternative approaches to both components. A third important decision is how many differences to drop given a memory constraint. At a high level, the answer is clear: drop as little as possible without violating the memory constraint. In practice, however, estimating this amount may be challenging, because each update to the graph changes the number of differences needed to maintain the registered queries; further, a system needs to estimate and plan for newly registered or deregistered queries. In such dynamic scenarios, systems can adopt adaptive techniques that determine how many differences to drop from each query by observing the stored differences. This is a topic for a rigorous future study. Within the context of this paper, we assume a user-defined setting that drops each difference with probability p (see Section 5.2).

Dropped Difference Maintenance
One natural approach to maintaining the dropped vertex ID-timestamp pairs (VT pairs for short) is to store them explicitly in a separate data structure, DroppedVT. We discuss two possible designs for this data structure. We first present a straightforward deterministic data structure and discuss its scalability bottleneck. We then propose a probabilistic data structure that addresses this bottleneck but can lead to spurious recomputations of undropped differences. In our evaluations, we show that, despite this possible performance disadvantage, our probabilistic approach can still be more performant, as it can drop fewer differences than the deterministic approach under limited memory settings. Deterministic Difference Maintenance (Det-Drop). Det-Drop uses a hash table to implement DroppedVT. During maintenance, when D_i[v] is needed, we perform the AccessD_WithDrops procedure below. Before describing the procedure, recall from Section 4.2 that we do not store differences with negative multiplicities for D when we eagerly merge differences, so we do not need to do any summation to compute D_i[v]; we only need to find and return the state at the latest iteration at or before i for which there is a difference for v. AccessD_WithDrops: 1. Let D^v be the index that stores the difference sets for v.
We check D^v for the latest iteration i_s ≤ i, if any, for which the system has stored a difference for v. 2. Check DroppedVT for the latest iteration i_d ≤ i, if any, for which the system has dropped a difference for vertex v. 3. If an i_d > i_s exists, recompute the dropped difference at i_d and return its value. Otherwise, return the value stored in D^v at i_s. Note that to recompute a dropped difference at timestamp i_d in step 3, we rerun the aggregation operation, e.g., Min, for vertex v over iteration i_d − 1. This is similar to how we rerun the Min operator for vertices at different timestamps as part of the maintenance procedure: using D_{i_d − 1} and E_{i_d − 1} we rerun Min and compute D_{i_d}[v]. However, when we access D_{i_d − 1}, we recursively call AccessD_WithDrops, as there may be dropped differences for v or for one of its incoming neighbours at timestamp i_d − 1. This may therefore lead to further recomputations, which may cascade. Example 3. Consider the running example. Suppose that after the first update, the system decides to drop a difference +(v, 30) of one of the vertices at iteration 1. Consider now the arrival of the second update, where the weight of (b, c) changes from 10 to 100. To maintain the computation differentially, Min is rerun on c at ⟨g_2, 1⟩ (due to the Direct Rule) and then, due to the Upper Bound Rule, at every timestamp at which c has a difference. c already has a difference that is not dropped at iteration 2, so it is scheduled to rerun at iteration 2. We further check whether c has any dropped differences at iterations 3 and 4; since it does not, we do not schedule reruns at those iterations. Then, when rerunning at iteration 2, we need the distances of c's two in-neighbours at iteration 1 and check whether they have any differences. One of them has a stored difference but the other does not, so we check whether the latter has a dropped difference at iteration 1. Since it does, we recompute that difference by rerunning Min at iteration 1. Explicitly keeping track of all dropped VT pairs requires additional state that is proportional to the number of dropped differences, which limits scalability. Note that a difference is simply a triple consisting of a VT pair plus a vertex state (e.g., a distance). Suppose we need b_vt bytes to store a VT pair and b_s bytes to store the actual state of a difference. Then, for every dropped b_vt + b_s bytes, we have to keep b_vt bytes in DroppedVT. This means that even if we drop 100% of the differences, there is a hard limit of (b_vt + b_s)/b_vt on the scalability benefit we can obtain from deterministically dropping differences. Our next optimization overcomes this limitation by using a probabilistic data structure. Probabilistic Difference Maintenance (Prob-Drop). Prob-Drop drops the entire difference, i.e., both the VT pair and the state, and uses a probabilistic data structure to maintain the dropped VT pairs. Probabilistic data structures, such as Bloom [3] or Cuckoo filters [10], have the advantage that their sizes can remain much smaller than the amount of data they represent. Prob-Drop requires a probabilistic data structure that never returns false negatives: if a VT pair was dropped and the structure returned false when queried, we would ignore this difference and reassemble incorrect vertex states during maintenance. The structure can, however, return false positives, because a false positive only leads to unnecessarily recomputing a vertex state, and the recomputed state is still correct. We use a Bloom filter, into which we insert the dropped VT pairs. Using a Bloom filter requires minor modifications to the AccessD_WithDrops procedure from Section 5.1.1. Specifically, in the second step, the procedure checks the Bloom filter for each potentially dropped difference at the iterations j ∈ (i_s, i], starting from i, to see whether a VT pair (v, j) was dropped. If the answer is negative, processing moves to the next j until we arrive at i_s; in that case, the value from D^v at iteration i_s (obtained in step 1) is the correct value of v at iteration i. If the answer is positive for an iteration i_d ∈ (i_s, i], the value of the pair (v, i_d) is recomputed.
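A sketch of this modified lookup; the Bloom filter is any standard implementation offering add/might_contain, and recompute stands for rerunning the aggregation at a given iteration as described for Det-Drop:

def lookup_with_bloom(stored_diffs_v, bloom, v, i, recompute):
    # Step 1: latest stored difference at an iteration i_s <= i (or -1 if none).
    i_s = max((it for it, _ in stored_diffs_v if it <= i), default=-1)
    # Step 2: scan the iterations (i_s, i] from the top; a Bloom hit means the
    # difference for (v, it) may have been dropped and is recomputed. False
    # positives only cost extra work; the recomputed state is still correct.
    for it in range(i, i_s, -1):
        if bloom.might_contain((v, it)):
            return recompute(v, it)
    # Step 3: nothing was dropped above i_s; the stored value is correct.
    if i_s < 0:
        return None  # v has no state up to iteration i
    return next(s for it, s in stored_diffs_v if it == i_s)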
In our evaluations, we show that Prob-Drop can increase the scalability of a GDBMS more than Det-Drop, because its size does not grow as the system drops more differences. Furthermore, in some settings, the system does not need to drop as many differences with Prob-Drop as with Det-Drop to reach a given scalability level (in our evaluations, a given number of concurrent queries).

Selecting the Differences To Drop
The second component of a partial difference dropping optimization is deciding which differences to drop. A baseline heuristic is to drop each difference uniformly at random. We next describe a more optimized technique that uses the degree information of vertices to select the differences to drop. Degree-based Difference Dropping. A GDBMS using DC to maintain continuously running recursive queries can exploit the fact that the dataset is a graph and that the partitioning keys are therefore vertex IDs. Intuitively, when executing the recursive algorithms we consider, high-degree vertices are used frequently when computing the states of other vertices, i.e., they are accessed more often by DC when maintaining the input IFE dataflow, so dropping their differences can lead to frequent vertex state recomputations. Conversely, low-degree vertices are accessed relatively rarely by DC. Based on this intuition, we implement a heuristic that takes two thresholds, d_min and d_max, for the minimum and maximum degrees, respectively, and a probability parameter p, and decides for each difference with VT pair ⟨v, i⟩ whether to drop it based on the degree deg(v) of vertex v (Figure 3). We found empirically that setting d_min to 2 and d_max to the top 80th degree percentile of the input graph is reasonable for the graphs used in our experiments. We note that more sophisticated properties, such as the betweenness centrality of vertices, could also be used to decide which differences to drop. A practical advantage of using degrees is that degree information is readily available in adjacency list indices, which are ubiquitously used in GDBMSs.
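The precise rule is given in the paper's Figure 3; the following sketch encodes our reading of it (drop the differences of low-degree vertices, keep those of high-degree vertices, and flip a p-biased coin in between) and should be treated as an assumption rather than the exact rule:

import random

def should_drop(deg_v, d_min=2, d_max=None, p=0.5):
    # Low-degree vertices are rarely read and cheap to recompute: drop.
    if deg_v <= d_min:
        return True
    # High-degree vertices are read often: keep their differences.
    if d_max is not None and deg_v >= d_max:
        return False
    # In between, drop with the user-defined probability p.
    return random.random() < p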
EVALUATION
6.1 Experimental Setup
We run all experiments on a Linux server with 12 cores and 32 GB of memory, unless mentioned otherwise. For each experiment, we report the total time, in single-threaded execution, needed to update the graph and the query answers after applying a batch of updates. For each dataset, we shuffle the edges and split them such that 90% of the data is used as the initial graph, while the remaining 10% models the dynamism in the graph and constitutes the updates. We use a default batch size of 1, because differential computation is more suitable for near-real-time dynamic graph updates than for infrequent bulk updates; we evaluate the effect of the batch size on the performance of DC in Appendix A. We use insertion-only workloads in our main experiments; in Appendix B we also present experiments with workloads containing different proportions of deletions. 6.1.1 Datasets. We use a combination of real and synthetic graphs, summarized in Table 5 (the reported degrees are for the initially loaded graphs in the experiments). The four real graphs are Skitter, LiveJournal, Patents, and Orkut, all obtained from [19]. Skitter represents an internet topology from several scattered sources to millions of destinations on the internet, and its vertices are strongly connected. LiveJournal and Orkut [19] represent social network interactions with vertex degree distributions that follow a power law. Patents [19] is the citation graph of all utility patents granted between 1975 and 1999. In order to experiment with weighted SPSP queries, we created weighted versions of these graphs by assigning each edge a random integer weight between 1 and 10, chosen uniformly at random. LDBC SNB [9] is a synthetic graph that models dynamic interactions in social network applications; it has edge labels, which we use in the RPQ queries. LDBC SNB includes several types of entities, such as persons and forums, and each edge has a label such as Knows or ReplyOf. We use a scale factor of 10, which generates a graph of 7.2M vertices and 77.6M edges. 6.1.2 Workloads. We use SPSP, K-hop, and several popular RPQ queries as our main query workloads. We run SPSP and K-hop on the weighted and unweighted versions of the real datasets, respectively. For SPSP query generation, we pick a random pair of vertices in the graph. For K-hop, we pick a random set of vertices and set the maximum number of hops to k = 5, making it a 5-hop query. RPQ queries require edge labels, so this experiment is conducted only on the LDBC dataset. We use a set of RPQ templates from real-world workloads, as defined in reference [4], which were also used to study streaming RPQ evaluation in reference [25]. There are only two recursive relationships in LDBC SNB, Knows and ReplyOf; recursive here refers to an edge label that can appear consecutively along an arbitrary path. Therefore, templates that expect more than two recursive relationships cannot be used on LDBC SNB. We constructed queries from the remaining templates using the Likes, Knows, ReplyOf, and hasCreator labels of the LDBC SNB dataset. SPSP, K-hop, and RPQs are queries that can be supported in the high-level query languages of GDBMSs, and they are the main queries that motivate our work. However, our optimizations are applicable to other computations based on IFE. To demonstrate this, we also implemented in our setting the differential versions of the standard weakly connected components (WCC) algorithm, which is based on iteratively propagating and keeping track of minimum vertex IDs, and of PageRank (PR), run for a fixed number of 10 iterations. Baselines and Different GraphflowDB Configurations. We implement our optimizations inside the continuous query processor (CQP) of GraphflowDB [16], a shared-memory GDBMS. We extended the CQP of GraphflowDB to implement a baseline DC and our optimizations for maintaining the recursive queries we cover (see Appendix C for the details of our implementation). We refer to the GraphflowDB configurations for the different variants of DC as VDC, JOD, Det-Drop, and Prob-Drop. We compare our proposed optimizations against three baselines: DD, Scratch, and VDC. DD is an implementation of our workloads in the Differential Dataflow system [22], which is the reference implementation of differential computation. Scratch is a baseline extension of GraphflowDB's CQP that supports our queries by simply executing each query from scratch after every batch of changes.
Scratch represents the performance of a baseline GDBMS that does not support continuous queries. We use an IFE-like label propagation algorithm for K-hop queries and RPQs. We note that this algorithm is identical to what is referred to as the "incremental" fixed point algorithm in the original Differential Dataflow paper [23] (see Figure 1 in the reference). This term is used to indicate that only the vertices whose values are updated in a particular iteration propagate their labels in that iteration (as opposed to all vertices). VDC is the vanilla differential computation implementation in GraphflowDB. The difference between VDC and DD is that the former is our single-machine implementation in Java, while the latter is a distributed system implemented in Rust. In Section 6.2, we verify that VDC behaves similarly to DD (and even outperforms it in terms of runtime); therefore, we use VDC as a suitable baseline for our optimizations that is implemented inside the same GDBMS. VDC ingests and stores the input graph in the same way, uses similar data structures to store the differences, and uses the same programming language as the following GraphflowDB configurations: (1) JOD: the DC version that implements the join-on-demand optimization from Section 4; (2) Det-Drop: integrates the deterministic partial dropping optimization on top of JOD, as discussed in Section 5; (3) Prob-Drop: integrates the probabilistic partial dropping optimization, also from Section 5, on top of JOD. We also evaluate different versions of Det-Drop and Prob-Drop to assess our degree-based difference dropping optimization. Baseline Evaluation. Our first set of experiments measures the performance of Scratch, DD, and VDC. Our goals are: (i) to obtain baseline measurements for our optimized DC implementations; and (ii) to validate that VDC is competitive with DD, justifying its use as a more suitable baseline than DD for our optimizations. In this experiment, we ran SPSP and K-hop queries, WCC, and PR on the Skitter, LiveJournal, Patents, and Orkut datasets, and all three RPQ queries on the LDBC dataset. For the SPSP, K-hop, and RPQ workloads, we used 10 queries. In each experiment, we simulated dynamism by using 100 insertion-only batches, with 1 edge in each batch. Our results are shown in Figure 4 (ignore the JOD charts for now). As shown in the figure, Scratch is, as expected, several orders of magnitude slower than VDC and DD but also has the smallest memory overheads. Scratch is most competitive with VDC and DD on PR, though still over an order of magnitude slower. This is expected because, as also observed in prior work [30], during differential maintenance the changes in PR are harder to localize to small neighbourhoods than in the other computations, i.e., small changes are more likely to change the PR values of a larger number of vertices. We observe that VDC is slightly faster than DD while using comparable memory. We expect VDC to be faster than DD because DD assumes a distributed setting where messaging involves network protocols, even though we run DD on a single machine; VDC instead assumes a shared-memory setting and avoids such communication. Appendix B repeats these experiments with two different update workloads that include deletions: (i) where 25% of the batches are deletions; and (ii) where 50% of the batches are deletions. We observe that the performance tradeoffs our optimizations offer are broadly similar across these different update workloads. Note that this is expected, as the amount of updates we ingest is relatively minor compared to the number of edges we start with, which, recall, comprise 90% of all edges in each dataset.
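To make the "incremental" fixed point concrete, here is a minimal sketch of a K-hop evaluation in which only vertices updated in the previous iteration propagate their labels. The adjacency-list layout and names are our assumptions, not GraphflowDB code.

```java
import java.util.*;

// Frontier-based K-hop label propagation: only newly updated vertices
// propagate in each iteration, mirroring the "incremental" fixed point.
public class KHopPropagation {
    // adj.get(v) = out-neighbours of v; returns hop distance (or -1) per vertex.
    static int[] kHop(List<int[]> adj, int source, int maxHops) {
        int n = adj.size();
        int[] hops = new int[n];
        Arrays.fill(hops, -1);
        hops[source] = 0;
        Set<Integer> frontier = new HashSet<>(List.of(source));
        for (int iter = 1; iter <= maxHops && !frontier.isEmpty(); iter++) {
            Set<Integer> next = new HashSet<>();
            for (int v : frontier)
                for (int w : adj.get(v))
                    if (hops[w] == -1) { hops[w] = iter; next.add(w); }
            frontier = next; // only updated vertices propagate next iteration
        }
        return hops;
    }
}
```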
Overall, these results confirm that VDC is a more suitable baseline for analyzing the effects of our optimizations than DD. In the remainder, we use VDC and Scratch as the main baselines to evaluate our proposed optimizations on top of VDC. Join-On-Demand. Our next set of experiments aims to study the performance and memory benefits and overheads of JOD. JOD is guaranteed to reduce the memory overhead of a system implementing vanilla differential computation, e.g., DD or VDC. However, in terms of performance, JOD has both computational overheads and benefits. On the one hand, JOD saves the work that vanilla differential computation does to store differences. On the other hand, as updates arrive, JOD requires re-computing the join on demand by reading the states of a vertex's in-neighbours at different timestamps to inspect whether some partitions are non-empty. This is less performant than materializing difference sets and inspecting them to see if they are non-empty. Our goal is to answer: What is the net effect of these performance benefits and costs? What governs this net effect? Our hypothesis is that JOD's computational overhead increases proportionally with the average degree of the input graph. This is because, given a vertex v, the cost of looping through v's incoming neighbours to re-compute the join at a timestamp should grow with the number of neighbours of v. At the same time, the benefit JOD derives from not storing the differences depends on how many differences are produced by the Join operator. This depends partially on the average degree but also on the average number of times the state of a vertex changes during a computation. For example, readers can see that in the full difference trace of our running example, which is presented in Table 3, there is a new difference only when the state of a vertex changes. As we will momentarily demonstrate, this number is quite small and does not necessarily grow as the average degree increases in our computations. Therefore, as the average degree increases, we expect JOD's overhead to increase faster than its benefits, and we should eventually see VDC outperforming JOD in terms of performance. In our first experiment, we rerun our baseline experiments from Section 6.2 with JOD. The average in-degrees of Orkut, Skitter, LiveJournal, Patents, and LDBC (for the subgraph containing Knows edges) are, respectively, 34.4, 12.6, 14.2, 4.7, and 4.7. So we expect VDC to be faster than JOD by larger margins on Orkut and Skitter and by smaller margins on Patents and LDBC. Our results are shown in Figure 4. As expected, we observe that JOD uses significantly less memory (between 1.2× and 5.5×) than VDC irrespective of the input graph or query. In terms of performance, we find, as expected, that VDC is faster than JOD on Orkut (1.3× on K-hop) and Skitter (4.6× on K-hop), and slower than JOD on Patents (2.4× on SPSP) and on LDBC RPQs (by a factor of 1.2×). Although the previous experiment provides support for our hypothesis, the average degrees of the input graphs we used are still relatively close to each other, and we did not control for the queries we used across these datasets. We next perform a more controlled experiment. Using LDBC, we systematically increase the average degree of the Knows subgraph from its original value of 4.7 to 20, 100, 500, and 1000, and run SPSP, K-hop, and RPQ query Q1 on each version of these graphs. We increase the average degree by adding random edges that connect vertices in this subgraph. Our results are shown in Figure 5.
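The following sketch contrasts the two lookup strategies: vanilla DC consults materialized join differences, while JOD re-derives them by scanning the in-neighbours' difference lists at the requested timestamp. The data layout and names are our assumptions; the check shown here is a simplification of inspecting whether a join partition is non-empty.

```java
import java.util.*;

// Minimal join-on-demand sketch; not GraphflowDB code.
public class JoinOnDemand {
    // State differences: vertex -> sorted (iteration -> value) map.
    static Map<Integer, TreeMap<Integer, Long>> stateDiffs = new HashMap<>();
    static List<int[]> inNeighbours; // inNeighbours.get(v) = in-neighbours of v

    // Vanilla DC materializes the Join operator's output differences, so
    // checking whether v has a join difference at iteration t is one lookup.
    // JOD instead re-derives the answer from the in-neighbours' states:
    static boolean hasJoinDifferenceAt(int v, int t) {
        for (int u : inNeighbours.get(v)) {
            TreeMap<Integer, Long> diffs = stateDiffs.get(u);
            // u contributes to v's join partition at t if u's state changed at t
            if (diffs != null && diffs.containsKey(t)) return true;
        }
        return false; // cost grows with v's in-degree, as hypothesized above
    }
}
```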
As we expected, when the average degrees are small, specifically for RPQ queries, JOD either outperforms or is competitive with VDC, but as the degrees get large, VDC consistently outperforms JOD. The numbers on top of the VDC and JOD bars in Figure 5 are the average number of differences per vertex, over vertices that have a non-zero number of differences, measured at the end of the experiment. Note that this number is always 1.0 for K-hop and Q1, as vertex values in these computations take only one value, assigned at the first iteration in which a vertex becomes reachable. We note that this number does not necessarily increase as the degree increases, and it remains small relative to the average degree. It can even decrease in SPSP, primarily because SPSP converges faster when the degrees are larger, i.e., the number of SPSP iterations decreases, so the number of distinct differences vertices accumulate can decrease. Selecting the Differences To Drop. We next evaluate the effectiveness of the two strategies we discussed in Section 5.2 for selecting which differences to drop in our partial dropping optimization. We refer to these as: (i) Random, which selects differences uniformly at random with a given probability; and (ii) Degree, which drops differences based on vertex degrees. As we discussed in Section 5.2.1, we expect Degree to outperform Random. We run 10 K-hop queries over Skitter with 100 insertion-only batches of size 1, using Det-Drop and Prob-Drop with both the Random and Degree selection strategies; in total, we have 4 system configurations. For Degree, we set its scaling parameter to 2 and its degree threshold to the 80th percentile of the vertex degrees. We increase the dropping probability for Det-Drop and Prob-Drop from 0 to 100% and plot the total number of dropped differences on the x-axis and the runtime on the y-axis. Figure 6 shows our results. First, observe that, as expected, all of the lines in the figure go up, i.e., as we drop more differences, each system configuration gets slower. Note that in JOD, storing fewer differences potentially leads to a performance advantage, as fewer differences have to be maintained. This advantage does not exist for the partial dropping optimizations because they still have to store and maintain auxiliary data structures to keep track of the dropped differences. So dropping differences primarily has a performance cost, as it can force the system to re-compute those dropped differences. Second, observe that, as we expect, configurations with Degree (the two bottom lines), irrespective of whether we use Det-Drop or Prob-Drop, are between 3 and 5 orders of magnitude faster than the configurations with Random (the two top lines). Note that the lines with Random have a bigger span on the x-axis because there are limits on the minimum and maximum number of differences that configurations with Degree can drop. For example, even when the dropping probability is 0, the configurations with Degree still drop all differences of vertices whose degree is below the threshold, whereas Random can drop as few as 0 differences. We perform further analyses using a micro-benchmark to better explain the performance difference between Random and Degree. We first fix the drop probability (0.1), a workload (10 K-hop queries), and a dataset (Skitter with 100 batches of 1 edge insertion each). We then use Det-Drop with the Random selection policy and count, for each vertex v, the number of times Det-Drop re-computed a dropped difference with key v, i.e., how many times v was accessed at some point but its state had to be re-computed because a difference of v had been dropped into DroppedVT.
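A minimal sketch of what the Degree selection policy could look like, inferred from the minimum/maximum dropping behaviour described above: differences of vertices below the percentile threshold are always dropped, very-high-degree vertices are never dropped, and the probability applies in between. The factor of 2 and the threshold are the stated parameters, but this exact shape is our assumption; Section 5.2.1's true policy may differ.

```java
import java.util.Random;

// Sketch of Random vs. Degree drop selection; names are ours.
public class DropSelection {
    static final Random RNG = new Random();

    // Random policy: drop any difference with fixed probability p.
    static boolean dropRandom(double p) {
        return RNG.nextDouble() < p;
    }

    // Degree policy (assumed shape): low-degree vertices are cheap to
    // re-compute, so their differences are always dropped; differences of
    // vertices above 2x the threshold are always kept; p applies in between.
    static boolean dropByDegree(int degree, int threshold, double p) {
        if (degree < threshold) return true;       // minimum-drop floor
        if (degree >= 2 * threshold) return false; // maximum-drop ceiling
        return RNG.nextDouble() < p;
    }
}
```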
Then we bucket vertices by their degree, and for each degree bucket (e.g., [1, 10)) we plot the average number of re-computations per vertex in that bucket. Figure 6b shows our results. The bar chart uses the left y-axis and represents the average number of re-computations for vertices in different degree buckets, where each tick on the x-axis marks a bucket that extends to the next tick. The line chart uses the right y-axis and plots the vertex degree distribution of the graph. As shown in Figure 6b, the degree distribution follows a power law, as is commonly the case in real-world graphs. The average number of re-computations per vertex follows the opposite trend: vertices with smaller degrees on average lead to fewer re-computations, e.g., vertices with degree more than 2000 lead to more than 1000 re-computations on average, while those with degrees in [1, 10) lead to fewer than one re-computation. Since the memory saving of dropping one difference is the same regardless of the vertex degree, it is more efficient to drop more differences from vertices with smaller degrees, which is exactly what our Degree strategy does. Difference Maintenance. Our next set of experiments focuses on evaluating Det-Drop and Prob-Drop. In the experiments reported in Figure 6a, we evaluate the performance of Det-Drop and Prob-Drop when both drop exactly the same number of differences under the Degree and Random selection policies. They behave similarly when using the same selection strategy, with Det-Drop slightly more performant, which is expected, as Prob-Drop may perform spurious re-computations due to false positives. However, Det-Drop and Prob-Drop do not have similar memory footprints when they drop the same number of differences: Prob-Drop's approach is more memory-efficient than Det-Drop's. We next provide a more systematic evaluation of the scalability and performance tradeoffs of these techniques under the Degree policy, which, as we established, outperforms Random. Our experiment analyzes how much Det-Drop and Prob-Drop increase the system's scalability, in terms of the number of concurrently maintained queries relative to VDC for a given memory budget, for SPSP, K-hop, and RPQ queries. We omit PageRank and WCC from these experiments, as these are batch computations and we cannot increase the number of queries for them. For completeness, we also evaluate the performance of JOD and Scratch. To simulate a fixed-memory-budget environment, we give each system configuration 10 GB of memory for storing differences and/or additional data structures, e.g., to manage dropped VT pairs. We repeat our experiment from Section 6.2 with the same dataset and query combinations. However, we now increase the number of queries systematically until the system runs out of memory. Figure 7 shows our results. We use the maximum scalability level of VDC, which is the configuration with the highest memory overheads, as the lowest number of queries, and increase the number of queries in the system from this point on. That is why VDC appears as a single grey point in our charts. For Det-Drop and Prob-Drop, for each number of queries, we find the lowest dropping probabilities for Det-Drop and for Prob-Drop that can support that many queries and report their performance at these levels. Note that here we are assuming an ideal setting in which the system is able to find this lowest dropping probability. Although this may be challenging in practice, it allows us to evaluate the most performant versions of Det-Drop and Prob-Drop at the given query level.
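One plausible way to realize this ideal setting is a search over dropping probabilities: memory use shrinks as more differences are dropped, so the lowest feasible probability for a given query count can be bracketed by bisection. This is our sketch, not the paper's procedure; memoryNeeded() is a hypothetical measurement hook standing in for actually running the configuration.

```java
// Sketch of finding the lowest dropping probability that fits the budget,
// assuming memory use decreases monotonically in the dropping probability.
public class LowestDropProbability {
    static final double BUDGET_BYTES = 10L * 1024 * 1024 * 1024; // 10 GB

    // Assumed measurement hook; not part of any real API.
    static double memoryNeeded(int numQueries, double dropProb) {
        throw new UnsupportedOperationException("measure a real run here");
    }

    static double lowestFeasibleDropProb(int numQueries) {
        double lo = 0.0, hi = 1.0;
        for (int i = 0; i < 20; i++) { // 20 halvings, roughly 1e-6 precision
            double mid = (lo + hi) / 2;
            if (memoryNeeded(numQueries, mid) <= BUDGET_BYTES) hi = mid;
            else lo = mid;
        }
        return hi;
    }
}
```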
We show the dropping probability used for Det-Drop under the Det-Drop line, and the one used for Prob-Drop above the Prob-Drop line. We make several observations. First, as in Figure 4, we see that JOD can increase the number of queries that can be concurrently run by 2.3×-10× over VDC. Second, we observe that increasing the number of queries with the partial dropping optimizations can increase the runtime super-linearly beyond a particular point, where increasing scalability requires increasing the dropping probability, which in turn leads to more differences being re-computed. However, we see that partially dropping differences can still increase the number of concurrent queries by up to 20× relative to VDC while still outperforming Scratch by several orders of magnitude. Third, we compare the performance of Det-Drop and Prob-Drop. As mentioned earlier, Det-Drop does not incur any spurious re-computations due to false positives but has to drop more differences than Prob-Drop to scale to more queries (as it has a higher memory overhead for storing the dropped VT pairs). We see that this advantage and disadvantage overall balance out at the scalability levels both Det-Drop and Prob-Drop can handle, i.e., they perform similarly at these levels. However, Prob-Drop can consistently scale to higher levels than Det-Drop (up to 1.5×). Finally, we performed a similar experiment for PR and WCC, for which we can only run one "query". We used LiveJournal and picked a memory budget of 2.75 GB for PR and 2 GB for WCC (which requires less memory), and picked the lowest drop probabilities at which these budgets were sufficient for Det-Drop and Prob-Drop. Figure 8 shows our results, with the necessary drop percentages presented on top of the bars. We find that on PR, Det-Drop requires a 100% dropping rate and takes 369 seconds to complete, while Prob-Drop requires a 90% dropping rate and takes 268 seconds to complete. On WCC, Det-Drop requires a 90% dropping rate and takes 11.9 seconds to complete, while Prob-Drop requires a 70% dropping rate and takes 11.5 seconds to complete. Overall, similar to our previous experiments, Prob-Drop needs to drop fewer differences to successfully complete the experiment and leads to better performance. Further Applications of Diff-IFE. Our experiments so far focused on demonstrating the performance tradeoffs that our optimizations offer when evaluating continuous recursive queries using Diff-IFE. Our final set of experiments does not evaluate our optimizations. Instead, we aim to demonstrate further applications of Diff-IFE in systems. Specifically, we show that we can improve our Scratch baseline for SPSP queries by using and differentially maintaining a popular shortest-path index, the landmark index [12,26]. A landmark index is a single-source shortest-distance index, i.e., it stores the shortest path distance from a "landmark" vertex to all other vertices. We use landmark indices to prune the search space of Scratch. Specifically, for a shortest path query from a source s to a target t and a landmark l, the sum of the distances from s to l and from l to t gives an upper bound on the shortest distance between s and t. Similarly, the difference between the l-to-t distance and the l-to-v distance, for any vertex v, gives a lower bound on the distance from v to t. If v is visited at distance d in the Bellman-Ford algorithm and d plus this lower bound exceeds the upper bound, then we can avoid traversing v, as it cannot be on a shortest path from s to t.
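Below is a minimal sketch of these triangle-inequality bounds, assuming an undirected graph and precomputed landmark-to-vertex distance arrays; the names are ours, not the paper's.

```java
// Landmark-based pruning bounds for an s-to-t shortest path search.
public class LandmarkPruning {
    // Upper bound on dist(s, t): min over landmarks of d(l,s) + d(l,t).
    static long upperBound(long[][] distFromLandmark, int s, int t) {
        long ub = Long.MAX_VALUE;
        for (long[] d : distFromLandmark) ub = Math.min(ub, d[s] + d[t]);
        return ub;
    }

    // Lower bound on dist(v, t): max over landmarks of |d(l,t) - d(l,v)|.
    static long lowerBound(long[][] distFromLandmark, int v, int t) {
        long lb = 0;
        for (long[] d : distFromLandmark) lb = Math.max(lb, Math.abs(d[t] - d[v]));
        return lb;
    }

    // Inside a Bellman-Ford-style search from s to t, a vertex v reached at
    // distance dist can be skipped if dist + lb(v) exceeds the upper bound.
    static boolean canPrune(long[][] distFromLandmark, int v, int t, long dist, long ub) {
        return dist + lowerBound(distFromLandmark, v, t) > ub;
    }
}
```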
We used all of our datasets except LDBC, picked the 10 highest-degree nodes as the landmarks, and implemented an optimized version of Scratch in which, as updates arrive at the graph, we first maintain these 10 landmark indices using Diff-IFE. Then, we run each registered query using our landmark-enhanced Scratch, which we call Scratch-landmark, and compare it to our baseline Scratch. We registered 100 random SPSP queries in our system and measured the end-to-end time of 100 batches of single edge insertions. Our results are shown in Figure 9, which compares Scratch and Scratch-landmark on 100 queries and 100 batches of updates; the numbers on the orange bars are the runtime improvements of Scratch-landmark. The reported times for Scratch-landmark include both the time to maintain the index and the time to then (non-differentially) evaluate each query. As shown in the figure, by using and differentially maintaining landmark indices, we can reduce Scratch's time by between 43% and 83% (albeit now using additional memory to store both the index and the differences needed to differentially maintain it). A IMPACT OF BATCH SIZE. This set of experiments shows the impact of batch size on the performance of DC. We start by loading the initial graph and registering 10 K-hop queries, then add 1M edge updates while changing the batch size exponentially from 1 to 10, 100, and so on up to 1M. Figure 10 shows the ratio between the execution time of each batch under VDC and under Scratch; the x-axis is the batch size, and the y-axis is this ratio. When the ratio is more than 1, VDC is slower than Scratch, which happens when the batch size is very large (more than 100K in our experiments). Changing the batch size has no significant impact on Scratch's performance, because it re-executes the query from scratch anyway. In Figure 10, VDC gets slower as the batch size increases due to processing a larger number of differences in every batch. Increasing the batch size means that the effort required by VDC to maintain all differences increases, because the number of potential vertices that need to be fixed increases. A small batch size can make VDC several orders of magnitude faster than Scratch; as the batch size gets larger, VDC's cost increases until the ratio passes 1 and Scratch becomes faster than VDC. It is apparent that the smaller the batch size, the better VDC's performance, which shows that DC is more suitable for near-real-time dynamic graph updates than for infrequent bulk updates. B IMPACT OF DELETE BATCHES. All previous experiments assumed edge additions. In this section, we evaluate the impact of delete batches. Figure 11 shows the baseline experiments when 25% and 50% of the batches delete edges. These figures are very similar to the baseline figure (Figure 4), where all batches are edge additions. For further analysis, we examine our workloads with different probabilities of delete batches (0%, 25%, 50%, 75%, 100%). As in the previous experiments, we ran the different queries with 100 batches, each with 1 edge. In Figure 12, we did not include Scratch because it is several orders of magnitude slower than the other approaches. In general, we found that changing the ratio of delete batches does not change our conclusions regarding JOD, Det-Drop, and Prob-Drop. An important observation, however, is that for the SPSP query, VDC gets slower as the delete probability increases, while JOD, Det-Drop, and Prob-Drop get faster.
This is because SPSP is a weighted query and typically has a large number of iterations and a large number of differences. A vanilla DC implementation (VDC) does not use early dropping; therefore, deleting edges leads to adding more negative multiplicities, which in turn add more storage and maintenance overhead. With early dropping, on the other hand, deleting edges reduces the number of differences. C IMPLEMENTATION. Upon an update to the graph, our implementations of DC and DC-JOD keep track of a "frontier", which is the list of vertices and the iteration numbers at which the aggregation operator should be re-computed. These are stored as an array of hash sets, to remove duplicate additions of a vertex into this set, with one set for each IFE iteration up to the maximum iteration count. Recall that this count is the maximum number of iterations IFE has executed in any of the graph versions; since we do eager merging, in our case it is the maximum IFE iteration of the latest graph version. We also store the differences in vertex states (i.e., the output of the aggregation operator) in a hash table where the keys are vertex IDs and the value is a list of pairs ⟨i, s⟩ sorted by i, where s is the new state of the vertex in IFE iteration i. Recall again that, because of eager merging, our timestamps are one-dimensional. To check the state of a vertex v at iteration i, we find the latest available iteration i* ≤ i in v's sorted list using binary search. For the partial dropping approaches discussed in Section 5, we use a separate data structure (DroppedVT) to store the dropped vertex-timestamp pairs. For Det-Drop we use a hash table in which the key is a vertex ID and the value is a sorted list of dropped iterations. When the algorithm needs to check whether a pair ⟨v, i⟩ exists in DroppedVT, it finds the list of iterations using the vertex ID v and then searches for the latest dropped iteration i* ≤ i in that list. For Prob-Drop, the hash table is replaced by a Bloom filter. Each object in this Bloom filter is an 8-byte key constructed by concatenating the vertex ID and the iteration number using bitwise operations. Searching the Bloom filter requires first constructing the search key, again using bitwise operations, and then checking whether the Bloom filter contains it.
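The sketch below renders these structures concretely: a per-vertex sorted difference list probed by binary search, Det-Drop's DroppedVT map, and Prob-Drop's 8-byte packed key. Field names and the exact bit layout are our assumptions; a real Prob-Drop would insert the packed key into a Bloom filter's bit array rather than stop at packing.

```java
import java.util.*;

// Sketch of the Appendix C data structures; names are ours.
public class DiffStore {
    // Vertex ID -> list of (iteration, state) pairs sorted by iteration.
    final Map<Integer, List<long[]>> diffs = new HashMap<>();

    // State of vertex v at iteration i: latest pair with iteration <= i,
    // found by binary search over the sorted list; null if none exists.
    Long stateAt(int v, int i) {
        List<long[]> list = diffs.get(v);
        if (list == null) return null;
        int lo = 0, hi = list.size() - 1, best = -1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (list.get(mid)[0] <= i) { best = mid; lo = mid + 1; }
            else hi = mid - 1;
        }
        return best < 0 ? null : list.get(best)[1];
    }

    // Det-Drop's DroppedVT: vertex ID -> sorted set of dropped iterations.
    final Map<Integer, TreeSet<Integer>> droppedVT = new HashMap<>();

    boolean wasDropped(int v, int i) {
        TreeSet<Integer> its = droppedVT.get(v);
        return its != null && its.floor(i) != null; // latest dropped iter <= i
    }

    // Prob-Drop's 8-byte key: vertex ID in the high 32 bits, iteration in
    // the low 32 bits. This key would be hashed into a Bloom filter.
    static long packKey(int v, int iteration) {
        return ((long) v << 32) | (iteration & 0xFFFFFFFFL);
    }
}
```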
Retroperitoneal hematoma by different causes: Presentation of two emergency cases at computed tomography
Retroperitoneal hematoma is a rare clinical entity with variable aetiology, which is increasing in incidence mainly due to complications related to interventional procedures. The causes of retroperitoneal hematoma vary. We present two illustrative cases of retroperitoneal hematoma, one of renal cause and another of adrenal origin. Both came to our attention as emergencies. The hematomas, therefore, originated from different causes, with consequently different treatments. Both cases were diagnosed urgently thanks to contrast-enhanced computed tomography, which allowed rapid diagnosis, careful specialist evaluation, monitoring of the patients' clinical conditions, and a consequently adequate outcome for the patients. Case 1. Contrast-enhanced CT showed (Figs. 1A-E) the outcomes of the known surgery on the upper pole of the right kidney and, already in the baseline CT phase, the presence of a hyperdense hematoma with blood density (6.5 × 5.5 cm) that occupied the middle-upper third of the right kidney and extended towards the right renal pelvis. After intravenous administration of contrast medium, opacification of the right renal artery was documented; it forked early, extrarenally, into two branches. Endorenally, in continuity with the upper arterial branch, a hyperdense image was appreciated, compatible with spreading of the contrast medium within the hematoma. The adjacent fascial planes were obliterated, with signs of imbibition. In the late phase of the study, opacification of the lower calyceal group of the right kidney was found, while the remaining upper and middle calyceal structures were not recognizable. Inside the bladder, a voluminous (6 cm) blood clot was identified. The patient was immediately hospitalized and, shortly thereafter, owing to the persistence of his serious clinical condition since arrival at the hospital, underwent selective catheterization of the right renal artery, which highlighted a pseudoaneurysm of an upper polar branch, the likely cause of the bleeding; embolization of the feeding vessel was immediately performed by means of a 3D coil. This interventional procedure was successfully concluded (Figure 2). The subsequent control CT examination showed the absence of active spreading of contrast in the peritoneal cavity, as well as the complete exclusion of the pseudoaneurysm. The patient's clinical, laboratory, and blood parameters slowly normalized and, after about a month of hospitalization, he returned home. The patient will now undergo clinical, radiological, and laboratory follow-up. Case 2. An 82-year-old man arrived at the emergency room with severe abdominal pain on the left side, vomiting, agitation, a significant reduction in blood hemoglobin values (8 g/dl), syncope, and anemia. About ten years earlier he had undergone vascular surgery with endoprosthesis placement for the treatment of a voluminous aneurysm of the infrarenal abdominal aorta. Five years earlier he had undergone surgery for removal of a melanoma in the left inguinal region. A contrast-enhanced CT study was urgently performed, which showed (Figs. 3A-D) the presence in the left hypochondrium of a voluminous, bilobate, nodular expansile lesion with a heterogeneous structure (8 cm) that already in the basal phase contained several confluent hyperdense components, consistent with recent bleeding.
Within this lesion, in the portal phase of the study, some hyperdense images with a serpiginous course were identifiable, tending to increase in density in the late phase, consistent with active spreading of contrast medium within it. Adjacent to the lesion, an extensive fluid collection with blood components was also identified, which extended in the retroperitoneum to the left iliac region, with evident imbibition of the adjacent fascial planes and obliteration of the adipose tissue, resulting in compression of the left kidney and left renal vein. Once the patient's blood chemistry parameters had stabilized, he underwent adrenalectomy. The surgical finding was a bleeding adrenal metastasis from melanoma (Figs. 4A-D). One week after surgery, on an abdominal contrast-enhanced CT scan, the left retroperitoneal hematoma was in the reabsorption phase (Figs. 5A-B), so the patient, once back home, planned the next instrumental and therapeutic procedures with the oncologist. Discussion. Case 1 showed the presence of active right renal bleeding, due to a pseudoaneurysm, in the context of a post-surgical hematoma (previous removal of a right upper renal pole tumor). It was, therefore, an iatrogenic bleeding, resulting in a reduction in hemoglobin values. Renal hematoma is a frequent complication of kidney surgery. The hematoma is visible on CT, even in basal conditions, as an expansile, hyperdense formation (density > 50 HU) at the surgical site, which may extend into the surrounding spaces [1,2]. A renal pseudoaneurysm is the result of an injury to an intrarenal artery at the surgical site, or to the main renal artery or one of its branches, and is visible as a mass of round morphology, in continuity with the renal arterial vessels, with respect to which it presents the same density after administration of contrast [3,4]. Case 2 showed the presence of a left retroperitoneal hematoma of adrenal cause in an oncologic patient, who was promptly subjected to urgent surgical treatment. Unilateral adrenal hemorrhage is an uncommon surgical emergency that can present as massive retroperitoneal hemorrhage and is potentially fatal [5]. Its causes include severe physical stress, infection, bleeding disorders, use of anticoagulants, procedures, and tumor bleeding [6,7]. In this case the cause was a bleeding adrenal metastasis from melanoma. Although acute adrenal hemorrhage within an adrenal mass is most commonly observed in cases of pheochromocytoma, it has also been described in patients with myelolipoma, metastatic lesions, adrenocortical carcinoma, adenoma, or hemangioma [8]. The clinical features of adrenal hemorrhage are non-specific, including abdominal pain, nausea, vomiting, hypotension, hypertension, low-grade fever, agitation, and decreased hematocrit [9]. The adrenal gland is also commonly the site of metastases and hemorrhages, as well as, to a lesser degree, primary tumors [10,11]. Differentiating between potentially malignant and benign lesions is very important. Although adrenal hemorrhage is rare, its consequences are potentially fatal, especially if it is not diagnosed in a timely manner. Therefore, the radiologist must be familiar with the main imaging features of adrenal hemorrhage. Acute hemorrhage is characterized by the development of a mass, with hypoattenuation or heterogeneous attenuation, that fails to enhance after the infusion of contrast, in one or both of the adrenal glands.
Other features that may be observed in acute adrenal hemorrhage include periadrenal infiltration and active extravasation with retroperitoneal bleeding. In cases of adrenal hemorrhage, the bleeding is often continuous until the gland expands beyond its normal shape, and a rounded or oval hematoma forms around the gland. Such hematomas vary in size from a few centimeters to more than 10 cm. On CT, they are characterized as rounded masses with no contrast enhancement and attenuation greater than that of simple fluid [12,13]. When there is suspicion of adrenal disease in a patient with retroperitoneal hemorrhage, hemodynamic monitoring, preferably in an intensive care unit, is recommended. First, in hemodynamically stable patients with active bleeding, angiographic embolization is a valuable tool to achieve hemostasis. But, as in Case 2, if the patient's condition deteriorates, the surgical option becomes necessary [14]. Hemorrhagic adrenal metastasis is rare, although adrenal metastases are common [15,16]. Patients with hemorrhagic adrenal metastases are usually symptomatic and typically experience acute onset of pain. Massive adrenal hemorrhage can be the initial clinical manifestation of a metastatic tumor. Bronchogenic carcinoma, colon cancer, renal cancer, and melanoma are the most common causes of hemorrhagic adrenal metastases. Intratumoral hemorrhage (as in Case 2) may occur in a patient with metastatic melanoma [17]. Conclusions. Both of the cases described give a picture of retroperitoneal hemorrhage from different causes, representing a true emergency to be treated in the shortest possible time to save patients' lives. CT is very important for an early and correct diagnosis, for identifying the site of bleeding and its complications, and for the appropriate management and a better outcome for the patients. Patient consent. The patient confirmed consent for publication of our case report.
On chiral medium with zero permittivity and permeability values
The possibility of propagation of electromagnetic waves of circular polarization in a Tellegen isotropic chiral medium with zero values of dielectric permittivity and magnetic permeability (chiral nihility) is considered. The refractive indices of normal waves are analyzed using the forward and backward wave identifier. In the presence of dissipative losses, the existence of two different normal waves in such a medium is impossible. Introduction. The beginning of the third millennium coincided with the emergence of interest in metamaterials: artificial composite media whose properties are determined mainly not by the composition but by the structure of the constituent ingredients. As a rule, they have a pronounced periodic structure. If the period of the structure is small in comparison with the characteristic length of the electromagnetic or acoustic wave propagating in the material, such a metamaterial is considered as a macroscopically continuous medium with effective values of material parameters, such as the dielectric permittivity (ε) and the magnetic permeability (μ). In the most general form, metamaterials are bianisotropic media, which are described by means of tensor material parameters. In comparison with natural media, these substances can differ greatly from them in the effective values of material parameters, which is a prerequisite for the emergence of new, previously unseen wave manifestations. In the same connection, some restrictions on permissible values of material parameters or their combinations, which previously seemed impossible or of no practical interest, were revised [1]. For exotic media, special names were proposed, such as "antivacuum" (ε = −ε0, μ = −μ0) and "nihility" (ε = 0, μ = 0) [2]. The efforts of researchers were focused primarily on the study of the so-called double-negative media. Actually, these isotropic media were considered theoretically for the first time in the USSR and subsequently became known in the West mainly due to the work of V.G. Veselago [3,4], who used the term "left-handed media". The unusual phenomenon of negative refraction, which is observed at the interface with left-handed media, led to the discovery and discussion of many new and tempting phenomena (a flat perfect lens that is not sensitive to the diffraction limit, object masking, optical illusions, etc.). These effects are due to the fact that in transparent media with real but negative values of ε and μ, it is not forward but backward normal waves that are excited, as indicated by L.I. Mandel'shtam in the 1940s [5]. Veselago approached the description of negative refraction differently, suggesting that the refractive index n of a medium with ε < 0 and μ < 0 be considered negative. This view has led to another naming of such media as "media with negative refractive index" or "negative-index metamaterials" (NIM). Real metamaterials, in which the real parts of both permittivity and permeability are negative, have significant heat losses, which reduces their efficiency. J. Pendry [6] and S.A. Tretyakov with co-authors [7,8] proposed methods for obtaining the backward wave in an isotropic chiral medium with positive permittivity and permeability. Moreover, the possibility of an exotic medium called "chiral nihility" has been claimed [7].
In contrast to the non-chiral "nihility", in such a medium with a single non-zero material parameter, the chirality parameter κ, electromagnetic waves can still propagate, and such a medium retains the properties of a biisotropic chiral medium as a birefringent medium with two normal waves. The simplicity of the material relations combined with the variety of wave manifestations attracted the attention of theorists to this model medium (for example, [9][10][11]). However, there is a disturbing circumstance that calls into question the principal feasibility of such a medium. Among the three equivalent systems of constitutive relations that are used to describe electromagnetic waves in an isotropic chiral medium (Tellegen relations, Post relations, Drude-Born-Fedorov relations), the state of "chiral nihility" is possible only in one, the Tellegen system. The purpose of this report is to clarify the conditions for normal waves in the "chiral nihility" medium using an approach unrelated to the concept of "negative refractive index". A nonchiral isotropic medium. Quadratic equation (1) has two solutions, differing in sign, but by condition (2) one should choose the arithmetic square root n = √(εμ). Thus, it is always necessary to exclude from consideration negative values of the refractive index as not satisfying condition (2). In order to distinguish between two complementary media that have the same refractive index, as well as common magnitudes of ε and μ, it is necessary to use the wave type identifier. If the isotropic medium has losses, the refractive index becomes a complex value. The sign of the imaginary part of the index is chosen in accordance with the principle of ultimate absorption ("the principle of redemption" by G.D. Malyuzhinets), according to which the energy of the wave field decreases with increasing distance from the source. With the selected time factor, the imaginary additive is always assumed to be positive for forward waves, but for backward waves its sign changes. Generically, we can write the complex refractive index with the sign of its imaginary part determined by the wave type. The Tellegen chiral medium. Let us now turn to the Tellegen chiral medium. The reduced (dimensionless) wave numbers are calculated by the formula n± = n ± κ, where n = √(εμ). The chirality parameter in a transparent medium is a real value, positive or negative depending on the type of chirality of the medium. So long as n > κ, which is typical for natural media and conventional chiral mixtures, both numbers n± are positive. In metamaterials, strong chirality (κ > n) can be realized, and in this case one of the numbers in (7) turns out to be negative, and the corresponding normal wave becomes a backward wave. The strong chirality condition is also achieved by lowering the refractive index n of the equivalent nonchiral medium. In the extreme case, setting n equal to zero, we get "chiral nihility". This medium is thus characterized by two normal circularly polarized waves with a common refractive index magnitude |n±| = κ, and always one of them is a forward wave and the other is a backward wave. It should, however, be borne in mind that the wave impedance of such a medium is calculated by the formula Z = √(μ/ε); that is, in the general case it is an indeterminate quantity. In the presence of dissipative losses in the medium, the chirality parameter becomes a complex value, κ = κr + iκi. Since the wave with index n1 is a forward wave, the wave with index n2 must be the backward wave, which by definition has Im(n2) < 0. From formula (9), however, it follows that in fact Im(n2) > 0.
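For reference, the relations invoked above can be reconstructed in standard notation; the symbols n, κ, n±, and Z are our choice, and the reduced-parameter convention is assumed rather than copied from the paper's equations (1), (7), and (9).

```latex
% Reconstruction of the key relations for the Tellegen chiral medium; the
% notation is ours, not the paper's.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
In a Tellegen chiral medium the two circularly polarized normal waves have
\[
  n_{\pm} = \sqrt{\varepsilon\mu} \pm \kappa ,
\]
so in the chiral-nihility limit $\varepsilon, \mu \to 0$ this reduces to
$n_{\pm} = \pm\kappa$: one forward and one backward wave. The wave impedance
\[
  Z = \sqrt{\mu/\varepsilon}
\]
becomes an indeterminate $0/0$ form in the same limit.
\end{document}
```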
A contradiction is reached, which indicates the impossibility of the presence of two different normal waves in a dissipative medium called "chiral nihility". Since the loss parameter κi can be taken as small as necessary, making the limit transition κi → 0, we come to the conclusion that the "chiral nihility" model is completely physically incorrect for an isotropic chiral medium. Summary. The mathematical model of the isotropic chiral medium described by the Tellegen constitutive relations with degenerate (zero) values of the dielectric permittivity and magnetic permeability ("chiral nihility") is internally contradictory, since at vanishingly small heat losses it does not provide for the existence of two normal waves (a forward wave and a backward wave). Therefore, it is physically incorrect, and it should not be used in problems of the electrodynamics of complex media and metamaterials.
Postoperative Recovery Comparisons of Arthroscopic Bankart to Open Latarjet for the Treatment of Anterior Glenohumeral Instability
BACKGROUND Recurrent anterior glenohumeral instability is a disabling pathology that can be successfully treated by arthroscopic Bankart repair or open Latarjet. However, there is a paucity of studies comparing the postoperative recovery. The purpose of this study is to evaluate the postoperative pain and functional recovery following arthroscopic Bankart versus open Latarjet. METHODS This is a retrospective analysis of a multicenter prospective outcomes registry database. Postoperative recovery outcomes of primary or revision arthroscopic Bankart and open Latarjet procedures were compared. A minimum of 1-year follow-up was required. Outcome measures included the pain visual analog scale (VAS), American Shoulder and Elbow Surgeons (ASES) function score, ASES index score, and single assessment numeric evaluation (SANE) score. Overall, 787 patients underwent primary arthroscopic Bankart, 36 underwent revision arthroscopic Bankart, and 75 underwent an open Latarjet procedure. RESULTS When compared to primary arthroscopic Bankart, open Latarjet demonstrated significantly lower VAS scores at 6 weeks (p = 0.03), 3 months (p = 0.01), and 2 years (p < 0.05). Medium-term outcomes for ASES and SANE scores at 1 and 2 years showed no difference. Latarjet demonstrated significantly lower (p < 0.05) preoperative and early postoperative VAS pain scores, with no difference at 1 year or 2 years, when compared to primary Bankart. There was no difference in ASES function or index between Bankart and Latarjet. Revision Bankart provided inferior outcomes for VAS, ASES function, and ASES index when compared to primary Bankart and Latarjet at 1 year and 2 years. CONCLUSIONS Primary arthroscopic Bankart repair and open Latarjet provided nearly equivalent improvements in pain (VAS) and functional outcomes (ASES, SANE, VR-12) during the early recovery phase (2 years). This study supports the use of either procedure in the primary treatment of anterior glenohumeral instability. Revision arthroscopic Bankart repair demonstrated deteriorating outcomes at 1 and 2 years postoperatively. Introduction Although approximately 1.7% of the population experiences anterior glenohumeral instability (AGHI) [1], the optimal treatment remains controversial. There are multiple factors that impact treatment outcomes, including age, gender, sports participation, and glenoid and humeral bone loss [2][3][4]. In those patients who undergo surgical stabilization, there remains a relatively high recurrence rate [5,6]. Arthroscopic Bankart repair and open Latarjet are among the most common surgical options for addressing recurrent AGHI after failure of conservative management. Arthroscopic Bankart repair is commonly performed in the primary setting for patients with minimal bone loss [7]. Significant improvements in pain and function have been reported after arthroscopic Bankart repair [8,9]. Open Latarjet is often reserved for patients with measurable bone loss exceeding 13.5% [10] or as a salvage procedure in the revision setting [11,12]. However, recent evidence has demonstrated that open Latarjet provides significantly lower rates of recurrent instability, apprehension, and operative revision when compared to arthroscopic Bankart repair at long-term follow-up [6,[13][14][15]. This has led some surgeons to prefer the Latarjet for all primary cases of glenohumeral instability [6].
Despite differing opinions on the optimal surgical stabilization technique for recurrent AGHI in both the primary and revision settings, there remains a paucity of clinical series comparing the process of recovery and the outcomes of these two procedures. In particular, the early postoperative recovery outcomes of arthroscopic Bankart repair and open Latarjet remain relatively unknown. The purpose of this study is to compare the process of early postoperative recovery for pain and function after arthroscopic Bankart repair, revision Bankart repair, and Latarjet. We hypothesized that there is no difference in postoperative pain or functional recovery following arthroscopic Bankart and open Latarjet. Surgical outcomes system database. After approval from our institutional review board, we performed a retrospective analysis of a multicenter prospective outcomes registry database (Surgical Outcomes System (SOS) database; Arthrex Inc., Naples, FL). After consenting to participation, patients receive seven surveys via email over the course of 2 years at select time intervals assessing patient-reported outcome measures regarding pain, range of motion, and functional scores. Operative details from each surgery are entered into the patient's SOS record by the care team. Aside from a preoperative survey, patients received questionnaires at 2 weeks, 6 weeks, 3 months, 6 months, 1 year, and 2 years. Outcomes measured included the American Shoulder and Elbow Surgeons (ASES) score, VR-12 physical score, SANE score (also known as the subjective shoulder value), and visual analog scale (VAS) for pain, on a scale of 0 to 10. Patients were included if they underwent surgery for recurrent anterior glenohumeral instability. Patients were excluded if they did not complete a preoperative baseline questionnaire or did not have at least 6 months of follow-up in the database. Patient demographics. A total of 898 patients were included in the study. There were 75 patients who underwent an open Latarjet procedure (primary or revision), 787 patients who had a primary arthroscopic Bankart surgery, and 36 patients with a revision arthroscopic Bankart procedure. There were no significant differences between the demographics of the three cohorts for gender, ethnicity, race, smoking status, diabetes diagnosis, insurance coverage, age, or body mass index (BMI). The demographic information is summarized in Table 1. Statistical analysis. Descriptive statistics were utilized for overall outcomes and relevant comparisons between the open Latarjet, primary arthroscopic Bankart, and revision arthroscopic Bankart procedures. Dichotomous variables were compared using Fisher's exact test. A two-way mixed-model ANOVA was used to test for differences in PROMs, with surgical group as the between-subject factor and time as the within-subject factor. A p value < 0.05 was considered statistically significant. We performed a power analysis using a minimum clinically important difference (MCID) of 17 points in the ASES score [16] and based on a prior study with a mean of approximately 86 points (SD 20) [17]; to find a 17-point difference in ASES scores, using a power of 0.8, an alpha of 0.05, and an allocation ratio of 1, the number needed per group is 23. Results. Recovery curves for VAS are shown in Fig. 1. ASES, SANE, and VR-12 scores. Primary arthroscopic Bankart repair and open Latarjet had statistically significant (p < 0.01) preoperative-to-postoperative improvements in ASES function, ASES index, and SANE scores at 6 months, 1 year, and 2 years (Table 3).
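As an aside on the power analysis reported above, the sketch below evaluates the standard two-sample normal-approximation formula with the stated inputs (MCID 17, SD 20, power 0.8, alpha 0.05); the authors' actual software and rounding conventions are not reported, so this is purely illustrative.

```java
// Numeric check of the reported sample-size calculation using
//   n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2.
public class PowerCheck {
    public static void main(String[] args) {
        double delta = 17.0;   // MCID in ASES points
        double sigma = 20.0;   // SD from the prior study
        double zAlpha = 1.96;  // z for two-sided alpha = 0.05
        double zBeta = 0.8416; // z for power = 0.8
        double n = 2 * Math.pow(zAlpha + zBeta, 2) * sigma * sigma / (delta * delta);
        System.out.printf("n per group = %.1f -> round up to %d%n", n, (int) Math.ceil(n));
        // prints about 21.7 -> 22; with a small allowance this is consistent
        // with the 23 per group reported in the text
    }
}
```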
The Latarjet procedure, when compared to primary arthroscopic Bankart, showed no difference in measures of ASES function, ASES index, or SANE scores at 6 months, 1 year, and 2 years (Table 3). Compared to revision arthroscopic Bankart, Latarjet and primary Bankart outcomes were significantly higher (p < 0.05) at 2 years for ASES function, ASES index, and SANE scores. Discussion. The optimal management of patients with recurrent AGHI remains controversial. While non-operative management is successful in the majority of patients following an initial dislocation, patients with recurrent instability experience considerable disability and lost time from athletics or work. Arthroscopic Bankart repair has traditionally been favored over Latarjet for its minimally invasive nature and its ability to restore normal anatomy. Additionally, arthroscopic Bankart repair has been shown to provide predictable pain relief and improvements in functional outcomes [8,9,18]. However, there has been growing concern over the long-term durability of arthroscopic Bankart repair, with a high rate of late recurrence beyond 6 years postoperatively [6,19]. Conversely, the Latarjet procedure, often reserved for glenohumeral instability in the presence of bone loss, has demonstrated predictable improvements in function with low rates of late recurrence [2,20,21]. In the current study, we evaluated 898 patients, of whom 75 underwent Latarjet, 787 underwent primary arthroscopic Bankart repair, and 36 had revision arthroscopic Bankart repair, and compared the early recovery curves for pain and function. Primary arthroscopic Bankart repair and Latarjet demonstrated very similar recovery curves throughout the first 2 years, with both procedures resulting in improvements in pain and function when compared to preoperative measurements (Table 2; Figs. 1, 2, 3). When these procedures were directly compared, the VAS pain scores for Latarjet were significantly lower than those observed for arthroscopic Bankart repair both preoperatively and throughout the early recovery phase, with no difference at 1 year and 2 years (Table 2). The postoperative pain scores following Latarjet remained low throughout the study duration, peaking at 2.8 points at 2 weeks postoperatively and ultimately decreasing to a mean of 0.9 points at final follow-up [2]. Furthermore, patient-reported outcomes including ASES function, ASES index, and SANE showed no difference between the two procedures at any time point. These findings are supported by a recent systematic review and meta-analysis comparing Latarjet to Bankart repair, which concluded that the Latarjet procedure is a viable and possibly superior alternative to the Bankart repair, offering greater stability with no significant increase in complication rate [22]. The utilization of the Latarjet procedure is very common in Europe, while the United States has been slow to adopt it. Currently, arthroscopic Bankart repair accounts for 87% of surgically treated AGHI cases versus 3% for Latarjet procedures, according to a large United States national database [23]. However, this same study demonstrated that Latarjet utilization increased at a rate of 15% per year from 2007 to 2015 [23]. This relatively slow adoption may be related to concern for early postoperative complications, reported as high as 25%, including up to a 10% rate of neurologic injury [17].
Intraoperative neuromonitoring has since highlighted the at-risk portions of the Latarjet procedure [24], and a nerve stretch reduction protocol can reduce the rate of detectable nerve injuries by over 65% [25]. Furthermore, nearly all nerve injuries are neuropraxias with complete recovery [25]. More recently, Gartsman et al. [26] reported the rate of early complications following 416 Latarjet procedures to be only 5%, with a neurologic injury rate of 3.1%, the majority of which were transient. Thus, when recurrent instability is considered a complication, the rate of complications following Bankart repair vastly exceeds that of Latarjet at long-term follow-up. Although we were not able to evaluate and compare the rate of complications following Latarjet and arthroscopic Bankart repair in this study, it is known that patients who sustain a complication following Latarjet have significantly worse functional outcomes. Shah et al. [17] demonstrated that the ASES score after a Latarjet with a complication was 69.9 versus 91.8 without (p < 0.001). In the current study, the mean ASES score at 2 years following Latarjet was 89.2 points, similar to the 85.8 points measured for primary Bankart, reflecting that complications are relatively rare. Pain and functional outcome measures following revision arthroscopic Bankart repair were universally worse when compared to Latarjet and primary arthroscopic Bankart at nearly all postoperative time points (Tables 2, 3; Figs. 1, 2, 3). In the revision setting, a clear deterioration in both pain and function was identified in our study as early as 1 year postoperatively. While revision arthroscopic Bankart repair has shown the potential to provide satisfactory outcomes in carefully selected patients [27], the rate of recurrent instability exceeds 20% at a mean of 36 months [27]. The increased pain and decline in functional outcome observed in this study may reflect the onset of recurrent instability once patients are cleared to return to sporting activities. Although many patients return to sport after revision arthroscopic Bankart, 90% describe a limitation in their shoulder during participation [28]. Buckup et al. [28] reported a mean Subjective Patient Outcome for Return to Sports (SPORTS) score of 5.2 out of 10 and therefore recommended advising patients that, although they can return to activities, they must expect persistent deficits and limitations of the shoulder, with a low probability of returning to activities with greater demands on the shoulder [28]. In the current study, it was not possible to determine whether a Latarjet procedure was performed as a primary intervention or as a revision intervention following a failed Bankart procedure. Furthermore, patients who failed a Latarjet and required a revision stabilization procedure, such as an Eden-Hybinette or tibial bone block, were not captured. Thus, the direct comparison of the revision Bankart and Latarjet populations in this study has inherent bias and should be interpreted with caution. However, it is evident that revision Bankart patients show a clear deterioration in pain and function at 1 year and 2 years. The degree of glenoid bone loss, Hill-Sachs lesion size, age, gender, laxity, and other predictors of recurrent instability should be carefully assessed in the revision setting to determine whether a bone block procedure will more reliably restore stability and function to the shoulder. The following limitations should be considered when interpreting the results reported in this study.
First, although this is one of the largest individual series evaluating the outcomes of arthroscopic Bankart repair versus Latarjet, this study remains limited by its short-term follow-up. Second, there is a large discrepancy in the number of patients who underwent the Bankart versus the Latarjet procedure despite being treated for a similar pathology (i.e., recurrent glenohumeral instability). This suggests that a selection bias exists when surgeons determine which procedure to perform. Third, the SOS database enables construction of patient-reported outcome recovery curves; however, these PROMs are not linked to intra-operative or postoperative complications, recurrence of instability, reoperations, surgical technique, or patient range of motion. We were also unable to evaluate radiographic parameters, including osseous glenoid or humeral defects. Finally, as is the case for all database studies, the outcomes are dependent on the accuracy of the coding of each surgery performed. Despite these limitations, in this comparative study we are able to illustrate the postoperative pain and functional recovery curves for arthroscopic primary and revision Bankart repair and open Latarjet. This study demonstrates that early outcomes for primary arthroscopic Bankart and Latarjet are nearly equivalent. Furthermore, this is the first study to demonstrate the significantly worse outcomes that result following revision Bankart repair. Conclusions. Primary arthroscopic Bankart repair and open Latarjet provide nearly equivalent improvements in pain (VAS) and functional outcomes (ASES, SANE, VR-12) during the postoperative recovery phase (2 years). This study supports the use of either procedure in the primary treatment of anterior glenohumeral instability. In the revision setting, arthroscopic Bankart repair demonstrated deteriorating outcomes at 1 and 2 years postoperatively.
Autophagy mediates HIF2α degradation and suppresses renal tumorigenesis
Autophagy is a conserved process involved in lysosomal degradation of protein aggregates and damaged organelles. The role of autophagy in cancer is a topic of intense debate, and the underlying mechanism is still not clear. The hypoxia inducible factor 2α (HIF2α), an oncogenic transcription factor implicated in renal tumorigenesis, is known to be degraded by the ubiquitin-proteasome system (UPS). Here we report that HIF2α is in part constitutively degraded by autophagy. HIF2α interacts with autophagy-lysosome system components. Inhibition of autophagy increases HIF2α, while induction of autophagy decreases HIF2α. The E3 ligase von Hippel Lindau (VHL) and the autophagy receptor protein p62 are required for autophagic degradation of HIF2α. There is a compensatory interaction between the UPS and autophagy in HIF2α degradation. Autophagy inactivation redirects HIF2α to proteasomal degradation, while proteasome inhibition induces autophagy and increases the HIF2α-p62 interaction. Importantly, clear cell renal cell carcinoma (ccRCC) is frequently associated with mono-allelic loss and/or mutation of the autophagy related gene ATG7, and low expression levels of autophagy genes correlate with ccRCC progression. The protein levels of ATG7 and beclin 1 are also reduced in ccRCC tumors. This study indicates that autophagy plays an anticancer role in ccRCC tumorigenesis, and suggests that constitutive autophagic degradation of HIF2α is a novel tumor suppression mechanism. Introduction. Clear cell renal cell carcinoma (ccRCC) is the most common form of kidney cancer. The majority of cases exhibit von Hippel Lindau (VHL) gene deletions or mutations (1,2). VHL protein forms an E3 ubiquitin ligase complex with elongins B and C to mediate the ubiquitination of the hypoxia inducible factors HIF1α and HIF2α, and their degradation via the ubiquitin-proteasome system (UPS) (1,3). In ccRCC, HIF1α functions as a tumor suppressor and its expression is often silenced (1,(4)(5)(6), while HIF2α is an oncogenic transcription factor driving the expression of target genes involved in angiogenesis, glycolysis, and tumor growth (1,6,7). Microenvironmental hypoxia and the genetic inactivation of VHL and other proteasome degradation pathway components result in HIF2α accumulation, strongly contributing towards ccRCC development (8)(9)(10). Macroautophagy (referred to as autophagy hereafter) is a major intracellular degradation system responsible for the breakdown of long-lived proteins, protein aggregates, and damaged organelles (11,12). Autophagic degradation can also be a selective process, mediated by the receptor protein p62 and substrate protein ubiquitination (13,14). Proteasome inhibition induces autophagy and polyubiquitinated protein aggregation (15,16), indicating that autophagy may compensate for proteasome inactivation. However, the compensatory function of proteasome upregulation during autophagy inactivation is not well studied. The role of autophagy in cancer is context dependent, and it can be either tumor suppressive or tumor promoting (17). The autophagy gene BECN1 is mono-allelically deleted in human breast and ovarian tumors and functions as a haploinsufficient tumor suppressor in mice (18)(19)(20), supporting an anticancer role of autophagy. However, it is necessary to validate the genetic alterations of other autophagy genes in copy number, expression level, and mutation frequency in different cancers (17).
In ccRCC, multiple mutations were shown to exist in the phosphatidylinositol 3-kinase pathway, as well as in MTOR itself (21), which may inhibit autophagy. Additionally, autophagy inducers reduce the growth of ccRCC cells (22). Based on these findings, we reasoned that autophagy might serve as a tumor suppressive process in ccRCC. In this study, we show that autophagy collaborates with the UPS to degrade HIF2α. ccRCC is frequently associated with mono-allelic loss and/or mutation of the critical autophagy-related gene ATG7, and low expression of autophagy genes correlates with ccRCC progression. Inhibition of autophagy results in HIF2α accumulation HIF2α is degraded by the proteasome after oxygen-dependent prolyl hydroxylation and VHL-dependent ubiquitination (1,3). Under normoxia, HIF2α protein levels were very low in the VHL wild-type Caki-I RCC and retinal pigment epithelium (RPE) cell lines (Figure 1A, 1B), indicating that HIF2α is constitutively degraded in the presence of intact VHL. When these cell lines were treated with the proteasome inhibitor MG132, there was a gradual accumulation of HIF2α protein (Figure 1A, 1B). After 8 hours of treatment, HIF2α levels increased 6-7 fold (Figure 1A, 1B, 1D). These results confirmed that the proteasome was involved in constitutive degradation of HIF2α. Autophagy is another major intracellular degradation system, but its role in HIF2α degradation is unknown. To this end, we treated Caki-I cells and RPE cells with bafilomycin A1, which blocks autophagic degradation by inhibiting autophagosome-lysosome fusion and acidification (23). As a result of the blockage of autophagic flux, bafilomycin A1 treatment led to accumulation of the autophagy receptor protein p62 and of the autophagosome marker protein, the lipidated form of LC3 (LC3-II) (23) (Figure 1A, 1B). After 8-hour incubation with bafilomycin A1, HIF2α protein levels increased 3-fold (Figure 1A, 1B, 1D). p53 is another transcription factor that is subject to polyubiquitination-mediated proteasomal degradation (24). We observed that, in the Caki-I and RPE cell lines, p53 was only stabilized by MG132 treatment but not by bafilomycin A1 treatment (Figure 1A, 1B). These results indicated that bafilomycin A1 increased the HIF2α protein level specifically by inhibiting the autophagy-lysosome system and not by inhibiting or compromising UPS activity. To exclude a possible influence of mRNA transcription and vehicle treatment on HIF2α protein accumulation, we established a HEK293T stable cell line expressing exogenous HIF2α-GFP and treated cells with DMSO as control. Compared with DMSO, MG132 and bafilomycin A1 induced a gradual accumulation of HIF2α (Figure 1C, 1D). Treatment with chloroquine, another autophagy-lysosome inhibitor, also led to an accumulation of HIF2α (Figure 1E). These results collectively suggest that HIF2α is constitutively degraded not only by the proteasome but also by autophagy under normoxic conditions. Although HIF2α is known as a transcription factor, we observed that it was mainly localized in the cytoplasm in DMSO-treated HEK293T cells, and it was almost undetectable in nuclear extracts (Figure 1F). These results indicate that, in addition to its role as a transcription factor in the nucleus, HIF2α may have an unidentified function in the cytoplasm, or is sequestered in the cytoplasm as part of the cellular regulation of HIF-mediated transcription. 
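As an aside on the quantification above: the reported fold changes are ratios of loading-control-normalized band intensities relative to the untreated time point. A minimal sketch of that arithmetic follows; the intensity values are invented placeholders, not data from this study:

```python
# Hypothetical densitometry readings over a treatment time course (hours).
hif2a = {0: 120.0, 2: 260.0, 4: 450.0, 8: 780.0}    # HIF2α band intensity
actin = {0: 980.0, 2: 1010.0, 4: 955.0, 8: 990.0}   # β-actin loading control

normalized = {t: hif2a[t] / actin[t] for t in hif2a}
fold_change = {t: normalized[t] / normalized[0] for t in normalized}
for t, fc in sorted(fold_change.items()):
    print(f"{t} h: {fc:.1f}-fold")   # ~6-fold at 8 h with these toy numbers
```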
Interestingly, MG132 and bafilomycin A1 increased HIF2α levels in both the nuclear and the cytoplasmic extracts (Figure 1F). The nuclear membrane protein Lamin A was only found in the nuclear fraction, and the cytoplasmic marker protein lactate dehydrogenase (LDH) was only found in the cytoplasmic fraction, confirming that our fractions were pure. These results indicate that, although autophagy degrades proteins in the cytoplasm (11), HIF2α that accumulates in the cytoplasm during autophagy inhibition may also translocate to the nucleus. Consistent with previous reports that proteasome inhibition activates autophagy (15,16), MG132 treatment was also observed to induce autophagy in Caki-I, RPE and HEK293T cells, as indicated by the increase in LC3B-II and decrease in p62 (Figure 1A, 1B, 1C). These data suggest that autophagy plays a compensatory role during proteasome inhibition. VHL is required for HIF2α accumulation during autophagy inhibition Proteasome-mediated protein degradation is a selective process, regulated through the high specificity of E3 ligases for their substrate proteins. Although autophagic degradation was originally described as a non-selective process, it was recently found that protein ubiquitination is also required for degradation of some autophagy substrate proteins (13,14). The autophagy receptor protein p62 interacts with ubiquitin via its C-terminal UBA domain, and interacts with LC3 via its LIR motif. In this way, p62 recruits and delivers ubiquitinated proteins to the nucleating autophagosome for degradation (13,14). VHL is the known E3 ligase for HIF2α ubiquitination. Since we observed that HIF2α was subject to both proteasomal and autophagic degradation, we asked whether VHL is required for the autophagic route as well, comparing VHL-deficient 786-O cells with 786-O cells stably expressing VHL (Figure 2). These results indicate that VHL is required not only for proteasome-mediated HIF2α degradation, but also for autophagy-mediated degradation. In contrast to HIF2α, the VHL protein level was only enhanced by MG132 (Figure 2A) but not by bafilomycin A1 (Figure 2B), suggesting that VHL is exclusively degraded by the proteasome, and that bafilomycin A1-induced HIF2α accumulation was not due to off-target proteasomal inhibition. Consistent with the changes in HIF2α protein level, inhibition of autophagy by bafilomycin A1 also increased the expression of HIF2α target genes, including vascular endothelial growth factor A (VEGFA), transforming growth factor A (TGFA) and cyclin D1 (CCND1), but it did not affect the expression of HIF2α itself (Figure 2D). It should be noted that although HIF2α shares some target genes with HIF1α (3), the 786-O cell line is deficient in HIF1α (5), so the increased target gene expression exclusively represents HIF2α transcriptional activity. The data above show that VHL was required for autophagic degradation of HIF2α. One possibility is that VHL is required for autophagic activity. However, comparable accumulation of LC3B-II was induced by MG132 and bafilomycin A1 in 786-O cells with or without stably expressed VHL (Figure 1D), which indicated that VHL deficiency did not affect autophagosome formation and subsequent fusion with the lysosome. Another possibility is that HIF2α ubiquitination is required for autophagic degradation, and is dependent on the E3 ligase activity of VHL (1,3), since p62 is known to interact with and deliver ubiquitinated proteins to the autophagosome (13,14). To test this possibility, we first prevented VHL-mediated HIF2α ubiquitination using the hypoxia mimetic CoCl2 (25). 
As we expected, CoCl2 treatment stabilized HIF2α but prevented the additional accumulation induced by bafilomycin A1 (Figure 2E). Second, we assessed HIF2α levels in 786-O cells expressing the HIF-binding-incompetent VHL mutant W117R (26). The 786-O cell line stably expressing VHL W117R showed much higher HIF2α protein levels than the line expressing wild-type VHL (Figure 2F), indicating its inability to degrade HIF2α. No further increase in HIF2α was observed in cells expressing the VHL W117R mutant after bafilomycin A1 treatment (Figure 2F). On the other hand, the expression of the partially HIF-ubiquitination-competent VHL R167Q or VHL F148A isoforms (27) in 786-O cells reduced the basal level of HIF2α, and enabled 786-O cells to accumulate HIF2α in response to bafilomycin A1 treatment (Figure 2G). These results showed that pharmacologic or genetic inhibition of the E3 ligase activity of VHL blocked bafilomycin A1-induced HIF2α accumulation, and suggested that VHL E3 ligase activity is required for autophagy-mediated HIF2α degradation. Furthermore, endogenous HIF2α was mainly found in the cytoplasmic fraction of the 786-O stable cell line expressing VHL. Bafilomycin A1 treatment increased HIF2α more obviously in the nucleus than in the cytoplasm, while the E3 ligase VHL was only found in the cytoplasm, indicating that HIF2α is regulated in the cytoplasm and that HIF2α accumulated during autophagy inactivation translocates to the nucleus. Comparing with the results we obtained with HIF2α-GFP (Figure 1F), we suspect that the GFP tag might delay cytoplasmic-nuclear translocation. Induction of autophagy promotes VHL-dependent HIF2α degradation The results above show that inhibition of the autophagy system led to the accumulation of HIF2α. We further studied the changes in HIF2α during autophagy induction. mTOR is a negative regulator of autophagy: mTOR phosphorylates Ulk1 at Ser757, and prevents Ulk1 activation and autophagy induction (28). mTOR inhibition by starvation or rapamycin has been reported to induce autophagy (11). When both 786-O parental cells and 786-O cells stably expressing VHL were exposed to starvation or treated with rapamycin, mTOR activity was inhibited, as indicated by the decrease in ribosomal protein S6 phosphorylation and Ulk1 phosphorylation at Ser757, and autophagy was induced, as indicated by the decrease in p62 protein level and the transient increase of LC3B-II (Figure 3A, 3B). However, an obvious decrease in HIF2α protein level was only observed in 786-O cells expressing VHL (Figure 3B, 3C) and not in 786-O parental cells (Figure 3A, 3C), indicating that VHL was required for autophagic degradation of HIF2α during autophagy induction. Consistent with the changes in HIF2α protein level, rapamycin treatment also decreased the expression of HIF2α target genes, including VEGFA, TGFA and CCND1 (Figure 3D). Furthermore, the starvation- or rapamycin-induced HIF2α decrease was blocked by the presence of bafilomycin A1 (Figure 3E), confirming that HIF2α degradation was mediated by the autophagy-lysosome system. Unlike that of HIF2α, the degradation of p62 by autophagy was not significantly influenced by the expression of VHL (Figure 3A, 3B). As an autophagy receptor, p62 interacts with more than one autophagy substrate, and VHL inactivation is not expected to attenuate p62 degradation significantly. 
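The "normalized to 1" relative expression values behind such target-gene measurements are typically computed with the 2^(−ΔΔCt) method. The sketch below illustrates the calculation; the Ct values and the use of GAPDH as reference gene are assumptions for illustration, not reported details of this study:

```python
# Invented Ct values for one target gene in control and rapamycin-treated cells.
ct = {
    "control":   {"VEGFA": 22.1, "GAPDH": 16.0},
    "rapamycin": {"VEGFA": 23.4, "GAPDH": 16.1},
}

def relative_expression(sample, gene, ref="GAPDH", calibrator="control"):
    # 2^-ddCt: normalize to the reference gene, then to the calibrator sample.
    d_ct = ct[sample][gene] - ct[sample][ref]
    d_ct_cal = ct[calibrator][gene] - ct[calibrator][ref]
    return 2.0 ** -(d_ct - d_ct_cal)

print(relative_expression("control", "VEGFA"))    # 1.0 by construction
print(relative_expression("rapamycin", "VEGFA"))  # < 1 if rapamycin lowers VEGFA
```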
Interestingly, replacement of old DMEM with pre-warmed fresh DMEM decreased HIF2α protein (Starvation 0, Figure 3A, 3B), indicating that HIF2α is unstable, especially in response to environmental stimuli. Furthermore, we studied the changes in HIF2α in VHL-positive Caki-I cells. The basal level of HIF2α in Caki-I cells was too low to detect a further decrease under starvation conditions, but HIF2α accumulated by CoCl2 pretreatment was also depleted quickly upon continuous culture in the starvation medium EBSS (Figure 3F). These results further confirm that HIF2α is in part subject to autophagic degradation, and that the E3 ligase VHL is required for this process. HIF2α interacts with autophagy-lysosome system components So far, we have shown that HIF2α is in part constitutively degraded by autophagy under normoxic conditions. Inhibition of autophagy increased HIF2α levels while induction of autophagy decreased HIF2α levels. To further confirm these findings, we examined the interaction between HIF2α and autophagy-lysosome system components. In HEK293T cells, stably expressed HIF2α-GFP was mainly localized to the cytoplasm (Figure 4A), which is consistent with the previous fractionation results (Figure 1D). Importantly, some cells contained HIF2α-GFP aggregates (Figure 4A), which are presumably degraded by autophagy. We then transiently transfected this cell line with LC3A-RFP, and found that HIF2α-GFP aggregates co-localized with autophagosomes, as indicated by the LC3A-RFP punctate cytoplasmic structures (Figure 4A). Next, we transiently expressed GFP or HIF2α-GFP in HEK293T cells, and immunoprecipitated GFP proteins using a GFP-binding antibody derived from Vicugna pacos (alpaca). Although the expression and immunoprecipitation of GFP were much higher than those of HIF2α-GFP, the endogenous autophagy-lysosome components p62, LC3 and lysosome-associated membrane protein 1 (LAMP1) were co-immunoprecipitated with HIF2α-GFP but not with GFP (Figure 4B). Autophagy inhibition by bafilomycin A1 increased the HIF2α protein level in both the Triton X-100 soluble and insoluble fractions, indicating that part of HIF2α formed aggregates (Figure 4C). Proteasome inhibition by MG132 also induced a dramatic accumulation of HIF2α, with most of the protein found in the Triton X-100 insoluble fraction (Figure 4C). Although there was no obvious change in p62 in the Triton X-100 soluble fraction, more p62 was co-immunoprecipitated with HIF2α-GFP in the presence of bafilomycin A1 or MG132, indicating that proteasome inhibition enhanced the interaction between HIF2α and p62 (Figure 4C). Next, we transiently expressed GFP or LC3A-GFP with or without HIF2α-HA in HEK293T cells, and observed that HIF2α-HA and endogenous p62 were co-immunoprecipitated with LC3A-GFP but not with GFP (Figure 4D). These results collectively confirm that HIF2α interacts with autophagy-lysosome system components, and that the interaction is enhanced during proteasome inhibition. Autophagy collaborates with the proteasome to degrade HIF2α To further investigate the role of autophagy in HIF2α degradation, we studied the turnover of HIF2α in Atg5 knockout mouse embryonic fibroblasts (MEFs). The endogenous murine HIF2α level was either too low to detect even in the presence of MG132 or CoCl2, or it was not recognized by the antibody (NB100-122, Novus Biologicals) (data not shown). We then established MEF stable cell lines expressing human HIF2α-GFP. 
In Atg5+/+ MEFs, the basal level of stably expressed exogenous HIF2α was still almost undetectable, but treatment with bafilomycin A1 or MG132 induced an accumulation of HIF2α, confirming that both autophagy and the proteasome were involved in HIF2α degradation (Figure 5A, 5B). In Atg5−/− MEFs, LC3B-II was undetectable even in the presence of bafilomycin A1, and the baseline p62 level was much higher than that in Atg5+/+ MEFs (Figure 5A, 5B) (29), indicating inactivation of autophagy by Atg5 knockout. Surprisingly, no obvious increase in stably expressed HIF2α was observed in Atg5−/− MEFs (Figure 5A, 5B). These observations suggest that autophagy inactivation might redirect HIF2α to the proteasome for efficient degradation. As we expected, MG132 treatment increased HIF2α to a much greater extent in Atg5−/− MEFs than in Atg5+/+ MEFs (Figure 5A, 5B), indicating that the proteasome was responsible for more HIF2α degradation in Atg5−/− MEFs to compensate for autophagic inactivation. When both proteasomal and autophagic degradation were blocked by CoCl2, the accumulation of HIF2α was comparable in Atg5+/+ and Atg5−/− MEFs (Figure 5C), confirming that the expression of HIF2α was not affected by Atg5 knockout. Interestingly, bafilomycin A1 treatment still slightly increased the HIF2α protein level in Atg5−/− MEFs, possibly through inhibition of Atg5-independent alternative autophagy (30). Current data indicate that autophagy degrades protein aggregates while the proteasome is responsible for degrading soluble proteins (15). Since autophagy inhibition redirected HIF2α to proteasomal degradation (Figure 5A, 5B), we hypothesized that shuttling by heat shock proteins (HSPs) was probably required for such a redirection process. Treatment with the HSP90 inhibitor 17-AAG induced an increase in HIF2α protein level in Atg5−/− MEFs but not in Atg5+/+ MEFs (Figure 5C), implying that HSPs were probably involved in the disaggregation of HIF2α and its redirection to proteasomal degradation. The data described above show that Atg5 deficiency did not increase stably expressed HIF2α, probably owing to the compensatory function of the proteasome. Since transient protein overexpression has been shown to overload the proteasome and thus block compensatory regulation (31), we then studied the turnover of transiently overexpressed HIF2α in BECN1 or p62 knockdown HEK293T cells. Compared with control shRNA, the expression of BECN1 shRNA dramatically reduced beclin 1 protein levels and bafilomycin A1-induced LC3B-II accumulation, indicating an efficient knockdown and impairment of autophagy activity (Figure 5D). Importantly, BECN1 knockdown led to an increase in the basal level of transiently overexpressed HIF2α (Figure 5D). The inhibition of autophagic flux by bafilomycin A1 significantly increased the HIF2α protein level in control knockdown cells but not in BECN1 knockdown cells (Figure 5D). These data demonstrate that HIF2α turnover is truly autophagy dependent. Similarly, p62 knockdown also increased the basal level of overexpressed HIF2α (Figure 5E), which confirmed the importance of p62 as an autophagy receptor in HIF2α degradation. The autophagy pathway is altered in human ccRCC samples It has been reported that the level of the oncogenic protein HIF2α is higher in more advanced ccRCC (9,32). Our data show that HIF2α was in part degraded by autophagy, and we therefore reasoned that the autophagy pathway was probably genetically altered in ccRCC. 
To explore this possibility, we systematically examined gene expression levels, gene copy number alterations, and mutation rates of autophagy-related genes in ccRCCs from the Cancer Genome Atlas (TCGA) database. First, we found that the expression of autophagy genes functioning in the nucleation step was coordinated and varied across ccRCC samples. The class III phosphoinositide 3-kinase (PI3K) complex, which consists of VPS34, VPS15 and beclin 1, is important for autophagosome nucleation. Beclin 1 binding proteins, such as ATG14L, UVRAG, AMBRA1 and BIF1, also positively regulate autophagosome nucleation and maturation (33-35). Using K-means grouping of a composite expression score for the seven genes encoding these proteins, we divided the ccRCC samples into three groups: the autophagy low expressing group (ATG-low, 176 tumors), the autophagy high expressing subgroup 1 (ATG-high-1, 119 samples) and the autophagy high expressing subgroup 2 (ATG-high-2, 148 samples) (Figure 6A). The expression level of these genes in normal kidney tissues was higher than that in the ATG-low ccRCC group, and relatively close to that in the ATG-high groups (Figure 6A). These results imply that ccRCCs with low autophagy gene expression might represent more aggressive tumors. Kaplan-Meier analysis showed that ccRCCs with low autophagy gene expression had poorer survival, and ccRCCs with high autophagy gene expression had longer survival; there was no obvious difference between the two high-expression subgroups (Figure 6B). Second, we checked the variation in gene copy number of autophagy genes. ATG7 encodes one of the key proteins of the ubiquitin-like conjugation system that drives autophagosome membrane expansion. Notably, ATG7 maps to the short arm of chromosome 3 (3p), where VHL is also located. Since most ccRCC tissues are known to harbor loss of 3p, we reasoned that ccRCC would be associated with concomitant allelic loss of ATG7. As we expected, more than 80% of ccRCC tissues had lost one copy of VHL and ATG7 (Figure 6A). Third, we examined the mutation frequency of autophagy-related genes. We found that 56% of ccRCC tissues harbored a VHL mutation (Figure 6A, 6C). Importantly, 4 out of 219 tumor samples were found to have an ATG7 mutation, and 2 of these were nonsense mutations (Figure 6C). Such nonsense mutations in ATG7 were also found in lung cancer samples from the TCGA database (data not shown). Furthermore, a number of other autophagy genes involved in different steps of the pathway also harbored non-silent mutations (Figure 6C), while such mutations were not found in normal kidney tissues. These data collectively indicate that ccRCC is a heterogeneous tumor, and that autophagy-related genes are selectively inactivated in ccRCC at different levels, including copy number loss, decreased mRNA expression and direct mutations. Importantly, VHL mono-allelic loss and mutations are evenly distributed across the three subgroups (Figure 6A). These findings indicate that VHL copy number loss or mutation is a truncal event in the genomic evolution of RCC, and is more important in tumor initiation than in progression. In contrast, it is the variation in autophagy gene expression level that determines patient survival (Figure 6A). Low expression of autophagy genes can be an alternative explanation for HIF2α deregulation in tumors where VHL is not lost or where VHL mutants are competent in HIF2α ubiquitination. 
However, due to the unavailability of HIF2α protein levels in the TCGA database and the complexity of the regulatory mechanisms of HIF2α and its target genes, it is challenging to generate straightforward evidence to further confirm this correlation. Autophagy-related proteins are reduced in ccRCC samples The results obtained with the TCGA dataset reveal the mutation and copy number loss of ATG7, and the reduced expression of autophagy genes involved in autophagosome nucleation. To study the changes at the protein level, tissue microarrays containing triplicated normal kidney samples and ccRCC tumor samples were stained with hematoxylin and eosin (H&E) or immunohistochemically stained with an ATG7 antibody or a beclin 1 antibody. Normal kidney tissue showed structures containing proximal tubules and distal tubules, while ccRCC tumors did not contain these structures (Figure 7A). ccRCC samples also showed clear cytoplasm, because the intracytoplasmic glycogen and lipids are dissolved during histologic processing (36). Compared with normal kidney tissues, the overall percentages of ATG7- or beclin 1-positive cells were significantly reduced in ccRCC (Figure 7A, 7B). These results indicate that the expression of some key autophagy proteins, such as ATG7 and beclin 1, is also reduced at the protein level in ccRCC tissues. Discussion Although it has been widely accepted that there is a relationship between autophagy and cancer, the role of autophagy in tumorigenesis is still a topic of intense debate (17). It is possible that the function of autophagy is context-specific, varying with tumor type, grade, stage and depth (17). The mono-allelic loss and low expression of BECN1 in breast and ovarian cancers support the notion that autophagy functions as a tumor suppressor in some cases (18). Further comprehensive analyses of other autophagy genes and proteins in different cancers are required (17). In our current study, we examined the genetic alteration of the autophagy pathway in ccRCC, and revealed striking changes in autophagy genes, including ATG7 mono-allelic loss, reduced gene expression and somatic mutations. Importantly, the reduction of autophagy gene expression is associated with shorter patient survival. Taken together with the mono-allelic loss of BECN1, such a broad down-regulation of autophagy-related genes provides further support for the tumor suppressive role of autophagy. Several anti-tumor mechanisms of autophagy have been reported, such as damaged organelle elimination, p62 degradation, genome stabilization, NRF2 and NF-κB inactivation, and T lymphocyte attraction (17,37-39). As a cellular degradation process, it is also likely that autophagy directly degrades oncogenic proteins to suppress tumorigenesis. Here we found that HIF2α, an oncogenic transcription factor that drives RCC tumor initiation and metastasis, was constitutively subject to autophagic degradation. In support of this conclusion, we found that HIF2α colocalized and interacted with autophagy-lysosome system components. Autophagy inhibition by bafilomycin A1 increased HIF2α, while autophagy induction by starvation or rapamycin decreased HIF2α. These data support the presence of an autophagy-dependent mechanism for suppressing proteins implicated in oncogenesis. It has been widely accepted that HIF2α plays an oncogenic role as a transcription factor. 
Consistent with this concept, here we also observed that the modulation of autophagy activity by bafilomycin A1 or rapamycin not only affected the HIF2α protein level, but also influenced HIF2α target gene expression. Moreover, HIF2α is mainly localized in the cytoplasm under steady state conditions, indicating that HIF2α might also play an oncogenic role in the cytoplasm; the possible underlying mechanism is currently under investigation. Furthermore, it should be noted that HIF2α is only one of a number of potentially oncogenic proteins that are subject to autophagic degradation, and large-scale screening is necessary to identify additional autophagy substrates that are implicated in renal tumorigenesis. Additionally, it has recently been reported that HIF1α is degraded by chaperone-mediated autophagy (CMA) but not by macroautophagy (40). Further investigation will be required to explore the function of CMA in HIF2α degradation and renal cell carcinoma development. The proteasome and the autophagy system were initially thought to work in parallel, and recent investigations suggested that both degradation systems are functionally linked (15,16,41,42), but their collaboration in degrading an individual substrate that is subject to both proteasomal and autophagic degradation is not well studied. HIF2α is known to be degraded by the proteasome in a VHL-dependent fashion. Here we demonstrate that autophagy also mediates HIF2α degradation in a VHL-dependent fashion. Since p62 is a self-oligomerizing stress response protein which binds ubiquitinated proteins via its UBA domain (13,14,43), we assumed that once HIF2α is ubiquitinated by VHL, the soluble VHL-elongin C-elongin B-HIF2α fraction would be degraded by the proteasome, while other complexes would be recognized by p62 to form aggregates and be degraded by autophagy. It is possible that HIF2α overexpression, accumulation or aggregation under stress conditions might redirect cells to initiate or favor p62-mediated autophagic degradation, which is supported by the enhanced interaction between HIF2α and p62 during proteasome inhibition. Importantly, autophagic inactivation by knockout of Atg5 did not lead to the accumulation of stably expressed HIF2α at the basal level, owing to the compensatory function of the proteasome. This study revealed a new collaborative interaction between the two degradation systems in handling a co-substrate. In the presence of such a compensatory interaction, chronic autophagy suppression may not lead to accumulation of some autophagy substrates, and acute genetic inhibition is probably required to transiently increase the basal level of these substrates before cells initiate proteasome-mediated compensation. In ccRCC, frequent mutations of genes encoding VHL, TCEB1 and proteasome pathway components have been reported (8-10), and here we report genetic alterations in the autophagy pathway, including allelic loss, somatic mutations and reduced gene expression. Since our results show that HIF2α can be degraded cooperatively by the proteasome and autophagy in a VHL-dependent fashion, it appears that ccRCC cells have evolved to keep HIF2α from being degraded by genetically disrupting the ubiquitination effector proteins and both degradation pathways at the same time. In summary, our data reveal that the oncogenic transcription factor HIF2α is a novel target of autophagy, and that there is a compensatory relationship between the proteasome and autophagy in HIF2α degradation. 
Autophagy plays a tumor suppressive role in ccRCC tumorigenesis, probably via constitutive degradation of HIF2α. This study might open a new therapeutic window for ccRCC management by down-regulating HIF2α levels through the simultaneous modulation of autophagy and the proteasome. Tissue Microarray, Image Acquisition and Image Analysis The human subject protocol (2007-0511) was approved by the Institutional Review Board at M.D. Anderson Cancer Center. Tissue microarrays (TMA) were generated and immunohistochemically stained as previously described (45). The slides were scanned with the Vectra image scanning system (Caliper Life Sciences). The percentage of ATG7 or beclin 1 cytoplasmic positive cells in whole tissue sections was analyzed using the Vectra Inform software (Caliper Life Sciences). The Mann-Whitney test was used for statistical analysis. Genetic alteration analysis using the TCGA dataset Level 3 RNA-Seq data, level 3 SNP array data, level 2 somatic mutation data, and clinical data for renal clear cell carcinomas were downloaded from the Cancer Genome Atlas (TCGA) data portal (https://tcga-data.nci.nih.gov/tcga/dataAccessMatrix.htm). Clustering analyses were done using the Cluster and TreeView software (available at http://rana.lbl.gov/EisenSoftware.htm). K-means grouping, Cox proportional hazard regression, and the Kaplan-Meier log rank test were done using the R software (http://www.r-project.org). Figure 3 legend (fragment): (D) Cells were treated with 100 nM rapamycin for 8 hrs. Total RNA was analyzed by real-time PCR using primers specific for VEGFA, TGFA and CCND1; mRNA levels in control cells are normalized to 1. Data represent mean±S.D., n=3. **, p<0.001, compared with control cells. Con, control. Rap, rapamycin. (E) 786-O stable cell lines expressing VHL-wt-Venus were treated with 100 nM rapamycin or cultured in EBSS (starvation) in the presence or absence of 200 nM bafilomycin A1. Con, control. St, starvation. Rap, rapamycin. BM, bafilomycin A1. (F) Caki-I cells with or without 6-hr pretreatment with 100 µM CoCl2 were washed with PBS 3 times, followed by continuous culture in DMEM or EBSS (starvation) for 1 or 2 hrs. Cell lysates were analyzed by immunoblot using antibodies against HIF2α, p62, S6, phospho-S6, LC3B or β-actin.
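The Methods above describe K-means grouping and Kaplan-Meier log-rank testing done in R; purely as an illustration, a roughly equivalent workflow could look as follows in Python. The file names and clinical column names are placeholders, and the HGNC symbols chosen for the seven nucleation-complex genes (PIK3C3 for VPS34, PIK3R4 for VPS15, BECN1, ATG14 for ATG14L, UVRAG, AMBRA1, SH3GLB1 for BIF1) are my mapping, not taken from the paper:

```python
import pandas as pd
from sklearn.cluster import KMeans
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Placeholder inputs: a samples-x-genes expression table and clinical data
# with survival time and event columns.
expr = pd.read_csv("ccrcc_rnaseq.csv", index_col=0)
clin = pd.read_csv("ccrcc_clinical.csv", index_col=0).loc[expr.index]

genes = ["PIK3C3", "PIK3R4", "BECN1", "ATG14", "UVRAG", "AMBRA1", "SH3GLB1"]
# Composite score: mean of per-gene z-scores across the nucleation genes.
z = expr[genes].apply(lambda col: (col - col.mean()) / col.std())
score = z.mean(axis=1)

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    score.to_frame())

# Log-rank test across the three expression groups, then per-group KM fits.
print(multivariate_logrank_test(clin["time"], groups, clin["event"]).p_value)
kmf = KaplanMeierFitter()
for g in sorted(set(groups)):
    mask = groups == g
    kmf.fit(clin["time"][mask], clin["event"][mask], label=f"cluster {g}")
    print(g, kmf.median_survival_time_)
```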
Post hoc analysis of a randomised controlled trial: effect of vitamin D supplementation on circulating levels of desmosine in COPD
Post hoc analysis of a randomised controlled trial: effect of vitamin D supplementation on circulating levels of desmosine in COPD Background Vitamin D supplementation lowers exacerbation frequency in severely vitamin D-deficient patients with COPD. Data regarding the effect of vitamin D on elastin degradation are lacking. Based on the vitamin's anti-inflammatory properties, we hypothesised that vitamin D supplementation reduces elastin degradation, particularly in vitamin D-deficient COPD patients. We assessed the effect of vitamin D status and supplementation on elastin degradation by measuring plasma desmosine, a biomarker of elastin degradation. Methods Desmosine was measured every 4 months in plasma of 142 vitamin D-naïve COPD patients from the Leuven vitamin D intervention trial (100 000 IU vitamin D3 supplementation every 4 weeks for 1 year). Results No significant association was found between baseline 25-hydroxyvitamin D (25(OH)D) and desmosine levels. No significant difference in desmosine change over time was found between the placebo and intervention groups during the course of the trial. In the intervention arm, an unexpected inverse association was found between desmosine change and baseline 25(OH)D levels (p=0.005). Conclusions Vitamin D supplementation did not have a significant overall effect on elastin degradation compared to placebo. Contrary to our hypothesis, the intervention decelerated elastin degradation in vitamin D-sufficient COPD patients and not in vitamin D-deficient subjects. Introduction The pathogenesis of COPD is characterised by chronic inflammation and an imbalance in elastase/anti-elastase activity leading to accelerated elastin degradation and emphysema. Although tobacco smoke exposure has been clearly linked to the risk of COPD, not all smokers will develop irreversible airway obstruction. Factors other than smoking, such as genetic and environmental factors, must therefore be implicated. Different studies have suggested a role of vitamin D in the pathogenesis of COPD [1-3]. Vitamin D is either exogenously obtained from food or endogenously produced in the skin through sun (UV-B) exposure [4]. In the liver, vitamin D is hydroxylated into 25-hydroxyvitamin D (25(OH)D), which is used for serum measurements because of its long half-life of 2-3 weeks [4]. To become biologically active, 25(OH)D requires an additional hydroxylation step in the kidneys by 1-α-hydroxylase, an enzyme that is also present in different inflammatory and epithelial cells [4]. The latter autocrine and paracrine activation has been linked to a variety of non-calcemic effects of vitamin D, which include anti-inflammatory and antiproliferative properties. The vast pool of epidemiological data linking vitamin D deficiency, defined as a serum 25(OH)D level below 20 ng·mL−1 (=50 nmol·L−1), to many infectious and chronic inflammatory diseases including COPD is in line with these mechanistic functions [4]. Vitamin D deficiency is a proven risk factor for COPD and is associated with increasing disease severity [5]. A recent meta-analysis also demonstrated that vitamin D supplementation substantially reduces exacerbation frequency in severely vitamin D-deficient (i.e. <10 ng·mL−1) COPD patients [6]. Furthermore, murine data demonstrated that low vitamin D status enhances the onset of COPD-like characteristics after as little as 6 weeks of cigarette smoke exposure [3]. 
Furthermore, vitamin D deficiency accelerates and aggravates the development of cigarette smoke-induced emphysema, which is potentially related to enhanced elastin breakdown [3]. Elastin is a unique protein providing elasticity, resilience and deformability to dynamic tissues, such as the lungs and vasculature [7]. Elastin is an absolute requirement for both ventilation and circulation [7]. Elastogenesis starts with the synthesis of tropoelastin, which is subsequently secreted into the extracellular matrix and aligned with other monomers to form fibres [7]. These tropoelastin polymers have to be cross-linked with each other by the enzyme lysyl oxidase in order to obtain elasticity and longevity [7]. During this cross-linking process, two amino acids, desmosine and isodesmosine (DES), are formed that are unique to cross-linked elastin [8]. Degradation of cross-linked elastin fibres in lungs and blood vessels by elastases can be quantified by measuring blood levels of DES [8]. Research has shown that plasma (p)DES levels are elevated in patients with COPD compared to age- and smoking-matched controls [8]. Furthermore, it has been demonstrated that pDES is a predictor of mortality in patients with COPD [8]. We therefore regard elastin degradation as an attractive biomarker for COPD and, potentially, as a novel therapeutic target. Since vitamin D has anti-inflammatory, antioxidative, antiprotease and antimicrobial properties [9], we hypothesised that vitamin D supplementation in COPD patients might reduce the rate of elastin degradation, particularly in vitamin D-deficient subjects. In order to test this hypothesis, we measured pDES in patients with COPD from the Leuven vitamin D randomised controlled trial before, during and at the end of the intervention period. Subjects The parent study was a single-centre (University Hospitals Leuven, Belgium), double-blind, randomised, placebo-controlled trial, in which 182 COPD patients received either high-dose vitamin D (100 000 IU of vitamin D3) supplementation or placebo every 4 weeks for 1 year [1]. The study was approved by the local ethics review committee of the University Hospitals Leuven (S50722; EudraCT number: 2007-004755-11) and was registered with ClinicalTrials.gov (NCT00666367). The 142 vitamin D-naïve participants from the Leuven vitamin D intervention trial were included in our ancillary study (table 1). Forty participants were excluded from our current study, as they were already using vitamin D supplementation at study entry. Details of the Leuven vitamin D intervention trial have been published previously [1]. Plasma desmosine measurements The rate of elastin degradation was quantified by measuring pDES levels. Subjects with the highest pDES concentrations were assumed to have the highest rates of elastin degradation. Isodesmosine and desmosine fractions were measured separately by liquid chromatography-tandem mass spectrometry as previously described, using deuterium-labelled desmosine as an internal standard [10,11]. Coefficients of variation for intra- and inter-assay imprecision were <10%; the lower limit of quantification was 0.2 ng·mL−1 and assay linearity extended up to 20 ng·mL−1. pDES levels were presented as the sum of the isodesmosine and desmosine fractions. After randomisation, follow-up visits occurred every 4 months (at 4, 8 and 12 months). Blood was drawn independently of vitamin D intake. 
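In essence, quantification against the deuterium-labelled internal standard reduces to a peak-area ratio scaled by the spiked amount; the sketch below illustrates this with invented peak areas and an assumed spike level, applying the stated working range as a reporting check. It is an illustration only, not the laboratory's actual pipeline:

```python
D_IS_SPIKE_NG_ML = 5.0         # assumed amount of deuterated desmosine spiked
LLOQ, LINEAR_MAX = 0.2, 20.0   # ng/mL, the working range stated above

def pdes_ng_ml(analyte_area, is_area, response_factor=1.0):
    # Ratio of analyte to internal-standard peak area, scaled by the spike.
    return (analyte_area / is_area) * D_IS_SPIKE_NG_ML * response_factor

conc = pdes_ng_ml(analyte_area=8.4e4, is_area=1.2e5)  # invented peak areas
reportable = LLOQ <= conc <= LINEAR_MAX
print(f"{conc:.2f} ng/mL, within working range: {reportable}")
```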
Blood samples were available from 142 patients at baseline, from 133 patients at 4 months, from 129 patients at 8 months and from 116 patients at 12 months. The plasma samples had been frozen at −80°C for 6 to 7 years. It is unlikely that the storage time influenced pDES concentrations, given the extreme stability of DES. Other serum measurements Serum 25(OH)D levels were measured at baseline and after 12 months. Total serum 25(OH)D levels were measured in multiple batches by radioimmunoassay (DiaSorin, Brussels, Belgium) according to the standard protocol. These are mean values of duplicate measures. Levels were expressed in ng·mL−1 (conversion factor for nmol·L−1: 2.5). Furthermore, serum calcium and phosphate levels were measured every 4 months to monitor the safety of vitamin D supplementation. Statistical analysis Analyses were undertaken using SPSS software (version 24, IBM, Chicago, IL, USA). Univariate linear regression analysis was used to assess associations between variables, corrected for age as a covariate. Repeated measurements linear mixed model analysis was used to determine pDES change during the course of the study, measured at baseline, 4, 8 and 12 months, also corrected for age. A p<0.05 was used as the threshold for statistical significance. Results Baseline vitamin D status and desmosine No significant difference in 25(OH)D levels was found between the placebo and intervention groups at baseline. As expected, 25(OH)D levels were significantly higher in the intervention arm compared to the placebo arm at 12 months (p<0.001; figure 1). A significant positive association was found between age and pDES levels (p<0.0005), and all pDES levels were therefore corrected for this variable. Baseline calcium, phosphate and desmosine Vitamin D supplementation did not significantly influence serum calcium and phosphate levels. No significant association between baseline serum calcium and pDES levels was found in the total study group (p=0.230). A significant association between baseline serum phosphate and pDES levels was found, independent of the intervention (p<0.0001; figure 5). Discussion We investigated the effects of serum 25(OH)D levels and high-dose vitamin D supplementation on the rate of elastin degradation in patients with COPD. Baseline serum 25(OH)D levels did not associate with elastin degradation markers. Vitamin D supplementation did not reduce elastin degradation in the whole study population compared to placebo, although a significant and unexpected association was found in the intervention arm between higher baseline serum 25(OH)D levels and a deceleration of elastin degradation during the course of the study. Vitamin D has anti-inflammatory and antioxidative effects [9], which could potentially dampen the rate of elastin degradation. Systemic inflammation in COPD is associated with higher circulating DES levels [8]. Furthermore, reactive oxygen species have the potential to oxidise and consequently weaken DES cross-links, leading to accelerated elastin degradation [12]. We therefore speculated that vitamin D supplementation could favourably shift the elastase/anti-elastase balance and protect against elastin degradation. However, we did not observe a deceleration of elastin degradation in the total intervention arm. In order to test our hypothesis that vitamin D-deficient COPD patients would particularly benefit from vitamin D supplementation, we explored the association between pDES change during the course of the intervention period and baseline 25(OH)D levels. 
Whereas we had expected to find a positive association in the intervention arm between baseline serum 25(OH)D levels and pDES change, we found the opposite. This may suggest that higher serum 25(OH)D levels are needed to obtain any protective effect of vitamin D supplementation on elastin degradation. Obviously, such 25(OH)D levels (>50 ng·mL−1) were rare in our study population. There is an apparent paradox given that vitamin D supplementation decreased exacerbation frequency in vitamin D-deficient participants [1], whereas our post hoc analysis revealed that the intervention had accelerated elastin degradation in these subjects. We suspect that exogenous vitamin D might have both favourable and unfavourable effects in patients with COPD. The reducing effect on exacerbation frequency and elastin degradation is probably due to vitamin D's anti-inflammatory properties. The enhancing effect of vitamin D supplementation on elastin degradation might potentially be explained by a transient rise in calcium levels with bolus administration [13]. Intermittent high-dose bolus interventions result in a sharp rise in serum 25(OH)D levels to often supra-physiological concentrations at the time of administration [14], which have been associated with hypercalcaemia and even mortality [15,16]. The effect of high-dose vitamin D supplementation on elastin's calcium content is much more pronounced than the transient rise in blood calcium levels [17]. This phenomenon was demonstrated in rats treated with extremely high-dose vitamin D [17]. Aortic tissue calcium content was raised ∼15 times, whereas the calcium concentration in the rats' serum was minimally affected [17]. Vitamin D supplementation also caused a >50% reduction of aortic DES content in this animal model, illustrating the close relationship between vitamin D, elastin calcification and elastin degradation [17]. The calcifying effect of extremely high-dose vitamin D is not unique to the vasculature: tissue calcium levels in rats' lungs were also much higher in the vitamin D group than in the control group [18]. The effect of vitamin D on elastin degradation in the lungs was unfortunately not assessed in this animal study [18]; however, we would expect a similar reduction of pulmonary DES levels. Although we did not observe a difference in serum calcium levels at any time during follow-up, a calcifying effect of transient elevations of serum calcium following vitamin D boluses may explain our negative observations, as blood samples were collected independently of drug intake [13]. In addition to this, there are also reasons to assume that bolus administrations not only have more unfavourable effects, but also weaker favourable effects than daily vitamin D supplementation. In particular, a recent meta-analysis demonstrated that the protective effect against infections was only obtained with daily dose interventions and not with pulses of vitamin D supplements [19]. Other studies with daily dose interventions in vitamin D-deficient COPD patients are warranted. Interestingly, there is evidence to suggest that vitamin D and K may play synergistic roles in the prevention of elastin calcification and degradation [20]. Matrix Gla protein (MGP) is a potent inhibitor of elastin mineralisation and degradation which requires vitamin K for its activation [21]. 
A vitamin D-responsive element is found in the promoter of the MGP gene, which has the capacity to upregulate gene expression following vitamin D binding, thereby increasing the demand for vitamin K to carboxylate the surplus inactive MGP [22]. Administration of vitamin D may therefore have the potential to induce relative vitamin K deficiency through this mechanism [20]. Recent studies indeed show that vitamin D supplementation reduces vitamin K status [23,24]. Survival is strongly reduced in kidney transplant recipients who are treated with active vitamin D and have low versus high vitamin K status, which is most likely caused by calcifying effects of vitamin D on blood vessels unopposed by sufficient MGP activated by vitamin K [24]. Furthermore, data from our group show the presence of an inverse association between vitamin K status and the rate of elastin degradation [25]. Although we did not assess vitamin K status in our study, it might be that vitamin D-deficient COPD patients also had low baseline vitamin K status and therefore experienced negative effects of vitamin D supplementation on elastin degradation. We hypothesise that vitamin K might potentially negate the alleged adverse effect of vitamin D administration on elastin calcification and degradation. FIGURE 5 Association between phosphate and desmosine levels at baseline. Scatterplot showing the association between baseline serum phosphate (mg·dL−1) and plasma desmosine levels (µg·L−1). All 142 patients from both the placebo and intervention groups were included. A significant positive association was found between both variables (p<0.0001). Plasma desmosine = −0.739 + (age)×0.008 − (baseline serum phosphate)×0.233 (η2=0.157 and adjusted η2=0.145). An interesting observation is the positive association between serum phosphate and pDES levels. Although this correlation should be replicated in an independent cohort before drawing any definitive conclusions, data are available that could potentially explain why phosphate might have an accelerating effect on elastin degradation. Hyperphosphataemia is a well-established risk factor for arterial calcification and mortality in patients with end-stage kidney disease [26]. However, a recent study demonstrated that higher serum phosphate levels are also strongly associated with increased mortality in patients with COPD [27]. Elastocalcinosis is characterised by the accumulation of calcium phosphate (i.e. hydroxyapatite) within the arterial wall. Whereas elastin with little hydroxyapatite is relatively resistant to elastases, the vulnerability to these degrading enzymes increases in parallel with the increasing calcium phosphate content, causing accelerated elastin degradation [28]. If the positive relationship between serum phosphate and pDES could be replicated, it would form the rationale for an intervention trial to assess the effect of phosphate-reducing interventions on disease progression and rates of elastin degradation in COPD. One important limitation of our study is the low number of patients with normal to high baseline serum 25(OH)D levels. Interestingly, we observed a favourable pDES decrease during the course of the study in every subject from the intervention arm with a baseline serum 25(OH)D level above 30 ng·mL−1. However, due to the paucity of these patients (only five in the intervention arm), we lacked adequate power to determine whether vitamin D supplementation in vitamin D-sufficient patients indeed decelerates elastin degradation. 
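To make the fitted model in the figure 5 legend concrete, the following sketch fits the same age-adjusted linear model with Python's statsmodels on a synthetic stand-in dataset (the study itself used SPSS). The generating coefficients are those printed in the legend; the covariate distributions and noise level are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(68, 8, 142),            # years (invented distribution)
    "phosphate": rng.normal(3.2, 0.5, 142),   # mg/dL (invented distribution)
})
# Toy outcome generated from the coefficients printed in the legend.
df["pdes"] = (-0.739 + 0.008 * df["age"] - 0.233 * df["phosphate"]
              + rng.normal(0, 0.05, 142))

fit = smf.ols("pdes ~ age + phosphate", data=df).fit()
print(fit.params)     # recovers approximately the generating coefficients
print(fit.rsquared)   # plays the role of the eta-squared in the legend
```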
Another limitation may be found in the study population of tertiary care patients, in which elastin degradation might be affected by many other factors, such as repeated exacerbations. Additional studies are therefore needed to assess the effects of vitamin D supplementation on pDES levels in a population-based COPD cohort. In conclusion, we did not find an effect of serum 25(OH)D levels on the rate of elastin degradation in vitamin D-naïve patients. Contrary to our hypothesis, vitamin D supplementation seems to decelerate elastin degradation in vitamin D-sufficient COPD patients and not in vitamin D-deficient subjects.
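For completeness, the repeated-measures analysis described in the statistical methods (linear mixed models in SPSS, corrected for age) could be set up along these lines in Python; the column names and model formula are assumptions for illustration, not the authors' exact specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format table: one row per patient-visit, with columns
# patient_id, month (0, 4, 8, 12), arm ("placebo" or "vitD"), age, pdes.
long = pd.read_csv("pdes_long.csv")

model = smf.mixedlm("pdes ~ month * arm + age",
                    data=long, groups=long["patient_id"])
result = model.fit()
print(result.summary())  # the month:arm interaction tests arm-specific change
```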
Notes on simplicial rook graphs
Notes on simplicial rook graphs The simplicial rook graph $\mathrm{SR}(m,n)$ is the graph whose vertices are the sequences of nonnegative integers of length $m$ summing to $n$, where two such sequences are adjacent when they differ in precisely two places. We show that $\mathrm{SR}(m,n)$ has integral eigenvalues and smallest eigenvalue $s = \max\left(-n, -\binom{m}{2}\right)$, and that this graph has a large part of its spectrum in common with the Johnson graph $J(m+n-1,n)$. We determine the automorphism group and several other properties. Introduction Let $\mathbb{N}$ be the set of nonnegative integers, and let $m, n \in \mathbb{N}$. The simplicial rook graph $\mathrm{SR}(m, n)$ is the graph obtained by taking as vertices the vectors in $\mathbb{N}^m$ with coordinate sum $n$, and letting two vertices be adjacent when they differ in precisely two coordinate positions. Then $\mathrm{SR}(m,n)$ has $v = \binom{m+n-1}{n}$ vertices. Integrality of the eigenvalues We start with the main result, proved by a somewhat tricky induction. Theorem 2.1 All eigenvalues of $\mathrm{SR}(m, n)$ are integers. Proof Let $\Gamma$ be the graph $\mathrm{SR}(m, n)$, and let $X$ be its vertex set. The adjacency matrix $A$ of $\Gamma$ acts as a linear operator on $\mathbb{R}^X$ (sending each vertex to the sum of its neighbors). By induction, we construct a series of subspaces $0 = U_0 \subseteq U_1 \subseteq \cdots \subseteq U_t = \mathbb{R}^X$ and find integers $c_i$ such that $(A - c_iI)U_i \subseteq U_{i-1}$ ($1 \le i \le t$). Then $p(A) := \prod_i (A - c_iI)$ vanishes identically, and all eigenvalues of $A$ are among the integers $c_i$. For $j \ne k$, let $A_{jk}$ be the matrix that describes adjacency between vertices that differ only in the $j$- and $k$-coordinates. Then $A = \sum_{j<k} A_{jk}$. If $(A_{jk} - c_{jk}I)u \in U$ for all $j, k$, then $(A - cI)u \in U$ for $c = \sum c_{jk}$. A basis for $\mathbb{R}^X$ is given by the vectors $e_x$ ($x \in X$) that have $y$-coordinate 0 for $y \ne x$, and $x$-coordinate 1. For $S \subseteq X$, let $e_S := \sum_{x \in S} e_x$, so that $e_X$ is the all-1 vector. Since $(A - cI)e_X = 0$ for $c = (m-1)n$, we can put $U_1 = \langle e_X \rangle$. For partitions $\Pi$ of the set of coordinate positions $\{1, \ldots, m\}$ and integral vectors $z$ indexed by $\Pi$ that sum to $n$, let $S_{\Pi,z}$ be the set of all $u \in X$ with $\sum_{i \in \pi} u_i = z_\pi$ for all $\pi \in \Pi$. If $\Pi$ is a partition into singletons, then $|S_{\Pi,z}| = 1$. For a vector $y$ indexed by a partition $\Sigma$, let $\tilde{y}$ be the sequence of pairs $(y_\pi, |\pi|)$ ($\pi \in \Sigma$) sorted lexicographically: with the $y_\pi$ in nondecreasing order, and for given $y_\pi$ with the $|\pi|$ in nondecreasing order. We use induction to show for $S = S_{\Pi,z}$ and suitable $c$ that the image $(A - cI)e_S$ lies in the subspace $U$ spanned by the $e_T$ for $T = S_{\Sigma,y}$, where $(\Sigma, y) < (\Pi, z)$. 
Note that the sets $S = S_{\Pi,z}$ induce regular subgraphs of $\Gamma$. Indeed, the induced subgraph is a copy of the Cartesian product $\prod_{\pi} \mathrm{SR}(|\pi|, z_\pi)$. The image $(A - cI)e_S$ can be viewed as a multiset in which the $x \in X$ occur with certain multiplicities. The fact that $S$ induces a regular subgraph means that we can adjust $c$ to give all $x \in S$ any desired given multiplicity, while the multiplicity of $x \notin S$ does not depend on $c$. If $j, k$ belong to the same part of $\Pi$, then $A_{jk}e_S$ only contains points of $S$ and can be ignored. So, let $j \in \pi$, $k \in \rho$, where $\pi, \rho \in \Pi$, $\pi \ne \rho$, and consider $A_{jk}e_S$. Abbreviate $\pi \cup \{k\}$ with $\pi + k$ and $\pi \setminus \{j\}$ with $\pi - j$. The image $(A_{jk} - cI)e_S$ equals $S_1 - S_2$, where $S_1$ is the sum of all $e_T$ with $T = S_{\Sigma,y}$ and $\Sigma = (\Pi \setminus \{\pi, \rho\}) \cup \{\pi - j, \rho + j\}$ (omitting $\pi - j$ if it is empty) and $y$ agrees with $z$ except that $y_{\pi-j} \le z_\pi$ and $y_{\rho+j} \ge z_\rho$ (of course $y_{\pi-j} + y_{\rho+j} = z_\pi + z_\rho$), and $S_2$ is the sum of all $e_T$ with $T = S_{\Sigma,y}$ and $\Sigma = (\Pi \setminus \{\pi, \rho\}) \cup \{\pi + k, \rho - k\}$ and $y$ agrees with $z$ except that $y_{\pi+k} < z_\pi$ and $y_{\rho-k} > z_\rho$. [Let $u$ be a $(j,k)$-neighbor of $s \in S$. Since $\sum_{i \in \pi} s_i = z_\pi$, it follows that $\sum_{i \in \pi-j} u_i = \sum_{i \in \pi-j} s_i \le z_\pi$, so that $u$ is counted in $S_1$. Conversely, if $u$ is counted in $S_1$, then we find a $(j,k)$-neighbor $s \in S$ by moving $u_j - s_j$ from position $j$ to position $k$ (if $u_j > s_j$) or moving $s_j - u_j$ from position $k$ to position $j$ (if $s_j > u_j$). The latter is impossible if $u_k < s_j - u_j$, i.e. $\sum_{i \in \pi+k} u_i < z_\pi$, and these cases are subtracted in $S_2$.] We are done by induction. Indeed, for the pair $\{j, k\}$ we can choose which of the two is called $j$, and we pick notation such that $(z_\pi, |\pi|) \le (z_\rho, |\rho|)$ in lexicographic order. Now in $S_1$ and $S_2$ only $(\Sigma, y)$ occur with $(\Sigma, y) < (\Pi, z)$. The smallest eigenvalue We find the smallest eigenvalue of $\Gamma$ by observing that $\Gamma$ is a halved graph of a bipartite graph. Consider the bipartite graph $B$ whose vertices are the vectors in $\mathbb{N}^m$ with coordinate sum at most $n$, where two vertices are adjacent when one has coordinate sum $n$, the other coordinate sum less than $n$, and both differ in precisely one coordinate. Let $V$ be the set of vectors in $\mathbb{N}^m$ with coordinate sum $n$. Two vectors $u, v$ in $V$ are adjacent in $\Gamma$ precisely when they have distance 2 in $B$. If the adjacency matrix of $B$ is $\left(\begin{smallmatrix} 0 & N \\ N^\top & 0 \end{smallmatrix}\right)$, with top and left indexed by $V$, then for the adjacency matrix $A$ of $\Gamma$ we find $A + nI = NN^\top$, so that $A + nI$ is positive semidefinite, and the smallest eigenvalue of $A$ is not smaller than $-n$. Together with the results of [9], this proves that the smallest eigenvalue of $A$ equals $\max\left(-n, -\binom{m}{2}\right)$. Proof Let $s$ be the smallest eigenvalue of $A$. We just saw that $s \ge -n$. Elkies [6] observed that $s \ge -\binom{m}{2}$, since $A$ is the sum of $\binom{m}{2}$ matrices $A_{jk}$ that describe adjacency where only coordinates $j, k$ are changed. Each $A_{jk}$ is the adjacency matrix of a graph that is a union of cliques and hence has smallest eigenvalue not smaller than $-1$. Then $A = \sum A_{jk}$ has smallest eigenvalue not smaller than $-\binom{m}{2}$. It is shown in [9] that the eigenvalue $-\binom{m}{2}$ has multiplicity at least $\binom{n-\binom{m}{2}+m-1}{m-1}$ and hence occurs with nonzero multiplicity if $n \ge \binom{m}{2}$. It is also shown in [9] that the multiplicity of the eigenvalue $-n$ is at least the number of permutations in $\mathrm{Sym}(m)$ with precisely $n$ inversions, that is, the number of words $w$ of length $n$ in this Coxeter group, and this is nonzero precisely when $n \le \binom{m}{2}$. 
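Before turning to the precise multiplicities, note that all of the above is easy to confirm numerically for small parameters. The following sketch (my own illustration, not part of the paper) builds the adjacency matrix of SR(m, n) by brute force and checks Theorem 2.1, the smallest-eigenvalue formula, and the multiplicity of −n established in Proposition 3.3 below:

```python
import numpy as np
from itertools import permutations, product
from math import comb

def sr_adjacency(m, n):
    # Brute-force adjacency matrix of SR(m, n).
    verts = [v for v in product(range(n + 1), repeat=m) if sum(v) == n]
    A = np.zeros((len(verts), len(verts)), dtype=int)
    for i, u in enumerate(verts):
        for j in range(i + 1, len(verts)):
            if sum(a != b for a, b in zip(u, verts[j])) == 2:
                A[i, j] = A[j, i] = 1
    return A

def inversion_count(m, n):
    # Number of permutations in Sym(m) with exactly n inversions.
    return sum(
        1 for p in permutations(range(m))
        if sum(p[i] > p[j] for i in range(m) for j in range(i + 1, m)) == n)

for m, n in [(3, 2), (3, 5), (4, 3)]:
    eig = np.linalg.eigvalsh(sr_adjacency(m, n))
    assert np.allclose(eig, np.round(eig))              # Theorem 2.1
    eig = np.round(eig).astype(int)
    assert eig.min() == max(-n, -comb(m, 2))            # smallest eigenvalue
    assert (eig == -n).sum() == inversion_count(m, n)   # Proposition 3.3
print("integrality, smallest eigenvalue and -n multiplicity verified")
```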
Proposition 3.2 The eigenvalue $-\binom{m}{2}$ has multiplicity precisely $\binom{n-\binom{m}{2}+m-1}{m-1}$. Proof For each vertex $u$, and $1 \le j < k \le m$, let $C_{jk}(u)$ be the $(j,k)$-clique on $u$, that is, the set of all vertices $v$ with $v_i = u_i$ for $i \ne j, k$. An eigenvector $a = (a_u)$ for the eigenvalue $-\binom{m}{2}$ must be a common eigenvector of all $A_{jk}$ for the eigenvalue $-1$. That means that $\sum_{v \in C} a_v = 0$ for each set $C = C_{jk}(u)$. Order the vertices by $u > v$ when $u_d > v_d$, where $d = d_{uv}$ is the largest index where $u, v$ differ. Suppose $u_i = s$ for some index $i$ and $s \le m - i - 1$. We can express $a_u$ in terms of $a_v$ for smaller $v$ with $d_{uv} \ge m - s$ via $\sum_{v \in C} a_v = 0$, where $C = C_{i,m-s}(u)$. Indeed, this equation will express $a_u$ in terms of the $a_v$ with $v \in C$, $v \ne u$; by induction each such $a_v$ can in its turn be expressed in terms of $a_w$ where $w$ is smaller and $d_{vw} \ge m - t > m - s$, so that $w$ is smaller than $u$, and $d_{uw} > m - s$. In this way, we expressed $a_u$ when $u_i \le m - i - 1$ for some $i$. The free $a_u$ have $u_i \ge m - i$ for all $i$, and the vector $u'$ with $u'_i = u_i - (m - i)$ is nonnegative and sums to $n - \binom{m}{2}$. There are $\binom{n-\binom{m}{2}+m-1}{m-1}$ such vectors, so this is an upper bound for the multiplicity. But by [9] this is also a lower bound. Thanks to a suggestion by Aart Blokhuis, we can also settle the multiplicity of the eigenvalue $-n$. Proposition 3.3 The multiplicity of the eigenvalue $-n$ equals the number of elements of $\mathrm{Sym}(m)$ with $n$ inversions, that is, the coefficient of $t^n$ in the product $\prod_{i=2}^{m}(1 + t + \cdots + t^{i-1})$. Proof As already noted, it is shown in [9] that the multiplicity of the eigenvalue $-n$ is at least the number of permutations in $\mathrm{Sym}(m)$ with precisely $n$ inversions. The identity $\sum_w t^{\ell(w)} = \prod_{i=2}^{m}(1 + t + \cdots + t^{i-1})$, where $w$ runs over $\mathrm{Sym}(m)$ and $\ell(w)$ is the number of inversions of $w$, is standard, cf. [8], p. 73. Since $A + nI = NN^\top$, the multiplicity of the eigenvalue $-n$ is the nullity of $N$, and we need an upper bound for that. We first define a matrix $P$ and observe that $N$ and $P$ have the same column space and hence the same rank. For $u, v \in \mathbb{N}^m$, write $u \le v$ when $u_i \le v_i$ for all $i$. Let $P$ be the 0-1 matrix with the same row and column indices (elements of $\mathbb{N}^m$ with sum $n$ and sum smaller than $n$, respectively) where $P_{xy} = 1$ when $y \le x$. Recall that $N$ is the 0-1 matrix with $N_{xy} = 1$ when $x$ and $y$ differ in precisely one coordinate position. Let $M(y)$ denote column $y$ of the matrix $M$. For $d = n - \sum y_i$, we find that $N(y) = d\,P(y) + \sum_{i=1}^{d} (-1)^i (d-i) \sum_{w \in W_i} P(y+w)$, where $W_i$ is the set of vectors in $\{0,1\}^m$ with sum $i$. Indeed, suppose that $x \ge y$ and that $x$ and $y$ differ in $j$ positions. Then $j \le d$, and $N_{xy} = \delta_{1j}$, while the $x$-entry of the right-hand side equals $d + \sum_{i=1}^{d} (-1)^i (d-i) \binom{j}{i} = \delta_{1j}$. We see that $N(y)$ and $d\,P(y)$ differ by a linear combination of columns $P(y')$ where $\sum y'_i > \sum y_i$, and hence that $N$ and $P$ have the same column space. Aart Blokhuis remarked that the coefficient of $t^n$ in the product $\prod_{i=2}^{m}(1 + t + \cdots + t^{i-1})$ is precisely the number of vertices $u$ satisfying $u_i < i$ for $1 \le i \le m$. Thus, it suffices to show that the rows of $N$ (or $P$) indexed by the remaining vertices are linearly independent. Consider a linear dependence between the rows of $P$ indexed by the remaining vertices, and let $P'$ be the submatrix of $P$ containing the rows that occur in this dependence. Order vertices in reverse lexicographic order, so that $u$ is earlier than $u'$ when $u_h < u'_h$ and $u_i = u'_i$ for $i > h$. Let $x$ be the last row index of $P'$ (in this order). Let $h$ be an index where the inequality $x_i < i$ is violated, so that $x_h \ge h$. Let $e_i$ be the element of $\mathbb{N}^m$ that has all coordinates 0 except for the $i$-coordinate, which is 1. Let $z = x - he_h$. Let $H = \{1, \ldots, h-1\}$. 
For $S \subseteq H$, let $\chi(S)$ be the element of $\mathbb{N}^m$ that has $i$-coordinate 1 if $i \in S$, and 0 otherwise. Consider the linear combination $p = \sum_S (-1)^{|S|} P'(z + \chi(S))$ of the columns of $P'$. We shall see that $p$ has $x$-entry 1 and all other entries equal to 0. But that contradicts the existence of a linear dependence. If $u$ is a row index of $P'$, and not $z \preceq u$, then $p_u = 0$. If $z \preceq u$, and $z_i < u_i$ for some $i < h$, then the alternating sum vanishes, and $p_u = 0$ again. In the remaining cases one checks that $p_u = \delta_{ux}$, as claimed.

These three propositions settle conjectures from [9].

An equitable partition

A partition $\{X_1, \ldots, X_t\}$ of the vertex set $X$ of a graph $\Gamma$ is called equitable when for all $i, j$ the number $e_{ij}$ of vertices in $X_j$ adjacent to a given vertex $x \in X_i$ does not depend on the choice of $x \in X_i$. In this case, the matrix $E = (e_{ij})$ is called the quotient matrix of the partition. All eigenvalues of $E$ are also eigenvalues of $\Gamma$, realized by eigenvectors that are constant on the sets $X_i$. There is a basis of $\mathbb{R}^X$ consisting of eigenvectors that either are constant on all $X_i$ or sum to zero on all $X_i$. The partition of $X$ into orbits of an automorphism group $G$ of $\Gamma$ is always equitable. In this section, we indicate an equitable partition of $SR(m,n)$, and in the next section a much finer one.

Let $\Gamma$ be the graph $SR(m,n)$ where $n > 0$, and let $V_i$ be the set of vertices with precisely $i$ nonzero coordinates. Each vertex of $V_i$ has a constant number of neighbors in $V_{i-1}$ and in $V_{i+1}$, and all other neighbors in $V_i$, so this partition is equitable. The quotient matrix $E$ is tridiagonal.

Proof. The $e_{ij}$ are easily checked. It remains to find the eigenvalues. Let $u$ be an eigenvector for the Johnson graph. It is not wrong to list these eigenvalues and multiplicities for $0 \le i \le n$, since by convention the multiplicities of an eigenvalue are added, and eigenvalues with multiplicity 0 are no eigenvalues. For example, $J(5,4)$ has spectrum $4^1\,(-1)^4$, where multiplicities are written as exponents.

The common part of the spectra of SR(m, n) and J(m + n − 1, n)

Both $SR(m,n)$ and $J(m+n-1,n)$ have $\binom{m+n-1}{n}$ vertices. Both have valency $n(m-1)$. These graphs resemble each other and have a large part of their spectrum in common. Let $m, n > 0$.

Proposition 5.1. The graphs $SR(m,n)$ and $J(m+n-1,n)$ have equitable partitions with the same quotient matrix $E$, where $E$ has eigenvalues $(n-i)(m-i) - n$ with multiplicity $\binom{m}{i}$ for $0 \le i \le n-1$, and multiplicity $\binom{m}{n} - 1$ for $i = n$. In particular, the spectrum of $E$ is a common part of the spectrum of $SR(m,n)$ and that of $J(m+n-1,n)$.

Proof. Partition the vertices of both graphs according to their supports. In both cases the quotient entries for parts indexed by $S$ and $T$ are the same function of $i$ when $|S| = |T| = i$ and $S, T$ differ in two places, and 0 otherwise. It follows that our partitions are equitable with the same quotient matrix $E$. We may conclude that $J(m+n-1,n)$ and $SR(m,n)$ have the $\sum_{i=1}^{n}\binom{m}{i}$ eigenvalues of the matrix $E$ in common. Claim: these eigenvalues are $(n-i)(m-i) - n$ with multiplicity $\binom{m}{i}$ for $0 \le i < n$, and multiplicity $\binom{m}{n} - 1$ for $i = n$. These are the eigenvalues of $J(m+n-1,n)$, so we need only confirm the multiplicities. Let $W_{ij}$ be the (symmetrized) inclusion matrix of $i$-subsets against $j$-subsets in a $v$-set; $W_{i,-1}$ has no columns, and $W_{-1,j}$ has no rows.

Consider adjacent vertices $x = (a, b, c, \ldots)$ and $y = (a+d, b-d, c, \ldots)$. Their common neighbors of the form $(a', b', c, \ldots)$ satisfy $a' + b' = a + b$. Note for later use the structure of the graph $\Delta(x,y)$ induced by the common neighbors of $x$ and $y$. It is $K_{a+b-1} + K_{m-2} + K_g$, where $g$ is the number of $c$ (common coordinates of $x$ and $y$) not less than $d$. If $\lambda_{xy} = m + n - 3$ for all $n(m-1)$ neighbors $y$ of $x$ (and $m > 2$), then $x$ has either a unique nonzero coordinate $n$ or only coordinates 0, 1.
Thus, we can recognize this set of $m + \binom{m}{n}$ vertices. The induced subgraph (for $n > 2$) is isomorphic to $K_m + J(m,n)$. We see that $\Gamma$ determines $m + n - 3$ and also the pair $\{m, \binom{m}{n}\}$. Now $m$ is the smallest element of the pair distinct from 0, 1, so we find $m$ and $n$. Suppose first that $n \ne m - 1$. Then we have recognized the set $S$ of vectors with a unique nonzero coordinate. At distance $i$ from $S$ lie the vectors with precisely $i + 1$ nonzero coordinates, and the positions of the nonzero coordinates of a vector $u$ are determined by the set of nearest vertices in $S$. We show by induction on $m$ that all vertex labels are determined. If a vertex $(a, b, \ldots)$ has at least two nonzero coordinates, and $m > 3$, then its neighbor $(0, a+b, \ldots)$ lies in the $SR(m-1, n)$ on the vertices with first coordinate zero, and by induction $a + b$ is determined. If it has at least three nonzero coordinates, $(a, b, c, \ldots)$, then each of $a+b$, $a+c$, $b+c$ is determined, and hence also $a, b, c$. If it has precisely two nonzero coordinates, $(a, n-a, 0, \ldots)$, then it has neighbors $(a-i, n-a, i, 0, \ldots)$ $(1 \le i \le a-1)$ and $(a, n-a-j, j, 0, \ldots)$ $(1 \le j \le n-a-1)$ of which all coordinates are known, and $a$ and $n-a$ follow unless $\{a, n-a\} = \{1, 2\}$. This settles all claims when $m > 3$, $n \ne m - 1$. If $m = 3$, $n \ge 3$, we recall that the common neighbors of vertices $x$ and $y$ induce $K_{a+b-1} + K_{m-2} + K_g$, where $m - 2 = 1$ and $g \le 1$, so that $a + b$ can be recognized directly when $a + b \ge 3$. But we also know the zero pattern, so we can also recognize $a, b$ when $a + b \le 2$. This determines all for $m = 3$. Finally, if $m = n + 1 \ge 4$, we have to distinguish the copy of $K_m$ on the vectors of shape $n\,0^n$ from that on the vectors of shape $1^n\,0$. Both sets have $m\binom{m-1}{2}$ neighbors, but if $x = (n, 0, \ldots)$ and $y = (n-a, a, \ldots)$, then $\Delta(x,y) \cong 2K_{n-1}$, while $\Delta(x,y) \cong K_{n-1} + K_{n-2} + K_1$ if $x = (1, \ldots, 1, 1, 0)$ and $y = (2, 1, \ldots, 1, 0, 0)$. This settles all cases.

The diameter of $SR(m,n)$ equals $\min(m-1, n)$.

Proof. The diameter is at most $m - 1$, since one can walk from one vertex to another and decrease the number of different coordinates by at least one at each step. The diameter is also at most $n$, since one can walk from one vertex to another and decrease the sum of the absolute values of the coordinate differences by at least two at each step. If $m > n$, then $(0, \ldots, 0, n)$ and $(1, \ldots, 1, 0, \ldots, 0)$ show that the diameter is at least $n$. If $m \le n$, then $(0, \ldots, 0, n)$ and $(1, \ldots, 1, n-m+1)$ show that the diameter is at least $m - 1$.

Maximal cliques and local graphs

We classify the cliques (complete subgraphs) and find the maximal ones. We also examine the structure of the local graphs of $\Gamma$.

Lemma 8.1. Cliques $C$ in $SR(m,n)$ are of three types:
1. All adjacencies are $(j,k)$-adjacencies for fixed $j, k$. Now $|C| \le n + 1$.
2. $C \subseteq \{x + ae_i \mid i \in I\}$ for some vector $x$, some $a > 0$ and some $I \subseteq \{1, \ldots, m\}$. Now $|C| \le m$.
3. $C \subseteq \{x - ae_i \mid i \in I\}$ for some vector $x$, some $a > 0$ and some $I \subseteq \{1, \ldots, m\}$, with $x_i \ge a$ for $i \in I$. Now $|C| \le m$.

Proof. Suppose $u, v, w$ are pairwise adjacent, not all $(j,k)$-adjacent for the same pair $(j,k)$. Then $u, v$ are $(i,j)$-adjacent, $u, w$ are $(i,k)$-adjacent, and $v, w$ are $(j,k)$-adjacent, for certain $i, j, k$. Now $u_k = v_k$, $u_j = w_j$ and $v_i = w_i$, so that $u = x + ae_i$, $v = x + ae_j$, $w = x + ae_k$, where $a > 0$ or $a < 0$.

In particular, the maximum clique size is $\max(m, n+1)$. Proof. For $n > 0$, the $m$ vectors $ne_i$ are distinct and mutually adjacent, forming an $m$-clique. And for $m > 1$, the $n + 1$ vectors $ae_1 + (n-a)e_2$ $(0 \le a \le n)$ form an $(n+1)$-clique. Conversely, no larger cliques occur, as we just saw.

Fix a vertex $u$ of $SR(m,n)$. We describe the structure of the local graph of $u$, that is, the graph induced by $SR(m,n)$ on the set $U$ of neighbors of $u$.
If $vw$ is an edge in this local graph, then $uvw$ is a clique in $SR(m,n)$, so we can invoke the above classification. The set $U$ has a partition into $\binom{m}{2}$ cliques of type 1, where the $(j,k)$-clique has size $u_j + u_k$. The set $U$ has a partition into $n$ cliques of type 2, each of size $m - 1$. Finally, $U$ has a partition into cliques of type 3.

Lemma 8.4. Let $m, n \ge 3$, and fix a vertex $u$. Each neighbor $v$ of $u$ is contained in at most two maximal cliques precisely when $u$ has only one nonzero coordinate.

Proof. Suppose each point $v$ of $U$ is covered by at most two maximal cliques. Then one of the cliques of types 1 or 3 on $v$ in $U$ has size 1. This means that whenever $u_j + u_k \ge 2$, we have $u_i = 0$ for $i \ne j, k$. If $u_j \ge 2$, this means that $u$ has only one nonzero coordinate. If $u_j = u_k = 1$, this means that $n = 2$.

Suppose $m, n \ge 3$. We see that we can retrieve $V_1$ as the set of vertices that are locally the union of two cliques.

Cospectral mates

For $m \le 2$ or $n \le 2$, the graph $SR(m,n)$ is complete or triangular and hence determined by its spectrum, except in the case of $m = 7$, $n = 2$, where it is isomorphic to the triangular graph $T(8)$ and cospectral with the three Chang graphs (cf. [4,5]). The graph $SR(3,3)$ is 6-regular on 10 vertices, and we find that its complement is cubic with spectrum $3^1\,2^1\,1^3\,(-1)^2\,(-2)^3$. All integral cubic graphs are known, and $SR(3,3)$ is uniquely determined by its spectrum, cf. [3], Sect. 3.8. We give some further cases where $SR(m,n)$ is not determined by its spectrum: this holds for $m = 4$, $n \ge 3$ and for $n = 3$, $m \ge 4$.

Proof. Apply Godsil-McKay switching (cf. [3,7], 1.8.3, 14.2.3). Switch with respect to a 4-clique $B$ such that every vertex outside $B$ is adjacent to 0, 2 or 4 vertices inside. If $m = 4$, take $B = \{n000, 0n00, 00n0, 000n\}$. If $n = 3$, $m \ge 2$, take $B = \{ae_1 + be_2 \mid a + b = 3\}$. In both cases, every vertex outside $B$ is adjacent to 0 or 2 vertices inside. The switching operation preserves all edges and nonedges, except that it changes adjacency for pairs $bc$ with $b \in B$, $c \notin B$, and $c$ adjacent to 2 vertices of $B$, turning edges (resp. nonedges) into nonedges (resp. edges). The resulting graph has the same spectrum. We show that it is nonisomorphic to $SR(m,n)$ for $m = 4$, $n \ge 3$ and for $n = 3$, $m \ge 4$. In the former case, $B = V_1$. If switching does not change the isomorphism type, then $B$ must remain the $V_1$ of the new graph (since it is a single orbit of size $m$ contained in $V_1 \cup V_2$). But after switching, the common neighbors of $n000$ and $0n'10$ (with $n' = n - 1$) include the pairwise nonadjacent $0n'01$, $01n'0$, $001n'$, contradicting Lemma 8.4.

The eigenspace of the smallest eigenvalue

Fix $\pi \in \mathrm{Sym}(m)$, and let $a_i = \#\{j \mid i < j \text{ and } \pi_i > \pi_j\}$ for $1 \le i \le m$. Then $a = (a_i)$ is a vertex of $SR(m,n)$ when $n$ is the number of inversions of $\pi$. Say that $\sigma \in \mathrm{Sym}(m)$ is $\pi$-admissible if $a_i + i - \sigma_i \ge 0$ for $1 \le i \le m$. Let $\mathrm{Adm}(\pi)$ be the set of $\pi$-admissible permutations and define $x(\sigma)$ by $x(\sigma)_i = a_i + i - \sigma_i$. Then $\sigma \in \mathrm{Adm}(\pi)$ if and only if $x(\sigma)$ is a vertex of $SR(m,n)$.

Theorem 11.1 (Martin and Wagner [9], Thm. 3.8). For each $\pi \in \mathrm{Sym}(m)$ with $n$ inversions, let $F_\pi = \sum_{\sigma\in\mathrm{Adm}(\pi)} \mathrm{sgn}(\sigma)\, e_{x(\sigma)}$. Then each $F_\pi$ is an eigenvector of $SR(m,n)$ with eigenvalue $-n$, and the $F_\pi$ are linearly independent.

An analogous signed-sum construction associates with suitable pairs $(p, w)$ vectors $F_{p,w}$; each $F_{p,w}$ is an eigenvector of $SR(m,n)$ with eigenvalue $-\binom{m}{2}$, and for fixed $w$, the collection of all such $F_{p,w}$ is linearly independent. Picking $w = \frac{1}{2}(1-m, 3-m, \ldots$
$, m-3, m-1)$ yields the lower bound already mentioned earlier: the multiplicity of the eigenvalue $-\binom{m}{2}$ is at least $\binom{n-\binom{m}{2}+m-1}{m-1}$. For the eigenvalue $-n$, it follows that its multiplicity is at least the number of elements in $\mathrm{Sym}(m)$ with precisely $n$ inversions, and one conjectures that equality holds. The proof of Theorem 11.1 shows that for each $\pi \in \mathrm{Sym}(m)$ with $n$ inversions, the set $X_\pi = \{x(\sigma) \mid \sigma \in \mathrm{Adm}(\pi)\}$ induces a bipartite subgraph $\Delta(m,n,\pi)$ of $SR(m,n)$ that is regular of valency $n$. It follows (Proposition 11.3) that classifying all $\Delta(m,n,\pi)$ for fixed $n$ is a finite job. Let $Q_k$ denote the $k$-cube. Using Sage, we find for $n = 1$ that only $Q_1$ occurs, for $n = 2$ that only $Q_2$ occurs, for $n = 3$ that only $K_{3,3}$ and $Q_3$ occur, and for $n = 4$ that only $K_{3,3} \times K_2$ and $Q_4$ occur. For larger $n$, one finds more complicated shapes. It was conjectured in [9] that all graphs $\Delta(m,n,\pi)$ have integral spectrum.

12 Spectra for small m or n

If we fix a small value of $n$, we find a nice spectrum (eigenvalues and multiplicities are polynomials in $m$, $n$). If we fix a small value of $m \ge 3$, we get a messy result (congruence conditions also play a rôle). Below, multiplicities are written as exponents. Let $a\,{\downarrow^m}\,b$ denote the sequence of eigenvalues and multiplicities found as follows: the eigenvalues are the integers $c$ with $a \ge c \ge b$, where the first multiplicity is $m$, and each following multiplicity is 2 larger for even $c$, and 10 larger for odd $c$. The conjectured spectrum of $SR(4,n)$, $n \ge 6$, $n \ne 7$, is built from such sequences; for example, $-5$ has multiplicity $6n - 28$. The above is trivial for $m < 3$ or $n < 3$. It was done in [9] for $m = 3$ and will be done below for $n = 3, 4$. The suggested spectra for $n = 5$ were extrapolated from small cases. We have not attempted to write down a proof.

Proof (for $n = 3$). In view of the common part of the spectra of $SR(m,3)$ and $J(m+2,3)$, the fact that $m(m^2-7)/6$ is the coefficient of $t^3$ in $\prod_{i=2}^{m}(1 + t + \cdots + t^{i-1})$ (for $m \ge 3$), and the fact that the stated multiplicities sum to the total number of vertices, it follows that we only have to show the presence of the part $(m-3)^{m-1}$. Fix an index $h$, $1 \le h \le m$, and consider the vector $p$, indexed by the vertices, that is 1 on the vertices $2e_h + e_i$, is $-1$ on the vertices $e_h + 2e_i$, and is 0 elsewhere. One checks that this is an eigenvector with eigenvalue $m - 3$, and the $m$ vectors defined in this way have only a single dependency (namely, they sum to 0).

(For $n = 4$.) Any eigenvector for one of the remaining eigenvalues sums to zero on each part of the fine equitable partition found earlier, that is, on each set of vertices with given support. Since there are unique vertices with supports of sizes 1 or 4, these eigenvectors are 0 there, and we need only look at the vertices $3e_i + e_j$, $2e_i + 2e_j$ and $2e_i + e_j + e_k$. Fix an index $h$, $1 \le h \le m$, and consider the vector $p$ (indexed by the vertices) that vanishes on each vertex where $h$ is not in the support, is $-1$ on $2e_h + 2e_i$ and on $3e_h + e_i$, is 2 on $e_h + 3e_i$, is $-2$ on $2e_h + e_i + e_j$, and is 1 on $e_h + 2e_i + e_j$. One checks that this is an eigenvector with eigenvalue $2m - 5$ and that the $m$ vectors defined in this way are linearly independent. That settles the part $(2m-5)^m$. Fix a pair of indices $h, i$, $1 \le h < i \le m$, and consider the vector $p$ (indexed by the vertices) that is 1 on $e_h + 3e_j$, $2e_i + 2e_j$ and $2e_h + e_i + e_j$, is $-1$ on $e_i + 3e_j$, $2e_h + 2e_j$ and $e_h + 2e_i + e_j$, and is 0 elsewhere. One checks that this is an eigenvector with eigenvalue $m - 6$ and that the $\binom{m}{2}$ vectors defined in this way are linearly independent.
That settles the part $(m-6)^{\binom{m}{2}}$. Having found all desired eigenvalues except one, it is not necessary to construct eigenvectors for the final one, since checking $\sum\theta = \operatorname{tr} A = 0$ and $\sum\theta^2 = \operatorname{tr} A^2 = vk$ suffices.
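Proposition 3.3 is also easy to verify numerically on a small case. The following self-contained sketch (ours, not from the paper) compares the multiplicity of the eigenvalue $-n$ in $SR(m,n)$ with the number of permutations in $\mathrm{Sym}(m)$ having exactly $n$ inversions:

```python
# Numerical check (not from the paper) of Proposition 3.3: the multiplicity
# of the eigenvalue -n equals the number of permutations of Sym(m) with
# exactly n inversions, i.e. the coefficient of t^n in
# prod_{i=2}^{m} (1 + t + ... + t^{i-1}).
from itertools import permutations, product
import numpy as np

def sr_adjacency(m, n):
    verts = [v for v in product(range(n + 1), repeat=m) if sum(v) == n]
    A = np.zeros((len(verts), len(verts)))
    for i, u in enumerate(verts):
        for j, v in enumerate(verts):
            if i < j and sum(a != b for a, b in zip(u, v)) == 2:
                A[i, j] = A[j, i] = 1
    return A

def permutations_with_inversions(m, n):
    count = 0
    for perm in permutations(range(m)):
        inv = sum(perm[i] > perm[j] for i in range(m) for j in range(i + 1, m))
        count += (inv == n)
    return count

m, n = 4, 3
eigs = np.round(np.linalg.eigvalsh(sr_adjacency(m, n)), 6)
print(int(np.sum(eigs == -n)), permutations_with_inversions(m, n))  # expect 6 6
```

If the proposition holds, both printed numbers agree (6 for $m = 4$, $n = 3$).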
Hydroponically Grown Sanguisorba minor Scop.: Effects of Cut and Storage on Fresh-Cut Produce
Wild edible plants have been used in cooking since ancient times. Recently, their value has improved as a result of the scientific evidence for their nutraceutical properties. Sanguisorba minor Scop. (salad burnet) plants were hydroponically grown, and two consecutive cuts took place at 15 (C1) and 30 (C2) days after sowing. An untargeted metabolomics approach was utilized to fingerprint phenolics and other health-related compounds in this species; this approach revealed the different effects of the two cuts on the plant. S. minor showed a different and complex secondary metabolite profile, which was influenced by the cut. In fact, flavonoids increased in leaves obtained from C2, especially flavones. However, other secondary metabolites were downregulated in leaves from C2 compared to those detected in leaves from C1, as evidenced by the combination of the variables important in projection (VIP score > 1.3) and the fold-change (FC > 2). The storage of S. minor leaves for 15 days as fresh-cut produce did not induce significant changes in the phenolic content and antioxidant capacity, which indicates that the nutraceutical value was maintained. The only difference evidenced during storage was that leaves obtained from C2 showed a lower constitutive content of nutraceutical compounds than leaves obtained from C1, except for chlorophylls and carotenoids. In conclusion, the cut was the main influence on the modulation of secondary metabolites in leaves, and the effects were independent of storage.

Introduction

Since ancient times, many wild edible plants have been used in cooking. Recently, their value has improved thanks to their proven nutraceutical properties [1-3]. In fact, some researchers have recognized wild edible plants as functional foods and as a new source of bioactive compounds that are beneficial to human health for their anti-inflammatory, antimicrobial, anticarcinogenic, cytotoxic and antiproliferative properties [4-9]. For example, a number of wild plants have been used in the diet, including the stems and leaves of Sanguisorba minor, the fruit of Rosa canina, bellota acorns of Quercus ilex [5], leaves of Umbilicus rupestris (Salisb.) Dandy [10] and wild edible flowers [11]. Salad burnet (S. minor Scop.) is a wild edible species traditionally known for its edibility and use in folk medicine; nowadays, it is also recognized for its potential as a nutraceutical species [12].

Plant Materials and Growth Conditions

Seedlings of salad burnet were cultivated in a floating system at the University of Pisa in a greenhouse during the period from 15 June to 17 [...]. The micronutrient concentrations in the nutrient solution included [...] 1 µM, Zn2+ 5 µM, Mn2+ 10 µM and Mo3+ 1 µM. Electrical conductivity was 1.98 dS m-1; pH was adjusted to 5.7-6.0 with diluted sulphuric acid. The nutrient solution was continuously aerated. The plants were grown for 15 days after sowing, and when the plants had approximately 20 leaves, leaves over 5 cm were cut off at the base (C1). The same plants re-grew, and after a further 15 days (30 days after sowing), leaves over 5 cm were cut off at the base (C2). After both cuts, leaves were sampled for metabolomics analyses. A portion of the leaves from both cuts was also processed as fresh-cut produce and stored at 4 °C in dark conditions in polyethylene terephthalate (PET) boxes (150 cm3, Comital Cofresco, Italy). Each box contained approximately 15 g.
After being stored, the fresh-cut products were sampled (after 1, 2, 3, 6, 9, 13 and 15 days) to analyze their phenol, flavonoid and ascorbic acid content and the antioxidant activity.

Extraction and Untargeted Metabolomics-Based Profiling of Fresh Plant Material Obtained from Two Consecutive Cuts

Samples of the leaves derived from the two cuts were utilized for the extraction of secondary metabolites through a homogenizer-assisted extraction (Ultra-Turrax; Ika T25, Staufen, Germany), according to Borgognone et al. [23]. A total of 1 g of leaves was homogenized in 10 mL of 80% (v/v) methanol solution acidified with 0.1% formic acid. The extracts were centrifuged at 6000 g for 15 min at 4 °C. The resulting solutions were filtered using 0.22 µm cellulose syringe filters into dark vials and stored at -18 °C until analysis. Each sample was analyzed in triplicate by ultra-high-pressure liquid chromatography (UHPLC) coupled with quadrupole-time-of-flight (QTOF) mass spectrometry (Agilent Technologies, Santa Clara, CA, USA). The experimental conditions for the screening of secondary metabolites in different plant matrices were optimized in previous work [24]. Briefly, the mass spectrometer was set to operate in SCAN mode, acquiring positive ions from a JetStream electrospray source (ESI+) in the range of 100-1200 m/z. The chromatographic separation was achieved in reverse phase mode using an Agilent Zorbax Eclipse Plus C18 column (100 × 2.1 mm, 1.8 µm) and a mixture of water (phase A) and acetonitrile (phase B) as the mobile phase (both liquid chromatography-mass spectrometry grade). In addition, 0.1% (v/v) formic acid (Sigma-Aldrich, Milan, Italy) was added to both phases. The gradient went from 6% acetonitrile to 94% acetonitrile within 35 min, the flow rate was 0.22 mL min-1 and the injection volume was 6 µL. Raw data were then processed using the software Agilent Profinder B.07 and the "find-by-formula" algorithm, thus combining monoisotopic accurate mass and isotopic profile [25]. A custom database containing both phenolics (as reported in Phenol-Explorer 3.6; phenol-explorer.eu/) and sesquiterpene lactones was built and used as a reference for annotation, adopting a 5-ppm tolerance for mass accuracy. Some other compounds reported as characteristic of the targeted plant species were also mined in our raw data using the above-mentioned approach (Supplementary Materials). The annotation of secondary metabolites was carried out according to Level 2 of accuracy, as set out by the Metabolomics Standards Initiative [26]. Following annotation, the phenolic compounds were ascribed to different classes (according to Phenol-Explorer) and quantified using methanolic standard solutions prepared from one reference compound per class, as previously reported [27]. These analyses were performed on fresh material after the first and second cut of the plants to characterize the profile of secondary metabolites in salad burnet.
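To make the 5-ppm tolerance concrete, here is a minimal sketch (ours, not the authors' code) of how such a mass-accuracy window is applied when matching a measured m/z against a database entry; the apigenin value is used purely as an illustration:

```python
# Minimal illustration (not from the paper) of a 5-ppm mass-accuracy window,
# as used when matching measured m/z values against a compound database.
PPM_TOL = 5.0

def mass_window(theoretical_mz, ppm=PPM_TOL):
    delta = theoretical_mz * ppm * 1e-6
    return theoretical_mz - delta, theoretical_mz + delta

def matches(measured_mz, theoretical_mz, ppm=PPM_TOL):
    lo, hi = mass_window(theoretical_mz, ppm)
    return lo <= measured_mz <= hi

# Example: apigenin [M+H]+ has a theoretical m/z of about 271.0601
print(mass_window(271.0601))        # roughly (271.0587, 271.0615)
print(matches(271.0610, 271.0601))  # True: within 5 ppm
```

The narrowness of the window (about 0.0014 m/z units at m/z 271) is what makes formula-based annotation against a large database feasible.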
Sample Preparation for Phytochemical Analysis of Stored Material

Leaves were taken during storage (after 1, 2, 3, 6, 9, 13 and 15 days), homogenized and frozen in liquid nitrogen, and stored at -80 °C for biochemical analyses.

Pigment Analysis

Spectrophotometric analysis of pigments was performed with an Ultrospec 2100 Pro spectrophotometer (GE Healthcare Ltd., Little Chalfont, England) following the method described by Porra et al. [28], with minor modifications. Fresh samples (0.3 g) were extracted in 20 mL of 80% acetone and agitated in the dark at 4 °C for 3 days. The chlorophyll and carotenoid contents were determined from the absorbance at 663 nm for chlorophyll a, 648 nm for chlorophyll b and 470 nm for carotenoids, measured against a blank solution of 80% acetone. Total chlorophylls and carotenoids were expressed as mg g-1 fresh weight (FW).

Phenol and Flavonoid Extraction

Samples (about 1 g FW) were homogenized with 4 mL of 80% (v/v) methanol solution using a sonicator (Digital Ultrasonic Cleaner, DU-45, Argo-Lab, Modena, Italy) for 30 min, keeping the temperature in the range of 0-4 °C. Samples were centrifuged (MPW-260R, MPW Med. Instruments, Warsaw, Poland) at 10,000 g for 15 min at 4 °C, and the supernatants were collected and centrifuged again in a 2 mL Eppendorf tube for 3 min at 7000 g. Extracts were stored at -80 °C before analysis.

Total Phenolic Determination

Total phenolic content was measured according to the procedure described by Dewanto et al. [29] with minor modifications. Briefly, 10 µL extract samples were mixed with 125 µL Folin-Ciocalteu reagent and 115 µL distilled water and allowed to react for 6 min. Then, 1.25 mL of 7% (w/v) Na2CO3 was added, and the samples were incubated for 90 min in the dark at room temperature. The absorbance at 760 nm was measured against a blank solution (without sample). The results were expressed as mg gallic acid equivalents per g FW (mg GAE g-1 FW).

Flavonoid Determination

Total flavonoid content was determined according to Du et al. [30] with minor modifications. In a 2 mL Eppendorf tube, 100 µL sample extracts were added to 400 µL distilled water and 30 µL 5% (w/v) NaNO2. After 6 min at room temperature, 30 µL 0.3 M AlCl3·6H2O and 30 µL distilled water were added to the mixture. After 6 min at room temperature, 400 µL 4% (w/v) NaOH and 40 µL distilled water were added to the extract. The absorbance at 515 nm was measured against a blank solution (which contained all the reagents without extract). Total flavonoid content was expressed as mg catechin equivalents per g FW (mg CAE g-1 FW).

In Vitro Antioxidant Activity Analysis

In vitro antioxidant activity was determined on the same extract that was utilized for the phenol and flavonoid analyses, by using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging assay, as described by Brand-Williams et al. [31] with minor modifications. Sample extracts of 10 µL were added to 990 µL of 3.12 × 10-5 M DPPH solution and incubated in the dark for 30 min at room temperature. The decrease in absorbance at 515 nm was measured against a blank solution (without extract). The results were expressed as mg Trolox equivalent antioxidant capacity per g FW (mg TEAC g-1 FW).

Statistical Analysis

Data are the mean ± standard deviation (SD) of 3 replicates for each assay. Biochemical data were analyzed by two-way analysis of variance (ANOVA) using storage and cut as sources of variation; the means were separated by Fisher's least significant difference (LSD) post-hoc test (p = 0.05). All statistical analyses were conducted using GraphPad (GraphPad, La Jolla, CA, USA) or the statistical software PASW Statistics 25.0 (SPSS Inc., Chicago, IL, USA). Metabolomics data were elaborated using the software Agilent Mass Profiler Professional B.12.06, as previously reported [27].
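As an aside, the two-way ANOVA just described can be illustrated with a short sketch (ours, not the authors' script; the column names and toy values are invented); the metabolomics post-processing continues below.

```python
# Minimal sketch (not the authors' code) of a two-way ANOVA with storage time
# and cut as the two sources of variation. Assumes a long-format table with
# hypothetical column names and toy values.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Toy data standing in for e.g. total phenolic content (mg GAE g-1 FW)
data = pd.DataFrame({
    "value":   [32.4, 31.9, 30.1, 22.7, 22.9, 23.1,
                28.0, 27.5, 27.9, 22.5, 22.8, 23.0],
    "storage": [0, 0, 0, 0, 0, 0, 15, 15, 15, 15, 15, 15],
    "cut":     ["C1", "C1", "C1", "C2", "C2", "C2",
                "C1", "C1", "C1", "C2", "C2", "C2"],
})

model = ols("value ~ C(storage) * C(cut)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and their interaction
```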
Compounds were filtered by abundance (only compounds with an area > 5000 counts were considered), normalized at the 75th percentile and baselined to their corresponding median. Post-acquisition processing also included filtering by frequency, retaining those compounds identified in 100% of the replications of at least one treatment. Unsupervised hierarchical cluster analysis (HCA) was then carried out using the Euclidean similarity measure and Ward's linkage rule. Thereafter, the metabolomic dataset was exported into SIMCA 13 (Umetrics, Malmö, Sweden), Pareto scaled and elaborated for supervised orthogonal partial least squares discriminant analysis (OPLS-DA). The presence of outliers was excluded according to Hotelling's T2, while cross-validation of the model was done using analysis of variance of the cross-validated residuals (CV-ANOVA) (p < 0.01) and permutation testing (N = 100). For each OPLS-DA model built, the model parameters (goodness-of-fit, R2Y, and goodness-of-prediction, Q2Y) were also inspected. Moreover, the variable importance in projection (VIP) compound selection approach was applied (VIP > 1.3) and combined with a fold-change analysis (FC > 2) in order to identify those secondary metabolites significantly affected by the cut.

UHPLC-QTOF Mass Spectrometry Untargeted Profiling and Effect of the Two Cuts

The secondary metabolite profile in leaves from both cuts of salad burnet was investigated using untargeted metabolomics (UHPLC-QTOF mass spectrometry) to depict the changes in the main secondary metabolites induced by the cuts. Overall, the untargeted approach allowed us to putatively identify 467 compounds in the leaves of S. minor. A list of the identified secondary metabolites is reported in the Supplementary Materials, together with the composite mass spectra (Table S1). The analyses confirmed the richness in polyphenols of this species, as observed in previous research [17,18]. Annotated polyphenols were then ascribed to distinct phenolic classes, and the changes between the two cuts were investigated (Table S2). The content of all flavonoid sub-classes increased in the leaves from C2, and flavones were the sub-class with the highest increase (FC value = 1.4). Among flavones, apigenin, baicalein and 7,3',4'-trihydroxyflavone were the most representative. Apigenin is abundant in common fruits such as grapefruit and orange, and in vegetables such as onion and parsley [33]. The biological activity of this flavone in numerous mammalian systems is related to its antioxidant effects and its role in free radical scavenging. However, it also shows anti-mutagenic, anti-inflammatory, antiviral and purgative effects [34]. The most important effect of this flavone is related to cancer prevention, because it induces apoptosis and has an anti-proliferative effect in human cancer cells [33]. Apigenin has also demonstrated other positive features for human health, such as the reduction of plasma levels of low-density lipoproteins and the inhibition of platelet aggregation; thus, its presence in the diet could be very important [33-35]. Other studies have shown that baicalein can mitigate cell proliferation and diminish the production of collagen; it also possesses multiple properties effective for the treatment of different diseases, including cancer and injuries [36-39].
Lastly, 7,4'-dihydroxyflavone, also named 5-deoxyluteolin, has shown impressive antimycobacterial activity, and its anti-inflammatory activity is under evaluation [40]. The comparison of C1 versus C2 leaves showed an increase in anthocyanins and flavones, although with moderate FC values. No significant variations were observed for the other phenolic classes, as reported in the Supplementary Materials (Table S2). Subsequently, both unsupervised and supervised multivariate statistical approaches were used to further investigate the differences in secondary metabolites in relation to the cut. The outputs of the unsupervised HCA heat map are provided in the Supplementary Materials (Table S3). The HCA heat maps highlighted the clear modification of the leaf secondary metabolite profiles in relation to the cut. Therefore, in order to explore these differences, supervised OPLS-DA modeling was used to underline the metabolites responsible for these variations. Interestingly, the model clearly discriminated the two cuts, as shown in Figure 1. The second latent vector of the OPLS-DA score plot was found to discriminate the two cuts accurately. The model showed high prediction ability, with a Q2Y parameter of 0.69. In addition, the OPLS-DA model was cross-validated and tested for outliers. Therefore, the variable selection method (VIP) was used to identify the compounds most responsible for the different leaf secondary metabolite profiles. These compounds, with a VIP score higher than 1.3, are reported in Table S4, together with their LogFC values for the pairwise comparison of leaves from the second cut versus the first cut. Secondary metabolites possessing an FC value > 2 were selected for this purpose. A high number of compounds effectively showed the differences between the two cuts (i.e., 76 secondary metabolites, mainly including flavonoids, phenolic acids and sesquiterpene lactones) (Table S4). The compounds with the highest LogFC values, when considering the upregulated category, were the phenolic acid avenanthramide 2p (LogFC = 14.2), followed by a sesquiterpene lactone (artemissifolin; LogFC = 13.9). Phenolic acids have been shown to exert powerful biological activity [41-43].
Among avenanthramides, which are characteristic of oat [41-47], avenanthramide 2p interferes with the β-catenin-mediated transcriptional activation of the Wnt target gene c-MYC, a proto-oncogene fundamental for the survival of cells, and in this way reduces the proliferation of cervical cancer cells in vitro [48]. Concerning artemissifolin, this sesquiterpene is mainly found in the genus Centaurea [49,50], but its biological activity still needs to be clarified. In conclusion, the metabolomic screening of salad burnet leaves allowed us to identify new organic compounds that have not previously been reported in this species. In addition, leaves from the second cut exhibited an overall lower level of the main representative compounds, except for flavonoids and, in particular, flavones.

Phytochemical Analyses in Leaves Stored as Fresh-Cut Product

In order to generate knowledge about changes in the main nutraceutical compounds (chlorophylls, carotenoids, total phenols, flavonoids and ascorbic acid) and the total antioxidant activity of salad burnet leaves obtained from both cuts, time-course measurements were carried out during the storage period (15 days) of the fresh-cut produce.

Pigments

Cut had a strong effect, and a significant increase in chlorophyll content was detected in leaves from the second cut (Figure 2A). At harvest time, the difference in chlorophyll content between C1 and C2 was 53%, but this difference decreased after cold storage of the fresh-cut produce (29.5% at the end of the storage). This was attributable to the steep decline in chlorophyll detected in leaves obtained from C2. In leaves derived from C1, the chlorophyll content during storage was more stable than that detected in leaves from C2 (Figure 2A). Moreover, the chlorophyll content of this species after storage was much higher than that reported in lettuce species, which are widely used as fresh-cut produce [49]. In fact, the chlorophyll content in lettuce was 0.4, 0.4, and 0.3 mg g-1 FW after 0, 7, and 14 days of storage, respectively; these values are much lower than those recorded in S. minor [57]. Similar to chlorophyll, carotenoid content was also higher in leaves derived from C1 than in those from C2, and it was higher than that reported for fresh-cut lettuce species in cold storage [58]. Furthermore, the carotenoid content remained unchanged during storage in salad burnet leaves obtained from C1, whereas a slight increment was observed in C2 leaves during storage (Figure 2B). In contrast, carotenoid content decreased in lettuce after it was stored for 10 days [58].

Total Phenolic Content (TPC)

At harvest, the phenolic content in salad burnet leaves largely exceeded the TPC measured in lettuce species [59,60]. In particular, TPC was found to be higher in the leaves from the first cut (32.4 mg g-1 FW) than in the leaves from the second cut (22.7 mg g-1 FW; p < 0.01). These results are also in agreement with the cumulative amounts of phenols found using the UHPLC-QTOF semi-quantitative approach (Table S5).
During the storage, TPC decreased almost steadily in leaves from C1, with the greatest reduction observed on the first day of storage (Figure 3). TPC in leaves obtained from C2, by contrast, remained mostly constant during storage (Figure 3). The pattern for the leaves obtained from C2 was close to that found in lettuce, escarole and rocket salad during their storage as fresh-cut produce, even though the values of TPC were higher in S. minor than in those leafy vegetables (approximately 0.9, 0.10 and 4.5 µg g-1 FW in lettuce, escarole and rocket salad, respectively) [61]. Therefore, the TPC of S. minor was higher than that found in leafy species that are widely used as fresh-cut produce [61-63].

Total Flavonoid Content

The flavonoid content in salad burnet leaves was higher in leaves belonging to C1 as compared to that recorded in leaves from C2 (+76.1%; Figure 4). Storage had a negative impact on C1 leaves. In fact, the total flavonoid content strongly decreased after just one day of storage, as did the total phenol content, but unlike the phenols, the flavonoid content kept declining until the end of the storage period (Figure 4). Notably, high variability was observed among samples from 1 to 9 days of storage. Differently, the total flavonoid content in leaves from C2 remained stable during storage (Figure 4). The total flavonoid content denotes the powerful nutraceutical value of this species, since many leafy species that are widely used as fresh-cut produce show a lower total flavonoid content [64-66]. In fact, authors have recorded 0.25 mg CAE g-1 FW for lettuce Canasta, 0.15 mg CAE g-1 FW for chicory Catalogna, and 0.29 mg CAE g-1 FW for chicory Spadona, all lower values than those found for salad burnet leaves [65].

[Figure 4 caption: Closed and open symbols represent first (C1) and second cut (C2), respectively. Each value is the mean ± standard deviation of 3 replicates. Means keyed with the same letter are not significantly different at p = 0.05 following two-way analysis of variance (ANOVA) with storage (S) and cut (C) as variability factors. ns: not significant; **: p < 0.01; ***: p < 0.001 for each factor and their interaction. Total flavonoid content values are expressed as catechin equivalents (CAE) mg g-1 FW.]
Total Ascorbic Acid (ASA) Content

The leaves from C1 contained higher levels of ASA as compared to those from C2 (Figure 5). In leaves obtained from C1, ASA content increased significantly until the sixth day of storage, when it reached 3.57 mg g-1 FW (Figure 5). Then, a decrease in ASA content was recorded, and at the end of the storage period it was similar to the values detected at harvest. Conversely, ASA content in leaves from C2 followed a more constant trend (Figure 5). However, the ASA content of salad burnet leaves is higher than that found in lettuce species that are widely used as fresh-cut produce [67,68]. In fact, Barry-Ryan and O'Beirne [67] reported 0.25 mg ASA g-1 FW in iceberg lettuce and Bonasia et al. [68] found 0.15 mg ASA g-1 FW in rocket; lower values than those found for salad burnet leaves.

In Vitro Total Antioxidant Activity

In vitro antioxidant activity was higher in leaves from C2 than in those from C1, whereas no effects were recorded in relation to their storage as fresh-cut produce (Figure 6).

[Figure 6. In vitro antioxidant activity of Sanguisorba minor stored at 4 °C for 15 days as fresh-cut produce. Closed and open symbols represent first (C1) and second cut (C2), respectively. Each value is the mean (± SD) of 3 replicates. Means keyed with the same letter are not significantly different at p = 0.05 following two-way ANOVA with storage (S) and cut (C) as variability factors. ns: not significant; ***: p < 0.001 for each factor and their interaction. The lack of letters denotes the lack of significance in the interaction of the variability factors. In vitro antioxidant activity values are expressed as Trolox equivalent antioxidant capacity (TEAC) mg g-1 FW.]

The in vitro antioxidant capacity in leaves from both cuts and during storage (3.23 mg TEAC g-1 FW in C1 leaves at harvest) was higher than the in vitro antioxidant activity reported by other authors in leafy species widely used as fresh-cut produce [64,69,70]. Khanam et al. [64] reported values of in vitro antioxidant capacity determined by DPPH assay averaging 0.69 mg TEAC g-1 FW in iceberg lettuce, 0.99 in Canasta lettuce, 2.44 in Continental lettuce and 1.01 in escarole. Viacava et al. [69] reported 0.75 mg TEAC g-1 FW in Butterhead lettuce, and Mampholo et al. [70] reported 0.4 mg TEAC g-1 FW in green lettuce varieties. For this reason, salad burnet leaves show promise as powerful antioxidant fresh-cut produce.
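As an illustration of how TEAC values such as those above are derived from raw DPPH readings, here is a minimal sketch (ours, not the authors' code; the calibration numbers are invented):

```python
# Minimal sketch (not the authors' code; calibration values are invented)
# of converting DPPH absorbance readings into Trolox-equivalent antioxidant
# capacity (TEAC), via a linear Trolox standard curve.
import numpy as np

# Hypothetical Trolox standard curve: % inhibition vs Trolox concentration (mM)
trolox_conc = np.array([0.05, 0.10, 0.20, 0.40])   # mM
inhibition  = np.array([8.0, 16.5, 32.0, 63.5])    # % inhibition of DPPH
slope, intercept = np.polyfit(trolox_conc, inhibition, 1)

def teac_mg_per_g(abs_blank, abs_sample, extract_volume_ml, fw_g,
                  trolox_mw=250.29):
    """TEAC expressed as mg Trolox equivalents per g fresh weight."""
    inhib = 100.0 * (abs_blank - abs_sample) / abs_blank
    conc_mM = (inhib - intercept) / slope                   # mM Trolox eq.
    mg = conc_mM * trolox_mw / 1000.0 * extract_volume_ml   # mg in extract
    return mg / fw_g

# Example: blank A515 = 0.90, sample A515 = 0.45, 4 mL extract from 1 g leaves
print(round(teac_mg_per_g(0.90, 0.45, 4.0, 1.0), 2))
```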
Conclusions

Sanguisorba minor showed a complex secondary metabolite profile that was influenced by the cut. Flavonoid content increased in leaves obtained from C2, especially the sub-class of flavones, which is characterized by anti-mutagenic, antioxidant, anti-inflammatory, anti-viral and purgative effects and is also highly important in cancer prevention [33-39]. Moreover, secondary metabolites were downregulated in leaves from C2 as compared to those detected in leaves from C1, as evidenced by the combination of the VIP score (VIP > 1.3) and the fold-change (FC > 2). During their storage as fresh-cut produce, salad burnet leaves did not show any remarkable changes in their phenolic profile and antioxidant capacity, a positive result for the maintenance of the nutraceutical value, especially considering the high levels of these compounds as compared to other leafy species that are widely used as fresh-cut produce [53,54,59-65]. Also, the content of ascorbic acid, a pivotal antioxidant compound, strongly increased in the first days (until the third day) of storage and remained higher than the values recorded in leafy species widely used as fresh-cut produce [67,68]. The only differences evidenced during storage were the lower constitutive content (T0) of nutraceutical compounds in leaves obtained from C2 than in leaves obtained from C1, except for chlorophylls and carotenoids. In conclusion, the cut was the main influence on the modulation of secondary metabolites in leaves, independently of the storage. This work shows that pre- and post-harvest factors can influence the nutraceutical value of the product. The results show that the wild edible species S. minor is rich in bioactive compounds, which are maintained during storage as fresh-cut produce. The results suggest that this species could be a valid alternative to other leafy species commonly utilized for salad preparation, and also a promising species in a world in which the food and flavor industries require new food ingredients for food supplements. Future research is necessary to investigate the possibility of increasing the nutraceutical content of this species through agronomic factors, for example by managing the nutrient solution in hydroponic systems, and to investigate how to increase the stability of the nutraceutical value of S. minor between consecutive cuts.
Topological magnons in CrI$_3$ monolayers: an itinerant fermion description
Abstract: Magnons dominate the magnetic response of the recently discovered insulating ferromagnetic two-dimensional crystals such as CrI$_3$. Because of the arrangement of the Cr spins in a honeycomb lattice, magnons in CrI$_3$ bear a strong resemblance to electronic quasiparticles in graphene. Neutron scattering experiments carried out in bulk CrI$_3$ show the existence of a gap at the Dirac points, which has been conjectured to have a topological nature. Here we propose a theory for magnons in ferromagnetic CrI$_3$ monolayers based on an itinerant fermion picture, with a Hamiltonian derived from first principles. We obtain the magnon dispersion for 2D CrI$_3$ with a gap at the Dirac points with the same Berry curvature in both valleys. For CrI$_3$ ribbons, we find chiral in-gap edge states. Analysis of the magnon wave functions in momentum space further confirms their topological nature. Importantly, our approach does not require defining a spin Hamiltonian, and can be applied to both insulating and conducting 2D materials with any type of magnetic order.

Introduction

Magnons are the Goldstone modes associated with the breaking of spin rotational symmetry. Therefore, they are the lowest energy excitations of magnetically ordered systems, and their contribution to thermodynamic properties, such as magnetization and specific heat, has long been acknowledged [1,2]. More recently, their role in non-local spin current transport through magnetic insulators has been explored experimentally [3], and there are various proposals to use them for information processing in low-dissipation spintronics [4]. In this context, the prediction of topological magnons with chiral edge modes [5-7] opens new horizons in the emerging field of topological magnonics [8]. The recent discovery of stand-alone 2D crystals with ferromagnetic order down to the monolayer, such as CrI$_3$ [9], CrGe$_2$Te$_6$ [10], and others [11], brings magnons to the centre of the stage, because of their even more prominent role in determining the properties of low-dimensional magnets. Actually, in 2D magnets at any finite temperature the number of magnons would completely quench the magnetization, unless magnetic anisotropy or an applied magnetic field breaks spin rotational invariance and opens up a gap at zero momentum [12,13]. Unlike in 3D magnets, the thermodynamic properties of 2D magnets are dramatically affected by the proliferation of magnons. This is the ultimate reason for the very large dependence of the magnetization on the magnetic field in materials with very small magnetic anisotropy, such as CrGe$_2$Te$_6$ [10]. Magnons in CrI$_3$ attract strong interest and are the subject of some controversy. Experimental probes include inelastic electron tunnelling [14] and Raman spectroscopy [15,16]. In the case of bulk CrI$_3$, there are also ferromagnetic resonance [17] and inelastic neutron scattering experiments [18]. Only the latter can provide access to the full dispersion curves $E(k) = \hbar\omega(k)$. There is a consensus that there are two magnon branches, as expected in a honeycomb lattice with two magnetic atoms per unit cell. The lower branch has a finite minimum energy, $\Delta_\Gamma$, at the zone centre $\Gamma$. This energy represents the minimal energy cost to create a magnon and thereby plays a crucial role.
Different experiments provide radically different values for $\Delta_\Gamma$, ranging from a fraction of a meV to 9 meV [16]. This quantity is related to the crystalline magnetic anisotropy energy that, according both to density functional theory (DFT) calculations [13,19,20] and to multi-reference methods [21], is in the range of 1 meV. Inelastic neutron scattering also shows [18] that, for bulk CrI$_3$, the two branches of the magnon dispersion are separated by a gap. The minimum energy splitting occurs at the K and K′ points of the magnon Brillouin zone (BZ). As in the case of other excitations in a honeycomb lattice with inversion symmetry, such as electrons and phonons, one could expect a degeneracy of the two branches at the Dirac cone, giving rise to Dirac magnons [22]. Interestingly, second-neighbour Dzyaloshinskii-Moriya (DM) interactions are not forbidden by symmetry in the CrI$_3$ honeycomb lattice, and are known to open a topological gap [7], on account of the mapping of ferromagnetic magnons with second-neighbour DM in the honeycomb lattice onto the Haldane Hamiltonian [23]. In contrast to a 'trivial' gap, the opening of a topological gap between the two branches at the Dirac points leads to a finite Berry curvature with the same sign in both valleys that, integrated over the entire BZ, leads to a quantized Chern number and a non-vanishing transverse Hall conductivity at zero field [24]. In addition, topological gaps in the bulk imply the emergence of in-gap edge states.

The description of magnons in magnetic 2D crystals has been exclusively based on the definition of generalized Heisenberg spin Hamiltonians with various anisotropy terms, such as single-ion and XXZ exchange [13], Kitaev [25], and DM [7]. Once a given Hamiltonian is defined, the calculation of the spin waves is relatively straightforward, using linear spin wave theory based on the Holstein-Primakoff representation of the spin operators [26]. The energy scales associated with these terms can be obtained both from fitting to DFT calculations of magnetic configurations with various spin arrangements [13] and from some experiments [18]. However, this method faces two severe limitations. First, the symmetry and range of the interactions that have to be included in the spin Hamiltonian are not clear a priori. Second, in order to determine N energy constants, N + 1 DFT calculations forcing a ground state with a different magnetic arrangement are necessary, and the values so obtained can depend on the ansatz for the Hamiltonian. The observation of the topological gap in bulk CrI$_3$ does not permit one to determine the spin Hamiltonian, because two different types of anisotropic exchange are known to lead to topological magnons in honeycomb ferromagnets with off-plane magnetization. On one side, there are second-neighbour DM interactions [7], not forbidden by symmetry in the CrI$_3$ honeycomb lattice; this coupling maps onto the Haldane Hamiltonian [23]. On the other hand, first-neighbour Kitaev interactions, claimed to be large in CrI$_3$ [17], lead to topological magnons in honeycomb lattices [27,28]. Here we circumvent this methodological bottleneck and describe magnons directly from an itinerant fermion model derived from first-principles calculations. Our approach, which has been extensively used to describe magnons in itinerant magnets [29,30], is carried out in five steps. First, we compute the electronic structure of the material using DFT, without taking either spin polarization or spin-orbit coupling (SOC) into account.
Second, we derive a tight-binding model with s, p, and d shells in Cr and s and p shells in iodine. The electronic bands obtained from this Hamiltonian are identical to those calculated from DFT (see the Methods section for further details). In the third step we include both SOC in Cr and I as well as on-site intra-atomic Coulomb repulsion in the Cr d shell. The resulting model is solved in a self-consistent mean-field approximation [30]. In the fourth step, we compute the generalized spin susceptibility tensor $\chi(\vec q, \omega)$ in the random phase approximation (RPA). In the final step we find the poles of the spin susceptibility tensor in $(\omega, \vec q)$ space, which define the dispersion relation $E_n(\vec q) = \hbar\omega_n(\vec q)$ of the magnon modes, where $n$ labels the different modes. More details about each step are presented in the Methods section.

Magnons from a Fermionic Hamiltonian

As outlined in the introduction, our approach to calculate the magnon spectrum is carried out in five steps, which we now describe in detail.

Step 1: DFT calculation. We compute the electronic structure of the material using DFT, without taking either spin polarization or spin-orbit coupling (SOC) into account. The DFT calculation has been performed with the Quantum Espresso package [31,32]. We employed the Perdew-Burke-Ernzerhof functional [33], and the ionic potentials were described through the use of projector augmented wave pseudopotentials [34]. The energy cutoff for plane waves was set to 80 Ry. We used a 25 × 25 × 1 Monkhorst-Pack reciprocal space mesh [35].

Step 2: Extraction of the tight-binding Hamiltonian. Now we derive a tight-binding model with s, p, and d shells in Cr and s and p shells in iodine. The electronic states of the CrI$_3$ monolayer are described by a model Hamiltonian given by the sum of a hopping term, a screened Coulomb repulsion term and a spin-orbit coupling term. The first term, describing the tight-binding Hamiltonian for s, p, d orbitals in Cr and s, p orbitals in I, is given by
$$H_0 = \sum_{ll'} \sum_{\mu\mu'} \sum_{\sigma} T^{\mu\mu'}_{ll'}\, a^{\dagger}_{l\mu\sigma} a_{l'\mu'\sigma}.$$
Here, $a^{\dagger}_{l\mu\sigma}$ is the creation operator for an atomic-like orbital $\mu$ at site $R_l$ with spin $\sigma = \uparrow, \downarrow$. The hopping matrix $T^{\mu\mu'}_{ll'}$ is extracted by the pseudo-atomic orbital (PAO) projection method [36-40]. The method consists in projecting the Hilbert space spanned by the plane waves onto a compact subspace composed of the PAO. These PAO functions are naturally built into the pseudopotential used in the DFT calculation. The bands obtained from this tight-binding model are identical with those obtained from the spin-unpolarized DFT calculation. In figure 1(a) we present the band structure of a CrI$_3$ monolayer as obtained from the same ab initio calculation from which we extracted the hopping matrix used in our susceptibility calculations. In the original DFT calculation (details are given above), spin polarization is suppressed and spin-orbit coupling is turned off. The resulting band structure is shown in figure 1(a), together with the bands obtained from the corresponding tight-binding Hamiltonian.

Step 3: Inclusion of Coulomb repulsion and SOC terms. Here we add to the tight-binding Hamiltonian both a screened Coulomb repulsion term and a local spin-orbit coupling (SOC) term of the form $H_{\mathrm{SOC}} = \sum_l \xi_l\, \vec L_l \cdot \vec S_l$. The strength of the spin-orbit coupling is taken from the literature [41], $\xi_I = 0.6$ eV. The screened Coulomb repulsion matrix elements $I_{\mu\mu'\nu\nu'}(R_l)$ are approximated by a single-parameter form, which is qualitatively equivalent to taking a spherically symmetric average of the interaction potential [42]. We further assume the repulsion between electrons in s and p orbitals is negligible.
Thus, only electrons occupying d orbitals at Cr atoms experience electron-electron repulsion. The strength of the Coulomb repulsion is chosen as I = 0.7 eV, in order to reproduce the DFT magnetic moment of 3.18 $\mu_B$ at each Cr site. The total magnetic moment per unit cell is 6 $\mu_B$. The iodine sites acquire a small polarization, opposite to that of the Cr sites. Varying the strength of the Coulomb repulsion in the range 0.5-0.9 eV changes the Cr magnetic moments by more than 12%, while affecting only slightly (less than 2%) the total magnetic moment per unit cell. (A file with the hopping matrices used to produce the results shown in figure 1 is available as supplementary material.) The spin-polarized ground state of the system is obtained within a self-consistent mean-field approximation, in which all three components of the magnetization of each Cr atom within the unit cell are treated as independent variables [30]. For the CrI$_3$ monolayer we find, in agreement with experimental results and DFT calculations, that the ground-state magnetization is perpendicular to the monolayer. In figure 1(b), we show the tight-binding band structure after the inclusion of Coulomb repulsion (leading to spin polarization) and spin-orbit coupling. For comparison, we show in panel (c) of figure 1 the bands obtained from a DFT calculation with SOC and spin polarization.

Step 4: Fermionic spin susceptibility in the RPA. The magnon energies are associated with the poles of the frequency-dependent transverse spin susceptibility
$$\chi^{\perp}(R_l, R_{l'}, t) = -\frac{i}{\hbar}\,\theta(t)\,\langle [S^{+}_{l}(t), S^{-}_{l'}(0)] \rangle,$$
where $S^{\pm}_{l} = S^{x}_{l} \pm i S^{y}_{l}$ are the transverse spin operators at site $R_l$. The angular brackets $\langle\cdots\rangle$ represent a thermal average over the grand-canonical ensemble. The double-time Green function $\chi^{\perp}(R_l, R_{l'}, t)$ defined above can be interpreted as the propagator for localized spin excitations created by the operator $S^{-}_{l'}$. In a system with translation invariance, its reciprocal space counterpart can be readily interpreted as the propagator for magnons with well-defined wave vector. The transverse spin susceptibility is calculated within a time-dependent mean-field approximation, which is equivalent to summing up all ladder diagrams in the perturbative series for $\chi^{\perp}$. These are the same Feynman diagrams that enter into time-dependent DFT. In the presence of SOC, however, the transverse susceptibility becomes coupled to three other susceptibilities, which are related to longitudinal fluctuations of the spin density and to fluctuations of the charge density. Thus, it becomes necessary to solve simultaneously the equations of motion for the four susceptibilities [30].

Step 5: Magnon dispersion relation. Finally, we locate the poles of the spin susceptibility tensor in $(\omega, \vec q)$ space, which define the dispersion relation $E_n(\vec q) = \hbar\omega_n(\vec q)$ of the magnon modes, where $n$ labels the different modes. For a CrI$_3$ monolayer, which has two magnetic atoms per unit cell, there are two magnon modes. In a general geometry, the number of magnon modes equals the number of magnetic sites. For a system with translation invariance, the number of magnon modes equals the number of magnetic sites in a unit cell.

Magnon normal modes

In a system lacking periodicity, or with more than one magnetic atom per unit cell, the frequency- (and eventually wave vector-) dependent transverse spin susceptibility can be written as a matrix in atomic site indices, $\chi^{\perp}_{ll'}(\omega)$. There are at least two useful interpretations for this matrix.
Step 4: Fermionic spin susceptibility in the RPA. The magnon energies are associated with the poles of the frequency-dependent transverse spin susceptibility, the Fourier transform of

χ⊥(R_l, R_l′, t) = −iθ(t) ⟨[S⁺_l(t), S⁻_l′(0)]⟩, (7)

where S⁺_l = S^x_l + iS^y_l and S⁻_l = (S⁺_l)† are the spin raising and lowering operators at site R_l, and θ(t) is the Heaviside unit step function. The angular brackets ⟨···⟩ represent a thermal average over the grand-canonical ensemble. The double-time Green function χ⊥(R_l, R_l′, t) defined in equation (7) can be interpreted as the propagator for localized spin excitations created by the operator S⁻_l′. In a system with translation invariance, its reciprocal-space counterpart can be readily interpreted as the propagator for magnons with well-defined wave vector. The transverse spin susceptibility is calculated within a time-dependent mean-field approximation, which is equivalent to summing up all ladder diagrams in the perturbative series for χ⊥. These are the same Feynman diagrams that enter into time-dependent DFT. In the presence of SOC, however, the transverse susceptibility becomes coupled to three other susceptibilities, which are related to longitudinal fluctuations of the spin density and fluctuations of the charge density. Thus, it becomes necessary to solve simultaneously the equations of motion for the four susceptibilities [30].

Step 5: Magnon dispersion relation. Finally, we locate the poles of the spin susceptibility tensor in the (ω, ⃗q) space, which define the dispersion relation E_n(⃗q) = ℏω_n(⃗q) of the magnon modes, where n labels the different modes. For a CrI3 monolayer, which has two magnetic atoms per unit cell, there are two magnon modes. In a general geometry, the number of magnon modes equals the number of magnetic sites. For a system with translation invariance, the number of magnon modes equals the number of magnetic sites in a unit cell.

Magnon normal modes

In a system lacking periodicity, or with more than one magnetic atom per unit cell, the frequency- (and eventually wave-vector-) dependent transverse spin susceptibility can be written as a matrix in atomic site indices, χ⊥_ll′(ω). There are at least two useful interpretations for this matrix. One originates from its role as a response function in the linear regime; the other is related to its formal similarity to the single-particle Green function of many-body theory.

χ⊥_ll′(ω) as a linear response function. When interpreted as a linear response function, the transverse spin susceptibility yields the change in the transverse component of the spin moment δS⁺_l at site l due to a transverse, circularly polarized external field b_l′ of frequency ω acting on site l′,

δS⁺_l(ω) = Σ_l′ χ⊥_ll′(ω) b_l′(ω).

We assume the system has N magnon normal modes, where N equals the number of non-equivalent magnetic atoms in the system. Each mode (m) is characterized by complex amplitudes ξ^(m)_l at the magnetic site l. A general motion of the transverse components of the spin can be written as a linear combination of the normal modes,

S⁺_l(t) = Σ_m c_m ξ^(m)_l e^{−iω_m t}.

Now consider an external field whose frequency and complex amplitudes match exactly those of a normal mode,

b_l(t) = b_0 ξ^(m)_l e^{−iω_m t}.

In this case, the corresponding change in the transverse spin moment δS⁺_l induced by the field should be proportional to the same normal mode,

δS⁺_l(t) ∝ ξ^(m)_l e^{−iω_m t}.

This shows that the normal modes are the eigenvectors of the susceptibility matrix. In principle, this procedure yields 'normal modes' for any arbitrary frequency of the external field. However, the 'true' normal modes are the ones for which the system responds resonantly. Thus, we can look at the imaginary part of the eigenvalues of χ⊥_ll′(ω) as a function of frequency and associate their peaks with the frequencies of the normal modes.

χ⊥_ll′(ω) as the magnon single-particle Green function. In order to arrive at this interpretation we can make an analogy with the spin-wave theory obtained from the linearized Holstein-Primakoff transformation [43]. There, after linearization, the bosonic operator b_l that represents a spin excitation localized at site R_l is related to the spin operators by S⁺_l ≈ √(2S) b_l and S⁻_l ≈ √(2S) b†_l. Thus, if we write the definition of the transverse susceptibility replacing S⁺_l by b_l and S⁻_l′ by b†_l′, we arrive at a form that is completely analogous to that of the single-particle Green function of many-body theory,

G_ll′(t) = −iθ(t) ⟨[b_l(t), b†_l′(0)]⟩.

Here, ⟨···⟩ is a thermal average (or a ground-state average at T = 0), and θ(t) is the Heaviside unit step function. As in the linearized Holstein-Primakoff transformation, the magnons of our RPA theory are independent particles, described by an effective Hamiltonian H composed only of one-body terms. In that case, it is straightforward to show that the Fourier transforms of the single-particle Green functions χ⊥_ll′ are the matrix elements of a matrix χ⊥ related to the Hamiltonian matrix by

χ⊥(E) = (E − H + i0⁺)^{−1}.

Thus, the magnon normal modes of the system are the eigenvectors of the susceptibility matrix χ⊥(E*), where E* are the magnon energies, associated with the peaks of the imaginary part of the eigenvalues of χ⊥.
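A minimal sketch of this resonance search, assuming a callable chi(omega) that returns the N × N RPA susceptibility matrix in site indices (illustrative only):

```python
import numpy as np

def normal_mode_spectrum(chi, omegas):
    """Scan frequencies; peaks of |Im(eigenvalue)| mark the magnon modes."""
    peaks, vectors = [], []
    for w in omegas:
        evals, evecs = np.linalg.eig(chi(w))      # chi is generally non-Hermitian
        j = np.argmax(np.abs(evals.imag))         # most resonant eigenvalue
        peaks.append(abs(evals.imag[j]))
        vectors.append(evecs[:, j])               # candidate amplitudes xi_l
    return np.array(peaks), vectors

# Local maxima of `peaks` over the frequency grid give the magnon energies E*;
# the stored eigenvectors at those frequencies are the normal modes.
```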
Berry curvature calculation

The Berry curvature associated with a point in reciprocal space is a good indicator of possible topological behaviour. Its integral over the whole BZ is called the Chern number, a topological invariant which can be used to classify band structures according to topological properties. In this section we describe a procedure to calculate a numerical approximation to the Berry curvature at an arbitrary point in reciprocal space. In section 3 we present the Berry curvature of the CrI3 magnons as evidence of their non-trivial topology.

The Berry phase associated with a closed contour C in the momentum space ⃗k is given by [44]:

γ_n(C) = i ∮_C ⟨Ψ_n(⃗k)| ∇_⃗k Ψ_n(⃗k)⟩ · d⃗k.

An efficient way to compute the Berry curvature at a given point ⃗k_0 is to compute the Berry phase in an infinitesimal loop in the plane (k_x, k_y) [45]. We parametrize the line integral with the variable θ,

γ_n = i ∫_0^{2π} dθ ⟨Ψ_n(θ)| ∂_θ Ψ_n(θ)⟩.

Now we note that the argument of the integral has to be purely imaginary, since ∂⟨Ψ_n(θ)|Ψ_n(θ)⟩/∂θ = 0. We thus have:

γ_n = −∫_0^{2π} dθ Im ⟨Ψ_n(θ)| ∂_θ Ψ_n(θ)⟩.

We discretize the integral and the derivative:

γ_n ≈ −Im Σ_j ⟨Ψ_n(θ_j)| [ |Ψ_n(θ_j + Δθ)⟩ − |Ψ_n(θ_j)⟩ ].

We expand this expression:

γ_n ≈ −Im Σ_j [ ⟨Ψ_n(θ_j)|Ψ_n(θ_j + Δθ)⟩ − 1 ].

Now we use the fact that the overlap is close to 1, so that ε = ⟨Ψ_n(θ_j)|Ψ_n(θ_j + Δθ)⟩ − 1 is a small number. We use the expression log(1 + ε) ≃ ε and write:

γ_n ≈ −Im Σ_j log ⟨Ψ_n(θ_j)|Ψ_n(θ_j + Δθ)⟩ = −Im log Π_j ⟨Ψ_n(θ_j)|Ψ_n(θ_{j+1})⟩.

This expression is convenient for numerical evaluation, because random phases are eliminated, as all states appear twice as conjugated pairs. Therefore, random phases that inevitably occur in the numerical diagonalizations are cancelled. We now consider an infinitesimal loop of area (1/2)(Δk)² formed by 3 points, ⃗k_0, ⃗k_1 = ⃗k_0 + (Δk, 0), ⃗k_2 = ⃗k_0 + (0, Δk). We now introduce the notation M_{ij} = ⟨Ψ_n(⃗k_i)|Ψ_n(⃗k_j)⟩ for the overlap, to write the Berry phase in the loop as

γ_n(⃗k_0) = −Im log (M_{01} M_{12} M_{20}).

Thus, the Berry curvature is obtained as:

Ω_n(⃗k_0) ≈ γ_n(⃗k_0) / [(1/2)(Δk)²] = −(2/(Δk)²) Im log (M_{01} M_{12} M_{20}).
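The final formula translates directly into a few lines of code. A sketch, assuming a function psi(k) that returns the normalized eigenvector of the magnon branch of interest at wave vector k:

```python
import numpy as np

def berry_curvature(psi, k0, dk=1e-4):
    """Berry curvature at k0 from the three-point loop (k0, k0+dx, k0+dy)."""
    k0 = np.asarray(k0, dtype=float)
    k1, k2 = k0 + (dk, 0.0), k0 + (0.0, dk)
    m01 = np.vdot(psi(k0), psi(k1))   # each state enters once as bra and once
    m12 = np.vdot(psi(k1), psi(k2))   # as ket, so random numerical phases from
    m20 = np.vdot(psi(k2), psi(k0))   # the diagonalization cancel
    loop_phase = -np.imag(np.log(m01 * m12 * m20))
    return loop_phase / (0.5 * dk**2)            # phase divided by the loop area
```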
Results and discussion The 2D CrI3 magnon dispersion along the high-symmetry directions of the BZ is shown in figure 2, calculated both with and without the spin-orbit coupling ξ_I. As expected for a unit cell with two magnetic atoms, we find two branches of magnons. At the Γ point, spin-orbit coupling opens up a gap Δ_Γ, as expected [13]. At the K and K′ points, the two magnon branches form Dirac cones when ξ_I = 0, but a gap Δ_{K,K′} opens up, whose magnitude is an increasing function of the iodine spin-orbit coupling. In order to assess the topological nature of the gap at the K and K′ points, we first examine the wave functions for the two modes along the Γ−K−K′ line. The magnon wave functions can be written as linear combinations of spin flips across the Cr honeycomb lattice, with weights c_A and c_B on the A and B triangular sublattices:

|Ψ_n(⃗q)⟩ = c_A(n, ⃗q) |A, ⃗q⟩ + c_B(n, ⃗q) |B, ⃗q⟩, (26)

where n labels the branch and |A, ⃗q⟩, |B, ⃗q⟩ denote Bloch sums of spin flips on the two sublattices. A distinctive feature of topological quasiparticles in the honeycomb lattice [23, 46] is the braiding in momentum space of the sublattice components. In panels (a) and (b) of figure 3 we plot the coefficients c_A(n, ⃗q) and c_B(n, ⃗q), obtained from our itinerant fermion model, as ⃗q traces the high-symmetry directions of the magnon BZ. As ⃗q goes from K to K′, for a given n, (c_A, c_B) behaves as a spinor that goes from the north to the south pole, with the reverse behaviour for the other branch, exactly as in the Haldane model. We have verified that this pattern is reversed if the off-plane magnetization changes sign.

Figure 3. Analysis of the magnon wave function coefficients (equation (26)) for a CrI3 monolayer. Coefficients c_A and c_B for the lower (a) and higher (b) energy branch along the Γ, K, K′ line in the Brillouin zone. It is apparent that at the Dirac points K, K′, the spinor is sublattice polarized: the sign of the polarization changes as we change either the branch or the mode, following a braiding pattern, exactly like in the Haldane model. (c) Berry curvature, for both magnon branches, along the Γ, K, K′ line in the Brillouin zone. For a given branch, the Berry curvature has the same sign in both valleys, which give the dominant contribution. The sign of the Berry curvature is opposite for the two branches. Thus, the integrated Berry curvature is clearly finite, with opposite signs for the two branches.

Topological magnons have a finite Berry curvature that leads to a non-zero Chern number when integrated over the entire BZ [5-7]. In figure 3(c) we show the Berry curvature along the high-symmetry line Γ-K-K′ in the BZ (see the Methods section for computational details). The Berry curvature of a given mode peaks at the K and K′ valleys, with the same sign. Therefore, we expect a non-zero Chern number and hence the existence of in-gap chiral edge modes. Our calculations (figure 2(e)) give strong evidence that the topological gap is driven by the spin-orbit coupling of iodine. Thus, the finite Berry curvature has to be produced by inter-atomic exchange mediated by the ligand. A very likely candidate is second-neighbour DM interactions, which are known to result in topological magnons in honeycomb ferromagnets [7]. We now address the case of magnons in a CrI3 ribbon, using the itinerant fermion description, in order to look for topological edge states. We consider a ribbon where the edge Cr atoms form an armchair pattern, to avoid non-topological modes that arise at zigzag edges. The unit cell used in the calculations has 40 Cr atoms, wide enough to prevent cross-talk between edges. Therefore, for a given value of the longitudinal wave vector q, there are 40 magnon modes. In order to avoid an extremely heavy calculation, we use the bulk fermionic tight-binding parameters for the ribbon, thereby neglecting changes in the electronic structure that may arise at the edges. As a result, the obtained value of Δ_Γ for the ribbon is ∼1 meV higher. A zoom of the resulting energy dispersion, around the Dirac energy, is shown in figure 4(b). The red and blue diamonds indicate modes that are exponentially localized at either edge of the ribbon, as shown in figure 4(c). For comparison, we also show (in orange) the wave function coefficients of a magnon mode that is not localized at either edge. Our results strongly indicate the existence of localized modes at the CrI3 edges, across the entire one-dimensional BZ. Around the Dirac point these edge modes are chiral, and their energy is inside the gap. Their chirality is evidenced by the locking between spatial localization and propagation direction: all modes localized at a given edge have velocities with the same sign. Away from the Dirac point their dispersions are not linear due to the presence of long-range exchange. Summary and conclusions By using an itinerant fermion theory, we have shown that magnons in CrI3 can be computed without the use of spin models. We have presented strong evidence that magnons in monolayer CrI3 are topological, namely the existence of gaps, controlled by spin-orbit coupling, at the Dirac points, the finite Berry curvature with the same sign at all Dirac points, and the braiding behaviour of the magnon eigenvectors along a line connecting different Dirac points. We supplement the evidence gathered in the monolayer by showing that a CrI3 nanoribbon supports chiral states along its edges, in accord with the principle of bulk-edge correspondence. We would like to offer some perspective on the possible experimental detection of the topological features of CrI3 magnons. First, as a result of the finite Berry curvature, magnons contribute to the thermal Hall conductivity at zero magnetic field [24]. Second, a specific consequence of the quantized Chern number is the existence of edge modes. Our calculations show they have narrow spectral features. Therefore, their existence could be confirmed by inelastic electron tunnelling spectroscopy carried out with a scanning probe [47] to determine the local density of states of spin excitations with atomic resolution.
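The degree of edge localization shown in figure 4(c) can be quantified by the weight each eigenvector accumulates near the two edges; a sketch of such a diagnostic (our construction, with hypothetical inputs):

```python
import numpy as np

def edge_weights(xi, y, W, margin=0.1):
    """xi: magnon eigenvector, one complex amplitude per Cr site of the ribbon;
    y: transverse coordinate of each site; W: ribbon width; margin arbitrary."""
    w = np.abs(xi) ** 2
    w /= w.sum()
    left = w[y < margin * W].sum()
    right = w[y > (1.0 - margin) * W].sum()
    return left, right   # close to 1 on one side for edge modes, small for bulk

# Chirality shows up when all modes with large `left` weight share one sign of
# the group velocity dE/dq, and the `right`-localized modes the opposite sign.
```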
Our method to obtain the magnons directly from a microscopic electronic Hamiltonian derived from ab initio calculations is widely applicable to 2D materials and their heterostructures. The method can also be used to obtain spin excitations from non-collinear and non-coplanar ground states and to examine the stability of competing states, which can prove extremely useful in unveiling the nature of the magnetic ground state of Kitaev materials such as α-RuCl3.
Why one-size-fits-all vaso-modulatory interventions fail to control glioma invasion: in silico insights
Why one-size-fits-all vaso-modulatory interventions fail to control glioma invasion: in silico insights Gliomas are highly invasive brain tumours characterised by poor prognosis and limited response to therapy. There is an ongoing debate on the therapeutic potential of vaso-modulatory interventions against glioma invasion. Prominent vasculature-targeting therapies involve tumour blood vessel deterioration and normalisation. The former aims at tumour infarction and nutrient deprivation induced by blood vessel occlusion/collapse. In contrast, the therapeutic intention of normalising the abnormal tumour vasculature is to improve the efficacy of conventional treatment modalities. Although these strategies have shown therapeutic potential, it remains unclear why they both often fail to control glioma growth. To shed some light on this issue, we propose a mathematical model based on the migration/proliferation dichotomy of glioma cells in order to investigate why vaso-modulatory interventions have shown limited success in terms of tumour clearance. We found the existence of a critical cell proliferation/diffusion ratio that separates glioma responses to vaso-modulatory interventions into two distinct regimes. While for tumours, belonging to one regime, vascular modulations reduce the front speed and increase the infiltration width, for those in the other regime, the invasion speed increases and infiltration width decreases. We discuss how these in silico findings can be used to guide individualised vaso-modulatory approaches to improve treatment success rates. abnormalities. Under such oxygen-limiting conditions, glioma cells develop a wide variety of rescue mechanisms to survive and sustain proliferation. These include recruitment of new blood vessels driven by secretion of pro-angiogenic factors, modulation of cell oxygen consumption and activation of cell migration to escape from poorly oxygenated regions [8][9][10][11] . In particular, the ability of glioma cells to switch phenotype in response to metabolic stress is believed to have important implications for tumour progression and resistance to therapeutic agents. For instance, the mutually exclusive switching between proliferative and migratory phenotypes experimentally observed, also known as the migration/proliferation dichotomy or Go-or-Grow mechanism, is considered to significantly increase the invasive potential of glioma cells in response to low oxygen levels 4,10,[12][13][14] . However, the way in which the dynamical interplay between glioma cells and their microenvironment leads to development of hypoxic regions, as well as the overall impact of oxygen availability on tumour invasion are still not fully understood. A particularly important component of the tumour microenvironment is the vascular network. Accumulating evidence suggests the existence of various positive and negative feedback mechanisms between glioma cells and the vasculature. Indeed, gliomas are reported as highly vascularised neoplasias 15,16 , where excessive blood vessel formation is induced by a wide range of pro-angiogenic factors 17,18 . However, over-expression of pro-angiogenic factors produced by hypoxic glioma cells is commonly observed, which ultimately results in local vascular hyperplasia and focal areas of necrosis. 
Such functional and morphological abnormalities in the tumour-associated vasculature are common features of gliomas, with blood vessels of significantly larger diameters, higher permeability and thicker basement membranes than those found in the normal brain tissue 15 , see Fig. 1(A,B). Moreover, blood vessel occlusion has been reported to initiate a hypoxia/necrosis cycle influencing the dynamical balance between glioma cell migration and proliferation. In fact, several clinical and experimental observations suggest that vaso-occlusion could readily explain the rapid peripheral expansion and invasive behaviour of gliomas 19,20 . Vaso-occlusion can mainly occur due to increased mechanical pressure exerted on the blood vessels by tumour cells or induced by intravascular pro-thrombotic mechanisms 21,22 , see Fig. 1(C,D). Occluded or collapsed blood vessels could lead to perivascular hypoxia, necrosis and hypercellular zones referred to as pseudopalisades, which induce collective cell migration. Actually, these vascular occlusive events have been linked to waves of hypoxic glioma cells actively migrating away from oxygen-deficient and necrotic regions [19][20][21]23 . Since hypoxia-induced migration has been long recognised to support further glioma cell invasion, it may be crucial to investigate the overall effect of vaso-modulatory interventions on the tumour front speed and infiltration width. The high degree of angiogenesis and vascular pathologies observed in gliomas has been the target of several therapeutic vaso-modulatory strategies 24,25 . Clinical and preclinical findings suggest that angiogenesis inhibitors alone, with the potential to starve glioma cells, have limited efficacy in terms of tumour shrinkage, functional vasculature destruction and patient survival [26][27][28] . Furthermore, anti-angiogenic factors as inhibitors of neovascularisation are also restricted by transient effects and development of therapy resistance 29 . Instead, improved tumour vascularisation, either via normalisation or through a stress alleviation strategy based on reopening compressed blood vessels, is an emerging concept expected to reduce tumour hypoxia, improve perfusion, enhance the delivery of cytotoxic drugs and increase radiotherapy efficacy 24,[30][31][32] . Interestingly, recent evidence reveals that judicious application of an anti-angiogenic therapy may normalise the structure and function of the tumour vasculature 28,30,31 , where the success rate is schedule-and patient-dependent 33,34 . Although vasculature-targeting interventions could provide therapeutic benefits, further mechanistic insights into their influence on glioma cell dynamics are still needed to improve treatment outcomes 24,32 . Mathematical modelling has the potential to improve our understanding of the complex biology of gliomas and their interactions with the microenvironment, as well as it may help in the design of more effective and personalised treatment strategies [35][36][37][38][39][40][41][42][43] . Several mathematical models have been developed to identify mechanisms and factors that facilitate proliferation and migration of glioma cells 16,38,[44][45][46][47][48][49][50][51][52][53] , as well as to explore processes related to malignant progression [54][55][56] . Most of these models have been formulated to examine glioma growth and invasion based exclusively on cellular diffusion and proliferation rates [44][45][46][47]49 . 
Recently, models including the influence of different tumour microenvironmental factors such as hypoxia, necrosis and angiogenesis have also been proposed 16,38,53. However, the impact of vascular occlusive events or vascular normalisation on glioma invasion, considering the Go-or-Grow mechanism, has not been addressed so far. In this work, we propose a mathematical model to investigate the reasons for which vaso-modulatory interventions often fail to control glioma invasion. In particular, we focus on the interplay between the migration/proliferation dichotomy of glioma cells and variations in the functional tumour vasculature. The aim is to generate novel insights into the impact of vaso-modulatory interventions on tumour front speed and infiltration width, as well as to discuss the therapeutic potential of combining vasculature-targeting strategies with other treatment protocols for personalised medicine. We begin by defining the biological assumptions taken into account when developing our glioma-vasculature interplay model. Then, we study the effects of modulations of the cell oxygen consumption and vaso-occlusion rates on glioma invasion. We show that one-size-fits-all vaso-modulatory interventions should be expected to fail to control glioma invasion, since there is a trade-off between tumour front speed and infiltration width. The model provides a better understanding of glioma-microenvironment interactions and is suited to analysing the potential success or failure of vaso-modulatory treatments. We conclude by discussing the main implications of our model for the design of novel approaches for individualised therapy. Methods The glioma-vasculature interplay model. We develop a mathematical model that describes the growth of vascularised gliomas, focusing on the interplay between the migration/proliferation dichotomy and vaso-occlusion at the margin of viable tumour tissue. The system variables are the density of glioma cells ρ(x, t) and of functional tumour vasculature v(x, t), as well as the concentrations of oxygen σ(x, t) and pro-angiogenic factors a(x, t) in the tumour microenvironment, where (x, t) ∈ ℝ^d × ℝ₊ and d is the dimension of the system. Figure 2(A) shows a schematic representation of the system interactions and model assumptions, which are summarised as follows: [A1] Glioma cells switch phenotypes between proliferative (normoxic) and migratory (hypoxic) depending on the oxygen concentration in the tumour microenvironment 4,10,12-14. [A8] Prothrombotic factors and increased mechanical pressure in regions of high glioma cell density induce blood vessel occlusion and collapse 19,23,58,60. Density of glioma cells, ρ(x, t). Based on the migration/proliferation dichotomy 4,10,12-14, we assume that glioma cells switch between two different phenotypes, migratory (hypoxic) ρ1(x, t) and proliferative (normoxic) ρ2(x, t), depending on the concentration of oxygen in the tumour microenvironment σ(x, t). More precisely, we consider two linear switching functions, f21(σ) = λ1 − σ and f12(σ) = λ2σ, which represent the rates at which glioma cells change from proliferative to migratory and from migratory to proliferative, respectively. Although there is experimental evidence of a positive correlation between oxygen availability and cell proliferation, the exact functional form of the oxygen-dependent phenotypic switching remains unknown. Accordingly, we consider the simplest case, i.e. a linear switching between proliferative and migratory phenotypes, in line with previous studies 61,62.
The parameters λ1 and λ2 are positive constants; see the Supplementary Material for further details. Cell motility is modelled as a diffusive process mimicking the net infiltration of glioma cells into the surrounding brain tissue, while a logistic growth term is considered for tumour cell proliferation. The system of equations for the migratory and proliferative glioma cells is given by

∂ρ1/∂t = D_ρ ∇²ρ1 − f12(σ)ρ1 + f21(σ)ρ2, (1)

∂ρ2/∂t = b_ρ ρ2 (1 − (ρ1 + ρ2)/N) + f12(σ)ρ1 − f21(σ)ρ2, (2)

where the temporal t and spatial x coordinates in the arguments of variables have been omitted for notational simplicity. D_ρ and b_ρ are the diffusion and proliferation coefficients of migratory and proliferative glioma cells, respectively. N represents the brain tissue carrying capacity, i.e. the maximum number of cells that can be supported by the environment. The parameters D_ρ, b_ρ and N are positive constants. The system (1)-(2) can be reduced to a single equation for the total density of glioma cells ρ = ρ1 + ρ2 by assuming that f12(σ)ρ1 = f21(σ)ρ2. This is a plausible assumption since intracellular processes, such as signalling pathways regulating the phenotypic switch, operate at much shorter time scales than cell migration and proliferation. Thus, we assume that phenotype switching is a fast mechanism compared to cell division and motility, which allows us to express ρ1 and ρ2 as functions of ρ in the following form: ρ1 = α(σ)ρ and ρ2 = β(σ)ρ. Summing equations (1) and (2), and substituting the expressions above for ρ1 and ρ2, the equation for the total density of (migratory and proliferative) glioma cells ρ(x, t) is given by

∂ρ/∂t = D_ρ ∇²[α(σ)ρ] + b_ρ β(σ) ρ (1 − ρ/N), (3)

where the oxygen-dependent functions α(σ) and β(σ) are defined as follows:

α(σ) = f21(σ) / (f12(σ) + f21(σ)), (4)

β(σ) = f12(σ) / (f12(σ) + f21(σ)). (5)

Then, taking into account that α(σ) + β(σ) = 1, we can rewrite equation (3) in the following form:

∂ρ/∂t = D_ρ ∇²[α(σ)ρ] + b_ρ [1 − α(σ)] ρ (1 − ρ/N). (6)

Notice that equation (6) is a generalisation of the widely studied Fisher-Kolmogorov model which describes glioma growth and invasion 55,63. The nonlinear terms α(σ) and β(σ) in equation (6) modulate the rates of glioma cell diffusion and proliferation according to oxygen availability. Under hypoxic conditions cell diffusion increases, while proliferation decreases, i.e. glioma cells become more migratory and less proliferative. On the contrary, at normal oxygen levels (normoxic conditions) glioma cells become more proliferative and less migratory. Let σ0 > 0 be the physiological oxygen concentration in the normal brain tissue. Then, by normalising D_ρ = D/α(σ0) and b_ρ = b/β(σ0), the classical Fisher-Kolmogorov equation is recovered under the assumption of a constant oxygen concentration in the tumour microenvironment, given by

∂ρ/∂t = D ∇²ρ + b ρ (1 − ρ/N), (7)

where D and b are positive parameters that represent the intrinsic diffusion and proliferation rates of glioma cells, respectively. We remark that equation (7) has been extensively used to predict untreated glioma kinetics based on patient-specific parameters from standard medical imaging procedures 16,49,55,64. Furthermore, the Fisher-Kolmogorov equation has also been considered to estimate glioma recurrence after surgical resection 50 and to simulate tumour responses to conventional therapeutic modalities such as chemo- 48 and radiotherapy 65.
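To make the behaviour of equation (6) concrete, the following sketch integrates it in one dimension with a prescribed, frozen oxygen profile (forward Euler and central differences; all parameter values are illustrative, not the calibrated values of Table 1):

```python
import numpy as np

L, nx = 1.0, 401
dx, dt, nt = L / (nx - 1), 2e-5, 100_000
lam1, lam2 = 1.0, 1.0                  # switching parameters (placeholders)
D_rho, b_rho, N = 1e-3, 10.0, 1.0      # intrinsic diffusion/proliferation, capacity

x = np.linspace(0.0, L, nx)
sigma = 0.5 + 0.5 * x                  # hypothetical oxygen: low near the tumour
alpha = (lam1 - sigma) / (lam2 * sigma + lam1 - sigma)   # migratory fraction
beta = 1.0 - alpha                     # proliferative fraction

rho = np.where(x < 0.05, 0.1, 0.0)     # initial tumour segment of length ~0.05
for _ in range(nt):
    u = alpha * rho                    # only the migratory fraction diffuses
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2      # no-flux boundaries
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    rho += dt * (D_rho * lap + b_rho * beta * rho * (1.0 - rho / N))
# rho now displays an invading front whose speed and steepness depend on sigma.
```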
Pro-angiogenic factor concentration, a(x, t). Neovascularisation in tumours takes place when pro-angiogenic factors overcome anti-angiogenic stimuli. However, in gliomas there is evidence of a wide range of pro- and anti-angiogenic factors involved, each of them acting through different vascularisation mechanisms 15,24,28. While not explicitly considering the vascular endothelial growth factor (VEGF) or any other specific pro-angiogenic chemokine, we assume a generic effective pro-angiogenic factor concentration at quasi-steady state. In fact, we suppose that an over-expression of pro-angiogenic factors instantaneously promotes the formation of functional tumour vasculature v(x, t). We further consider that pro-angiogenic factors are exclusively produced by glioma cells under hypoxic conditions, at a rate proportional to the tumour cell density, thereby neglecting hypoxia-independent pathways. In addition, endothelial cells forming the vascular network take up pro-angiogenic factors, which also undergo natural decay. The equation for the effective pro-angiogenic factor concentration a(x, t) is given by

∂a/∂t = D_a ∇²a + k1 ρ H_θ(σ*_a − σ) − k2 v a − k3 a, (8)

where the temporal t and spatial x coordinates in the arguments of variables have been omitted for notational simplicity. D_a is the diffusion coefficient of pro-angiogenic factors. Assuming the quasi-steady-state approximation of equation (8), we have that

a = k1 ρ H_θ(σ*_a − σ) / (k2 v + k3). (9)

The positive parameters k1, k2 and k3 represent the production, consumption and natural decay rates of pro-angiogenic factors, respectively, where 0 < σ*_a < σ0 is the hypoxic oxygen threshold for their production by glioma cells. Here H_θ denotes a smoothed step (sigmoidal) function, e.g.

H_θ(σ*_a − σ) = (1/2)[1 + tanh(θ(σ*_a − σ))], (10)

where θ is a positive parameter that controls the steepness of H_θ at (σ − σ*_a). More precisely, H_θ models the production of pro-angiogenic factors by glioma cells when the oxygen concentration σ is lower than the hypoxic oxygen threshold σ*_a.
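Equation (9) is purely algebraic and cheap to evaluate on a grid. A sketch, using the tanh-smoothed step for H_θ (the specific sigmoid is our assumption; only its step-like shape is fixed by the text, and all parameter values are illustrative):

```python
import numpy as np

def H_theta(u, theta=50.0):
    return 0.5 * (1.0 + np.tanh(theta * u))

def a_quasi_steady(rho, sigma, v, k1=1.0, k2=1.0, k3=0.01, sigma_a=0.5):
    """a = k1 * rho * H_theta(sigma_a - sigma) / (k2 * v + k3)."""
    return k1 * rho * H_theta(sigma_a - sigma) / (k2 * v + k3)

# Hypoxic, cell-dense regions (sigma < sigma_a, large rho) yield large a, which
# then drives the Michaelis-Menten formation term g1*a/(mu + a) of eq. (11).
```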
Density of functional tumour vasculature, v(x, t). Histopathological studies have shown that the vascular structure and function in brain tumours are markedly abnormal 17,18,58. Gliomas, and particularly glioblastomas, are known to have blood vessels of increased diameter, high permeability, thickened basement membranes and highly proliferative endothelial cells 15, see Fig. 1(B). Due to such abnormalities, a significant fraction of the tumour-associated vasculature does not constitute functional blood vessels 15. Based on these facts, we only consider functional vascularisation instead of modelling the complete tumour vascular network. Accordingly, we assume that the density of functional tumour vasculature is a dimensionless and normalised quantity with values in the interval [0, 1]. The normal density of functional blood vessels in the normal brain tissue is taken as v = 1/2. Thus, the limit case v = 0 represents an avascular tissue, while on the contrary v = 1 describes a hypothetical scenario characterised by excessive vascularisation. Blood vessels in gliomas are not stable, being continuously formed, occluded and destroyed. Neovascularisation takes place by different angiogenic and vasculogenic processes induced by complex signalling mechanisms that are not well understood 11,66,67. For simplicity, we assume that tumour blood vessels are created when pro-angiogenic factors prevail over anti-angiogenic stimuli, i.e. for a > 0, leading to the formation of new functional vasculature according to a logistic growth term. The rate at which functional tumour vasculature is generated follows Michaelis-Menten kinetics depending on the pro-angiogenic factor concentration, where a constant dispersal rate of endothelial cells (vasculature) is assumed. Notice that the Michaelis-Menten term is commonly used to model a saturating response at high doses in biological systems 22,63,68. On the other hand, we consider that mechanical or chemical cues in regions of high glioma cell density induce blood vessel occlusion or collapse 19,23,60. Vaso-occlusion is then modelled by a power-law dependence on the density of glioma cells. The equation for the density of functional tumour vasculature v(x, t) is given by

∂v/∂t = D_v ∇²v + g1 [a/(μ + a)] v(1 − v) − g2 v ρⁿ, (11)

where again the temporal t and spatial x coordinates in the arguments of variables have been omitted for notational simplicity. D_v is the diffusion coefficient representing the net dispersal of tumour vasculature, g1 is the formation rate of functional blood vessels, μ is the pro-angiogenic factor concentration at which g1 is half-maximal, g2 is the vaso-occlusion rate and n is a parameter that regulates the degree of blood vessel occlusion depending on the density of glioma cells. The vaso-occlusion term, g2vρⁿ, models the mechanical pressure exerted on blood vessels in regions of high glioma cell density, see the Supplementary Material for further details. When the intratumoural cellular pressure exceeds a critical threshold, massive tumour blood vessel collapse occurs 32,69. However, prior to this critical stress threshold, blood vessel collapse is moderate 69. In particular, we assume that vaso-occlusion only occurs for glioma cell densities greater than N/2, where N is the brain tissue carrying capacity 60. The parameters D_v, g1, μ, g2 and n are positive constants. Plugging equation (9) for the effective pro-angiogenic factor concentration into equation (11), and assuming that the decay rate of a is much smaller than the uptake/internalisation rate by endothelial cells, i.e. k3 ≪ k2 70,71, we have that

∂v/∂t = D_v ∇²v + g1 [ρ H_θ(σ*_a − σ) / (ρ H_θ(σ*_a − σ) + K v)] v(1 − v) − g2 v ρⁿ, (12)

where K = μk2/k1 represents the concentration of pro-angiogenic factors at which the formation rate of functional tumour vasculature is half-maximal, see the Supplementary Material for more details. Oxygen concentration, σ(x, t). Oxygen is delivered to the brain tissue via functional blood vessels, spreads into the tumour bulk and is consumed by glioma cells. Transport of oxygen within tissues occurs by diffusion and convection 72. For simplicity, we neglect the convective contribution and only consider that after transvascular exchange oxygen molecules move exclusively by diffusion. The delivery of oxygen to the tumour is modelled by assuming that the supply rate is proportional to the functional vasculature and to the difference between the physiological oxygen concentration in the normal brain tissue σ0 and that in the tumour interstitium. These assumptions result in the equation for the oxygen concentration σ(x, t) given by

∂σ/∂t = D_σ ∇²σ + h1 v (σ0 − σ) − h2 ρ σ, (13)

where the temporal t and spatial x coordinates in the arguments of variables have been omitted for notational simplicity. D_σ is the oxygen diffusion coefficient, h1 is the permeability coefficient of functional vasculature and h2 is the oxygen consumption rate by glioma cells. The parameters D_σ, h1, σ0 and h2 are positive constants. Notice that similar assumptions have been previously considered to model oxygen dynamics in vascular tumour growth 22. Model formulation, boundary and initial conditions. The proposed glioma-vasculature interplay model comprises a system of coupled partial differential equations given by

∂ρ/∂t = D_ρ ∇²[α(σ)ρ] + b_ρ β(σ) ρ (1 − ρ/N), (14)

∂v/∂t = D_v ∇²v + g1 [ρ H_θ(σ*_a − σ) / (ρ H_θ(σ*_a − σ) + K v)] v(1 − v) − g2 v ρⁿ, (15)

∂σ/∂t = D_σ ∇²σ + h1 v (σ0 − σ) − h2 ρ σ, (16)

where the oxygen-dependent functions α(σ) and β(σ) are given by equations (4)-(5), respectively.
The system (14)-(16) is closed by imposing the following initial conditions:

ρ(x, 0) = ρ0 H_γ(ε − x), v(x, 0) = v0, σ(x, 0) = σ0, for x ∈ [0, L],

where the positive parameters ρ0, v0 and σ0 are the initial density of glioma cells, spatially distributed in a segment of length ε, the initial density of functional tumour vasculature and the initial oxygen concentration, respectively. The positive parameter γ controls the steepness of the smoothed step function H_γ at (x − ε), with ε > 0, and L > 0 is the length of the one-dimensional computational domain. In addition, we consider an isolated host tissue in which all system behaviours arise solely due to the interaction terms in equations (14)-(16). This assumption results in no-flux boundary conditions of the form

∂ρ/∂x = ∂v/∂x = ∂σ/∂x = 0 at x = 0 and x = L, for all t ∈ (0, T_f],

where T_f > 0 is an arbitrary simulation time, i.e. the end of the simulations. The conditions above also imply that no cell or molecule leaves the system through the domain boundaries. Modelling hierarchy. The glioma-vasculature interplay model (14)-(16), referred to as Model III, is a generalisation of two simpler models which are also of interest for the study of glioma growth and invasion. As shown in Fig. 2(B), such simpler models can be obtained under the assumptions of a constant density of functional tumour vasculature v(x, t) = v0 (Model II), and additionally of a constant oxygen concentration σ(x, t) = σ0 (Model I). More precisely, Model II is obtained from Model III by setting g1 = g2 = 0 in equation (15), i.e. assuming neither formation nor occlusion/collapse of tumour blood vessels. In turn, Model I is obtained from Model II by setting h2 = 0 in equation (16), i.e. assuming a constant oxygen concentration in the tumour microenvironment. Model I corresponds to the classical Fisher-Kolmogorov equation (7), for which a large number of theoretical and simulation results have been reported 55,63. Model II, given by equations (14) and (16), contains an extended version of the Fisher-Kolmogorov equation with nonlinear glioma cell diffusion and proliferation terms. Both nonlinearities depend on the oxygen concentration in the tumour microenvironment, which is governed by a reaction-diffusion equation with linear diffusion and nonlinear reaction terms. Notice that reaction-diffusion is a process in which more than one component, i.e. chemical species and/or populations of cells, diffuse over a domain and react with each other. In addition, the dynamics of glioma cells are modelled by considering the migration/proliferation dichotomy (Go-or-Grow mechanism). Since the oxygen supply rate in Model II is assumed constant, the blood perfusion can be considered stable and we therefore neglect tumour-induced vascular pathologies. The latter is a reasonable assumption, particularly for low-grade gliomas, where an abnormal vascular structure is not prominent 16. A natural extension of Model II is to consider tumour-associated vascularisation dynamics. This is precisely what defines Model III, which is used to investigate the effects of vaso-modulatory interventions on glioma invasion. Taking into account that Model I has been extensively studied, we begin with the analysis of Model II as an intermediate step towards analysing Model III, see Fig. 2(B). In particular, we focus on the effects of variations in the glioma cell oxygen consumption and vaso-occlusion rates on tumour front speed and infiltration width. In the Supplementary Material we provide details about the numerical implementation of the model, as well as additional simulation results.
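For concreteness, the right-hand sides of system (14)-(16) can be assembled as follows, a sketch in the spirit of the Model II integrator above; the no-flux boundaries are built into the Laplacian, and all values in the parameter dictionary p are placeholders, not the calibrated values of Table 1:

```python
import numpy as np

def laplacian_noflux(u, dx):
    out = np.empty_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    out[0] = 2.0 * (u[1] - u[0]) / dx**2
    out[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return out

def model_iii_rhs(rho, v, sigma, dx, p):
    f21 = p["lam1"] - sigma
    f12 = p["lam2"] * sigma
    alpha = f21 / (f12 + f21)                  # eqs. (4)-(5)
    beta = 1.0 - alpha
    H = 0.5 * (1.0 + np.tanh(p["theta"] * (p["sigma_a"] - sigma)))
    d_rho = (p["D_rho"] * laplacian_noflux(alpha * rho, dx)
             + p["b_rho"] * beta * rho * (1.0 - rho / p["N"]))
    formation = rho * H / (rho * H + p["K"] * v + 1e-12)   # eq. (15) kinetics
    # NB: the text gates vaso-occlusion at rho > N/2; omitted here for brevity.
    d_v = (p["D_v"] * laplacian_noflux(v, dx)
           + p["g1"] * formation * v * (1.0 - v) - p["g2"] * v * rho ** p["n"])
    d_sigma = (p["D_sigma"] * laplacian_noflux(sigma, dx)
               + p["h1"] * v * (p["sigma0"] - sigma) - p["h2"] * rho * sigma)
    return d_rho, d_v, d_sigma
```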
Model observables. We characterise glioma invasion by the tumour front speed and the infiltration width, see Figure S1 in the Supplementary Material. The tumour front speed is estimated from the rate of change of the position of the point of maximum slope of ρ(x, t), evaluated at the end of the simulations T_f. In turn, the infiltration width is defined as the distance between the points where the glioma cell density is 80% and 2% of the maximum cellular density at simulation time T_f. These specific features of tumour invasion have been reported to be crucial for determining glioma malignancy and predicting therapeutic failure 16,50,55. Unlike the classical Fisher-Kolmogorov equation (7), in our glioma-vasculature interplay model (14)-(16) cellular processes are regulated by oxygen availability. Therefore, we distinguish the intrinsic glioma cell diffusion D and proliferation b rates from the effective rates that depend on the oxygen concentration in the tumour microenvironment. The effective diffusion D_eff and proliferation b_eff rates of glioma cells are defined as follows:

D_eff = (D_ρ / L) ∫₀ᴸ α(σ(x, T_f)) dx, (17)

b_eff = (b_ρ / L) ∫₀ᴸ β(σ(x, T_f)) dx, (18)

where L is the length of the one-dimensional simulation domain. Notice that D_ρ = D/α(σ0) and b_ρ = b/β(σ0), where D and b are the intrinsic glioma cell diffusion and proliferation rates, respectively. We then investigate the dependence of D_eff and b_eff, as well as of the tumour front speed and infiltration width, at simulation time T_f on different values of the parameters h2 (glioma cell oxygen consumption) and g2 (vaso-occlusion). Model parameterisation. Parameter values considered in the model simulations are taken from published data wherever possible or estimated to approximate physiological conditions based on appropriate physical and biological arguments, see Table 1 and the Supplementary Material for more details. For parameters of special interest, a wide range of values is considered to explore their effects on glioma growth and invasion. Figures 3(A,B) and 4(A,B) show simulation maps of the effective diffusion and proliferation rates of glioma cells, defined by equations (17) and (18) respectively, for tumours characterised by different combinations of the intrinsic glioma cell features D and b. Model simulations in Fig. 3(A,B) are obtained under the assumption of a constant density of functional tumour vasculature, i.e. neither formation nor occlusion/collapse of tumour blood vessels, for increasing oxygen consumption rates by glioma cells. In turn, Fig. 4(A,B) shows simulation maps for a constant rate of oxygen consumption by tumour cells, considering tumour vascularisation dynamics and increasing vascular occlusive events. Comparative simulation maps in Figs 3(A,B) and 4(A,B) illustrate that an arbitrary increase in either the rate at which glioma cells consume oxygen, h2, or the vaso-occlusion rate, g2, results in more diffusive and less proliferative tumours. The model supports that at high h2 and g2 values the oxygen concentration in the tumour microenvironment significantly decreases, which may result in hypoxia and necrosis. The lack of oxygen limits the proliferative capacity of glioma cells, and in turn enhances hypoxia-induced cell migration to better-oxygenated brain tissue areas. In particular, variations in the cell oxygen consumption and vaso-occlusion rates are predicted to have a major impact on highly infiltrative and/or rapidly growing gliomas. Thus, the precise way in which such cellular and microenvironmental changes affect the overall invasive potential of tumours can be expected to depend on the specific intrinsic glioma cell features.
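The observables and effective rates defined in equations (17) and (18) can be extracted from simulation snapshots in a few lines; a sketch of our reading of those definitions:

```python
import numpy as np

def front_position(x, rho):
    """Position of the point of maximum slope of rho."""
    return x[np.argmax(np.abs(np.gradient(rho, x)))]

def front_speed(x, rho_a, rho_b, dt):
    """Rate of change of the front position between two late snapshots."""
    return (front_position(x, rho_b) - front_position(x, rho_a)) / dt

def infiltration_width(x, rho):
    """Distance between the 80% and 2% points of the maximum density."""
    rmax = rho.max()
    x80 = x[np.argmin(np.abs(rho - 0.80 * rmax))]
    x02 = x[np.argmin(np.abs(rho - 0.02 * rmax))]
    return abs(x02 - x80)

def effective_rates(x, sigma_Tf, D_rho, b_rho, lam1, lam2):
    """Equations (17)-(18): domain averages of alpha and beta at time T_f."""
    f21, f12 = lam1 - sigma_Tf, lam2 * sigma_Tf
    alpha = f21 / (f12 + f21)
    L = x[-1] - x[0]
    D_eff = D_rho * np.trapz(alpha, x) / L
    b_eff = b_rho * np.trapz(1.0 - alpha, x) / L
    return D_eff, b_eff
```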
Cell oxygen consumption changes reveal a critical proliferation rate for glioma invasion. Analysis of Model II, i.e. under the assumption of a constant density of functional tumour vasculature, reveals that variations in the rate at which glioma cells consume oxygen, h2, produce opposing effects on the tumour front speed. More precisely, Fig. 3(C) shows that there exists a critical glioma cell proliferation rate b* for which the front speed in tumours characterised by b > b* decreases at higher values of h2, while on the contrary tumours with b < b* invade faster, displaying diffusely infiltrative growth patterns. Assuming that the tumour front speed is proportional to the product of the effective diffusion and proliferation rates, we can readily explain the aforementioned simulation results for variations of h2. On one hand, in tumours with glioma cell proliferation rates b above the critical threshold b*, the effective migration and proliferation mechanisms compensate each other, leaving the speed of the invading front almost invariant. On the other hand, in tumours with b < b*, while the effective proliferation rate is not significantly affected, the migratory activity of glioma cells is higher for increasing values of h2, which results in faster tumour front propagation speeds. The flatness/steepness of the tumour front is proportional to a ratio of effective glioma cell diffusion and proliferation rates. When oxygen in the microenvironment is not limited, highly diffusive tumours evolve with large and flat fronts, whereas increased glioma cell proliferation results in short and steep fronts. However, under oxygen-limiting conditions the shape of the evolving tumour front is markedly influenced by the specific rate at which glioma cells consume oxygen. Figure 3(D) shows that variations in the rate of oxygen consumption produce the same overall effects on the infiltration width. Comparative simulation maps in Fig. 3(D) reveal that, whatever the intrinsic glioma cell features, an arbitrary increase (decrease) in the oxygen consumption rate leads to more (less) invasive tumours. Indeed, the effective proliferative capacity of glioma cells is reduced by increasing oxygen consumption rates, and in turn hypoxia-induced cell migration is enhanced, resulting in more aggressive, infiltrative tumour growth patterns. Modulation of vaso-occlusion reveals a critical proliferation/diffusion ratio for glioma invasion. Simulations of Model III reveal that for increasing vaso-occlusion rates g2, the tumour front speed is affected differently depending on the intrinsic diffusion and proliferation rates of glioma cells. In addition to the modulatory effects of oxygen availability on glioma growth and invasion, these processes are also influenced by vascularisation mechanisms. Comparative simulation maps in Fig. 4(C) show that in tumours with intrinsic cell features D and b inside a region delimited by a critical rate b+ and an approximate proliferation/diffusion ratio Λ+ = b/D, the invading front moves faster as g2 increases. The front speed slightly decreases or remains almost constant in the remaining tumours, i.e. those with parameter values of D and b outside of such a region. In particular, tumours characterised by b < b+ evolve at low cellular density, and thus vascular occlusive events due to increased mechanical pressure by glioma cells hardly occur. On the other hand, increasing vaso-occlusion rates in tumours with b > b+ enhance the effective cell migration towards better vascularised brain tissue regions.
Although vascular occlusion limits the proliferative activity of glioma cells, faster tumour front speeds are predicted as long as the triggered migratory activity dominates over cell proliferation. The infiltration width in tumours with b < b+ is almost unaffected by increasing vaso-occlusion rates, as shown in Fig. 4(D). However, tumours characterised by b > b+ are also separated by an approximate linear relation between D and b with respect to variations in the infiltration width. In particular, stronger blood vessel occlusion results in larger, flatter fronts in tumours with cell proliferation/diffusion ratios above the critical value Λ+ for b > b+, while the infiltration width is reduced in the rest of the tumours. Discussion In this work, we proposed a deterministic mathematical model of glioma growth and invasion that is formulated as a system of reaction-diffusion partial differential equations. Our glioma-vasculature interplay model accounts for the dynamics of normoxic and hypoxic glioma cells based on the Go-or-Grow mechanism, which is in turn influenced by the functional tumour vasculature and the concentration of oxygen in the microenvironment. In particular, we focused on the effect of variations in glioma cell oxygen consumption and vascular occlusion on prognostically relevant characteristics of tumour invasion, i.e. the front speed and infiltration width. The main model results are summarised in Fig. 5. The model analysis revealed that increasing glioma cell oxygen consumption and vaso-occlusion rates results in more diffusive and less proliferative tumours. In both scenarios, the average oxygen concentration in the tumour microenvironment decreases, which limits glioma cell proliferation and enhances hypoxia-induced migration. This is in line with previous clinical and histopathological observations that hypoxia strongly correlates with glioma malignancy 7 and triggers tumour cell migration towards better oxygenated regions, leading to pseudopalisade formation 19-21,23. However, the extent to which such oxygen-mediated cell responses to blood vessel occlusion influence glioma invasion depends on the specific intrinsic tumour features. Variations in the vaso-occlusion rate evidenced the existence of a critical ratio between proliferation and diffusion rates that separates glioma invasive behaviours into different regimes, see Fig. 5(B). This result is obtained for tumours characterised by sufficiently high cellular proliferation rates, in which variations in the oxygen concentration, due to vascular occlusion or normalisation, significantly influence glioma cell dynamics. In such cases, variations in the vascular function are predicted to produce opposing effects on the tumour front speed and infiltration width. Moreover, we found that, depending on the intrinsic tumour features, two distinct regimes can be identified in which the glioma invasive behaviour in response to vaso-modulatory interventions is completely different. A pro-thrombotic treatment is predicted to increase the front speed, but in turn reduce the infiltration capacity, of tumours characterised by a cell proliferation/diffusion ratio below the critical threshold. On the contrary, tumours in the other parameter regime, under the same vaso-modulatory strategy, become increasingly infiltrative and slowly growing.
Analogously, vascular normalisation is predicted to induce opposing effects on glioma invasion in the corresponding parameter regimes. Recently, it has been shown that the migration/proliferation dichotomy can introduce a critical threshold on the glioma cell density that separates tumour growth and extinction dynamics, a phenomenon called the Allee effect 14. Interestingly, we also found critical parameter values that distinguish between different glioma invasive patterns with respect to variations in the cell oxygen consumption and vaso-occlusion rates. This is an emergent consequence of the Go-or-Grow plasticity, since in its absence (Model I) critical behaviours are not observed. Assuming or not tumour vascularisation dynamics, the Go-or-Grow-induced criticality is expressed either in the form of a proliferation/diffusion ratio Λ+ = b/D for b > b+ or of a critical proliferation rate b* of glioma cells, respectively. More precisely, the critical thresholds b* and Λ+ for b > b+ separate tumour behaviours into regimes where the front speed and infiltration width are differently affected by changes in the glioma cell oxygen consumption and vaso-occlusion rates. These findings highlight the importance of further investigating the therapeutic potential of targeting the Go-or-Grow phenomenon as a strategy to reduce glioma cell migration. Based on our model results, we can argue that one-size-fits-all vaso-modulatory interventions should be expected to fail to control glioma invasion due to the complexity of the mechanisms involved and inter-patient heterogeneity. This study supports the value of personalised medicine and provides a simplified but useful modelling framework with predictive potential, based on precise tumour profiling from possible biopsy measurements and medical imaging. In particular, patient-based estimation of tumour cell proliferation and diffusion rates would be a crucial component of such future tailored approaches to individualise treatment selection for glioma patients. We believe that this work substantially expands the theoretical understanding of the invasive behaviour of gliomas, suggesting that any vasculature-targeting therapeutic intervention will inevitably lead to a trade-off between the tumour front speed and infiltration width. This result suggests that vaso-modulatory interventions should be embedded in a personalised combination of different treatment protocols, in which anti-angiogenesis might be integrated with individually adjusted strategies targeting cell proliferation, metabolic transformation or immune responses. For instance, in the case of gliomas characterised by a cell proliferation/diffusion ratio above Λ+ for b > b+, a pro-thrombotic or anti-vasogenic therapeutic technique may reduce the tumour front speed but at the same time lead to highly infiltrative behaviours, which makes this treatment strategy rather inappropriate. However, normalisation of the tumour blood vessels may result in faster-growing gliomas with compact, less invasive morphologies. Thus, surgical resection could be considered to remove such compact tumours. In turn, the benefits of conventional treatment modalities such as chemo-, radio- and immunotherapy might significantly increase in well-vascularised, and therefore normally oxygenated, tumours 24,30-32.
Thus, an accurate tumour patient stratification during clinical decision-making is crucial for the efficacy of vasculature-targeting therapies, whether inducing tumour blood vessel deterioration or normalisation. We conclude by pointing out a number of related future research directions, as well as discussing some limitations of this work. Although in our model the vaso-occlusion term in equation (15) is rather phenomenological and more accurate modelling might be required, we think that these in silico findings provide new insights into the impact of functional vascular changes on glioma invasion. Furthermore, the migration/proliferation dichotomy of glioma cells has been modelled in the simplest possible way, and more informed formulations depending on other tumour-related factors should be considered. In turn, intratumoural genetic diversity is not directly considered; instead, we take into account phenotypic diversity depending on oxygen availability, which has long been recognised as an important therapeutic factor. The latter is supported by evidence that genetic diversity is tumour-subtype specific and not significantly affected during treatment, while phenotypic heterogeneity is significantly different before and after therapy 73. For simplicity, we carried out simulations in one spatial dimension, but the model analysis can be extended to higher dimensions. Qualitative deviations from the one-dimensional case can only be expected if the model's radial symmetry breaks down via an interface instability. In a two-dimensional continuous version of the Go-or-Grow model no interface instability was observed 74, i.e. the system grows in a radially symmetric way. Although our system involves additional external fields, such as the functional tumour vasculature, preliminary results have shown no qualitative deviations from the one-dimensional case for a continuous vascular field. Despite the fact that our model involves a large number of parameters, their values were selected independently from each other based on published experimental data. For those parameters that were estimated, we verified that variations in their values do not affect the general conclusions of this study. At this stage, we restrict the modelling strategy to investigating the effects of vasculature-targeting interventions on glioma invasion; however, we are aware that further cell-intrinsic and cell-extrinsic factors may play a crucial role. In fact, we also intend to explore the interactions between glioma and immune cells influenced by vascularisation mechanisms as an additional level of complexity, given the potential benefits of immunomodulatory therapies 42,43. In particular, tumour-associated macrophages are plastic cells involved in relevant mechanisms such as angiogenesis and cell migration, and they can exhibit pro-tumour phenotypes promoting immune evasion and metastasis. Therefore, modelling the dynamics and function of macrophages in tumour progression may highlight new targets for developing more effective therapies, which is particularly relevant in the light of recent advances in the molecular classification of gliomas 75. We strongly believe that mathematical modelling offers a useful integrative approach to conventional radiological, biopsy and molecular tumour characterisation, potentially allowing for the prediction of treatment outcomes and translation into the clinical decision-making process.
The Risk and Benefits of Applying Artificial Intelligence in Business Discussions
The Risk and Benefits of Applying Artificial Intelligence in Business Discussions. Artificial intelligence (AI) is the development and implementation of algorithms to create a dynamic computing environment replicating the fundamental processes of human intelligence. To make a computer think and act like a human requires three essential components: a computing system, data and data management, and powerful artificial intelligence algorithms [1]. With the development of AI technology, more and more companies are turning to AI, for example, to assist them in negotiations. Artificial intelligence is a powerful tool for the mathematical analysis of negotiation situations [2]. Negotiation is a process of social interaction and communication about the distribution and redistribution of power. Traditionally, negotiation assistance has been based on normative and prescriptive research, with analysts and experts as users. Electronic negotiation systems provide services to negotiators, addressing their needs rather than guiding their operations to comply with logic and optimization, as is common in software engineering [3]. It has therefore been found that artificial intelligence is, to some extent, helpful in negotiation. This study will use some examples, like Pactum, to examine the advantages and disadvantages of AI in business negotiations, and in the end it will suggest some solutions for the negative impacts. Introduction From simple transactions to major multinational business talks, many events in life necessitate the use of negotiation. Negotiation is the process of achieving an agreement or finding a compromise while avoiding conflicts and disputes. Business negotiation is a kind of negotiation: a process between two or more parties (each with its own aims, needs, and viewpoints) seeking to discover common ground and reach an agreement to settle a matter of mutual concern, resolve a conflict and exchange value. "Whether you want to be an entrepreneur or work your way up to be a partner in a consulting firm, negotiating representation is a life skill and global currency," says Niro Sivanathan [4]. Meanwhile, according to Forrester, over two-thirds of financial businesses have deployed or are in the process of deploying AI in areas ranging from customer insight to IT efficiency [5]. Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind. Artificial intelligence is extensively employed in today's digital world, and more and more businesses are turning to it to help them negotiate. Many researchers have investigated automated negotiation, largely to examine its potential or to study its conceptual models. However, greater scientific investigation is needed, and there are no specific examples of its implementation. This overview paper will explore real-world instances of automated negotiation and go deeper into the implications and pitfalls of automated negotiation. Automated negotiations have several advantages, including better win-win transactions and less time, money, and stress [6]. In an interview with Science, Baarslag notes that a large amount of information must be prepared before a negotiation, that humans frequently fail to reach the best agreement that a computer can, and that computer negotiations can be conducted very quickly and used by autonomous negotiators to bargain in the market, buy a house, schedule a meeting, or resolve a political impasse [7].
Pactum, the case study of this article, is a platform that employs artificial intelligence to automate supplier contract negotiations for organizations, and its 'negotiation-as-a-service' platform is used by several multinational corporations [8]. The examination of Pactum's software in various firms will demonstrate that AI is capable of assisting businesses in commercial negotiations. The goal of this article is to examine how the usage of artificial intelligence in today's society influences business negotiations, as well as to try to discover specific approaches to reap the benefits of artificial intelligence while minimizing its negative effects on business. The major goal is to identify the benefits and drawbacks of AI in business negotiations by collecting and analyzing information. The application of artificial intelligence in negotiating In recent years, artificial intelligence technology has developed rapidly, and AI has gone from seeming impossible to appearing throughout our lives. From voice assistants to service robots to "Alpha Dog" sweeping the Go world, AI is becoming increasingly widespread, and negotiating robots are becoming available [7]. Although the technology for negotiating with AI is not yet perfect, using AI to negotiate provides considerable advantages. Negotiating software has quite a long history; a few negotiation support systems, like Inspire and Negoisst, started appearing in the early 1980s [9]. Pactum is a platform that uses artificial intelligence to automatically negotiate supplier contracts for companies, using artificial intelligence and a chatbot interface to automate contract negotiations for some of the world's large companies, such as Walmart and Wesco International. By using the company's software, large customers like Walmart are able to generate between 2.8% and 6.8% profitability from each supplier deal negotiated [10]. Johnathan Mell from the University of Central Florida describes "AI as an adviser on your shoulder, whispering in your ear, I think they are lying, you should push harder" [9]. Various academics have developed models and found that, in negotiations, robots can compute strategies that allow negotiators to know exactly where they stand at a given moment in terms of strengths and weaknesses and to determine whether or not to continue with the discussions [11]. One can speculate on what might happen if artificial intelligence could negotiate commercial contracts better than humans, but few have put this into practice. Pactum, an AI-based platform provider, has put it to the test: with an $11 million Series A round of investment, its platform, which enables multinational corporations to automate customised business negotiations at scale, entered the fray. The benefits of artificial intelligence in negotiation There are several benefits to using AI in negotiations. Firstly, AI increases the efficiency of negotiations, effectively reduces the error rate, and is not influenced by human emotions. Traditional negotiation, whether done in person, by mail, or by phone, is sometimes difficult to handle, prone to misunderstanding, and time-consuming [11]. Walmart will utilize Pactum's artificial intelligence-based tool to address this problem, while admitting that negotiating the best terms with hundreds of thousands of suppliers is challenging. In principle, the technology will assist Walmart in negotiating the best rates with more suppliers.
Other large corporations also choose Pactum to assist them in negotiating, since artificial intelligence can do things that people cannot readily do and can make negotiations more efficient. According to a KPMG study, inefficient contracts may cost organizations between 17% and 40% of the value of a given agreement [12]. Pactum contacts suppliers and negotiates terms through a chatbot interface after first establishing each side's demands [13].

Secondly, AI has no facial expressions or tone of voice, so the other party cannot judge from such cues whether what is said is true. In negotiations, micro-expressions are what most readily betray a person's mental state, and the other party can read that state from these details; letting a robot conduct the negotiation removes this channel and the failures it can cause. Research shows that even seasoned experts have varied and conflicted attitudes toward negotiating: HBS professor Alison Wood Brooks found that anxious negotiators make more modest first offers, have lower expectations of the transaction, and exit discussions early [14]. Suppressing micro-expressions is difficult for humans; robots are a natural solution, since the other party cannot discern emotions from their faces. Even someone with excellent negotiating abilities cannot employ them against a robot.

Thirdly, AI can be programmed to quickly model the psychology of the negotiating partner, allowing it to strike precisely and engage in "psychological warfare" with the other side. Before the negotiation, the robot is familiar with all available information about the other party, and during the negotiation it can observe the other party in order to swiftly analyze their psychological state and help its side break through. For example, after determining the value of a deal to a large company and the range of concessions the software may make, Pactum's software uses a chatbot interface to negotiate with the supplier. It asks a series of questions that prompt the other party to reveal its preferences, and the bot incorporates psychological strategies that mirror the conversation back to the other person. The main advantage is that the chatbot asks questions that force the counterparty to choose between alternatives, and the system decides on a "best price" based on a chart weighing the concerns of both parties (a rough sketch of this kind of calculation is given below).

Finally, AI can negotiate for long periods of time, making it more likely that a deal will succeed; there is no risk of a deal failing because the parties are physically exhausted by a stalemate. Human negotiators must spend significant time and effort organising information, and expectations of human bargaining ability are far higher, whereas artificial intelligence can shoulder this work. Companies can use Pactum's software to set negotiation objectives such as the desired price, delivery, payment dates, and quality assurance terms, basic information such as the ideal price for the deal, and red lines the company cannot cross. Pactum, which raised $11 million in Series A funding led by Atomico, makes contract negotiations less laborious and enables companies to go beyond the few key terms that typically revolve around price and payment [8]. The system learns during each negotiation to make the next one better.
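As a rough illustration of the kind of trade-off calculation described above, the following is a hypothetical sketch of weighted multi-issue scoring. The terms, weights, and red line are illustrative inventions, not Pactum's actual model:

```python
# Hypothetical sketch of weighted multi-issue scoring; all values illustrative.
offers = [
    {"price": 100, "delivery_days": 30, "payment_days": 60},
    {"price": 102, "delivery_days": 14, "payment_days": 30},
    {"price": 98,  "delivery_days": 45, "payment_days": 90},
]

# Elicited preferences: (weight, direction) per term and party, where the
# direction is +1 if a higher value is better for that party, -1 if lower.
buyer    = {"price": (0.6, -1), "delivery_days": (0.3, -1), "payment_days": (0.1, +1)}
supplier = {"price": (0.5, +1), "delivery_days": (0.1, +1), "payment_days": (0.4, -1)}

def utility(offer, prefs):
    """Weighted score of one offer for one party."""
    return sum(w * sign * offer[term] for term, (w, sign) in prefs.items())

# The "best price" package maximizes the joint score, subject to red lines.
admissible = [o for o in offers if o["price"] <= 105]      # buyer's hard limit
best = max(admissible, key=lambda o: utility(o, buyer) + utility(o, supplier))
print(best)
```

The forced choice between alternative packages is what lets the system infer weights like these in the first place: each choice reveals which terms the counterparty values more.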
The main downside of the system is that it is not human, so its language processing may at times be hard to follow and irritate the counterparty during a chat with the bot.

The disadvantages of artificial intelligence in negotiations

As shown above, the use of AI in business negotiation can bring many positive effects, but it also has adverse effects; this section analyzes their causes so that they can be reduced. Wu Jun's investigation suggests that AI could in principle be endowed with self-awareness and empathy, but that this technology has not been realized, in part because of social and ethical issues [15]. Hence, for the time being, the inability to achieve empathy will remain a reason why AI cannot be widely utilized in business negotiations. The second point is that if humans rely entirely on AI for negotiation, mechanistic programming may lead to negotiation failure: data are entered before the negotiation, but it is challenging to make ad hoc changes to the data while the negotiation is in progress, and given the limitations of the technology it is difficult to guarantee that no problems will arise mid-negotiation. The third point concerns the two main purposes of using AI in business negotiations: to eliminate manual labour and to restrain the impulses and whims of negotiators. Complete reliance on AI may let the other side exploit its weaknesses, because AI cannot yet autonomously judge the actual importance of each interest to its own side, which inevitably makes the negotiated results unsatisfactory. Against this background, it is difficult to rely on AI to complete an entire negotiation, and the goal of saving labor becomes harder to achieve. The inability of AI to judge the subtext of human communication is also a significant drawback.

In summary, the way to control the adverse effects is to adjust the degree of reliance on AI in business negotiations according to the actual situation. As the technology evolves, AI will become more and more capable; even though it cannot yet accurately analyze human emotions, it may well be able to do so in the future. The next section examines the situations in which it is most beneficial to rely on AI to different degrees; these situations can be divided into roughly three categories.

Different categories of examples for using AI

The first category comprises situations where AI cannot be relied on at all. These involve a great deal of human emotion and future contingency; because AI can neither accurately judge emotions between people nor assess future needs, it has little to contribute. Consider a trade transaction between countries. Country A wants to export cucumbers to country B during the summer harvest, and country B at first declines because its demand for cucumbers is not very high. However, when country A learns that country B produces a large quantity of radishes in the winter, it offers to buy radishes in the winter in exchange, giving country B a reason to agree to the deal. At today's level of technology, artificial intelligence is poorly suited to reasoning about such future contingencies, so cases of this kind can generally be completed only by humans.
The second category comprises situations that are semi-dependent on artificial intelligence; an example makes this easier to see. Under the influence of the recent pandemic, many large companies have moved interviews online, and a number of candidates use AI to help negotiate salaries with their prospective bosses after passing an interview. In these cases, AI cannot replace human beings because of the emotional stakes involved, but it can still make rational suggestions that help people reach a mutually beneficial result. For instance, when the salary has reached the maximum the boss can offer, the AI may try to secure benefits in other areas according to the candidate's specific needs; if family comes first for the candidate, the AI might suggest asking the boss for a regular work-from-home day in lieu of a higher salary.

The third category is complete dependence on AI. Instances of this are rare, because there are few one-shot deals in commercial trade. One such case exists on the eBay platform: "sniping agents." This kind of program allows buyers to set a maximum price and then purchase the desired item at a reasonable price, and it is a typical example of a business negotiation that relies entirely on artificial intelligence (a minimal sketch of such an agent is given after the conclusion). Among the three ways of using AI, full use is the most beneficial because it saves the most labor, and the previously discussed drawbacks should diminish as the technology improves.

Conclusion

In conclusion, there are many problems with AI at this stage, and it is unlikely that AI will replace humans in negotiations within the next five years. The benefits of AI can nonetheless be used to assist our negotiations and thus increase the chances of success. The analysis above identifies the causes of the negative effects and how to work around them. Humans can provide data to the robot, allowing it to fully understand the other party's situation before negotiating and to make predictions, which reduces the human workload. Negotiators can exploit the robot's expressionlessness to deny the other side an angle of attack, while quickly analysing the other side's psychology to respond with precision. By applying different levels of AI in different situations, the advantages can be maximized at a time when AI is still limited by the state of the technology.
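The sniping-agent example can be made concrete with a minimal hypothetical sketch of its logic; `get_current_price`, `seconds_remaining`, and `place_bid` stand in for whatever auction API is actually used and are assumptions, not real eBay calls:

```python
import time

def snipe(item_id, max_price, get_current_price, seconds_remaining, place_bid,
          fire_window=5, increment=1.0):
    """Wait until the closing seconds of the auction, then bid the minimum
    amount needed to lead, never exceeding the buyer's maximum price."""
    while seconds_remaining(item_id) > fire_window:
        time.sleep(1)                          # idle until the final window
    target = get_current_price(item_id) + increment
    if target <= max_price:
        place_bid(item_id, target)             # bid just enough to take the lead
```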
Safety and efficacy of EB15 10 (Bacillus subtilis DSM 25841) as a feed additive for piglets (suckling and weaned), pigs for fattening, sows in order to have benefits in piglets, sows for reproduction and minor porcine species
Safety and efficacy of EB15 10 (Bacillus subtilis DSM 25841) as a feed additive for piglets (suckling and weaned), pigs for fattening, sows in order to have benefits in piglets, sows for reproduction and minor porcine species

Abstract

Following a request from the European Commission, the EFSA Panel on Additives and Products or Substances used in Animal Feed (FEEDAP) was asked to deliver a scientific opinion on the safety and efficacy of EB15 10 for all pigs. The additive is a preparation containing viable spores of a strain of Bacillus subtilis intended for use in feed at the proposed dose of 5 × 10⁸ CFU/kg complete feedingstuffs and in water for drinking at 1.7 × 10⁸ CFU/L. The additive exists in two forms, EB15 and EB15 10, and has been previously characterised by the FEEDAP Panel. B. subtilis is considered by EFSA to be suitable for the qualified presumption of safety (QPS) approach to establishing safety. The active agent fulfils the requirements, and consequently, the additive was presumed safe for the target animals, consumers of products from treated animals and the environment. Given the proteinaceous nature of the active agent, the additive should be considered a potential respiratory sensitiser. In the absence of data, the FEEDAP Panel cannot conclude on the irritancy potential of the additive to skin and eyes or on its dermal sensitisation. The data made available by the applicant allowed the Panel to conclude that the additive in either form has the potential to be efficacious as a zootechnical additive when added to feed for piglets (suckling and weaned), pigs for fattening and sows (excluding a benefit from sows to suckling piglets) at 5 × 10⁸ CFU/kg (corresponding to 1.7 × 10⁸ CFU/L water). The conclusions on the efficacy were extrapolated to all Suidae species.

Acknowledgments: The Panel wishes to acknowledge the contribution of Elisa Pettenati to this opinion. Legal notice: Relevant information or parts of this scientific output have been blackened in accordance with the confidentiality requests formulated by the applicant pending a decision thereon by the European Commission. The full output has been shared with the European Commission, EU Member States and the applicant. The blackening will be subject to review once the decision on the confidentiality requests is adopted by the European Commission. This is an open access article under the terms of the Creative Commons Attribution-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited and no modifications or adaptations are made.

Regulation (EC) No 1831/2003 establishes the rules governing the Community authorisation of additives for use in animal nutrition. In particular, Article 4(1) of that Regulation lays down that any person seeking authorisation for a feed additive or for a new use of a feed additive shall submit an application in accordance with Article 7. The European Commission received a request from Chr. Hansen A/S for authorisation of the product EB15 (Bacillus subtilis DSM 25841), when used as a feed additive for pigs for fattening, sows for reproduction, sows in order to have benefit in piglets, piglets (suckling and weaned) and other minor porcine species (category: zootechnical additives; functional group: digestibility enhancers). According to Article 7(1) of Regulation (EC) No 1831/2003, the Commission forwarded the application to the European Food Safety Authority (EFSA) as an application under Article 4(1) (authorisation of a feed additive or new use of a feed additive).
EFSA received the technical dossier in support of this application directly from the applicant. The particulars and documents in support of the application were considered valid by EFSA as of 16 October 2018. According to Article 8 of Regulation (EC) No 1831/2003, EFSA, after verifying the particulars and documents submitted by the applicant, shall undertake an assessment in order to determine whether the feed additive complies with the conditions laid down in Article 5. EFSA shall deliver an opinion on the safety for the target animals, consumer, user and the environment and on the efficacy of the product EB15 (Bacillus subtilis DSM 25841), when used under the proposed conditions of use (see Section 3.1).

Additional information

The additive EB15 10 is a preparation containing viable spores of Bacillus subtilis DSM 25841 that has not been previously authorised as a feed additive in the European Union. The EFSA Panel on Additives and Products or Substances used in Animal Feed (FEEDAP) adopted an opinion on the safety and efficacy of EB15 10 (Bacillus subtilis DSM 25841) as a feed additive for weaned piglets and minor porcine species (EFSA FEEDAP Panel, 2018).

2. Data and methodologies

Data

The present assessment is based on data submitted by the applicant in the form of a technical dossier in support of the authorisation request for the use of EB15 10 as a feed additive. The European Union Reference Laboratory (EURL) considered that the conclusions and recommendations reached in the previous assessment are valid and applicable for the current application.

Methodologies

The approach followed by the FEEDAP Panel to assess the safety and the efficacy of EB15 10 (Bacillus subtilis DSM 25841) is in line with the principles laid down in Regulation (EC) No 429/2008 and the relevant guidance documents: Guidance on the assessment of the efficacy of feed additives (EFSA FEEDAP Panel, 2018).

Assessment

EB15 10 is a preparation of viable spores of a strain of B. subtilis intended to be used as a zootechnical additive (functional group: digestibility enhancers) for all pigs and minor porcine species.

Characterisation

The additive EB15 10 is a preparation of a non-genetically modified strain deposited in the Deutsche Sammlung von Mikroorganismen und Zellkulturen with the accession number DSM 25841. In a previous assessment, the FEEDAP Panel (EFSA FEEDAP Panel, 2018) characterised the strain and the additive, and no new information has been provided. The strain was taxonomically identified as B. subtilis by molecular techniques and was shown to be susceptible to the relevant antibiotics and not toxigenic. The additive under assessment, EB15 10, has the same composition (spore concentrate, calcium carbonate (96%) and an anticaking agent (kieselguhr, 1%)) and method of manufacture as those considered in the previous application. It ensures a minimum guaranteed concentration of 1.25 × 10¹⁰ colony forming units (CFU) per gram of additive. The applicant mentioned in the dossier a second formulation, called EB15, with a minimum concentration of 1.25 × 10⁹ CFU/g additive. The data pertaining to composition, physical properties and stability submitted in the previous application dossier still apply. The additive is intended to be used in feed and water for drinking for piglets (suckling and weaned), pigs for fattening, sows (for reproduction or to have a benefit in piglets) and other minor porcine species.
The additive is to be used at 5 × 10⁸ CFU/kg complete feed or 1.7 × 10⁸ CFU/L of drinking water in all cases.

Safety

The bacterial species B. subtilis is considered by EFSA to be suitable for the qualified presumption of safety (QPS) approach to establishing safety for the target species, consumers and the environment (EFSA, 2007; EFSA BIOHAZ Panel, 2017). This approach requires the identity of the strain to be conclusively established, together with evidence that the strain is not toxigenic and does not show resistance to antibiotics of human and veterinary importance. In a previous opinion (EFSA FEEDAP Panel, 2018), the identification of the strain and compliance with the QPS qualifications were confirmed. Therefore, the Panel concluded that Bacillus subtilis DSM 25841 can be presumed safe for target animals, consumers of products derived from animals fed the additive and the environment. The substances used in the formulation of the additive would not modify these conclusions. No new information has been made available that would lead the FEEDAP Panel to reconsider the conclusion previously drawn. Moreover, the use of the additive in the new target species/categories would not introduce hazards/risks not already considered. In the previous opinion (EFSA FEEDAP Panel, 2018), the Panel concluded that the additive should be considered a potential respiratory sensitiser. In the absence of data, the FEEDAP Panel could not conclude on the irritancy potential of the additive to skin and eyes or on its dermal sensitisation. No new information supporting the safety of the additive for the user has been submitted in the current application.

Efficacy for weaned piglets

The data submitted in the present application were already evaluated in a previous assessment (EFSA FEEDAP Panel, 2019). Based on the results of a statistical analysis pooling the data of four trials, the Panel concluded that the additive has the potential to be efficacious as a zootechnical additive in weaned piglets at 5 × 10⁸ CFU/kg complete feed (corresponding to 1.7 × 10⁸ CFU/L of drinking water). The application was made for Suidae in all productive stages, and studies in weaned piglets, pigs for fattening and sows (for two cycles) were submitted. The two forms of the additive are considered to be equivalent when added at the same level to water/feed. The studies provided in weaned piglets had been assessed previously by the FEEDAP Panel, which concluded that the additive has the potential to be efficacious as a zootechnical additive in weaned piglets at the recommended level of 5 × 10⁸ CFU/kg complete feed (corresponding to 1.7 × 10⁸ CFU/L water). This conclusion can be extended to the use of the additive in feed for suckling piglets. The studies submitted in sows showed that the additive has the potential to be efficacious in improving the reproductive performance of sows at 5 × 10⁸ CFU/kg complete feed (corresponding to 1.7 × 10⁸ CFU/L water). However, the results would not support the efficacy of the additive when administered to sows in order to have benefits in piglets. The results of the studies submitted in the current dossier are not sufficient to conclude on the efficacy of the additive in pigs for fattening. However, the Panel considers that, since efficacy has been established in weaned piglets and sows, the efficacy in pigs for fattening can be assumed without the need for further data.
Therefore, the Panel considers that the additive has the potential to be efficacious as a zootechnical additive in pigs for fattening at 5 × 10⁸ CFU/kg complete feed (corresponding to 1.7 × 10⁸ CFU/L water). Considering that the additive can be assumed to have similar effects in all Suidae species, the above conclusions are extrapolated to include all Suidae in all life stages (excluding a benefit from sows to suckling piglets).

3.4. Post-market monitoring

The FEEDAP Panel considers that there is no need for specific requirements for a post-market monitoring plan other than those established in the Feed Hygiene Regulation and Good Manufacturing Practice.

Conclusions

The additive is safe for target animals, consumers of products derived from animals fed the additive and the environment. The additive should be considered a potential respiratory sensitiser. The FEEDAP Panel cannot conclude on the irritancy potential of the additive to skin and eyes or on its dermal sensitisation. The additive, in either form, is efficacious for all Suidae in all productive stages (excluding a benefit from sows to suckling piglets) at 5 × 10⁸ CFU/kg complete feed (corresponding to 1.7 × 10⁸ CFU/L water).
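The feed and water doses quoted throughout correspond under a water-to-feed intake ratio of about 3 L of water per kg of feed. That ratio is our inference from the stated figures, not a value given explicitly in this opinion, but the arithmetic checks out:

```latex
c_{\text{water}}
  = \frac{c_{\text{feed}}}{\text{water-to-feed intake ratio}}
  \approx \frac{5 \times 10^{8}\ \text{CFU/kg feed}}{3\ \text{L water/kg feed}}
  \approx 1.7 \times 10^{8}\ \text{CFU/L}
```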
A structural optimization algorithm with stochastic forces and stresses
A structural optimization algorithm with stochastic forces and stresses

We propose an algorithm for optimizations in which the gradients contain stochastic noise. This arises, for example, in structural optimizations when computations of forces and stresses rely on methods involving Monte Carlo sampling, such as quantum Monte Carlo or neural network states, or are performed on quantum devices which have intrinsic noise. Our proposed algorithm is based on the combination of two key ingredients: an update rule derived from the steepest descent method, and a staged scheduling of the targeted statistical error and step-size, with position averaging. We compare it with commonly applied algorithms, including some of the latest machine learning optimization methods, and show that the algorithm consistently performs efficiently and robustly under realistic conditions. Applying this algorithm, we achieve full degree-of-freedom optimizations in solids using ab initio many-body computations, by auxiliary-field quantum Monte Carlo with plane waves and pseudopotentials. A new metastable structure in Si was discovered in a simulation relaxing both the geometry and the lattice. In addition to structural optimization in materials, our algorithm can potentially be useful in other problems in various fields where optimization with noisy gradients is needed.

Geometry optimization is the procedure of locating the structure at an energy or free-energy minimum of a solid or molecular system, given its atomic composition. Such a local or global minimum state is usually a naturally occurring structure under common or extreme conditions. As an essential ingredient in materials discovery and design, structural search and geometry optimization have important applications from quantum materials to catalysis to protein folding to drug design, covering wide-ranging areas including condensed matter physics, materials science, chemistry, and biology. The problems involved are fundamental, connecting applied mathematics, algorithms, and computing with quantum chemistry and physics. With rapid advances in computational methods and computing platforms, they have become a growing component of the scientific research repertoire, complementing and in some cases supplanting experimental efforts. The vast majority of geometry optimization efforts to date have been performed with effective ion-ion potentials (force fields) [1,2], or ab initio molecular dynamics based on density-functional theory (DFT) [3-6]. Force fields are obtained empirically from experimental data, derived from DFT calculations at fixed structures, or learned from combinations of theoretical and experimental data. Geometry optimization using force fields is computationally low-cost and convenient, and allows a variety of realistic calculations to be performed. The development of ab initio molecular dynamics [7] signaled a fundamental step forward in accuracy and predictive power: the interatomic forces are obtained more accurately from DFT on the fly, allowing the structural optimization to better capture the underlying quantum mechanical nature. With either force fields or ab initio DFT, the total energy and forces can be obtained deterministically, without any statistical noise, and a well-tested set of optimization procedures has been developed and applied.
In many quantum materials, however, Kohn-Sham DFT is still not sufficiently accurate, because of its underlying independent-electron framework, and a more advanced treatment of electronic correlations is needed to provide reliable structural predictions. Examples of such materials include so-called strongly correlated systems, which encompass a broad range of materials with great fundamental and technological importance. One of the frontiers in quantum science is to develop computational methods which go beyond DFT-based methods in accuracy, with reasonable computational cost. Progress has been made on several fronts, for example, with the combination of DFT and the GW approach [8], approaches based on dynamical mean field theory (DMFT) [9], quantum Monte Carlo (QMC) methods [10-13], and quantum chemistry methods [14,15]. For instance, the computation of forces and stresses with plane-wave auxiliary-field quantum Monte Carlo (PW-AFQMC) [11,16] has recently been demonstrated [17], paving the way for ab initio geometry optimization in this many-body framework. One crucial new aspect of geometry optimization with most of the post-DFT methods is that the information on the potential energy surface (PES) obtained from such approaches contains statistical uncertainties. The post-DFT methods, because of the exponential scaling of the Hilbert space in a many-body treatment, often involve stochastic sampling. This includes the various classes of QMC methods, but other approaches such as DMFT may also contain ingredients which rely on Monte Carlo sampling. Neural network wave function approaches [18,19] also typically involve stochastic ingredients. Additionally, if the many-body computation is performed on a quantum device [20,21], noise may also be present. Geometry optimization under these situations, namely with noisy PES information, presents new challenges, and also new opportunities. As we illustrate below, the presence of statistical noise in the computed gradients can fundamentally change the behavior of the optimization algorithm. On the other hand, the fact that the size of the statistical error bar can be controlled by the amount of Monte Carlo sampling affords opportunities to tune and adapt the algorithm to minimize the integrated computational cost of the optimization process. To date, work on structural optimization with noisy PES has not been widespread. One class [22-24] focuses on applications using variational Monte Carlo (VMC), which allows computations of forces and Hessians besides the total energy, mostly applying standard optimization algorithms. Another class focuses on using total energies for exploring the PES [25,26], since computing forces and other gradients remains challenging in QMC, especially with projection methods beyond VMC. General and more systematic applications of structural optimization in correlated materials will in all likelihood require going beyond VMC, and effectively exploiting accurate forces and other gradients to efficiently scale up to high dimensions. The present work investigates optimization algorithms with this as the background. In principle, a number of algorithms widely used in the machine-learning community can be adapted to the geometry optimization problem. However, we find that, in a variety of realistic situations under general conditions, the performance of these algorithms is often sub-optimal.
Given that the many-body computational methods tend to have higher computational costs, it is essential to minimize the number of times that the force or stress needs to be evaluated, and the amount of sampling in each evaluation, before the optimized structure is reached. In this paper, we propose an algorithm for optimization when the computed gradients have intrinsic statistical noise. The algorithm is found to consistently yield efficient and robust performance in geometry optimization using stochastic forces and stresses, often outperforming the best existing methods. We apply the method to realize a full geometry optimization using forces and stresses computed from PW-AFQMC. In analyzing and testing the method, we unexpectedly discovered a new orthorhombic Cmca structure in solid silicon. The rest of the paper is organized as follows. In Sec. II we give an overview of our algorithm and outline the two key components. This is followed by an analysis in Sec. III, with comparisons to common geometry optimization algorithms, including leading machine learning algorithms. In Sec. IV we apply our method to perform, for the first time, a full geometry optimization in solids using PW-AFQMC. We then describe the discovery of the new structure in Si in Sec. V, before concluding in Sec. VI.

II. ALGORITHM OVERVIEW

A noisy gradient, such as an interatomic force evaluated from a QMC calculation, can be written as F̃ = F + ε, where F is the true force and F̃ is the (expectation) value computed by the numerical method with stochastic components. The vector ε denotes stochastic noise, for example the statistical error bar estimated from the QMC computation. In the case of a sufficiently large number of Monte Carlo samples (realized in most cases but not always), the central limit theorem dictates that the noise is Gaussian, ε_i ~ N(0, s_i²), where i denotes a component of the gradient (e.g. a combination of the atom number and the Cartesian direction in the case of interatomic forces), and s_i ∝ N_s^(-1/2) is the standard deviation, which can be reduced as the square root of the number of effective samples N_s. The computational cost is typically proportional to N_s. Our algorithm consists of two key components. Inside each step of the optimization, we follow an update rule using the current F̃, which is a fixed-step-size modification of the gradient descent with momentum method [27,28], which we will refer to as "fixed-step steepest descent" (FSSD). Globally, the optimization process is divided into stages, each with a target statistical error s for ε (hence controlling the computational cost per gradient evaluation) and a specific choice of step size, called a staged error-targeting (SET) workflow. The SET is complemented by a self-averaging procedure within each stage which further accelerates convergence. We outline the two ingredients separately below, and provide analysis and discussions in the following sections.

A. The FSSD update rule

The SET approach discussed in Sec. II B defines the overall algorithm. Each step inside each stage of SET is taken with the FSSD algorithm, which works as follows. Let n denote the current step number, and x_n denote the atom positions at the end of this step. Here, x_n is an N_d-dimensional vector, with N_d being the number of degrees of freedom in the optimization. (1) Calculate the force at the atomic configuration from the previous step: F_{n-1} = -∇E(x_{n-1}).
(In the case of quantum many-body computations, the loss function is the ground-state energy E, and the force is typically computed as the estimator of an observable directly, for example via the Hellmann-Feynman theorem [17].) (2) The search direction is then chosen as d_n = F_{n-1} + α d_{n-1}, where d_{n-1} is the displacement direction of step (n - 1), which encodes the forces from past steps and thus serves as a "historic force." We experiment with the choice of the parameter α (see Appendix D), but typically set it to α = 1/e. (3) The displacement vector is now set to the chosen direction from (2), with step size L, which is fixed throughout the stage: Δx_n = L d_n/|d_n|. (4) Obtain the new atom position vector, x_n = x_{n-1} + Δx_n. Account for symmetries and constraints such as periodic boundary conditions or restricted degrees of freedom as needed.

B. SET scheduling approach

The staged error-targeting workflow (SET) can be described as follows: (1a) Initialize the stage. At the beginning of each stage of SET, the step count n is set to 1, and an initial position x_0 is given, which is either the input at the beginning of the optimization or inherited from the previous stage [see (5) below]. We also set d_0 = 0 in (2) of Sec. II A (thus the first step within each stage is a standard steepest descent). (1b) Use a fixed step size L, and target a fixed average statistical error bar s for the force computation throughout this stage. The values of L and s are either input (first stage at the beginning of the optimization) or set at the end of the previous stage [see (5) below]. From s we obtain an estimate of the computational resources needed, C(s) ∝ s^(-2) for each force evaluation, which helps to set the run parameters during this stage (e.g. population size and projection time in AFQMC). We have used the average s = Σ_{ia} s_{ia}/N_d over all atoms a and Cartesian directions i for s, but clearly other choices are possible. To initialize the optimization, we have typically used step sizes of a few tenths of a Bohr (see Appendix D). For s, we have typically used an initial value of ~20% of the average of each component of the initial force. These choices are ad hoc and can be replaced by other input values, for example, from an estimate by a less computationally costly approach such as DFT. (2) Do a step of FSSD with the current step size L and the rationed computational resources C(s). This consists of the steps described in Sec. II A. (3) Perform convergence analysis if a threshold number of steps has been reached. Our detailed convergence analysis algorithm is discussed in Appendix E. (4) If convergence is not reached in (3), loop back to (2) for the next step within this stage; otherwise, the analysis will reveal a previous step count m (m < n) where the convergence was reached. Take the average of {x_m, x_{m+1}, . . . , x_n} (see Appendix E) to obtain the final position of this stage, x̄. (5) If the overall objective of the optimization is reached, stop; otherwise, set x_0 = x̄, modify L and s, and return to (1). For the latter, we typically lower s and L by the same ratio.

III. ALGORITHM ANALYSIS

In this section we analyze our algorithm, provide additional implementation details, describe our test setups, and discuss additional algorithmic issues and further improvements. From the update-rule perspective, we make a comparison in Sec. III A between the FSSD and common line-search [29-34] based algorithms (steepest descent [35] and conjugate gradient [36-40]), as well as several optimization algorithms widely used in machine learning (RMSProp [41], Adadelta [42], and Adam [43]). Then in Sec.
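A minimal sketch of an FSSD stage under the SET workflow is given below. The momentum form d_n = F_{n-1} + α d_{n-1} follows the reconstruction above, and the quadratic toy PES with Gaussian noise stands in for a real many-body force engine; both are assumptions for illustration, not the authors' code:

```python
import numpy as np

def fssd_stage(x0, compute_force, L, alpha=np.exp(-1.0), max_steps=100):
    """One SET stage of fixed-step steepest descent (FSSD).

    x0            : starting positions, flat array over the N_d degrees of freedom
    compute_force : callback returning a noisy estimate of F = -dE/dx at x,
                    with statistical error targeted at this stage's s
    L             : step size, held fixed throughout the stage
    alpha         : mixing weight of the "historic force" (the paper uses 1/e)
    """
    x = np.asarray(x0, dtype=float)
    d = np.zeros_like(x)            # d_0 = 0: first step is plain steepest descent
    history = [x.copy()]
    for _ in range(max_steps):
        F = compute_force(x)        # noisy gradient, each component ~ N(F_i, s^2)
        d = F + alpha * d           # assumed momentum form of the search direction
        x = x + L * d / np.linalg.norm(d)   # fixed-length ("by-norm") displacement
        history.append(x.copy())
    return np.array(history)

# Toy emulator in the spirit of the paper's DFT-models: a deterministic
# quadratic PES (minimum at x = 0) plus isotropic Gaussian noise of size s.
rng = np.random.default_rng(0)
def noisy_force(x, s):
    return -x + rng.normal(0.0, s, size=x.shape)

# Two SET stages: reduce s and L by the same ratio, and start the second
# stage from the position average over the converged tail of the first.
traj1 = fssd_stage(np.array([1.0, -2.0, 0.5]), lambda x: noisy_force(x, 0.02), L=0.2)
x_bar = traj1[-20:].mean(axis=0)    # position averaging
traj2 = fssd_stage(x_bar, lambda x: noisy_force(x, 0.002), L=0.02)
```

The key contrast with line-search methods is visible here: no intermediate force evaluations are spent locating a one-dimensional minimum, so each (expensive, noisy) force call advances the configuration by a full step.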
III B we analyze SET, illustrate how position averaging and staged scheduling improve the performance of the optimization procedure, and discuss some potential improvements. To facilitate the study in this part, we create DFT-models to simulate actual many-body computations with noise. We consider a number of real solids and realistic geometry optimizations, but use forces and stresses computed from DFT, which is substantially less computationally costly than many-body methods. Synthetic noise is introduced on the forces, defining ε according to the targeted statistical errors of the many-body computation, and sampling F̃ = {F̃_i} from N(F_i, s²), where {F_i} are the corresponding forces or stresses computed from DFT. As indicated above, we have chosen the noise to be isotropic in all directions based on our observations from AFQMC, but this can be generalized as needed. The DFT-model replaces the many-body computation, and is called to produce {F̃_i} as the input to the optimization algorithm. This provides a controlled, flexible, and convenient emulator for systematic studies of the performance of the optimization algorithm.

A. FSSD vs. line-search and ML algorithms

In the presence of noise in the gradients, standard line-search algorithms such as steepest descent and conjugate gradient can suffer efficiency loss or even fail to find the correct local minimum. (See Appendix A for an illustration.) Many machine learning (ML) methods, which avoid line-search and incorporate advanced optimization algorithms for low-quality gradients, are an obvious choice as an alternative in such situations. Our expectation was that these would be the best option to serve as the engine in our optimization. However, to our surprise we found that FSSD was consistently competitive with or even outperformed the ML algorithms in geometry optimizations in solids. Below we describe two sets of tests in which we characterize the performance of FSSD in comparison with other methods. For line-search methods, we use the standard steepest descent, and conjugate gradient with the Polak-Ribière formula [39], which showed the best performance among several conjugate gradient variants in our experiments. For the ML algorithms we choose three: RMSProp, Adadelta, and Adam, which are well-known and generally found to be among the best performing methods for a variety of problems. For each algorithm, we have experimented with the choice of step size or learning rate in order to choose an optimal setting for the comparison. (Details on the parameter choices can be found in Appendix D.) Figure 1 shows a convergence analysis of FSSD and other algorithms in solid Si (in which the targeted minimum is the so-called β-tin structure, reached under a pressure-induced phase transition from the diamond structure, as illustrated in the top panel; see details in Appendix B). Three random runs are shown for each method. As seen in panel (b), the performance of line-search methods, in which one line-search iteration can take several steps, is lowered by the statistical noise. The convergence of FSSD is not only much faster but also more robust than the two line-search methods. The ML algorithms are shown in panel (c). RMSProp shows slightly worse convergence speed and quality than FSSD. These methods have conceptual similarities: both involve averaging over gradient history, and both become a fixed-step approach when this averaging is turned off. Adadelta has excellent convergence quality, but slower convergence.
Adam performs significantly worse than the other algorithms here. We next compare FSSD and the three ML algorithms in a two-dimensional solid, the MoS₂ monolayer, which has an interesting energy landscape: the global minimum (2H) and a nearby local minimum (1T) are separated by a ridge, as depicted in Fig. 2 (system details in Appendix B). We observe that the original ML algorithms all lead to the local minimum structure, while FSSD finds the global minimum. We then modified the ML algorithms and introduced a "by-norm" variant (details in Appendix D). As shown in Fig. 2, this resulted in different behaviors from the original "element-wise" algorithms, crossing over the ridge and finding the global minimum instead. These "by-norm" algorithms, similar to FSSD, follow paths that are almost perpendicular to the contour lines, which lead to the global minimum in this setup. It is worth emphasizing that the observation here should not be taken as a general conclusion over any energy landscape. The proximity of the initial structure to the convergence boundary is a key factor, but the markedly different behaviors of the different variants are still interesting to note. The convergence speed of each method in MoS₂ can be seen on the contour plot, where each arrow represents a single optimization step; a more direct comparison is shown in panel (c). FSSD remains the fastest method, again closely followed by RMSProp and Adadelta. These tests also confirm the characteristics of the ML algorithms seen in the Si test: RMSProp is similar to FSSD, and shows relatively fast convergence on the shortest route; Adadelta optimizes efficiently on steep surfaces but reduces the step size more drastically when entering a "flatter" landscape, which slows down its final convergence; due to its inclusion of the first moment, Adam produces a path that resembles damped dynamics, delaying its convergence.

B. Performance and analysis of SET

When FSSD is applied under the SET approach, a qualitative leap in capability and efficiency is achieved. In Fig. 3, we illustrate their integration and demonstrate the efficiency gained by their synergy, using the example of optimization in MoS₂. In panel (a), a simple two-stage scheduling is applied in SET. The convergence process is shown for five optimization runs. In each stage, the end of each run is indicated by filled symbols. The automatic script also identifies, after the fact, an initial position of convergence, as described in (4) in Sec. II B; the average of this position in each stage is indicated by the empty diamond. A clear lag is seen between the two, leaving a considerable number of steps for position averaging in each run. Position averaging ensures that these steps are not wasted but effectively utilized. This is reflected by the drastically better initial positions in Stage II than the corresponding end positions in Stage I, as seen in the lowering of the error in the energy. One of the runs (green curve) is discontinued after Stage I, because it is trapped in a local minimum, as identified by the clustering of the converged positions from all the runs. In Stage II, the step size and error target are both reduced by a factor of 10. Panel (b) in Fig. 3 shows the convergence plot without SET. The step size and error target are fixed at the values used in Stage II above, so that the same convergence quality is achieved as in (a). We see that all five runs converge in this setting.
From panel (c), which compares the computational costs between (a) and (b), we see that the two-stage SET procedure resulted in a 90% saving, or a ten-fold gain in efficiency in the optimization. There are two key ingredients in the SET approach: position averaging at the end of each stage, and discrete, staged scheduling instead of adapting the error bar and step size continuously with time. In FSSD, a larger step size will generally lead to faster convergence; however, it will result in worse final convergence quality, because the atomic positions will fluctuate with larger magnitudes around the minimum. Position (or parameter) averaging helps to dramatically improve the convergence quality of FSSD. The idea of averaging parameters over an optimization trajectory has a long history [44-46] and has been applied in previous structural optimizations in QMC (see, e.g., Refs. [22,24,25]). Our algorithm defines a precise and efficient scheme to apply position averaging retroactively after convergence has been detected. It allows a wide range of choices for the step size, with almost no effect on the convergence quality, as illustrated in Appendix C. The convergence quality within this range is dictated by the target error bar size s. This makes it natural to introduce the concept of a separate stage, in which we target a smaller error bar (with increased computational cost), and reduce the step size at the same time to account for the reduced system scale. Compared to a smooth scheduling procedure, we find this staged scheduling to be efficient, more robust, and resilient to saddle points. We mention some possible improvements to the SET algorithm over our present implementation. We have chosen to reduce s and the step size L by the same scale when entering a new stage. Around the minimum, the optimal step size L is essentially proportional to the distance D to the minimum, suggesting a choice of 0.1D ~ 0.2D for L. The target error bar s on the force should also be reduced with D, but as illustrated in Appendix C, D decreases more slowly than s. This indicates that it would be more optimal to reduce s faster than L. A related point is how much to reduce s in each stage of the scheduling. If the choice is too aggressive, a large reduction in L would be required to reach convergence, which in turn would require a large number of steps, hence large computational cost. If a very small reduction of s is used, a large number of stages will be needed, which is less optimal since there is a threshold number of steps to identify convergence in each stage. Our empirical choice of ~×10 is based on the balance of these two extremes. It is worth emphasizing that SET can be employed in combination with other algorithms. For example, we find that position averaging can improve the convergence quality of (by-norm) RMSProp by a similar extent to what is seen with FSSD. The RMSProp × SET approach, although slightly slower than FSSD × SET in the examples we studied, would provide more freedom in the choice of the step size, as RMSProp allows for small auto-adaptations. Finally, we comment on the computational cost and scaling of the overall FSSD × SET algorithm. Under optimal step size and error bar sequence choices, the number of steps taken within each stage is roughly the same. The last stage dominates the computational cost associated with the force or gradient computation (Fig. 3(c)), and the computational cost per step is proportional to the inverse square of the error bar.
The overall computational cost is then proportional to the inverse square of the target precision.

IV. A REALISTIC APPLICATION IN AFQMC

We next apply our algorithm to perform a fully ab initio quantum many-body geometry optimization in Si. Recent progress has made possible the direct computation of atomic forces and stresses by plane-wave auxiliary-field quantum Monte Carlo (PW-AFQMC) [17]. Employing this framework, we study the pressure-induced structural phase transition from the insulating diamond phase to the semi-metallic β-tin phase. The detailed setup of this system is given in Appendix B. Figure 4 shows the energy difference and Euclidean distance relative to the target β-tin structure at each step of the geometry optimization process. The run is divided into two stages. In stage I, our convergence analysis identified convergence at step 26. (See Sec. III B.) Atom positions are accumulated and averaged starting from this step, yielding a lower and more stable Euclidean distance curve. This averaged position is taken to be the starting point x_0 for the second stage. In the second stage, the statistical error and the step size are reduced to 2/7 of the first stage. The optimization quickly converges and approaches the correct β-tin structure. The total energy of the final structure is consistent with the ground-state energy computed by AFQMC at the ideal β-tin structure, and the final structure is in agreement with the ideal structure within our targeted precision (Euclidean distance of ~0.1 Bohr).

V. A NEW STRUCTURE IN SI

A (meta)stable orthorhombic structure in Si was discovered accidentally in our study. In this section we present this structure, which to our knowledge was not previously known. The new structure emerged in tests of our algorithm for full geometry optimization in solids, allowing both the atomic positions and the lattice structure to relax. To apply our algorithm to a full geometry-lattice optimization, we combine the atomic position vectors and the lattice strain into a single coordinate X, and the interatomic forces and stress tensor into a single gradient F = (F; Ω {σ_11, σ_22, σ_33, σ_12, σ_13, σ_23}), such that F = -∂E(X)/∂X as before. The cell volume Ω appears above, as it is included in the definition of the stress tensor: σ_ij = -(1/Ω)(∂E/∂ε_ij) [47]. Care must be taken with metrics; e.g., the step size L in the algorithm should be defined as L = (|Δx|² + |Δε|²/ν²)^(1/2), where ν has the dimension of inverse length. An additional role of ν is to tune the optimization procedure, as it controls the relative step size for optimizing the atomic positions versus the overall lattice structure. Different choices thus can result in different optimization trajectories. As we describe in detail in Appendix F, there is considerable sensitivity of the optimized structure (local minimum) with respect to the choice of ν, as well as an interplay with the particular stochastic realization of the optimization trajectory. In general this would seem to be an additional disadvantage of optimization in the presence of stochastic gradients. However, it provides a natural realization of statistical sampling of the landscape which can broaden the search in the optimization. It is this feature that led to the surprise discovery of the new structure shown in Fig. 5. The structure identified has an energy +0.312 eV/atom higher than that of the ground state in the diamond structure, as determined by DFT PBE calculations. We have verified that it is a meta-stable state.
We sampled 5,000 different perturbations around the structure with random displacements in both atomic positions and lattice distortions, confirming that all resulted in higher energy. This was followed by the computation of a Hessian matrix, by fitting the total energy with a second-order Taylor expansion, which was found to be positive definite with respect to all geometrical degrees of freedom.

VI. CONCLUSION

We have proposed a new structural optimization algorithm to work with stochastic forces and gradients. The presence of statistical error bars in the gradients is a common characteristic of many quantum many-body computations. We find that existing optimization algorithms all experience significant difficulties in such situations. This is a fundamental problem whose importance is magnified both by the growing demand for higher predictive power and by the generally high cost of ab initio many-body calculations. Our algorithm addresses this problem by the combination of a fixed-step steepest descent and a staged error scheduling with position averaging. The algorithm is simple and straightforward to implement. It outperforms standard optimization methods used in structural optimization, as well as several machine-learning methods, in our extensive analysis of realistic geometry optimizations in solids. The algorithm is then applied in an actual ab initio many-body computation, using plane-wave auxiliary-field quantum Monte Carlo to realize a full structural optimization. This marks a milestone in the optimization of a quantum solid using systematically accurate many-body forces beyond DFT. The optimization algorithm can be applied to atomic position and lattice structure optimizations, as well as a full geometry optimization combining both. We demonstrated the combined approach for a full geometry optimization, which resulted in the discovery of a new structure in Si. Furthermore, we illustrated that the presence of statistical noise sometimes creates new opportunities in the optimization. This can take the form of tuning the target statistical error to minimize the computational cost, or exploiting the noise to alter the optimization paths and expand the scope of the search, in the spirit of simulated annealing. In addition to geometry optimization, the algorithm can potentially be applied to other problems in which the gradients contain stochastic noise. The two components of the algorithm can be applied independently or combined with other methods. Insights from them can also stimulate further developments. With the intense effort in many-body method development to improve the predictive power in materials discovery, more efficient optimization methods which handle and take advantage of the stochastic nature of the gradients will undoubtedly find ever-increasing applications.

Supplemental Materials

Appendix A: The Effect of Noise in Line-search Methods

Two of the most common geometry optimization methods are steepest descent [35] and conjugate gradient [36-39], both of which use line-search [29-34] as a building block. Such methods are fragile in the presence of noisy forces. Here we use steepest descent as an example. The search direction is given by d_n = -∇E(x_{n-1}) = F_{n-1}. The next step position is chosen along this direction, at or near the energy minimum. This position is found either by manually choosing a few points to fit a curve, or by automatically selecting a few points until a criterion is reached, following a line-search algorithm.
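Before turning to the concrete example in Fig. 6, a minimal sketch of this angle-criterion line search makes the failure mode concrete; the simple bracketing update is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

def angle_deg(d, F):
    """Angle between the search direction d and the force F, in degrees."""
    c = d @ F / (np.linalg.norm(d) * np.linalg.norm(F))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def line_search_multiplier(x, d, compute_force, tol_deg=5.0, t=0.1, dt=0.1, max_iter=50):
    """Walk along d until the (noisy) force is nearly perpendicular to d.

    With noisy forces, |angle - 90| < tol_deg can be satisfied spuriously at a
    small multiplier t, stopping far short of the true 1D minimum (cf. Fig. 6).
    """
    for _ in range(max_iter):
        F = compute_force(x + t * d)
        if abs(angle_deg(d, F) - 90.0) < tol_deg:
            return t                       # possibly a false positive under noise
        t += dt if d @ F > 0 else -dt      # move forward while force points along d
    return t
```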
An example is given in Fig. 6, using the Newtonian line-search algorithm. The minimum is the point where ∠(d_n, F_n) ≈ 90°, i.e., where the force is perpendicular to the search direction d_n. This line-search method works well for forces without noise, but can run into difficulty with noisy forces. Noisy forces can cause multiple candidates for ∠(d_n, F_n) ≈ 90° to appear. Since line-search algorithms usually set a tolerance to avoid excessive searching, this can result in the algorithm stopping at an undesired multiplier when the threshold is reached. In the example of Fig. 6(b), we set a tolerance of 5°, and the run stops at a multiplier of ~0.2, with an angle of 86.4°. This is far from the real minimum, which is at a multiplier of ~2.0. Although the energy fluctuations are not directly involved in the runs, the statistical noise in the forces is directly inherited from the energies, which are shown in Fig. 6(a). The same difficulty is manifested from either the perspective of the forces or that of the total energy.

Appendix B: Setup and Details of the Geometry Optimization Examples

Two realistic optimization problems from quantum solids are used as test cases in this paper. The first is a phase transition in silicon under pressure [49]. The second involves phases in a two-dimensional material, monolayer molybdenum disulfide (MoS₂). Silicon phase transition. We explore the phase transition between the diamond (Si-I) and beta-tin (Si-II) structures. The parameters of the diamond structure are taken from experiment: the primitive cell is face-centered cubic (FCC) with a lattice constant of 10.263 Bohr [50]. The beta-tin structure is only stable under high pressure, with an experimental lattice constant a of 8.82 Bohr and c/a of 0.550 at 11.7 GPa [51]. At zero pressure, DFT (all-electron LAPW) predicts an equilibrium beta-tin structure with a lattice constant of about 8.988 Bohr and c/a of 0.552 [49]. In the force-only optimization, we consider an "anisotropically compressed diamond (ACD)" structure: the diamond structure is compressed from the experimental cubic cell to the beta-tin cell, with the x and y directions compressed to 8.988 Bohr and the z direction to 9.922 Bohr (c/a = 1.104). This is a meta-stable structure (local minimum), mimicking the diamond structure within the choice of supercell size and shape. The optimization starts from an equal mixture of ACD and beta-tin. This "50:50 mix" is the middle point on the closest route that moves all atoms in ACD to their corresponding beta-tin positions, considering all possible atom swaps, crystal symmetries, and translation symmetry. Fig. 7(c) shows a plot of the total energy along this route. The middle point (50:50 mix) is close to the energy barrier but on the beta-tin side. For our full geometry optimization, we start from ACD as well, but with the experimental lattice constants at the phase transition (a = 8.82 Bohr, c/a = 1.100). MoS₂ monolayer. This is a two-dimensional system with a finite layer thickness in the third direction. Simulations of such a system are done with a large z-axis lattice constant (36.12 Bohr in our case). There are two stable sulfur atom alignments [52]: one is the 2H global minimum, where the two S atoms are stacked together in top view, and the other is the 1T local minimum, where the Mo and each of the two S atoms sit at the three possible hexagonal sites (see Fig. 8). The thickness of the layer, the S-S atom distance in z, is tunable and not constrained by any symmetry requirements.
Thus the two phases are characterized by two controllable parameters: x, which gives the S-S alignment mismatch, and d, which gives the layer thickness. In our definition, x = 0 gives 2H, and x = -1/3 gives 1T. We work with the 3-atom primitive cell, starting from a system with layer thickness d compressed to 1.8 Angstrom and one of the S atoms moved to a 50:50 mix of the 2H and 1T structures (x = -1/6). Together with x and d, there are seven (7) additional free parameters which fully specify the geometry of each phase and which are optimized simultaneously with x and d.

Appendix C: Additional Discussion on SET

We study the relation between the step size, the target statistical error on the gradient, and the convergence speed. A larger step size means faster convergence, as long as the step size is not so large as to blur the difference between different minima. However, as shown in Fig. 9(a), large step sizes can result in worse final convergence quality, due to larger fluctuations in the atomic positions around the correct minimum. By introducing position averaging, we can mitigate the fluctuation so that there is almost no effect on the convergence quality within a wide range of step size choices (white-background region in the plot), and the final convergence quality only depends on the error bar size (Fig. 9(b)). At this point, if higher precision is still desired, we should increase the computational cost to target a smaller error bar, and reduce the step size at the same time to account for the reduced system scale. Perhaps a more natural and intuitive approach to the scheduling is to tune the error bar (and step size) continuously. However, the entanglement of the statistical noise and the effect of retardation in the search process makes this less straightforward. For example, if the optimization process moves through a flat region (saddle point) and then re-enters a fast convergence phase, a smooth control of the target error bar or step size could lead to a significant reduction in efficiency. The SET algorithm, instead of more sophisticated techniques (e.g. a P-controller [53]), devises a simple solution by dividing the runs into stages, in each of which the error bar size and step size are kept constant. Using stages does not remove the retardation effect mentioned above; however, by using an automatic convergence identification algorithm and requiring a relatively long convergence phase, we can now identify convergence with confidence, albeit at a later time. Combining this with FSSD and position averaging allows a sampling around the minimum in a Monte Carlo sense, and avoids "wasting" the extra steps after convergence, as illustrated in Fig. 3. We find that this approach makes for a simple, more robust, and more efficient algorithm that outperformed all our attempts at continuous scheduling.

Appendix D: Parameter Choices in the Optimization Methods

The optimization methods employed in this work all have some free parameters or variations. We have not attempted to perform the most detailed optimization of these parameters. The following describes our choices. FSSD uses a step size of 0.5 Bohr for Fig. 1, 0.3 Bohr for Fig. 2, 0.5 Bohr for Fig. 3 (stage I), and 0.7 Bohr for Fig. 4 (stage I). A mixing parameter of α = 1/e is used throughout the work. Our tests show that a value between 0.35 and 0.5 yields good convergence speed and final convergence accuracy (Fig. 10).
Steepest descent and conjugate gradient are based on a Newtonian line search that finds the point where ∠(d_n, F_n) = 90°, as in Appendix A. Conjugate gradient uses the Polak–Ribière formula and restarts every 5 steps. Note that for noisy PES, specially designed methods [40] can result in better performance for conjugate gradient. This was not pursued here, since the required automatic differentiation is not always available in the many-body computations with which the optimization algorithm is expected to couple.

Machine-learning based algorithms discussed in this work have an "element-wise" version and a "by-norm" version. The version in the original literature of RMSProp, AdaDelta, and Adam is "element-wise" [41-43]; for example, the RMSProp update is

x_{n+1} = x_n + η F_n / (√(⟨F²⟩_n) + ε),   ⟨F²⟩_n = ρ ⟨F²⟩_{n−1} + (1 − ρ) F_n²,

where x_n is the atom position at step n, F_n is the force computed at position x_n, η is a fixed learning rate, and ε is a small number to prevent singularity. ⟨F²⟩ is a "historical average" of all squared forces. This original "element-wise" algorithm treats each dimension separately: x_n, F_n, F_n², and ⟨F²⟩_n are all vectors, and the equations act component by component. A variant of this algorithm, which we call the "by-norm" algorithm, is given by replacing the equations above with

x_{n+1} = x_n + η F_n / (√(⟨|F|²⟩_n) + ε),   ⟨|F|²⟩_n = ρ ⟨|F|²⟩_{n−1} + (1 − ρ) |F_n|²,

where the force is now treated as a whole across all dimensions, as each dimension receives the same scalar prefactor for the force. By analogy, the FSSD algorithm we use should be classified as a "by-norm" algorithm. Our application of the AdaDelta algorithm has one small modification from its original form [42], which sets E[Δx²]₀ = 0 and appears to have poor efficiency in our optimizations. Replacing this value with a finite number gives the algorithm an initial boost and specifies the initial step size through η = E[Δx²]₀/(1 − ρ).

Figure 9. Analysis of the convergence quality vs. step size and statistical error (noise) size. In plot (a), the dashed lines show the convergence quality without position averaging, while the solid lines show the convergence quality after position averaging. Error bars show the standard error over 6 runs with random noise. The region with a white background is where position averaging removes the dependence of the convergence quality on the step size. The region with a red background marks where this dependence can no longer be fully removed, indicating that the step size is too large. The inset provides a zoomed-in view of the white region. Plot (b) shows the convergence quality vs. noise size after position averaging. For each noise size, the convergence quality shown in this plot is computed from the average of the position-averaged convergence qualities over all step sizes in the inset of plot (a). Error bars show the standard error over these 7 step sizes. The system is MoS2, and the convergence quality is defined by the Euclidean distance (see Appendix E) from the correct minimum.

Appendix E: Convergence Analysis

Our convergence analysis algorithm is illustrated in Fig. 11. The Euclidean distance metric is used to build a one-dimensional distance function over the steps in the optimization history, with x_ref being the "current best guess." At step N, x_ref is selected as the position average x̄ of the last N_ave steps of the convergence procedure.
We compute D_n = D(x_n, x̄) for all 0 ≤ n ≤ N − N_ave, and then search for a step number m between N_A and N − N_ave − N_B that divides the entire run into two phases (N_A and N_B are the "minimum phase lengths" of phases A and B), such that the ratio of the standard error of the distances in the first phase to that in the second phase is maximized:

R_m = stderr_{n=0}^{m}(D_n) / stderr_{n=m+1}^{N−N_ave}(D_n),

where stderr_{n=P}^{Q}(D_n) denotes the standard error of {D_P, D_{P+1}, ..., D_Q}. Convergence is reached if R_m > R_th, where R_th is a threshold. There are a few tunable parameters in this analysis algorithm. By default we choose N_A = N_B = 5, N_ave = 10, and R_th = 5. These parameters can be varied. Note that low N_A, N_B, N_ave, or R_th might lead to misidentification of saddle points as equilibria, while high values can result in longer runs.

Appendix F: Sensitivity in the Joint Optimization of Positions and Lattice Structure

The optimization result can depend on the stress weight ν. A good guess for ν is the ratio of the optimal strain step size in a stress-only lattice optimization to the optimal atom-position step size in a force-only geometry optimization, but a different ν can be chosen to emphasize one of the two aspects (atomic positions vs. lattice structure). Table I shows the result of an FSSD optimization with different choices of ν. We use DFT, starting from a 50:50 diamond/beta-tin structure at the diamond-to-beta-tin transitional lattice constant [51]. For smaller ν, the lattice structure is optimized more cautiously, and the optimization tends to reach the Imma structure [51]. In the presence of noise, the optimization does not reach the beta-tin structure (a = b and Δ = 1/4) unless the step size and statistical error are made very small. A larger choice of ν leads to the diamond structure, which is the global minimum at zero pressure. An even larger choice of ν (ν = 0.04) has a large chance of landing on the new orthorhombic structure. This example also shows that the presence of noise can sometimes help find a new structure by adding a small annealing effect.

Table I. Final structure of the geometry-and-lattice optimization, for different stress weights ν; one no-noise run and 6 noisy runs are shown per stress weight. "Diamond" is the Si-I structure; "β-tin" is the Si-II structure; Imma [51] is a transition structure between Si-II and Si-V (simple hexagonal); Cmca is a new orthorhombic structure.

(Figure caption: structures at different optimization steps, measured by the SOAP kernel [55], computed with the ASE [56,57] and DScribe [58] packages. The x axis shows the number of force computations, which is the same as the number of optimization steps for FSSD and the three ML algorithms, but larger than the number of optimization steps for the two line-search methods. The y axis shows the SOAP similarity kernel, which measures the distance between the structure at each step and the global minimum.)

In all three examples, FSSD with position averaging remains one of the fastest methods in efficiency, while also having the smallest fluctuations (best accuracy). The behaviors of the three ML methods (RMSProp, AdaDelta, Adam) are likewise consistent with the observations described in the main text. We included gradient descent with momentum (SGD+momentum) in the first two examples, NaCl and PbTiO3. Its behavior resembles that of Adam, characterized by large and slow fluctuations. This indicates that the fixed step size, which is the essential difference between SGD+momentum and FSSD, is critical for the improved performance of FSSD.
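To make the convergence test of Appendix E concrete, here is a minimal Python sketch under the stated defaults (N_A = N_B = 5, N_ave = 10, R_th = 5); the exact placement of the split point and the array conventions are our assumptions:

```python
import numpy as np

def converged(history, N_A=5, N_B=5, N_ave=10, R_th=5.0):
    """Two-phase convergence test: distances are measured to the average
    of the last N_ave positions; convergence is declared when some split
    point m makes the first-phase scatter at least R_th times larger
    than the second-phase scatter."""
    N = len(history)
    if N <= N_A + N_B + N_ave:
        return False
    x_ref = np.mean(history[-N_ave:], axis=0)               # current best guess
    D = np.linalg.norm(np.asarray(history[:N - N_ave]) - x_ref, axis=1)

    def stderr(a):
        return np.std(a, ddof=1) / np.sqrt(len(a))

    ratios = [stderr(D[:m]) / stderr(D[m:])
              for m in range(N_A, len(D) - N_B)]
    return max(ratios) > R_th
```

Here `history` is a list of flattened position vectors, one per optimization step; requiring a long, quiet second phase is what lets the test distinguish a true minimum from a slowly traversed saddle region.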
Review of frequency stability services for grid balancing with wind generation
Review of frequency stability services for grid balancing with wind generation

Abstract: Frequency stability in power systems is achieved by active power control, which aims to balance grid generation with load demand. Historically, grid balancing services have been provided by synchronous thermal generating units. As wind penetration levels increase on the power system, it is essential that wind turbine generators (WTGs) provide robust, reliable frequency stability services to grid operators. Like other forms of renewable generation, such as solar photovoltaics, modern variable speed WTGs are connected to the power system through power electronic converters. This non-synchronous connection decouples the natural inertia of the WTG from the grid frequency. As system non-synchronous penetration levels increase, non-synchronous generation will be required to participate in frequency stability services such as automatic generation control. This study presents a review of WTG frequency response systems that allow WTGs to participate in frequency stability services by emulating the natural inertia and droop characteristics of conventional synchronous thermal generators. Power system simulations performed in MATLAB/Simulink show that adding emulated inertia and droop controllers to a WTG's power/speed control system can reduce the rate of change of frequency and raise the frequency nadir when the power system is subject to a load/generation imbalance.

Introduction

As variable renewable generation grows to significant penetration levels on the all-Ireland power system, grid operators will struggle to maintain system stability and reliability by depending solely on synchronous thermal generators to provide grid balancing services. Renewable generation such as wind can also offer reliable ancillary services to grid operators. In power systems, stability and reliability are maintained by managing system inertia and frequency response during normal operating conditions and disturbances. In conventional power systems employing synchronous thermal generators, the initial rate of change of frequency (RoCoF) of small-signal frequency disturbances is naturally retarded by an increase/decrease in power generation, due to the natural inertia of the large rotating masses of committed generating units [1]. This natural inertial response occurs due to the stiff coupling between the synchronous generator and the grid frequency. State-of-the-art variable speed wind turbine generators (VSWTGs) are, however, either partially or fully decoupled from the grid frequency by power electronic converters. This decoupling means that VSWTGs provide little or no natural inertial response to grid imbalances; a response that is essential for maintaining frequency stability [2]. In addition to the inertial response of conventional generating units, primary and secondary responses are required to balance generation with load demand, thereby restoring the system frequency to its nominal value. Currently, primary and secondary operating reserves are used in Ireland and the UK, as in many other countries, to provide these responses, using governor droop characteristics and spinning reserve, respectively. Traditionally, these automatic generation control (AGC) services are provided by synchronous thermal generators fuelled by coal, peat, natural gas, and oil. If WTGs were also permitted to offer AGC services like conventional synchronous generators, then grid operators would have access to additional resources when needed.
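As a rough, back-of-the-envelope illustration of why inertia matters here (the numbers below are assumed for illustration, not taken from this study), the aggregated swing equation relates a per-unit power imbalance to the initial RoCoF:

```python
# Initial RoCoF after a load step, from the aggregated swing equation:
#   df/dt = -dP * f0 / (2 * H)   (per-unit imbalance dP on the system base)
f0 = 50.0   # nominal frequency, Hz
H = 4.0     # aggregate system inertia constant, s (assumed value)
dP = 0.05   # 5% load/generation imbalance, pu (assumed value)

rocof = -dP * f0 / (2.0 * H)
print(f"initial RoCoF = {rocof:.3f} Hz/s")   # -0.313 Hz/s
```

Halving the aggregate inertia H doubles the initial RoCoF, which is why displacing synchronous plant with converter-connected generation makes fast frequency support from WTGs increasingly valuable.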
However, before this is arranged for grid operation, the robustness of AGC from variable renewable generation must be analysed. Therefore, wind farm operators need to examine, develop, and test whether their power electronic converters can provide AGC that complies with the appropriate grid code standards. This paper presents a review of WTG frequency response systems that allow WTGs to participate in grid-balancing AGC services by emulating the natural inertia and droop characteristics of conventional synchronous thermal generators.

Frequency response systems

Unlike conventional synchronous generators, modern VSWTGs are either partially (Type C WTGs) or fully decoupled (Type D WTGs) from the grid frequency by power electronic converters [3]. This decoupling means that VSWTGs provide little or no natural active power response to frequency events. Active power controllers can be used to adjust the VSWTG's power electronic converter's active power reference to increase or decrease generation in response to frequency events [4,5]. A frequency response scheme that mimics the inertia and droop characteristics of a synchronous generator is proposed in [4]. The control scheme releases the hidden inertia of the VSWTG by providing an active power response that is activated by RoCoF and frequency deviation. This would allow VSWTGs to participate in frequency control schemes by providing synthetic inertial responses like those of conventional synchronous generators, which have stiff coupling to the grid frequency. The hidden inertia of VSWTGs can be released in many ways. A novel control strategy that shifts the WTG's maximum power point tracking (MPPT) curve to virtual inertia control curves according to frequency deviations is proposed and investigated in [6]. Fixed frequency responses to frequency events are investigated in [7], while Kang et al. [8] propose a stable adaptive inertial control scheme. Although the frequency response strategies in the literature surveyed differ, the objective of all schemes is to release the kinetic energy of the turbine and generator to provide grid-balancing AGC services. A frequency response system based on the emulated inertia and droop controller proposed in [4] was investigated using a simplified power system model built in MATLAB/Simulink.

WT model

A wind farm was modelled using an aggregate model of a WT doubly fed induction generator (DFIG), where one large DFIG was used to represent the entire wind farm. This is a proven modelling technique for power system stability studies [9]. The mechanical power output of the WT is given by

P_m = (1/2) ρ A C_p(λ, β) v³   (1)

where P_m, C_p, ρ, A, v, λ, and β are the mechanical output power, power coefficient, air density, turbine blade swept area, wind speed, tip speed ratio, and blade pitch angle, respectively. Equation (2) is used to model the power coefficient of the WT [10]:

C_p(λ, β) = c₁(c₂/λᵢ − c₃β − c₄)e^(−c₅/λᵢ) + c₆λ   (2)

where

1/λᵢ = 1/(λ + 0.08β) − 0.035/(β³ + 1)   (3)

The coefficients used in (2) to model the turbine power coefficient characteristic are c₁ = 0.5176, c₂ = 116, c₃ = 0.4, c₄ = 5, c₅ = 21, and c₆ = 0.0068 [10]. The maximum value of C_p = 0.48 is achieved with a blade pitch angle of 0° and a nominal tip speed ratio of 8.1.
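Equations (1)-(3) can be evaluated directly. A minimal Python sketch with the cited coefficients follows; the air density and blade radius below are assumed values for illustration only:

```python
import numpy as np

def cp(lam, beta):
    """Power coefficient model of (2)-(3) with the coefficients from [10]."""
    c1, c2, c3, c4, c5, c6 = 0.5176, 116.0, 0.4, 5.0, 21.0, 0.0068
    inv_lam_i = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    return c1 * (c2 * inv_lam_i - c3 * beta - c4) * np.exp(-c5 * inv_lam_i) + c6 * lam

def p_mech(v, lam, beta, rho=1.225, radius=40.0):
    """Mechanical power of (1); rho (kg/m^3) and radius (m) are assumed."""
    A = np.pi * radius**2   # blade swept area
    return 0.5 * rho * A * cp(lam, beta) * v**3

print(round(cp(8.1, 0.0), 3))   # 0.48, the quoted maximum at beta = 0
```

Evaluating cp(8.1, 0.0) reproduces the quoted maximum power coefficient of 0.48 at zero pitch and the nominal tip speed ratio of 8.1.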
Power/speed control: Power/speed control is achieved in the WTG model by controlling the quadrature component of the current injected into the DFIG's rotor windings by the rotor-side converter [11]. Manipulating the magnitude of the quadrature component of the rotor current I_qr controls the active power output of the DFIG. The rotor speed ω_r is measured and used to determine the optimal active power reference P_ref, which is given by a predefined MPPT characteristic. The actual electrical power output P_elec is measured and added to the calculated power loss P_loss. The sum of P_elec and P_loss is then subtracted from P_ref to produce an error signal P_error. This error signal is passed through a proportional-plus-integral controller to produce the rotor current quadrature component reference signal I_qr*.

Operating zones

WTGs operate in four distinct zones, as illustrated in Fig. 1. The expression used to determine the power/speed controller's active power reference differs for each operating zone. The four operating zones are discussed in the following sections.

Zone 1: cut-in zone: For very low wind speeds, the WTG's active power reference is zero; hence, no active power is supplied to the grid. When the rotor speed reaches the cut-in speed (point A in Fig. 1), the WTG operates at quasi-constant speed until the mechanical power generated reaches point B. When operating in the cut-in zone (ω_A < ω_r ≤ ω_B), the active power reference is given by

P_ref = P_B (ω_r − ω_A)/(ω_B − ω_A)   (4)

Zone 2: MPPT zone: In the operating zone between B and C, maximum power is extracted from the available wind resource. It can be noted that the curve between B and C intersects the maxima of the turbine power curves for all wind speeds between 7 and 12 m/s. When operating in the MPPT zone, the active power reference is a cubic function of the rotor speed ω_r, as shown in (5). The constant K_opt is determined by the turbine P-ω characteristic:

P_ref = K_opt ω_r³   (5)

Zone 3: quasi-constant speed zone: In the operating zone between C and D, the WTG operates at quasi-constant speed until the mechanical power generated reaches the WTG's maximum active power rating P_max (point D). When operating in the quasi-constant speed zone, the active power reference is given by

P_ref = P_C + (P_max − P_C)(ω_r − ω_C)/(ω_D − ω_C)   (6)

Zone 4: maximum power zone: For rotor speeds greater than ω_D, blade pitching is used to limit the active power reference to P_ref = P_max. Blade pitching reduces the power coefficient C_p, which reduces the amount of mechanical power that can be extracted from the available wind resource.

Fig. 2 shows a WTG frequency response system comprising emulated inertia and droop controllers based on [4]. The emulated inertia and droop controllers produce active power responses proportional to the RoCoF (df/dt) and the frequency deviation (df), respectively. Under normal operating conditions, the power reference set point is determined by MPPT. Frequency disturbances, caused by load/generation imbalances, trigger active power responses from both controllers. The emulated inertia controller uses the time derivative of frequency to create the control signal ΔP_in, which is proportional to the negative of the RoCoF; thus negative RoCoF causes ΔP_in to increase, while positive RoCoF causes ΔP_in to decrease. The droop controller emulates the governor droop characteristic of a synchronous generator. The frequency deviation is measured and scaled by the droop constant −K_d to produce the control signal ΔP_droop. Negative frequency deviation causes ΔP_droop to increase, while positive frequency deviation causes ΔP_droop to decrease.
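A minimal sketch of the two control signals described above (signal filtering, output limits, and the discrete-time details of the actual Simulink implementation are omitted; the gains and measurements are assumed inputs):

```python
def frequency_support(f_meas, rocof, K_in, K_droop, p_mppt, f_nom=50.0):
    """Emulated inertia and droop control signals, as described above:
    dP_in opposes the RoCoF and dP_droop opposes the frequency deviation;
    their sum dP_cont is added to the MPPT set point to form P_ref."""
    dP_in = -K_in * rocof                   # emulated inertia response
    dP_droop = -K_droop * (f_meas - f_nom)  # governor-style droop response
    dP_cont = dP_in + dP_droop
    return p_mppt + dP_cont                 # active power reference P_ref
```

During an under-frequency event both terms are positive (rocof < 0 and f_meas < f_nom), so the active power reference rises above the MPPT set point, as described for Fig. 2.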
The control signal ΔP_cont is the sum of ΔP_in and ΔP_droop and emulates both the inertial and the governor droop characteristics of a synchronous generator. The control signal ΔP_cont is added to P_MPPT to produce P_ref. When the WTG is operating in the MPPT zone of its power tracking characteristic (Fig. 1), the turbine is extracting maximum power from the available wind resource. The addition of a positive ΔP_cont to P_MPPT means that the electromagnetic power supplied to the grid is greater than the mechanical power applied to the rotor shaft, which causes the rotor to decelerate. The increased active power output from the WTG acts to restore the system frequency, thereby reducing the RoCoF and the frequency deviation, which in turn reduces ΔP_cont. In addition, as the rotor decelerates, the active power set point from the MPP tracker, P_MPPT, also reduces. This means that the active power reference is only increased for a short duration, the duration depending on the magnitude of ΔP_cont, since the increased active power output decreases both P_MPPT and ΔP_cont. When the frequency is restored to its nominal value, the active power reference is again determined solely by MPPT. However, after a period of increased active power output, the WTG operates at a reduced, sub-optimal speed. A recovery period of reduced active power output is required to accelerate the rotor back to optimal speed, which can lead to a secondary frequency nadir while the power system is still recovering from the initial frequency event [8].

Emulated inertia controller: variation in K_in

Fig. 4 compares the VSWTG's active power output and the system frequency response for a sudden 5% increase in system load at t = 5 s. To investigate the emulated inertia controller's influence on system dynamics, the droop controller was deactivated by setting its droop constant to K_droop = 0, while the inertia constant K_in of the emulated inertia controller was varied from 0 to 20. Fig. 4a shows that the emulated inertia controller produces a very fast active power response to the grid imbalance, the magnitude of the response being proportional to K_in. This is an intuitive result, as the maximum RoCoF occurs at the inception of a frequency event. The fast-acting emulated inertial response reduces the initial RoCoF, the magnitude of the initial RoCoF decreasing as K_in increases; see Fig. 4c. Wind generation's ability to provide fast-acting active power responses to limit RoCoF will become very important as system non-synchronous penetration (SNSP) levels increase. However, because the WTG's contribution to frequency support ends before the system frequency reaches its nadir, the emulated inertial response has minimal impact on the frequency nadir. The frequency support provided by the emulated inertia controller lasted ∼3-5 s. An interesting characteristic of the emulated inertia controller is that as the frequency begins to recover, the RoCoF becomes positive, which results in ΔP_in becoming negative. This reduces the WTG's active power reference while the system is still recovering, which results in a slightly reduced frequency nadir as K_in increases.

Droop controller: variation in K_droop

Fig. 5 compares the VSWTG's active power output and the system frequency response for a sudden 5% increase in system load at t = 5 s.
To investigate the droop controller's influence on system dynamics, the emulated inertia controller was deactivated by setting its inertia constant to K_in = 0, while the droop constant K_droop of the droop controller was varied from 0 to 20. Referring to Fig. 5a, the duration of the WTG's increased active power output is significantly longer than that produced by the emulated inertia controller; on the order of tens of seconds, as opposed to seconds in the case of the emulated inertia controller. Under droop control, the WTG maintains an increased active power output for ∼20-25 s. The problem with increasing the WTG's active power output is that it cannot be sustained for long periods without causing significant over-deceleration.

In theory, a large increase in active power output is desirable, as it raises the frequency nadir. Fig. 5b shows that increasing K_droop significantly raises the frequency nadir, since increasing K_droop increases the magnitude of the WTG's active power response. Unfortunately, the improved frequency nadir comes at the expense of rotor speed. As the WTG expends more and more kinetic energy, the rotor decelerates further from its optimal operating speed. The larger the increase in active power output, the further the rotor deviates from optimum speed, and hence the longer the recovery period required to accelerate the rotor back to optimum speed. Fig. 5c shows that varying K_droop has no appreciable effect on the initial RoCoF.

The simulation results clearly show that the emulated inertia controller is more dominant than the droop controller in the first few seconds after the inception of the frequency disturbance. However, its dominance reduces as the system frequency begins to recover and the magnitude of the RoCoF reduces. In contrast, the droop controller has no appreciable effect on the initial RoCoF, but as the system frequency deviates further from the nominal frequency, the droop controller's dominance increases, which helps to raise the frequency nadir.

Removing frequency support

The turbine and generator of a WTG have kinetic energy stored in their large rotating masses. It is this stored kinetic energy that is used to provide the increased active power output after an under-frequency event. When the WTG's active power output increases, its rotor slows down, as the electrical power generated and supplied to the grid is greater than the mechanical power delivered to the rotor. The difficulty with frequency response control schemes is that once the kinetic energy is used, it must be restored. The generator must supply a reduced active power output for a period after the initial response to restore the expended kinetic energy. This restoration or recovery period allows the WTG's rotor to accelerate back to its optimal operating speed. The kinetic energy required to accelerate the rotor from its post-frequency-support speed to the optimal operating speed is equal to the kinetic energy used during the frequency support. It can be estimated using

E_kinetic = (1/2) J (ω_opt² − ω₁²)   (7)

where E_kinetic is the kinetic energy, J is the moment of inertia of the WTG, ω_opt is the optimum rotor speed, and ω₁ is the post-frequency-support rotor speed. The time that it takes the rotor to accelerate back to optimum speed depends on the magnitude of the active power reduction; hence a larger reduction in the WTG's active power output requires a shorter acceleration time.
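A numerical illustration of (7) and of the trade-off just described (all values below are assumed for illustration, not taken from the paper's model):

```python
# Kinetic energy expended during frequency support, eq. (7), and the
# approximate recovery time for a given reduction in active power output.
J = 6.0e6      # WTG moment of inertia, kg*m^2 (assumed)
w_opt = 1.8    # optimal rotor speed, rad/s (assumed)
w_1 = 1.6      # post-frequency-support rotor speed, rad/s (assumed)

E_kin = 0.5 * J * (w_opt**2 - w_1**2)   # energy to be restored, J
for dP in (0.2e6, 0.5e6, 1.0e6):        # active power reduction, W
    print(f"dP = {dP/1e6:.1f} MW -> recovery time ~ {E_kin / dP:.0f} s")
```

With these numbers E_kin is about 2 MJ, so a 0.2 MW reduction implies roughly a 10 s recovery while a 1 MW reduction implies about 2 s; the larger reduction is faster but, as discussed next, deepens the secondary frequency dip.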
The difficulty with a large reduction in the WTG's active power output is that a lower secondary frequency nadir and an increased RoCoF will ensue. Therefore, a compromise must be made between the duration of the acceleration phase and the system frequency response. The WTG's rotor can be accelerated by removing the frequency support (disconnecting the output of the emulated inertia and droop controllers). Sudden removal of the WTG's frequency support will cause a sudden drop in system frequency, due to the grid imbalance caused by the loss of generation, resulting in a secondary frequency nadir. However, if the WTG's frequency support is removed gradually, the impact on system frequency can be minimised. Fig. 6 compares the system frequency response and the VSWTG's active power and rotor speed dynamics when the VSWTG's frequency support is removed according to the controller designs of Fig. 6a. The results show that sudden removal of the WTG's frequency support at t = 50 s (Controller A) causes a significant secondary frequency nadir, lower than the frequency nadir of the system operating without frequency support from the WTG (base case). The magnitude of the secondary frequency dip depends on the magnitude of the frequency response system's controller output (ΔP_cont) at the time of its removal. A better solution is to remove the frequency support gradually (Controller B). Controller B has a significantly lower impact on the system frequency. Gradual removal of the WTG's frequency support also results in smoother acceleration of the rotor back to optimum speed. Regardless of the removal strategy implemented, restoring the WTG's kinetic energy while the power system is still recovering from the initial grid imbalance could lead to instability. Hence, to protect the power system against instability, the frequency support provided by WTGs should only be removed when the system is in steady state.

Conclusion

This paper has presented a review of WTG frequency response systems that allow WTGs to participate in grid balancing services by emulating the natural inertia and droop characteristics of conventional synchronous thermal generators. The MATLAB/Simulink simulation results show that emulated inertia controllers can provide fast-acting active power responses to frequency events, which can reduce the magnitude of the RoCoF. The ability of WTGs to provide fast-acting frequency support will become more important as the natural inertia of power systems reduces with increasing SNSP levels. In contrast, the droop controller produced a slower active power response, proportional to the frequency deviation. The droop controller maintained an increased active power output for a longer duration than the emulated inertia controller; on the order of tens of seconds, as opposed to seconds in the case of the emulated inertia controller. As a result, the droop controller was more dominant than the emulated inertia controller over the entirety of the frequency dynamic. Robust, sustained frequency support becomes particularly important with greater penetration of low-inertia generation. As SNSP levels increase, it will become essential for wind farms to provide controllable frequency response services to grid operators, to protect against frequency instability.
However, increasing the WTG's active power output in response to an under-frequency event comes at the expense of rotor speed, and when operating under MPPT a recovery period of reduced active power output is required to accelerate the rotor back to its optimal operating speed. The simulation results show that linearly decreasing the output of the emulated inertia and droop controllers reduces the impact that the VSWTG's recovery period has on the system frequency. The conclusion of this paper is that the implementation of emulated inertia and droop controllers will allow wind farms to provide grid-balancing AGC services, which can protect low-inertia power systems from frequency instability while operating at high SNSP.