Leaf, root, and soil microbiomes of an invasive plant, Ardisia crenata, differ between its native and exotic ranges
Introduction: Ecological underpinnings of the invasion success of exotic plants may be found in their interactions with microbes, through either the enemy release hypothesis or the enhanced mutualism hypothesis. Whereas recent high-throughput sequencing techniques have significantly expanded our understanding of plant-associated microbiomes and their functional guilds, few studies to date have used these techniques to compare the microbiomes associated with invasive plants between their native and exotic ranges. Methods: We extracted fungal and bacterial DNA from the leaf endosphere, root endosphere, and soil of an invasive plant, Ardisia crenata, sampled from its native range (Japan) and exotic range (Florida, USA). Using Illumina sequencing data, we compared microbial community compositions and diversity between the native and exotic ranges, and tested whether the abundance of pathogenic or mutualistic microbes differs between the ranges in accordance with the enemy release hypothesis or the enhanced mutualism hypothesis. Results: Fungal and bacterial community compositions differed among leaves, roots, and soil, and between the native and exotic ranges. Despite higher microbial diversity in the soil in the exotic range than in the native range, microbial diversity within leaves and roots was lower in the exotic range than in the native range. In addition, leaves in the native range harbored a greater number of plant pathogenic fungi than those in the exotic range. Discussion: These patterns suggest plant control over which microbes become associated with leaves and roots. The higher abundance of leaf pathogenic fungi in the native range than in the exotic range, including a pathogen known to cause a specific disease of A. crenata, supports the enemy release hypothesis and highlights the potential importance of examining microbial communities both above- and below-ground.
Introduction
Plant-microbe interactions are increasingly recognized to play a role in why some exotic plants become invasive (Dawson and Schrama, 2016; Dickie et al., 2017; Egidi and Franks, 2018; Pearson et al., 2018). Since Elton (1958) published the concept of the "enemy release hypothesis," many consider that the presence/absence of host-specific natural enemies is key to explaining the increased invasiveness of organisms (Keane and Crawley, 2002; Catford et al., 2009; Jeschke et al., 2012; Heger and Jeschke, 2014). According to the enemy release hypothesis applied to plant-microbe interactions, a plant population may be kept in check by host-specific natural enemies in its original range, but the lack of these microbes in the exotic range endows a competitive advantage to non-native plant species (Adams et al., 2009; Chiuffo et al., 2015; Aldorfová et al., 2020; Huang et al., 2020; Kuźniar et al., 2020). Not only pathogenic microbes, but also differences in the geographical distribution of mutualistic microbes may explain invasion success by plants, for example, via novel associations with mutualistic microbes that enhance plant defense, growth, and stress tolerance ("enhanced mutualism hypothesis"; Callaway and Ridenour, 2004; Traveset and Richardson, 2014; Aslani et al., 2019). Despite the relevance of the enemy release and enhanced mutualism hypotheses to plant-microbe interactions between native and exotic ranges, only a few studies have compared the biogeography of microbial communities associated with invasive species (Harrison and Griffin, 2020). In such comparative studies, two aspects need to be considered. Firstly, in a given region, a plant species interacts with many different microbes that also interact with each other. Secondly, which microbes become associated with a given plant species in a given place depends not only on the microbial community composition in the environment, but also on the specific parts that microbes may inhabit (e.g., leaves, roots, and soil). If done properly, a comparison of how a given invasive plant species interacts with microbes in leaves, roots, and soil between its native and exotic ranges should shed light on the basic ecology of plant-microbe interactions, beyond the search for the key ecological process that explains its invasiveness.
To date, many studies on interactions between invasive plants and microbes have focused on soil microbiomes (Chiuffo et al., 2015; Aldorfová et al., 2020). Certain mutualistic microbes in the roots and rhizosphere are known to enhance tolerance to salt or heat stress, or resistance to plant pathogens (Rodriguez et al., 2009; Jogaiah et al., 2013), subsequently leading to successful plant invasion (Kamutando et al., 2017). Soil microbial communities vary geographically (Castellanos et al., 2009), and so do root endophytes (Brigham et al., 2023). Yet, few studies have investigated geographical differences in root-associated microbial communities of invasive plant species between their native and exotic ranges. Plants can recruit beneficial microbes to the rhizosphere with root exudates (Upadhyay et al., 2022), and control which microbes in the rhizosphere can penetrate the roots through immune responses and/or biofilm formation (Vieira et al., 2020). It is postulated that such selectiveness exerted by plants involves species-specific genetic factors (Bulgarelli et al., 2013; Edwards et al., 2015; Yu and Hochholdinger, 2018). Hence, if an exotic plant species has a strongly coevolved relationship with certain soil microbes in its native range, a reduction in root-associated microbial diversity might be observed in the exotic range due to a scarcity of microbes that can enter its roots. Additionally, local monodominance of an invasive plant could decrease local plant diversity, resulting in a reduction of microbial species richness in the rhizosphere (Lu-Irving et al., 2019).
As with root-associated microbes, certain leaf endophytes have been shown to benefit host plant performance by protecting against pathogens and increasing resistance to insect herbivory (Arnold et al., 2003; Tanaka et al., 2005). Yet, leaf endophytes receive much less attention than soil microbes in invasive plant research. Hence, how differences in microbial composition and diversity between native and exotic ranges influence plant invasiveness remains largely unexplored (but see Lu-Irving et al., 2019; Pan et al., 2023). Experimental evidence from field and manipulative studies suggests that leaf microbial composition is influenced more by the surrounding environmental microbial pool than by plant genetic factors (Whitaker et al., 2018; Pan et al., 2023). Therefore, deciphering how microbial communities associated with the leaves and roots of an invasive plant species differ between its native and exotic ranges will help fill a research gap relevant to both the enemy release hypothesis and the enhanced mutualism hypothesis.
In recent years, amplicon sequencing technologies have enabled high-throughput analyses of microbial taxa belonging to diverse functional guilds such as pathogens and mutualists (Nguyen et al., 2016). This approach has certain advantages over the more traditional approach of comparing differences in plant growth between sterilized and non-sterilized soils (Dawson and Schrama, 2016). Whereas sterilization experiments can provide useful hints as to the importance of microbial communities in the soil and leaf litter (which is the major spore source of leaf endophytic fungi), the effect detected is a net effect of the negative and positive effects of pathogens and mutualists (e.g., Pizano et al., 2019). Comparisons of microbial functional guild compositions between native and exotic ranges can be informative as to how the abundance of pathogens and mutualists differs between the geographic ranges.
In this study, we chose Ardisia crenata, a shade-tolerant shrub native to East Asia that acts as an aggressive invader in North America (Dozier, 2000; Kitajima et al., 2006). It can form a mutualistic association with arbuscular mycorrhizal (AM) fungi found in the mesic forests that it invades in North Central Florida, USA (Bray et al., 2003). Although A. crenata occurs at low densities in its native range in Japan (no more than a few adults within 5 m of each other), it forms a dense monodominant understory in Florida (Supplementary Table S1), reducing the diversity of native plant species. The lack of noticeable herbivores or seed predators in both ranges (Kitajima et al., 2006) suggests that differences in plant-microbe interactions between native and exotic populations may be an important factor underpinning its invasion success. A. crenata is widely cultivated in Japan, but cultivation at high density often results in outbreaks of heavy mortality. This background makes A. crenata an ideal candidate for studying microbiome differences between the native range (Japan) and the exotic range (Florida). We described and compared the diversity and structure of fungal and bacterial communities in leaves, roots, and soil with high-throughput Illumina sequencing. Furthermore, we assigned microbes to functional guilds (pathogens, mutualists) and compared taxonomic composition within each guild. We hypothesized that (1) microbial community structure would differ by plant parts and geographical ranges, (2) microbial α diversity would be lower in the exotic range than in the native range, because of the monodominance of A.
crenata in the former, and (3) the observed differences in functional taxa between native and exotic ranges would align with either the enemy release hypothesis or the enhanced mutualism hypothesis. The results would corroborate the enemy release hypothesis if putative pathogens are more prominent in the native range than in the exotic range. Conversely, the enhanced mutualism hypothesis would be supported if we detected patterns suggesting that A. crenata in the exotic range has formed novel associations with beneficial microbes.
Study sites and sampling
We set three sampling sites in the native range (Honshu, Japan) and four in the exotic range (Florida, United States) (Figure 1). Sampling sites in the native range were Kamigamo Experimental Station (KA) of Kyoto University, Tokuyama Experimental Station (TO) of Kyoto University, and Yanagido Experimental Station (YA) of Gifu University, all of which have a warm temperate climate (mean annual temperature of 15.7-17.7°C, mean annual precipitation of 1522.9-2625.5 mm; observations from 1991 to 2020, Japan Meteorological Agency). All four sites in the exotic range were in Alachua County, Florida (mean annual temperature of 20.7°C, mean annual precipitation of 1227.1 mm; 1991-2020, Florida Climate Center). They were Bivens Arm Nature Park (BA), Evergreen Cemetery (EC), Hawthorne Trail (HT), and Newnan's Lake (NL). See Supplementary Table S1 for geographical coordinates and vegetation characteristics of these sites. Kitajima et al. (2006) reported genetic differences between the invasive population in Florida and wild populations of A. crenata in Kyushu and Okinawa. In contrast, the wild populations of A. crenata in Honshu sampled in the current study are genetically close to the invading populations in Florida, both in terms of morphological traits and DNA sequences analyzed with ddRAD-seq (Wataru Noyori, unpublished data). Hence, differences in microbial communities associated with A. crenata individuals in the current study are unlikely to reflect differences in plant genotype, but are likely due to geographical differences in the background microbial communities and/or differences in local density of A. crenata individuals (low in Japan vs. high in Florida).
At each sampling site, two transects, each 1 m wide and 15 m long, were set to encompass the highest local density of A. crenata. The two transects were separated by a minimum of 10 m. Each transect was divided into 1 m × 1 m quadrats, from each of which we collected one healthy individual of A. crenata less than 15 cm tall (a total of 30 per site). From each plant sampled, we collected three leaves without disease, five fine roots, and ca. 10 g of soil from its vicinity (within 5 cm). The total number of plants sampled was 210 (30 individuals × 7 sites). In the field, samples were individually sealed in plastic bags and immediately placed in a cooler box with ice packs until further processing in the laboratory. In the lab, leaf and root samples were surface sterilized for 1 min in water with an ultrasonic cleaner, followed by sequential soaking in 70% ethanol for 1 min, 0.5% NaClO for 1 min, and sterile water for 1 min. Afterward, all samples were sealed in plastic bags with silica gel to dry, and stored in a −20°C freezer until DNA extraction.
DNA extraction and sequencing
From each leaf sample, we cut a 1-cm² piece including the leaf edge with a pair of sterile scissors. For root samples, three 2-cm-long pieces of fine root were cut with a pair of sterile scissors. These samples were pulverized with a TissueLyser II (Qiagen; at 25/s, for 2 min) with two 4-mm zirconium beads inside 1 mL lysis buffer (20 mmol/L Tris, pH 8.0, 2.5 mmol/L EDTA, 0.4 mol/L NaCl, 0.05% SDS). For extraction of DNA from soil, 0.25 mL of each sample was placed in a 2 mL microcentrifuge tube containing 900 μL lysis buffer, 100 μL skim milk, and 0.25 mL of 0.5-mm zirconium beads, then pulverized with the TissueLyser II (at 25/s, for 2 min). After centrifugation at 4,400 rpm for 5 min, we collected the supernatant containing extracted DNA for PCR amplification of the prokaryotic 16S region. Because this supernatant did not yield successful PCR for the fungal ITS region, we also used the phenol-chloroform extraction method (Wilson, 2001) for PCR of the fungal ITS region.
The prokaryotic 16S rRNA and fungal internal transcribed spacer 1 (ITS1) regions were PCR-amplified following the protocol detailed elsewhere (Toju et al., 2019), with some modifications. Briefly, for the prokaryotic 16S rRNA region, the primer set 515f/806rB (515f, Caporaso et al., 2011; 806rB, Apprill et al., 2015) was fused with the Illumina sequencing primer region and 3-6-mer Ns for improving sequencing quality (Lundberg et al., 2013). Likewise, the fungal ITS1 region was amplified using the primer set ITS1-F_KYO1/ITS2_KYO2 (Toju et al., 2012) fused with the Illumina sequencing primer region and 3-6-mer Ns. We conducted PCR with the Ampdirect Plus DNA polymerase system (Shimadzu, Kyoto, Japan), with a temperature profile of 35 cycles of denaturation at 98°C for 10 s, annealing at 55°C for 60 s, and extension at 72°C for 60 s, followed by a final extension at 72°C for 7 min, for both primer sets. Fusion primers with P5/P7 Illumina adapters and 8-mer index sequences for sample identification were added to the PCR products. In this reaction, the KOD One DNA polymerase system (Toyobo) was used with a temperature profile of 8 cycles at 98°C for 10 s, 55°C for 5 s, and 68°C for 30 s, followed by a final extension at 68°C for 2 min. The amplified PCR fragments were purified and equalized using the AMPure XP Kit (Beckman Coulter), and equal volumes of all specimens were pooled. The pooled library was sequenced on an Illumina MiSeq sequencer with 10% PhiX spike-in (Center for Ecological Research, Kyoto, Japan; 2 × 300 cycles).
Bioinformatics
The bcl2fastq 1.8.4 program distributed by Illumina was used to convert the raw sequence data into FASTQ files. The FASTQ files were demultiplexed with the program Claident v0.2.2018.05.29 (Tanabe and Toju, 2013). Chimeric and low quality-score reads (< 20) were subsequently discarded, and the reads that passed the filtering process were clustered with VSEARCH (Rognes et al., 2016) at a 97% clustering threshold as implemented in Claident. The resulting operational taxonomic units (OTUs) comprised a total of 4,040,099 and 3,556,515 reads from the 16S and ITS1 primers, respectively. Taxonomic assignment of OTUs was performed with a combination of the query-centric auto-k-nearest neighbor (QCauto) method (Tanabe and Toju, 2013) and the lowest common ancestor (LCA) algorithm (Huson et al., 2007) as implemented in Claident. Afterward, bacterial and fungal OTUs unclassified at the kingdom level were subjected to a blastn search, and OTUs matching plant-derived sequences (chloroplast-derived 16S) were removed. Note that the 16S 515f/806rB primers used in this study also amplified host-derived sequences from leaf and root samples (around 60% of all 16S reads). For 16S reads of leaf samples, about 90% of the post-filter reads clustered into closely related OTUs affiliated with Burkholderia crenata, a known symbiont in the leaf-edge nodules of A. crenata (Carlier et al., 2016). Many of these OTUs were identified only with nonuniform descriptions such as "symbiont bacteria" in the public database. Thus, after confirming with a phylogenetic approach that the blastn hits of these OTUs fell within the monophyletic group of B. crenata (Supplementary Figure S1), we designated them as "Burkholderia sp." for subsequent analyses. Finally, we excluded all OTUs that could not be assigned to either bacteria or fungi at the kingdom level (i.e., archaea or unidentified taxa), and then proceeded with the following statistical analyses.
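The core idea of the 97% clustering step can be conveyed with a simplified sketch. VSEARCH itself uses alignment-based, abundance-sorted centroid clustering; the toy version below instead assumes already-aligned, equal-length reads and greedy assignment, and the function names are ours, for illustration only:

```python
# Illustrative sketch (NOT the VSEARCH algorithm): greedy clustering of
# already-aligned, equal-length reads at a 97% identity threshold.

def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions between two aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

def greedy_cluster(reads, threshold=0.97):
    """Assign each read to the first centroid it matches at >= threshold;
    otherwise the read seeds a new cluster. Returns {centroid: members}."""
    clusters = {}
    for read in reads:
        for centroid in clusters:
            if percent_identity(read, centroid) >= threshold:
                clusters[centroid].append(read)
                break
        else:
            clusters[read] = [read]
    return clusters
```

With a 97% threshold, a 100-bp read differing from a centroid at 2 positions joins that cluster, while one differing at 5 positions seeds a new OTU.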
To minimize the effect of PCR/sequencing errors, we removed OTUs that represented less than 0.1% of the total reads in each sample (Peay et al., 2015). This resulted in 982,788 and 3,264,380 high-quality reads for the 16S and ITS primers, respectively. The dataset was rarefied at 200 reads and 1,000 reads per sample for bacteria and fungi, respectively, with the "rrarefy" function of the "vegan" package (Oksanen et al., 2022) in R version 4.2.1 (R Core Team, 2022). Samples that yielded fewer reads were discarded, leaving 499 bacterial and 469 fungal samples. See Supplementary Table S2 for the sample size at each site. The rarefaction curves for each group reached an asymptote (except for 16S in the soil), indicating adequate levels of sampling (Supplementary Figure S2). After rarefaction, 4,819 bacterial and 3,998 fungal unique OTUs were included in the final analysis. Of these, 4,021 (83.4%) and 3,933 (98.4%) could be classified at the phylum level, and 1,603 (33.3%) and 1,653 (41.3%) at the genus level, for the 16S and ITS reads, respectively. Further blastn analysis against the NCBI database was performed on several OTUs to identify them at the species level. Fungal OTUs were assigned to ecological functional guilds including pathogens and mutualists (e.g., plant pathogens, AM fungi) based on their taxonomic assignment, with FUNGuild (Nguyen et al., 2016). Only OTUs with a confidence rank of "Highly Probable" or "Probable" were retained; those ranked "Possible" and those assigned to more than one guild were treated as "Unidentified." This resulted in 1,213 of 2,785 OTUs being assigned guilds (30.5%), with the remaining 69.5% unidentified. Finally, for fungal and bacterial OTUs that accounted for more than 2% of the total relative abundance at the genus level, a survey of the published literature was conducted on their functional guilds.
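The per-sample 0.1% abundance filter and the rarefaction to a fixed read depth (performed in the paper with vegan's "rrarefy" in R) can be sketched in Python as follows; the dictionary-based OTU table and function names are illustrative assumptions, not the authors' code:

```python
import random

def filter_rare_otus(counts, min_frac=0.001):
    """Drop OTUs with fewer than min_frac (0.1%) of a sample's total reads,
    mirroring the per-sample noise filter described in the text."""
    total = sum(counts.values())
    return {otu: n for otu, n in counts.items() if n / total >= min_frac}

def rarefy(counts, depth, seed=0):
    """Randomly subsample a sample to a fixed read depth (cf. vegan::rrarefy).
    Returns None when the sample has too few reads and should be discarded."""
    total = sum(counts.values())
    if total < depth:
        return None
    # Expand the table to one entry per read, then sample without replacement.
    pool = [otu for otu, n in counts.items() for _ in range(n)]
    rng = random.Random(seed)
    sub = rng.sample(pool, depth)
    out = {}
    for otu in sub:
        out[otu] = out.get(otu, 0) + 1
    return out
```

Rarefaction equalizes sequencing effort across samples, so diversity comparisons are not biased by uneven read depth; samples below the chosen depth are dropped, as described above.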
Statistical analysis
Shannon diversity of fungal and bacterial communities was calculated as the exponent of Shannon entropy, i.e., exp(−∑ᵢ pᵢ ln pᵢ), where pᵢ is the proportional abundance of species i (Shannon, 1948), with the R package "vegan." To test differences among plant parts and between the native and exotic ranges, ANOVA was conducted, followed by post hoc comparisons with a nonparametric Kruskal-Wallis rank sum test. To examine how microbial composition differed by plant parts and geographical ranges, permutational multivariate analysis of variance (PerMANOVA, 9,999 permutations; Anderson, 2006) was conducted on Bray-Curtis distances with the "adonis2" function in "vegan." We used 'strata' in adonis2 to control for site-to-site variation as a random effect, restricting permutations to within each country for the range comparison. Non-metric multidimensional scaling (NMDS) based on Bray-Curtis dissimilarity was also conducted with "vegan" and "ggplot2" (Wickham, 2016). In addition, a permutational analysis of multivariate homogeneity of dispersions (PERMDISP, 9,999 permutations; Anderson, 2006) was conducted for bacterial and fungal communities.
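For concreteness, the exponential Shannon index and the Bray-Curtis dissimilarity on which the PerMANOVA and NMDS are based (computed in the paper with the R package "vegan") can be written out directly; this Python sketch is illustrative only:

```python
from math import exp, log

def exp_shannon(counts):
    """Exponent of Shannon entropy, exp(-sum p_i ln p_i): the effective
    number of species (the Hill number of order 1)."""
    total = sum(counts)
    ps = [n / total for n in counts if n > 0]
    return exp(-sum(p * log(p) for p in ps))

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors, the
    distance underlying the PerMANOVA and NMDS analyses."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den
```

Four equally abundant OTUs give an effective diversity of exactly 4, and two samples sharing no OTUs give a Bray-Curtis dissimilarity of 1.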
We also used sites as strata here, as mentioned above. To compare the abundance of putative pathogens and mutualists between the two ranges, Welch's t-test was conducted on the relative abundance of OTUs belonging to each FUNGuild guild. To find microbial genera biased toward either the native or exotic range, we performed paired comparisons of genera that constituted more than 2% of the total relative abundance, with Mann-Whitney U tests using the R package "exactRankTests" (Hothorn and Hornik, 2022).
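The two-sample tests used above reduce to simple statistics. Below is a minimal Python sketch of Welch's t statistic (with Welch-Satterthwaite degrees of freedom) and the Mann-Whitney U statistic; p-values are omitted, since they require the corresponding null distributions (in practice obtained from R, as the authors did, or from a library such as scipy.stats):

```python
def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two samples with possibly unequal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)  # sample variances
    vy = sum((a - my) ** 2 for a in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / se2 ** 0.5
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: number of (x, y) pairs with x > y,
    counting ties as 0.5 (rank-sum definition)."""
    u = 0.0
    for a in x:
        for b in y:
            u += 1.0 if a > b else 0.5 if a == b else 0.0
    return u
```

The U statistic ranges from 0 (every value in the first sample is smaller) to len(x)·len(y) (every value is larger), with values near the middle indicating no range bias.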
Results
Fungal diversity in leaves, roots, and soil

The total numbers of fungal OTUs derived from leaves, roots, and soil were 992, 1,244, and 2,415, respectively. The number of fungal OTUs detected only in the native range was 1,459 (36.5%), while 1,684 (42.1%) OTUs were specific to the exotic range, and 885 (21.4%) OTUs were shared between the two regions. The leaf endophytic fungi were dominated by Ascomycota (90%; an average of the two ranges combined; Supplementary Figure S3A), whereas Basidiomycota accounted for a much smaller proportion (10%). The abundance of Ascomycota in the leaf was higher in the exotic range than in the native range (95 and 75%, respectively). Ascomycota was also dominant in the root and soil samples (57.4 and 38.3%, respectively), followed by Basidiomycota (24 and 35%, respectively) and Mucoromycota (18 and 22%, respectively).
Shannon diversity of fungal OTUs showed a significant interaction between the two factors, plant parts and geographical range (ANOVA, p < 0.001, Supplementary Table S3). There were significant differences among plant parts (ANOVA, p < 0.001), but no significant overall difference between the two geographical ranges (ANOVA, p = 0.685). Shannon diversity of fungi within leaves and roots was higher in the native range than in the exotic range (the top two panels of Figure 2A), but the reverse was true for soil, with higher diversity in the exotic range (the bottom panel of Figure 2A).
Bacterial diversity in leaves, roots, and soil
The total numbers of bacterial OTUs derived from leaf, root, and soil samples were 490, 1,575, and 3,687, respectively. Among bacterial OTUs, 1,571 (32.6%) were detected only in the native range, whereas 1,300 (27.0%) were detected only in the exotic range, with 1,948 (40.4%) shared between the two regions. As to bacterial taxonomic composition, Proteobacteria was dominant in the leaf (Supplementary Figure S3B). Burkholderia crenata, an obligate symbiont previously detected from Ardisia crenata (Carlier et al., 2016), was the most abundant Proteobacteria. In the root and soil, Proteobacteria and Actinobacteria dominated, together accounting for more than half of the total OTUs. Shannon diversity of bacterial OTUs showed a significant interaction between the two factors, plant parts and geographical range (ANOVA, p < 0.001, Supplementary Table S3). Shannon diversity also varied between the two geographical ranges (ANOVA, p = 0.002) and among the three plant parts (ANOVA, p < 0.001). Shannon diversity of root-associated bacteria was higher in the native range than in the exotic range (middle panel, Figure 2B). In contrast, Shannon diversity in the soil was higher in the exotic range than in the native range (bottom panel, Figure 2B).
Fungal and bacterial community structures
Non-metric multidimensional scaling (NMDS) plots showed strong differentiation of fungal and bacterial community structures among leaves, roots, and soil, as well as between the native and exotic regions (Figures 3A,B). The PerMANOVA showed that plant parts and ranges had significant effects, with strong interactions (Supplementary Table S4). Leaf endophytic fungi showed stronger differences between the native and exotic ranges than fungal communities in roots and soil (Figure 3A; Supplementary Table S5). Interactive effects between plant parts and geographical regions were also significant for bacterial communities (PerMANOVA, Supplementary Table S4). For bacteria, however, the regional difference was less pronounced in leaves than in roots and soil (Supplementary Table S5), as visualized in the NMDS plot (Figure 3B). PERMDISP, which tests whether the heterogeneity of OTU compositions differed between regions, showed significant differences (Supplementary Table S5). Community heterogeneity was significantly greater in the native range than in the exotic range for fungal communities associated with leaves and roots (but not with soil), and for bacterial communities associated with roots and soil (but not with leaves).
Taxonomic comparisons between native and exotic ranges
Leaf endophytic fungi showed similar order-level diversity between the native and exotic ranges, with a noticeable abundance of Capnodiales in the exotic range (Supplementary Figure S5A). At the genus level, Pallidocercospora dominated in the exotic range (62.5%, Figure 4C; Table 1), but it was also common in the native range, along with Pestalotiopsis, Phyllosticta, and Carlosrosaea. Order-level diversity of root-associated fungi was similar between the native and exotic ranges, with the same five orders dominant (Figure 4A). At the genus level, Melanconiella, Russula, and Glomus were the most common in the native range, whereas Russula, Glomus, and Mortierella were the most common in the exotic range (Figure 4C). Within the soil, Mortierellales was the most common in the native range, where its abundance was similar to that of Hypocreales (Figure 4A). In the exotic range, Russulales was also common. This reflects the abundance of Russula, Saitozyma, and Metarhizium in the exotic range, which was more pronounced than in the native range (Figure 4C). Order-level and genus-level fungal community compositions for each site are shown in Supplementary Figures S4A,C.
Order-level and genus-level compositions of leaf endophytic bacteria were similar between the native and exotic ranges, with a dominance of Burkholderia crenata, a symbiont in the leaf-edge nodules of A. crenata (Figures 4B,D; Table 1). At the order level, the composition of root-associated bacteria was dominated by Burkholderiales and Rhizobiales in both the native and exotic ranges. However, Mycoplasmatales were more common in the native range, while Streptosporangiales were more common in the exotic range (Figure 4B). At the genus level, Candidatus Moeniiplasma, Burkholderia, and Bradyrhizobium were the most common in the native range, while Burkholderia, Mycobacterium, and Halomonas dominated in the exotic range (Figure 4D). Within the soil, the order-level composition was similar between the native and exotic ranges, with Rhizobiales dominant in both. At the genus level, Rhodoplanes was the most common in both ranges, although its abundance was more pronounced in the exotic range. Order-level and genus-level bacterial community compositions for each site are shown in Supplementary Figures S4B,D.
Relative abundance of pathogen and mutualist guilds in native and exotic ranges
The relative abundance of plant pathogens differed significantly in leaf samples, being much higher in the native range than in the exotic range (Figure 5, Welch's t-test, p < 0.0001). There was no significant difference in the relative abundance of plant pathogens in roots (p = 0.349; Figure 5), whereas the relative abundance of pathogens in soil was slightly, but significantly, higher in the exotic range than in the native range (p < 0.001; Figure 5). Among taxa that constituted more than 2% of the total relative abundance, Pallidocercospora, Pestalotiopsis, and Phyllosticta were assigned to plant pathogens (Table 1). Pallidocercospora was more common in the exotic range than in the native range, while Pestalotiopsis and Phyllosticta were unique to the native range (Figure 4). For Phyllosticta, 80% of the reads were confirmed to be Phyllosticta ardisiicola by the blastn top-hit sequence in the NCBI database (percent identity = 100%; accession number NR136952.1). P. ardisiicola is a pathogen first described from A. crenata, in which it causes leaf spots (Motohashi et al., 2008). A literature review was conducted on microbial taxa with high abundance (> 2% of the total), but no candidate pathogenic microbes were found.

[Figure 3: Nonmetric multidimensional scaling (NMDS) plots based on Bray-Curtis distances of (A) fungal and (B) bacterial communities sampled from leaves (green), roots (orange), and soil (brown); each point represents an individual of A. crenata sampled from Japan (〇) or Florida (△). A small number of obvious outliers were excluded from the figure (4 root samples of the fungal community, and 1 leaf sample and 4 root samples of the bacterial community). See Supplementary Table S2 for sample sizes and Supplementary Table S4 for the PerMANOVA test on differences among parts and geographical ranges.]
The relative abundance of AM fungi in roots and soil was higher in the native range than in the exotic range, by 114 and 706%, respectively (Welch's t-test, p < 0.0001 each; Figure 5, bottom). Glomus and Russula were identified as mutualistic microbes among taxa that constituted more than 2% of the total relative abundance (Table 1). Glomus is a major group of arbuscular mycorrhizal fungi (Morton and Benny, 1990), and Russula is considered an ectomycorrhizal fungus (Smith and Read, 2010). Glomus was more common in the native range than in the exotic range, while Russula was less common in the native range than in the exotic range. Our literature review of microbial taxa with high abundance (> 2% of the total) found no clear candidates that may act as mutualists in leaves, except for Carlosrosaea. This genus was more common in the native range than in the exotic range, and has been reported as a potentially mutualistic endophyte that improves seedling growth of Bromeliaceae (Marques et al., 2021).
Discussion
To explore potential differences in microbial communities associated with the invasion success of A. crenata, we compared fungal and bacterial community structures within leaves, roots, and soil between the native and exotic ranges. Two trends stood out. Firstly, microbial diversity inside leaves and roots was lower in the exotic range than in the native range, despite higher microbial diversity in the soil of the former. Secondly, leaves harbored a greater number of plant pathogenic fungal genera in the native range than in the exotic range, which corroborates the enemy release hypothesis. These findings underscore the importance of microbial diversity, including potential pathogens, in keeping the host plant population in check in the native range. Previous studies that examined either fungi or bacteria associated with invasive plants reported differences in microbial communities between plant parts or geographical ranges (Gundale et al., 2016; Pickett et al., 2022). Our study, which examined both fungi and bacteria, is more comprehensive in revealing differences among leaves, roots, and soil, as well as between geographical ranges (Figures 3A,B). Lu-Irving et al. (2019) compared bacterial communities of an invasive grass between its native and exotic ranges and reported lower bacterial diversity in the leaf endosphere, root endosphere, and root surface in the exotic range than in the native range, while some research indicates the opposite or shows no clear trend (Pickett et al., 2022; Pan et al., 2023). These inconsistent results might depend on the invasive plant species under consideration.
The composition and diversity of leaf endophytic bacteria were similar between the native and exotic ranges, with strong dominance of Burkholderia crenata, which is known to be an obligate symbiont that is vertically transmitted from mother plants (Ku and Hu, 2014). Perhaps this bacterium is maintained throughout the process of cultivation and subsequent naturalization of A. crenata in the US. In contrast, the composition of leaf endophytic fungi differed widely between geographical ranges compared to leaf endophytic bacteria and to fungi in roots and soil (Figures 3A,B; Supplementary Table S5), suggesting that A. crenata interacts with distinct fungal communities in its exotic range as opposed to its native range. We found a lower diversity of leaf endophytic fungi in the exotic range than in the native range, possibly tied to the dominance of a latent plant pathogen, Pallidocercospora, in the exotic range (Figure 4C). Previous studies have reported cases of accumulation of specific plant pathogens in invasive plants in their exotic ranges (Stricker et al., 2016; Anthony et al., 2017). Pallidocercospora, which is apparently non-lethal, may have accumulated in the leaves of A. crenata as its population density increased within the exotic range.
Differences in root fungal and bacterial taxonomic composition were observed between the native and exotic ranges, although these variations were less pronounced than those in the leaves or soil (Supplementary Table S5). This could be attributed to a higher microbial diversity in roots (i.e., wider scatter in Figures 3A,B) compared to that in leaf and soil samples. There are two potential factors that could explain this pattern: (1) colonization of microbes into roots could vary among individual plants; (2) microbial colonization of roots varies greatly among fine roots within individuals (Rüger et al., 2021). We cannot distinguish these two possibilities as we sampled only a few fine root samples per plant. Overall, the diversity of root-associated microbes in the exotic range was lower than in the native range, despite a more diverse soil microbial pool than in the native range (Figures 2A,B). Considering this, one possibility is that root-associated microbes are selectively accumulated by A. crenata rather than passively recruited from the locally available species pool in the soil. This could be because the host plants impose selection or because fewer microbes in the exotic range can overcome host resistance. Soil microbial communities differed significantly between native and exotic ranges for both bacteria and fungi, similar to the findings of a previous study comparing geographical ranges of invasive grasses (Lu-Irving et al., 2019). These differences could merely reflect biogeographical differences between Japan and Florida, and/or the possible influence of invasive plants on soil microbial communities (Trognitz et al., 2016; Rodríguez-Caballero et al., 2020). The local monodominance of invasive exotic plants may influence microbial communities along with decreasing species richness of aboveground plant communities (Anthony et al., 2017; Zhang et al., 2020). Contrary to our hypotheses, both fungal and bacterial diversity of soil were higher in the exotic range than in the native range. According to Ramirez et al. (2019), invasive species can produce more root exudates when they are in environments without their natural predators. This increased production of root exudates might contribute to a rise in the diversity of microbes in the soil. These possibilities are worth testing to improve mechanistic understanding of the geographical variation in microbial composition and diversity.
Support for enemy release hypothesis
Many studies examining the enemy release hypothesis have focused on belowground pathogens or aboveground herbivores (Adams et al., 2009; Chiuffo et al., 2015; Aldorfová et al., 2020; Huang et al., 2020). Whereas less attention has been given to leaf endophytic microbes in the context of the enemy release hypothesis, the results of our study suggest the potential importance of leaf endophytic fungi. Indeed, we found a sign of enemy release only in leaf endophytic fungi, but not in bacteria or root-associated fungi (Figure 5; Table 1). Notably, the pathogenic fungi observed in the leaf endosphere of the native population were diverse, including Pestalotiopsis with a wide host range (Maharachchikumbura et al., 2014), and Phyllosticta ardisiicola known to cause specific diseases in A. crenata (Motohashi et al., 2008). These deleterious fungi were found only in the native range (Japan) but were absent in the exotic range (Florida). Hence, they deserve further study as potential candidates that may limit the local density of A. crenata in Japan. On the other hand, Pallidocercospora, a known pathogenic genus, was more abundant in leaves sampled in the exotic range (Crous et al., 2013). However, Pallidocercospora is often detected in healthy leaves and hence has been suggested to be a latent pathogen (Napitupulu et al., 2021). We suspect that it is only weakly deleterious and that the high local density of A. crenata in the exotic range may promote its abundance. To explore this possibility, we are currently conducting another study examining the effect of local population density of A. crenata within the exotic range of Florida. Overall, we detected patterns in support of the enemy release hypothesis with leaf endophytic fungi, but not with leaf endophytic bacteria, or with fungi and bacteria found in roots or soils.
Support for enhanced mutualism hypothesis
We did not observe microbial patterns supportive of the enhanced mutualism hypothesis for fungi, as the relative abundance of arbuscular mycorrhizal (AM) fungi within roots was higher in the native range than in the exotic range (Figure 5). Several other studies on the acquisition of new AM fungi report higher AM fungal colonization rates and greater AM fungal diversity in the exotic range compared to the native range (Yang et al., 2013; Soti et al., 2014; Sheng et al., 2022). A. crenata is reported to be capable of acquiring genotypes of AM fungi that enhance growth in Florida (Bray et al., 2003). Even though many AM fungi are considered generalists, their effectiveness depends on the combination of host and fungal species and genotypes. Our study found greater abundance of AM fungi within the roots of plants from the native range than those from the exotic range (Figure 5; Table 1), but it is not clear whether they are necessarily effective mutualists. Although ectomycorrhizal (EcM) fungi were more common in the exotic range soils (largely attributable to the genus Russula), the relative abundance of OTUs assigned to EcM in the roots was similar between native and exotic ranges (Table 1). A. crenata is not known to form symbiotic relationships with EcM fungi: there are no reports suggesting an interaction between A. crenata and EcM fungi, and we did not detect EcM hyphae inside the roots under microscopes (data not shown). The high relative abundance of EcM fungi in exotic range soils likely reflects differences in the overstory vegetation. Whereas ectomycorrhizal pines and oaks were dominant in the Florida sites, AM-dependent conifers (cedars and cypresses) were mixed with oaks in Japan (Supplementary Table S1). For bacterial communities, we did not find any notable differences in the abundance of potentially beneficial bacteria between native and exotic ranges, with ubiquitously high abundance of Burkholderia in leaves.
Conclusion
Whereas many recent studies have addressed positive and negative feedbacks between plants and soil microbial communities, our results suggest that it is essential to simultaneously examine leaf-associated microbial communities. A vast diversity of microbes was found to interact with A. crenata in both native and exotic ranges, ranging from mutualistic and commensalistic to pathogenic fungi and bacteria. While functional guilds were estimated from the database, a given microbe may act differently depending on environmental and host conditions. Furthermore, these microbes interact with each other in addition to their direct interaction with their host plant. We did not evaluate the interactions between hosts and microbes, but we narrowed down candidates that may engage in ecologically significant interactions with A. crenata in its native and exotic ranges. Specifically, the results suggest a potential importance of leaf pathogenic fungi in explaining the local density of A. crenata in Japan vs. Florida. A manipulative experimental study that employs density manipulation and inoculation tests with these putative pathogens within the native range of A. crenata would test whether these are the key density-dependent agents, the lack of which explains the invasive population growth in the exotic range.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://ddbj.nig.ac.jp/public/ddbj_database/dra/fastq/, DRA017027.
FIGURE 1
FIGURE 1 Sampling sites (circles) for this study within the exotic range (four locations in Alachua County, Florida) [(A) open circles], and the native range (three locations in Japan) [(B) closed circles]. See Supplementary Table S1 for the full names of locations abbreviated in two-letter codes. County layers map (TIGER/Line Shapefile, 2016, state, Florida, Current County Subdivision State-based) and city locations (TIGER/Line Shapefile, Current, State, Florida, Places) were accessed from the United States Census Bureau, https://catalog.data.gov/dataset.
FIGURE 4
FIGURE 4 Taxonomic compositions of fungal OTUs (relative abundance) at the order (A) and genus (C) levels (left), and bacterial OTUs at the order (B) and genus (D) levels (right). The top 20 taxa are indicated, and all remaining taxa are consolidated into the 'Others' category.
FIGURE 5
FIGURE 5 Relative abundance of fungal guilds, i.e., % of sequence reads that could be classified as plant pathogens (top) and arbuscular mycorrhizal (AM) fungi (bottom) within Japan (JPN) and Florida (FL) populations of A. crenata, detected from leaf, root, and soil samples. The guild classification was based on FUNGuild at the genus or family level. Asterisks indicate the results of Welch's t-test (*p < 0.05, **p < 0.01, ***p < 0.001).
TABLE 1
List of fungal and bacterial genera exhibiting a prevalence of 2% or more in either the native (Japan) or exotic range (Florida).
Discharge prediction of Amprong river using the ARIMA (autoregressive integrated moving average) model
An accurate determination of the water availability of the Amprong River has an important role in the planting system to support the agricultural production process in the Kedungkandang Irrigation Area, because if the availability of water is not precisely determined, there will be an error in regulating irrigation water. To overcome these problems, a good analysis system is needed. One such time-series model is the ARIMA (Autoregressive Integrated Moving Average) model. The model was built with discharge data from 9 periods, from 2008/2009 to 2016/2017, and its purpose was to predict the discharge of the 2017/2018 period. Only five models were feasible for use. The best model is the ARIMA (2, 0, 1)(1, 2, 1)36 model, with values of MSE = 22.90; RE = 6.00%; MSD = 8.05; MAD = 2.04; MAPE = 18.53; and MPE = -8.98.
Introduction
River discharge prediction is required in the application of hydrology, including management and planning of water resources. Information on the average discharge in a period represents the potential of water resources that can be utilized from a watershed, so the utilization plan must be arranged appropriately.
Accurate determination of water availability in the 10-day period of the Amprong River has a very important role in the planning of cropping patterns to support the process of agricultural production in the Kedungkandang irrigation area. If the availability of water is not precisely determined, there will be an error in the regulation of irrigation.
To overcome these problems, an analysis system is needed that is able to predict well. River discharge has a repetitive behaviour in the same period, and thus the creation of a time-series model allows the river discharge pattern to be represented in a mathematical formula. One of these time-series models is the ARIMA (Autoregressive Integrated Moving Average) model developed by Box and Jenkins (1976).
The pattern of river discharge data is often unclear, but with the ARIMA model, the pattern can be identified so that it can be used to forecast future patterns. The ARIMA model has a very good accuracy for short-term forecasting, but for long-term forecasting, its accuracy of forecasting is not good. ARIMA is a black box model. This model is not for finding out the factors that influence a system. The system is merely considered as a process generator. The main purpose of this method is to predict what is coming, not knowing why it happens.
The ARIMA model is a model that completely ignores independent variables when making forecasts. ARIMA uses past and present values of the dependent variable to produce accurate short-term forecasts. ARIMA is suitable if the observations in the time-series are statistically related to each other (dependent). Forecasting with the Box-Jenkins ARIMA method in general gives better results than other forecasting methods, because this method does not ignore the rules in time-series data [1]. Nigam et al. (2009) stated that the ARIMA model is the right approach for hydrological data, which often show autocorrelation over time and need an exact description of the underlying dynamics [2]. This is not possible with simple statistical forecasting methods such as regression analysis. Their predicted rainfall data showed a very good correlation with existing data, which indicated that the chosen ARIMA model has a good level of trust [3]. Valipour et al. (2012) stated that the ARIMA model has better performance than the ARMA model because it results in a stationary time-series in both the calibration and forecasting phases [4]. The ARIMA model could be used for forecasting monthly inflow discharge for the next 12 months. The accuracy of both the ARMA and ARIMA models increased compared to previous studies, due to the increase in the number of autoregressive and moving average parameters in the model. The ARIMA method produced a better model than ARMA [5]. The results of the analysis of the Sengguruh Reservoir operating pattern showed that the actual and the forecasted discharge did not have significant differences, which showed that the forecasting method with the ARIMA model is good enough for use [6].
The purpose of this study is to determine a prediction of the discharge of Amprong River in the next one-year period by using the ARIMA model, and then to compare it to the observed discharge.
Materials and Methods
The location of this study is the Kedungkandang Dam on the Amprong River in Kedungkandang Sub-District, City of Malang. The Amprong watershed is located in the City of Malang and Malang Regency, with an area of approximately 24,984 ha. The water from the Amprong River is utilized by the Kedungkandang irrigation area, which is 5,169 ha in size, through the Kedungkandang Dam. The study location is illustrated in Figure 1.
Materials 2.1.1. Data
The required data consist of the discharge of the Kedungkandang Dam over 10 periods (2008/2009 to 2017/2018), which was obtained from the PSDA Technical Execution Unit in Malang, under the Department of Water Resource Public Works of the Province of East Java.
ARIMA Model
The ARIMA model is divided into 3 elements: the autoregressive (AR), moving average (MA), and integrated (I) models. These three elements are modified to form new models, for example the autoregressive and moving average (ARMA) model. The general form is ARIMA (p, d, q) where p represents the autoregressive order, d represents the integrated order, and q represents the moving average order.
Autoregressive means that the value of x is influenced by the x values of previous periods, up to the p-th period; thus, what matters here is the variable itself. Moving average means that the value of the variable x is influenced by the errors of the variable x. Integrated means that the data are differenced. This means that in building ARIMA models, the requirement that must be met is data stationarity. If the data are stationary at the original level, the order is 0; if they are stationary at the first difference, the order is 1, and so on. To adjust the data to the Kedungkandang irrigation area cropping pattern, which is conducted in 10-day periods, the average daily discharge data were converted to 10-day average discharge data.
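The conversion from daily to 10-day (dasarian) means can be sketched as follows; the discharge series and the 360-day hydrological year used here are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily discharge (m3/s) for one 360-day hydrological year
daily_q = rng.uniform(2.0, 15.0, size=360)

# Reshape into 36 periods of 10 days and average within each period
ten_day_q = daily_q.reshape(36, 10).mean(axis=1)

print(ten_day_q.shape)  # 36 ten-day averages per year
```

With 36 periods per year, the seasonal lag of the model (S = 36) corresponds to one full year of 10-day averages.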
Data Identification
The patterns of the 10-day average discharge data were identified by plotting the data in graphs using the Minitab 16 software worksheet.
After the data pattern was known, the data were then tested for stationarity. This is because the data must meet the conditions for the ARIMA model, in that the data must be stationary to the variance and the average. The Box-Cox plot was carried out to test the stationarity of the data against variance. The data are stationary to the variance if the value of λ = 1. If the data were not stationary to the variance, then the data needed to be transformed to achieve stationarity against variance.
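The Box-Cox step can be sketched with SciPy, which estimates λ by maximum likelihood; the skewed series below is synthetic, standing in for a discharge series whose variance is not yet stationary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Strictly positive, right-skewed series (Box-Cox requires positive data)
q = rng.lognormal(mean=1.0, sigma=0.6, size=324)

# boxcox returns the transformed series and the MLE of lambda
q_transformed, lam = stats.boxcox(q)
print(f"estimated lambda = {lam:.3f}")
# A lambda near 1 would indicate no transformation is needed
```

Repeating the transformation until λ ≈ 1, as described in the text, corresponds to checking that the estimated λ of the transformed series no longer differs from 1.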
Stationarity testing of the average was conducted once the data were stationary to the variance. The ACF (Autocorrelation Function) plot was performed to determine the stationarity of the data against the average. Data that are stationary to the average are marked by lags that are not patterned (random) and do not contain seasonal elements. If the data were not stationary to the average, then the data needed to be made stationary by differencing until stationary data were obtained.
Determining Temporary Models
At this stage, p, d, and q were determined. The process of determining p and q was assisted by the autocorrelation correlogram (ACF) and the partial autocorrelation correlogram (PACF), while d was determined from the level of differencing required for stationarity. The ACF measures the correlation between observations at a lag of k, while the PACF measures the correlation between observations at a lag of k while controlling for the correlations at lags less than k.
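The sample ACF used for order identification can be computed directly; for a series with a strict 36-period seasonality, the ACF spikes at the seasonal lag, which is how the seasonal order S = 36 reveals itself. The perfectly periodic series below is an illustrative construction, not the dam data:

```python
import numpy as np

def sample_acf(x, k):
    """Sample autocorrelation of series x at lag k."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc)

# 10 "years" of a perfectly seasonal 36-period series
season = np.sin(2 * np.pi * np.arange(36) / 36.0)
x = np.tile(season, 10)

print(round(sample_acf(x, 36), 3))  # strong positive spike at the seasonal lag
print(round(sample_acf(x, 18), 3))  # strong negative correlation at the half period
```

For real data the analyst compares such spikes against confidence bands to decide which lags "cut off" (suggesting MA orders) and which "die down" (suggesting AR orders).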
Determining the Final Model
The final model was determined by checking the following model evaluation criteria: (1) The residuals of forecasting must be random; to ensure that the model met this requirement, the Ljung-Box statistic was used, where a p-value greater than 0.05 indicates that the residuals are random. (2) The model must be in the simplest form (a parsimonious model). (3) Conditions of invertibility and stationarity must be met, indicated by MA and AR coefficients whose absolute values are less than 1. (4) The model must have small MS and SS values. (5) The ACF and PACF plots of the residuals must show a cut-off pattern, which means that the residuals are random.
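The Ljung-Box check for residual randomness can be sketched from the residual ACF; the white-noise residuals below are synthetic, and in practice the degrees of freedom would be reduced by the number of fitted AR and MA parameters:

```python
import numpy as np
from scipy import stats

def ljung_box(resid, max_lag):
    """Ljung-Box Q statistic and p-value over lags 1..max_lag."""
    resid = np.asarray(resid, dtype=float)
    n = len(resid)
    rc = resid - resid.mean()
    denom = np.dot(rc, rc)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = np.dot(rc[:-k], rc[k:]) / denom  # residual ACF at lag k
        q += r_k ** 2 / (n - k)
    q *= n * (n + 2)
    # df would be max_lag - (p + q) for a fitted ARMA(p, q) model
    p_value = stats.chi2.sf(q, df=max_lag)
    return q, p_value

rng = np.random.default_rng(7)
white_noise = rng.normal(size=324)  # 9 years of 10-day residuals
q_stat, p_val = ljung_box(white_noise, max_lag=10)
print(f"Q = {q_stat:.2f}, p = {p_val:.3f}")
```

A p-value above 0.05 means the null hypothesis of independent residuals is not rejected, which is the feasibility criterion applied to the tentative models in the text.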
Forecasting
At this stage, the selected model was inputted in the Minitab 16 software, and then the forecasting process was carried out by the software.
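While the study carried out forecasting in Minitab, the core mechanics of fitting an autoregressive model and iterating it forward can be sketched with ordinary least squares. This simplified AR(2) example on a synthetic series is only a sketch of the non-seasonal AR part, not the study's full seasonal ARIMA (2,0,1)(1,2,1)36 model:

```python
import numpy as np

def fit_ar2_and_forecast(x, horizon):
    """Fit x_t = c + a1*x_{t-1} + a2*x_{t-2} by least squares, then forecast ahead."""
    x = np.asarray(x, dtype=float)
    # Design matrix of an intercept plus the two lagged values
    X = np.column_stack([np.ones(len(x) - 2), x[1:-1], x[:-2]])
    y = x[2:]
    c, a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]
    history = list(x)
    forecasts = []
    for _ in range(horizon):
        nxt = c + a1 * history[-1] + a2 * history[-2]
        forecasts.append(nxt)
        history.append(nxt)
    return np.array(forecasts)

# Hypothetical stationary AR(2) series standing in for transformed discharge data
rng = np.random.default_rng(3)
series = [0.0, 0.0]
for _ in range(322):
    series.append(5.0 + 0.5 * series[-1] - 0.3 * series[-2] + rng.normal(scale=0.5))

fc = fit_ar2_and_forecast(np.array(series), horizon=36)  # one "year" ahead
print(fc.shape)
```

Multi-step forecasts are produced recursively by feeding each prediction back in as a lagged value, which is also why ARIMA accuracy degrades beyond short horizons, as the introduction notes.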
Calibration/Model Reliability Testing
Discharge prediction results were compared with comparative data that had been prepared beforehand to determine the reliability of the model in forecasting.
10-Day Average Discharge
The 10-day average discharge of the Kedungkandang Dam, converted from the daily average discharge, was plotted in a graph as presented in Figure 2, where the graph indicates a seasonal pattern. The data were not stationary in the average, and the data variance was too large; the data were also not stationary in the variance. The stationarity of the variance was tested with the Box-Cox plot, while the stationarity of the average was tested with the ACF plot. Figure 3 shows that the 10-day average discharge data of the Kedungkandang Dam, after being tested by the Box-Cox plot, had a λ value different from 1, which means that the data were not stationary to the variance. The Box-Cox transformation was then performed until the value of λ = 1 was obtained. The results of the first transformation, as shown in Figure 4, show that after the first transformation, the data became stationary to the variance, indicated by the value of λ = 1.
Stationary Test for the Average
Stationarity testing of the average was conducted once the data became stationary to the variance. The ACF plot in Figure 5 shows that the data were not stationary to the average, characterized by lags that were still patterned (not random) and contained seasonality. Therefore the data needed to be made stationary through a first differencing. Another ACF plot was created from the data that had been differenced once. Figure 6 shows that the data were still not stationary to the average, and thus a second differencing was needed. Figure 7 shows that the ACF plot of the data that had been differenced twice became stationary to the average, as shown by irregular patterns. In Figure 7, there was a cut-off after lag 1, so the obtained tentative model for the non-seasonal MA is q = 1 and the tentative model for the seasonal MA is Q = 1. Because the differencing process was performed twice, the order D = 2. The seasonal cut-off occurred at the 36th lag, so the order S = 36. A PACF plot was created to determine the orders p and P of the AR model. The PACF plot in Figure 8 shows that the observation points of the average 10-day discharge of the Kedungkandang Dam died down. The tentative models have non-seasonal AR orders of p = 1, 2, 3, 4, and 5, while the seasonal AR order is P = 1. From these stages, the values of each parameter of the ARIMA model were obtained, which then resulted in several tentative models, as presented in Table 1.
The feasibility test of the models in Table 1 was carried out using the Ljung-Box statistical test. The model is not feasible if the p-value is less than 0.05 on one or all of the lags. The results of the feasibility test of the model showed that there were only five tentative models that could be used.
The best model was selected at the model calibration stage. Model calibration was performed by comparing the discharge data from the ARIMA forecasting method to the existing discharge data.
Best Model Selection
The relative error of a feasible ARIMA model was used to choose the best model. The best ARIMA model was chosen from the model that had the smallest relative error. Relative error was calculated by comparing the model discharge with historical discharge.
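The relative-error criterion can be expressed directly; the volumes below are illustrative values chosen so that the result mirrors the 6.00% reported for the selected model:

```python
def relative_error(model_volume, historical_volume):
    """RE (%) = |model volume - historical volume| / historical volume * 100."""
    return abs(model_volume - historical_volume) / historical_volume * 100.0

print(relative_error(106.0, 100.0))  # → 6.0
```

Because RE compares total forecast volume against observed volume, it complements pointwise criteria such as MSE and MAPE when ranking the feasible models.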
From the historical discharge and the ARIMA model forecasting discharge, the Relative Error (RE) of each model was then calculated. This is the percentage ratio of the difference between the volume of the model and the historical volume, compared to the historical volume. The Relative Error (RE) was used to select the best model from the five models that met the Ljung-Box statistical requirements. The ARIMA (2,0,1)(1,2,1)36 model was chosen as it had the lowest MSE, MSD, MAD, MAPE, and MPE values. The model with the lowest RE value is the ARIMA (3,0,1)(1,2,1)36 model, at 5.86%, but this is not much different from the RE value for the ARIMA (2,0,1)(1,2,1)36 model, which is 6.00%. The chosen model was therefore the ARIMA (2,0,1)(1,2,1)36 model. The parameters of the ARIMA (2,0,1)(1,2,1)36 model obtained from the Minitab software output are shown in Figure 9, and the forecast results are presented in Table 3 and Figure 10.
The multifaceted role of insulin-like growth factor binding protein 7
Insulin-like growth factor binding protein 7 (IGFBP7) serves as a crucial extracellular matrix protein, exerting pivotal roles in both physiological and pathological processes. This comprehensive review meticulously delineates the structural attributes of IGFBP7, juxtaposing them with other members within the IGFBP families, and delves into the expression patterns across various tissues. Furthermore, the review thoroughly examines the multifaceted functions of IGFBP7, encompassing its regulatory effects on cell proliferation, apoptosis, and migration, elucidating the underlying mechanistic pathways. Moreover, it underscores the compelling roles in tumor progression, acute kidney injury, and reproductive processes. By rigorously elucidating the diverse functionalities and regulatory networks of IGFBP7 across various physiological and pathological contexts, this review aims to furnish a robust theoretical framework and delineate future research trajectories for leveraging IGFBP7 in disease diagnosis, therapeutic interventions, and pharmaceutical innovations.
Introduction
Insulin-like growth factor binding protein 7 (IGFBP7), as a crucial member of the IGFBP family, has been extensively investigated and recognized for its significant roles in cellular biology and pathophysiology. Serving as an extracellular matrix protein, IGFBP7 not only participates in regulating fundamental biological processes such as cell proliferation (Xia et al., 2020), apoptosis (Tang et al., 2021), and migration (Hong et al., 2023) but also exerts important regulatory effects in tumor development (Yi et al., 2022; Artico et al., 2023), angiogenesis (Bracun et al., 2022; Lam et al., 2022; Liu et al., 2023; Tan et al., 2023), renal diseases (Waskowski et al., 2021; Chapman et al., 2023; Hu et al., 2023; Stanski et al., 2023), and reproduction (Wandji et al., 2000; Huang et al., 2021; Wu et al., 2022). In recent years, continuous research on the relationship between the structure and function of IGFBP7, as well as its mechanistic involvement in various diseases, has gradually gained recognition. However, numerous mysteries persist regarding the functional mechanisms of IGFBP7 and its potential applications in disease diagnosis, treatment, and drug development. Therefore, this review aims to systematically summarize the recent advances in the study of IGFBP7, encompassing its structural characteristics, expression pattern, and its functional and mechanistic roles in different physiological and pathological processes. The comprehensive understanding of IGFBP7's biological functions provided herein is intended to lay a theoretical foundation and guide future research directions for its further development in clinical applications.
Structure and function of the IGFBP family
IGFBPs, a family of proteins that bind to insulin-like growth factors (IGFs) and regulate their biological activity, play a crucial role in the IGF signaling pathway (Ma et al., 2023). By binding to IGFs, IGFBPs modulate their biological activity and availability, prolonging their half-life in vivo and regulating their access to IGF receptors, thus impacting the activity of the IGF signaling pathway and regulating biological processes such as cell growth, proliferation, and apoptosis (Baxter, 2023; Galal et al., 2023; Werner, 2023). Based on their different affinities for IGFs, IGFBPs are divided into two classes: high-affinity binding proteins (IGFBP1-6) and low-affinity binding proteins (IGFBP-rP1-10).
IGFBPs are a family of proteins characterized by multiple conserved domains. They typically consist of three distinct domains: the N-terminal domain, the C-terminal domain, and the central domain. The N-terminal domain contains approximately 16-18 conserved cysteine residues, including a common IGFBP motif (GCGCCXXC), which is a key region for binding to IGFs (Vorwerk et al., 2002). In contrast, the C-terminal domain usually contains about six conserved cysteine residues, with potential variations among different members of the IGFBP family (Zhou et al., 2023). The central domain, also known as the binding domain, exhibits structural differences from other domains and typically contains glycosylation and phosphorylation sites, which can influence the activity and stability of IGFBPs. The central domain mediates the binding of IGFBPs to IGFs, thereby regulating the biological activity of IGFs and the activation of cellular signaling pathways (Fowlkes et al., 1997).
The IGFBP-rPs, including IGFBP-rP1 to IGFBP-rP10, share structural and functional similarities with IGFBPs. IGFBP-rP1, initially named IGFBP7, was the first discovered IGFBP-related protein component due to its ability to connect with IGF via the N-terminal domain (Song et al., 2021). IGFBP7 has been cloned from various cellular systems and is known by multiple names such as mac25 (Kato, 2000), tumor adhesion factor (Albelda, 1993), prostate stromal factor (Yarosh et al., 2015), and angiostatin (Jin et al., 2020). Structurally, IGFBP7 differs significantly from other IGFBPs, particularly in its C-terminal domain, which lacks conserved cysteine residues, possessing only one cysteine residue (Oh et al., 1996). Moreover, IGFBP7 exhibits 100-fold lower affinity for binding to IGF-1 and is the only member of the family that binds insulin with strong affinity, limiting its binding to insulin receptors (Yamanaka et al., 1997). Unlike IGFBP3 and IGFBP5, IGFBP7 is not subject to glycosylation or phosphorylation effects and is distinguished from other IGFBPs by its regulation mechanisms at the RNA and DNA levels (Kutsukake et al., 2008). These structural and post-translational modification differences suggest that IGFBP7 may possess unique functions independent of IGF.
Expression of IGFBP7
IGFBP7 expression was detected in various normal tissues (Figure 1), including brain, liver, heart, small intestine, spleen, kidney, placenta, lung, skeletal muscle, thymus, prostate, testis, ovary, pancreas, and colon (Hwa et al., 1998). Immunohistochemical analysis revealed strong positive staining of IGFBP7 in peripheral nerves, respiratory cilia, epididymis, and fallopian tubes; smooth muscle cells in intestines, bladder, prostate, and endothelial cell walls also exhibited strong positive staining (Degeorges et al., 2000). Conversely, lymphocytes, plasma cells, and adipocytes displayed negative staining (Artico et al., 2021).

FIGURE 1 Expression of IGFBP7 in various tissues and its association with diseases related to its downregulation and upregulation. The left side lists diseases associated with downregulation of IGFBP7 expression, while the right side lists diseases associated with upregulation of IGFBP7 expression. The upper part shows tissues and organs where IGFBP7 expression is detected, and the lower part shows tissues and organs where IGFBP7 expression is not detected. Diseases marked with the same color indicate the same disease. Data are sourced from transcriptome sequencing data in the GEO and GeneCards databases.

Within the kidneys, stronger staining was observed in the epithelium of distal tubules compared to proximal tubules (Sekiuchi et al., 2012). Moreover, cells from the reticular zone and glomerular zone showed stronger staining than those from the cortical zone, with some studies indicating stronger expression of IGFBP7 in proximal tubules and localization along the brush border of certain proximal convoluted tubules (Emlet et al., 2017). In the liver, analysis via serial analysis of gene expression revealed that activated stellate cells were the major contributors to IGFBP7 expression (Degeorges et al., 2000). Notably, compared to isolated activated stellate cells, IGFBP7 exhibited lower expression throughout the entire liver. Immunohistochemical studies conducted on human prostate tissue (normal) demonstrated universally intense staining (Degeorges et al., 1999). IGFBP7 is also detectable in various body fluids such as serum, urine, cerebrospinal fluid, and amniotic fluid of pregnant women (Anderlová et al., 2022). The cell-specific differential expression pattern of IGFBP7 within tissues may suggest its potential specific functions in these organs.
Functional mechanisms of IGFBP7
IGFBP7, a novel member of the IGFBP superfamily, possesses a unique molecular structure characterized by a conserved N-terminal domain similar to other IGFBPs, as well as a distinctive Kazal-type serine protease inhibitor domain and an immunoglobulin-like C2 domain (Yamanaka et al., 1997). Apart from its canonical role in modulating the effects of IGFs, IGFBP7 independently regulates cellular processes such as apoptosis, proliferation, and migration (Kim et al., 1997). In particular, IGFBP7 is implicated in cell adhesion and tumor cell proliferation, with its N-terminal fragments retaining cell membrane adhesion properties after degradation (Oh et al., 1996; Vorwerk et al., 2002). Studies have demonstrated an upregulation of IGFBP7 expression in cells treated with TGF-β1 and retinoic acid (Oh, 1998). Additionally, IGFBP7 has been shown to bind to cell surface heparan sulfate, although this interaction may be influenced by the cleavage of IGFBP7 by the pancreatic trypsin-like integral membrane serine protease matriptase (Godfried Sie et al., 2012). Cleavage by matriptase at the P1 site, involving Arg or Lys residues, has been associated with breast cancer invasion and metastasis. Proteolytic cleavage, particularly at the N-terminus, including the heparin-binding domain, reduces heparin binding and IGF-1R occupancy (Werner, 2023). Furthermore, researchers have observed co-localization of IGFBP7 with the basement membrane in the vasculature, and subsequent direct measurement of IGFBP7 binding to extracellular matrix proteins revealed its ability to bind to Type IV collagen (Pen et al., 2007). Moreover, IGFBP7 was found to stimulate adhesion of human umbilical vein endothelial cells to Type IV collagen matrices, inducing morphological changes. St Croix et al. (2000) also identified a role for IGFBP7 in binding to Type IV collagen. Using serial analysis of gene expression, they demonstrated elevated expression of IGFBP7 in tumor endothelial cells compared to healthy endothelial cells, suggesting IGFBP7 as a potential tumor endothelial cell marker.
The role of IGFBP7 in tumor development
The role of IGFBP7 in cancer has been a highly researched area of interest. Numerous studies have confirmed the association between IGFBP7 and various cancers (Jin et al., 2020; Li et al., 2023), including hepatocellular carcinoma, breast cancer (Godina et al., 2021; Wilcox et al., 2021), esophageal cancer (Li et al., 2022), colorectal cancer, and prostate cancer (Singh et al., 2020). However, the role of IGFBP7 appears to exhibit a complex pattern across different types of cancer. Utilizing detection techniques such as qRT-PCR, immunohistochemistry, Northern blot, and Western blot, studies have revealed that IGFBP7 expression is generally downregulated in hepatocellular carcinoma, melanoma, and lung cancer, while showing an upregulation trend in esophageal cancer. In breast, gastric, prostate, and colorectal cancers and in glioma, some studies have reported upregulation of IGFBP7 expression, while others have reported downregulation, indicating a dual role of IGFBP7 in cancer cell proliferation, progression, and prognosis (Lin et al., 2019). Furthermore, research on IGFBP7 has shown its ability to alter cancer cell sensitivity to chemotherapy drugs, suggesting its potential beneficial value in anticancer therapy (Roška et al., 2020; Tang et al., 2021). However, despite a wealth of studies elucidating the significant role of IGFBP7 in tumor development, its specific mechanisms and roles in different types of cancer still require further investigation.
IGFBP7 primarily exerts its anti-tumor effects by inhibiting tumor cell growth and accelerating tumor cell apoptosis (Figure 2). This is achieved through inhibition of the expression of cell cycle proteins D1 and p21 and promotion of the expression of cell cycle proteins A, E, p16, and p27, or by suppressing Akt kinase activity, leading to upregulation of the cyclin-dependent kinase (CDK) inhibitors p27Kip1 and p21Cip1, thereby inducing cell cycle arrest at the G0/G1 phase. Overexpression of IGFBP7, or addition of exogenous IGFBP7 in cell culture, can induce cell cycle arrest at the G2 phase through non-IGF-1 receptor, AKT, and ERK pathways, subsequently leading to cell apoptosis (Sato et al., 2007; Wang et al., 2017; Zhang et al., 2019). Despite some conflicting conclusions, the majority of evidence currently suggests that IGFBP7 inhibits tumor cell growth and promotes tumor cell apoptosis, rendering it a potential candidate for tumor suppression. However, further research is needed to elucidate its specific mechanisms and roles in different types of cancer.
The role of IGFBP7 in acute kidney injury
IGFBP7 has been proposed as a biomarker for acute kidney injury (AKI), aiming to enhance early detection, discrimination, and prognosis assessment, complementing serum creatinine and urine output (Meena et al., 2023; Murugan et al., 2023; Stanski et al., 2023). Insights from studies on TIMP2 and IGFBP7, which modulate the cell cycle, exhibit differential expression and distribution, and change with the severity of AKI, are crucial for guiding the diagnosis of renal injury across various etiologies, extents, and locations (proximal tubule, distal tubule, collecting duct, or interstitium). IGFBP7 was identified as a biomarker for AKI by Kashani et al. (2013): from a screening of 340 candidate biomarkers, IGFBP7 was found to predict AKI based on creatinine standards. Released from proximal tubules, IGFBP7 facilitates pinpointing specific segments of damaged renal tubules. In the early phases of cellular stress, IGFBP7 and TIMP2 induce G1 cell cycle arrest by inhibiting cyclin-dependent protein kinases. A TIMP2×IGFBP7 product > 0.3 has demonstrated a sensitivity of 92% for moderate to severe AKI (Luthra and Tyagi, 2019; Zaouter and Ouattara, 2019). Moreover, elevated IGFBP7 mRNA levels have been observed in uranium nitrate-induced acute renal failure in mice (Taulan et al., 2006).
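The TIMP2×IGFBP7 cutoff logic quoted above is easy to make concrete. The following minimal sketch is illustrative only: the helper name `aki_risk_flag` is hypothetical, and the convention of reporting the urinary product in (ng/mL)²/1000 is an assumption commonly used for this biomarker pair; only the 0.3 cutoff comes from the text.

```python
def aki_risk_flag(timp2_ng_ml: float, igfbp7_ng_ml: float,
                  cutoff: float = 0.3) -> bool:
    """Flag moderate-to-severe AKI risk from the urinary TIMP2 x IGFBP7 product.

    Assumes the common convention of reporting the product in (ng/mL)^2 / 1000;
    the 0.3 cutoff is the one quoted in the text (92% sensitivity).
    """
    score = (timp2_ng_ml * igfbp7_ng_ml) / 1000.0
    return score > cutoff

# Example: 25 ng/mL TIMP2 and 20 ng/mL IGFBP7 -> product 0.5, above the cutoff
print(aki_risk_flag(25.0, 20.0))  # True
print(aki_risk_flag(10.0, 10.0))  # product 0.1 -> False
```

In practice such cutoffs are applied alongside, not instead of, serum creatinine and urine output, as the paragraph above notes.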
The mechanism of IGFBP7 in AKI involves its ability to regulate cell cycle progression (Zang et al., 2019), inflammation, fibrosis, apoptosis, and oxidative stress (Yu et al., 2022). IGFBP7 induces G1 phase cell cycle arrest in renal tubular epithelial cells, thereby inhibiting their proliferation. This effect is mediated through the upregulation of CDK inhibitors such as p21 and p27, which suppress CDK activity, thus halting cell cycle progression (Wang et al., 2019). Additionally, IGFBP7 is implicated in the modulation of renal inflammation and fibrosis, hallmark features of AKI progression. It can regulate the expression of pro-inflammatory cytokines and chemokines, including IL-6 and TNF-α, thereby mitigating inflammatory responses in the kidney (Zwaag et al., 2019). Moreover, IGFBP7 has been shown to inhibit the activation of the TGF-β signaling pathway (van Duijl et al., 2022), a key mediator of renal fibrosis, thereby ameliorating fibrotic changes in the kidney. In summary, IGFBP7 serves as a promising biomarker for acute kidney injury, aiding in early detection and prognosis assessment. Its involvement in regulating cell cycle progression, inflammation, fibrosis, apoptosis, and oxidative stress underscores its significance in AKI pathogenesis and highlights its potential as a therapeutic target.
The role of IGFBP7 in reproduction
IGFBP7 plays a regulatory role in folliculogenesis (Wijesena et al., 2024). IGFBP7 exhibits significant homology with follicular inhibin (Kato, 2000). Follicular inhibin is considered an inhibitor of FSH secretion, playing a pivotal role in follicular development and ovarian function (Appiah Adu-Gyamfi et al., 2020). Like follicular inhibin, IGFBP7 can bind activin A, thereby influencing the growth inhibitory effects of the TGF-β superfamily on granulosa cells (Figure 3) (Tamura et al., 2007). Recent studies have shown that IGFBP7 is expressed in granulosa cells of pig antral follicles and in bovine corpora lutea and is capable of suppressing estrogen production in granulosa cells (Ożegowska et al., 2018). RNA-seq results have revealed high expression of IGFBP7 in granulosa cells of buffalo antral follicles, and expression of IGFBP7 has been identified in granulosa cells of bovine large antral follicles and bovine corpora lutea (Li et al., 2018). Knockdown of IGFBP7 has been observed to affect the number of apoptotic cells, the cell cycle, cell proliferation, and estrogen and progesterone production (Kim et al., 2018). Treatment of granulosa cells with FSH and activin significantly increases the expression of Cyp19a1 mRNA and the secretion of 17β-estradiol (E2), whereas the addition of exogenous recombinant mouse IGFBP7 to the culture medium inhibits this promotion (Tamura et al., 2007). Treatment of granulosa cells with IGFBP7-specific small interfering RNA (siRNA) reduces IGFBP7 expression, enhancing FSH-stimulated E2 secretion into the culture medium. These results suggest that IGFBP7 inhibits estrogen production in granulosa cells, indicating that this protein, secreted into the follicular fluid, may serve as an ovarian intrinsic factor that negatively regulates granulosa cell differentiation (Yoshie et al., 2021). Furthermore, invertebrate insulin-like growth factor-binding proteins (ILPBPs) share structural homology with vertebrate IGFBP7, and ILPBPs have been shown to potentially function in ovarian development in the invertebrate red deep-sea crab (Huang et al., 2021).
IGFBP7 is also significantly associated with embryo implantation and the success rate of pregnancy. IGFBP7 is present in uterine glandular epithelial cells and uterine stromal cells, with elevated expression during the mid-to-late secretory phase of the menstrual cycle in women (Domínguez et al., 2003). In vitro studies have demonstrated that IGFBP7 acts as a decidualization regulator in uterine stromal cells, potentially exerting its effects during the decidualization process of uterine stromal cells (Kutsukake et al., 2007; Yoshie et al., 2021). IGFBP7 participates in embryo implantation and uterine decidualization. Inhibition of IGFBP7 significantly increases the Th1-type cytokine IFNγ and decreases the Th2-type cytokines IL-4 and IL-10, thereby inhibiting uterine decidualization and reducing uterine receptivity. This can significantly lower embryo implantation and pregnancy rates, leading to pregnancy failure in a mouse model (Liu et al., 2012). In human umbilical vein endothelial cells, IGFBP7 treatment inhibits exogenous VEGF-induced angiogenesis, proliferation, and phosphorylation of MEK and ERK (Tamura et al., 2009). Using the human endometrial epithelial cell line (EM1) to study the significance of IGFBP7 in endometrial glandular function, results indicate that IGFBP7 regulates glandular cell morphological changes by interfering with normal PKA and MAPK signaling pathways associated with the transformation and/or differentiation of endometrial glands, which is crucial for the initiation of embryo implantation (Kutsukake et al., 2010).
IGFBP7 plays a crucial role in pathological pregnancies, including complete hydatidiform mole, pregnancy-related nausea and vomiting (hyperemesis gravidarum), and endometriosis. IHC analysis revealed that downregulation of IGFBP7 may play a significant role in the progression of complete hydatidiform mole (Xiao et al., 2014). Common variants of IGFBP7 are susceptibility loci for pregnancy-related nausea and vomiting (Fejzo et al., 2019b), with serum levels of IGFBP7 significantly increased in women with hyperemesis gravidarum at 12 weeks of pregnancy (Fejzo et al., 2019a). Moreover, the fruit fly homologue of IGFBP7 has been shown to play a role in coordinating neurons between metabolic states and feeding behavior, potentially conveying food preferences and pregnancy intentions (Bader et al., 2013). IGFBP7 is associated with the pathophysiology of endometriosis, as serum IGFBP7 concentrations in patients with endometriosis are significantly higher than those in controls (Kutsukake et al., 2008), and metformin can upregulate the expression of IGFBP7 in both human and mouse models of endometriosis (Huang et al., 2022). IGFBP7 is also involved in male reproductive processes. A study conducted at the Federal University of São Paulo from May 2014 to April 2016 detected increased expression levels of IGFBP7 protein in the semen of patients with varicocele using Western blot analysis (Belardin et al., 2016).
In summary, IGFBP7 plays a multifaceted role in folliculogenesis, embryo implantation, pregnancy success, and pathological pregnancies, including conditions such as complete hydatidiform mole, hyperemesis gravidarum, and endometriosis. Its involvement in regulating decidualization, angiogenesis, glandular function, and neuronal coordination underscores its significance in reproductive processes and highlights its potential as a diagnostic and therapeutic target in reproductive disorders.
In this review, we focus on the roles of IGFBP7 in tumor development, acute kidney injury, and reproduction because of their significant impact on clinical outcomes and the extensive research supporting IGFBP7's involvement in these areas. These functions are critical for understanding IGFBP7's diverse biological activities and its potential as a therapeutic target. Furthermore, IGFBP7's interaction with key signaling pathways such as the AKT/ERK pathway, which are common to these conditions, underscores its multifaceted role in cellular processes. Looking ahead, IGFBP7 holds promise across multiple fronts. In cancer, its tumor-suppressive properties in melanoma, breast, and colorectal cancers invite exploration of the underlying mechanisms and of its potential as a biomarker for early detection and a therapeutic target. Additionally, its role in fibrosis regulation in organs such as the liver, lungs, and kidneys warrants investigation into fibrotic disease pathogenesis and therapeutic potential. In metabolic disorders such as diabetes and obesity, IGFBP7's influence on metabolic processes hints at diagnostic and therapeutic applications. By pursuing these avenues, IGFBP7 could emerge as a pivotal player in disease diagnosis, prognosis, and treatment strategies, offering hope for improved healthcare outcomes.
FIGURE 3
FIGURE 3 Potential mechanism of IGFBP7 regulation in follicular development. IGFBP7 is structurally homologous to follistatin (FST) and has been shown to interact with activins (ACT) at the protein level. The INH-ACT-FST axis plays a crucial role in regulating follicular development, modulating follicular cell secretion of E2 and P4, proliferation, apoptosis, and expression of BMP15 and EGF through Smad, MAPK, and PI3K signaling pathways. This schematic diagram illustrates the potential molecular mechanism by which IGFBP7 regulates follicular development, highlighting its interactions with key players in the INH-ACT-FST axis and downstream signaling pathways.
Role of CD68 in Tumor Immunity and Prognosis Prediction: A Pan-Cancer Analysis
CD68 plays a critical role in promoting phagocytosis. However, the function of CD68 in tumor immunity and prognosis remains unknown. This study systematically analyzed CD68 expression among 33 tumor and normal tissues from the Cancer Genome Atlas (TCGA) and Genotype-Tissue Expression (GTEx) datasets. In addition, the relationships between the expression of CD68 and cancer prognosis, immune infiltration, checkpoint markers, and drug response were explored. Upregulated levels of CD68 were observed in various cancer types, which were verified on tumor tissue chips using immunohistochemistry. High expression of CD68 in tumor samples correlates with an adverse prognosis in GBM, KIRC, LGG, LIHC, LUSC, THCA, and THYM, and with a better prognosis in KICH. The top three negatively enriched KEGG terms in the high-CD68 subgroup were chemokine signaling pathway, cytokine-cytokine receptor interaction, and cell adhesion molecules (CAMs), while the top negatively enriched HALLMARK terms included complement, allograft rejection, and inflammatory response. Based on CD68 levels, a series of targeted drugs and small-molecule drugs with promising therapeutic effects were predicted. The clinical prognosis and immune infiltration associated with high expression levels of CD68 differ across tumor types. Inhibiting CD68-dependent signaling could be a promising immunotherapy strategy in many tumor types.
Introduction
According to the latest study in 2019, cancer has become the first or second leading cause of death in more than 112 countries among people younger than 70 years old 1 . Worldwide, more than 19 million new cancer cases and 10 million cancer deaths occurred in 2020 2 . Even after a series of traditional treatment methods, including radiotherapy, chemotherapy, biological therapy, and surgery, the effect of tumor therapy is still unsatisfactory 3,4 . Tumor immunotherapeutic strategies, such as those targeting programmed cell death protein 1 (PD1), have proven to be a novel and promising treatment for tumors 5,6 . With the rapid development of high-throughput sequencing technology, more and more immune-associated molecules related to tumor prognosis have been discovered, which may play an irreplaceable role in tumor immunotherapy. The cluster of differentiation 68 (CD68), also known as GP110, LAMP4, or SCARD1, is a 110 kD transmembrane glycoprotein widely expressed in monocyte-lineage cell types such as macrophages, microglia, and osteoclasts 7 . CD68 plays an essential role in various physiological and pathological processes, including atherosclerosis formation 8 , inflammation and auto-immunity 9 , bone-resorption promotion 10 , and tumor progression 11,12 . Bone marrow-derived macrophages (BMMs) are the most common type among tumor-infiltrating immune cells in the tumor microenvironment (TME) and are vital factors that mediate the antitumor immune response [13][14][15] . Recent studies found that CD68 is overexpressed in tumor-associated macrophages (TAMs) and tumor cells. High levels of CD68 are associated with higher tumor grade, larger tumor size, Ki67 positivity, and other malignant features, which indicate tumor progression and aggressiveness [16][17][18][19] . TAMs, identified by CD68 expression, can be divided into two subtypes: classically activated type 1 (M1-like) macrophages and alternatively activated type 2 (M2-like) macrophages.
M1-like macrophages, with proinflammatory characteristics and expressing high levels of free radicals and major histocompatibility complex molecules, contribute to antitumor activity 20,21 . In contrast, M2-like macrophages, which release multiple anti-inflammatory cytokines and chemokines, have been reported to promote tumor growth and metastasis 22,23 . Increasing evidence has shown that CD68 is a promising tumor-associated diagnostic and prognostic marker in cancer. However, the signaling pathways through which CD68 is involved in tumor immunity and progression remain far from understood. In this study, we examined the expression of CD68 in 33 cancer types using large-scale RNA-sequencing (RNA-seq) data from the public TCGA dataset. Upregulated levels of CD68 were observed in various cancer types, both in the TCGA database and on our tumor tissue chips. Meanwhile, we discussed the value of CD68 for prognostic prediction in pan-cancer. Moreover, the relationship between the expression levels of CD68 and the infiltration of immune cells in the pan-cancer microenvironment was examined. Finally, we analyzed the correlation between a series of predicted drugs and CD68 expression, which might be used for tumor immunotherapy in the future.
Materials and Methods
Collection of sample and patient data
The clinicopathological features and RNA sequencing (RNA-seq) data of the 33 types of cancers were obtained from the TCGA dataset (http://cancergenome.nih.gov). As the data from normal tissue are relatively insufficient, RNA-seq data of normal human tissues were additionally obtained from the GTEx dataset (https://www.gtexportal.org/) to analyze the expression levels of CD68 between tumor and normal tissues. A tumor tissue chip (Catalog No. BCN963) containing multiple-organ tumor arrays with matched normal tissues was used to verify the expression of CD68. Informed consent of all subjects was obtained in this study.
Recognition of relevant features
The gene expression data of CD68 were extracted from the TCGA and GTEx databases to form an expression matrix using Oncomine (https://www.oncomine.org/), GEPIA (http://gepia.cancer-pku.cn/), and R (4.0.4). The genetic mutation profile of CD68 was obtained from the public cBioPortal database (https://www.cbioportal.org/). Kaplan-Meier (KM) analysis with the log-rank test was used to compare the disease-free interval (DFI), progression-free interval (PFI), disease-specific survival (DSS), and overall survival (OS) among patients. A univariate Cox model was used to calculate the relationship between CD68 expression levels and patient survival. The immune infiltrates among 33 types of cancers were studied with the Tumor Immune Estimation Resource (TIMER 2.0, https://cistrome.shinyapps.io/timer/) 24 and CIBERSORT 25 . The ESTIMATE algorithm was applied to estimate the stromal and immune cells in the tumor microenvironment and to calculate stromal scores, immune scores, and estimate scores. Gene set enrichment analysis (GSEA) was used to display the biological functions and pathways involving CD68. This analysis was implemented in Sangerbox (http://sangerbox.com/) based on the molecular signatures database (MSigDB) H (hallmark gene sets) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The relationship between CD68 expression and drug responses was predicted from CellMiner (http://discover.nci.nih.gov/cellminer/) using R.
Statistical analysis
A Student's t-test was performed to explore the correlation between CD68 expression and drugs. The Kruskal-Wallis test was adopted to compare the expression levels of CD68 in tumor and normal tissues. Meanwhile, KM curves, the log-rank test, and the Cox proportional hazards regression model were applied to analyze survival. In addition, the Spearman test was used for correlation analysis. All analyses were performed in R.
All statistical tests were two-sided, and p < 0.05 was considered a significant difference.
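The survival comparisons described above rest on the log-rank test. As a self-contained illustration (a generic Python/NumPy sketch rather than the authors' R pipeline; the toy follow-up data are invented), the two-group log-rank chi-square statistic can be computed as:

```python
import numpy as np

def logrank_chi2(time, event, group):
    """Two-group log-rank statistic (1 degree of freedom).

    time  : follow-up times
    event : 1 if the event occurred, 0 if censored
    group : 1 for the high-expression arm, 0 for the low-expression arm
    """
    time, event, group = map(np.asarray, (time, event, group))
    obs1 = exp1 = var = 0.0
    for t in np.unique(time[event == 1]):            # distinct event times
        at_risk = time >= t
        n = at_risk.sum()                            # total subjects at risk
        n1 = (at_risk & (group == 1)).sum()          # at risk in group 1
        d = ((time == t) & (event == 1)).sum()       # events at time t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs1 += d1                                   # observed events, group 1
        exp1 += d * n1 / n                           # expected under the null
        if n > 1:                                    # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (obs1 - exp1) ** 2 / var

# toy data: the "high" arm fails earlier than the "low" arm
chi2 = logrank_chi2(time=[1, 2, 3, 4, 5, 6],
                    event=[1, 1, 1, 1, 1, 1],
                    group=[1, 1, 1, 0, 0, 0])
```

Comparing `chi2` against the 3.84 critical value (chi-square, 1 d.f., α = 0.05) reproduces the accept/reject decision a KM plot's log-rank p-value encodes.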
Results
Expression of CD68 in pan-cancer
First, we examined the expression of CD68 in pan-cancer using the Oncomine dataset and found that the levels of CD68 were relatively higher in brain and central nervous system (CNS) cancer, breast cancer (BRCA), kidney cancer, lymphoma, and pancreatic cancer than in normal tissues. Meanwhile, other studies also indicated that the expression of CD68 was downregulated in colorectal cancer, kidney cancer, leukemia, and lung cancer (Figure 1A). In addition, we matched the GTEx normal samples with the TCGA tumor samples (Figure 1B). We found that the levels of CD68 in colon adenocarcinoma (COAD), glioblastoma multiforme (GBM), kidney renal clear cell carcinoma (KIRC), kidney renal papillary cell carcinoma (KIRP), brain lower-grade glioma (LGG), ovarian serous cystadenocarcinoma (OV), pancreatic adenocarcinoma (PAAD), rectum adenocarcinoma (READ), skin cutaneous melanoma (SKCM), stomach adenocarcinoma (STAD), testicular germ cell tumors (TGCT), and uterine carcinosarcoma (UCS) were significantly elevated (p<0.01) in tumor tissues compared to normal tissues. On the contrary, CD68 was significantly decreased (p<0.001) in thymoma (THYM) compared to GTEx normal controls. Moreover, the immunohistochemical results indicated that the expression of CD68 was obviously enhanced in COAD, GBM, KIRC, LGG, OV, PAAD, READ, SKCM, and STAD compared to their normal controls (Figure 1C).
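Each tumor-versus-normal comparison above reduces to the rank-based Kruskal-Wallis test named in the Methods. A minimal sketch on synthetic expression values (the numbers are illustrative, not TCGA/GTEx data):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# synthetic log2-expression values: CD68 elevated in tumor vs. normal
tumor = rng.normal(loc=8.0, scale=1.0, size=50)
normal = rng.normal(loc=6.0, scale=1.0, size=50)

stat, p = kruskal(tumor, normal)
significant = p < 0.01   # the threshold used for the elevated cancers above
```

For two groups this is equivalent (up to the continuity of the statistic) to a Mann-Whitney U test; the Kruskal-Wallis form generalizes directly when more than two tissue groups are compared.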
Mutation profile and prognostic value of CD68 in pan-cancer
Then, we examined the landscape of CD68 genetic alterations in different cancer types from the TCGA database using cBioPortal (Figure 2). The data showed that PRAD and diffuse large B-cell lymphoma had a high alteration level, with CD68 deep deletion in more than 4% of cases (Figures 2A and 2B). A total of 45 mutation sites (31 missense, 11 truncating, 2 splice, and 1 in-frame) were found between amino acids 0 and 354 (Figure 2C). Next, to further understand the prognostic value of CD68 in pan-cancer, we downloaded RNA-seq and clinical data for CD68 from the TCGA dataset. As shown in Figure 3A, high levels of CD68 were associated with a poorer OS in GBM, KIRC, LGG, LIHC, LUSC, THCA, and THYM and a better OS in KICH. Furthermore, we examined the prognostic value of CD68 for DFI (Supplement Figure 1A) and PFI (Supplement Figure 1B). The results showed that high levels of CD68 were associated with a poorer DFI in GBM, LGG, and THYM and a better DFI in KICH (Supplement Figures 1E-1I). Meanwhile, high levels of CD68 were associated with a poorer PFI in CHOL, LIHC, and STAD and a better PFI in CESC (Supplement Figures 1J-1M).
Relationship between CD68 expression and immune cell infiltration
Next, we explored the landscape of CD68 in the tumor microenvironment across all tumor types based on the TIMER2.0 database. As shown in Figure 4, CD68 expression was positively related to the infiltration of multiple immune cell types, including dendritic cells, monocytes, macrophages, and neutrophils. However, CD68 expression was negatively associated with the infiltration of myeloid-derived suppressor cells (MDSCs). Next, we analyzed the correlation between CD68 levels and immune cell infiltration in the tumor microenvironment in 33 cancer types. The results indicated that the expression of CD68 was positively related to the abundance of B cells, CD4+ T cells, CD8+ T cells, dendritic cells, macrophages, and neutrophils in many tumor types. As shown in Figure 5A, the three most significantly related tumors are adrenocortical carcinoma (ACC), BRCA, and CESC. The details for other tumor types are shown in Supplement Figure 2. We further calculated the stromal score, immune score, and estimate score of the 33 cancer types with the ESTIMATE algorithm. As shown in Figure 5B, the top three tumor types in which CD68 expression positively correlated with the stromal score are BLCA, BRCA, and GBM (p<0.001); the top three for the immune score are ACC, BLCA, and BRCA (p<0.001); and the top three for the estimate score are BLCA, BRCA, and CESC (p<0.001). Data in Supplement Figure 3 showed that the expression of CD68 was significantly and positively correlated with the stromal score in all tumor types except CHOL and mesothelioma (MESO). In addition, the expression of CD68 was significantly and positively correlated with the immune score in all tumor types (Supplement Figure 4). Moreover, the expression of CD68 was significantly and positively correlated with the estimate score in all tumor types (Supplement Figure 5).
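The score correlations above are plain Spearman tests between one gene's expression and a per-sample score. A self-contained sketch on synthetic data (the positive CD68-immune-score link is built in by construction, purely for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 200                                   # pseudo-samples of one tumor type
cd68 = rng.normal(size=n)                 # standardized expression values
# synthetic immune score with a built-in positive dependence on CD68
immune_score = 0.7 * cd68 + rng.normal(scale=0.5, size=n)

rho, p = spearmanr(cd68, immune_score)    # rank correlation and its p-value
```

Because Spearman's test works on ranks, the same call applies unchanged whether the second variable is a stromal, immune, or estimate score, and it is robust to the heavy-tailed distributions typical of expression data.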
These results indicated that CD68 is closely related to immune infiltrates in the tumor microenvironment and might act as a promising immunotherapy target. Neoantigens are antigens encoded by mutated genes of tumor cells and play a crucial role in tumor immunotherapy. We then explored the relationship between CD68 expression and the number of neoantigens in human cancers (Figure 6). Our results indicated that high levels of CD68 were significantly and positively related to the number of neoantigens in LUAD, KIRP, CESC, and PRAD (p<0.05).
The relationship between CD68 expression and checkpoint gene markers, tumor mutation burden, and microsatellite instability
To further elaborate the potential immune mechanisms of CD68, we next compared the association of CD68 expression with various checkpoint markers in different cancer types (Figure 7A). The results showed that CD68 expression positively correlates with the expression of LAIR1, HAVCR2, LGALS9, and PD1 in most of the 33 tumor types. We also studied the relationship between CD68 expression and five DNA mismatch repair (MMR) markers (Figure 7B). CD68 levels were significantly and negatively correlated with mutL homolog 1 (MLH1), mutS homolog 2 (MSH2), mutS homolog 6 (MSH6), postmeiotic segregation increased 2 (PMS2), and epithelial cell adhesion molecule (EPCAM) in BRCA, CESC, KIRC, OV, and THCA (p<0.05).
However, CD68 levels were significantly and positively correlated with MSH6 in KICH and READ (p<0.05). In addition, we studied the correlation of tumor mutation burden (TMB) and microsatellite instability (MSI) with CD68 levels; the detailed results are shown in Table 1. The results of drug response analysis with CellMiner suggested that numerous drugs were associated with CD68 expression, of which the top 16 are exhibited in Supplement Figure 6. In addition, the top 20 related drugs are shown in Supplement Table 2.
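The CellMiner-style drug screen amounts to correlating CD68 expression with a cell-line-by-drug activity matrix and ranking drugs by correlation strength. A toy sketch with synthetic data (drug 0 is deliberately constructed to track CD68; no real drug activities or drug names are used):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_lines, n_drugs = 60, 5
cd68 = rng.normal(size=n_lines)            # expression across cell lines
activity = rng.normal(size=(n_lines, n_drugs))   # drug-activity matrix
activity[:, 0] += 1.5 * cd68               # drug 0 made CD68-dependent

# rank drugs by the absolute Pearson correlation with CD68 expression
ranked = sorted(range(n_drugs),
                key=lambda j: abs(pearsonr(cd68, activity[:, j])[0]),
                reverse=True)
top_drug = ranked[0]
```

In a real screen one would also correct the per-drug p-values for multiple testing (e.g. Benjamini-Hochberg) before declaring any drug CD68-associated.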
Discussion
In this study, we explored the role of CD68 in clinical outcome prediction and immune cell infiltration in pan-cancer from the TCGA and GTEx databases. The results indicated that elevated CD68 was observed in many tumor types, including COAD, GBM, KIRC, KIRP, LGG, OV, PAAD, READ, SKCM, STAD, TGCT, and UCS, and was associated with poorer clinical outcomes in GBM, KIRC, LGG, LIHC, LUSC, THCA, and THYM. Meanwhile, we calculated the infiltration of immune cells in the tumor microenvironment. We found that high levels of CD68 were associated with many immune cell types in the tumor microenvironment, such as monocytes, B cells, CD4+ T cells, CD8+ T cells, dendritic cells, macrophages, and neutrophils. Moreover, the upregulated expression of CD68 was closely related to the stromal score, immune score, and estimate score in many types of human cancer. These results were consistent with previous studies of CD68 [26][27][28] , which indicated that CD68 might be a novel and promising immunotherapy target in the future. Neoantigens, non-autologous proteins with specific characteristics, are generated from the tumor cell genome through non-synonymous mutations 29 and play an essential role in tumor immunotherapy [29][30][31] . The present work illustrated that high expression levels of CD68 were significantly and positively related to the number of neoantigens in LUAD, KIRP, CESC, and PRAD. In addition, the mutation landscape of CD68 across tumor types was also displayed in this study. Anti-immune-checkpoint therapy has become an important treatment in fighting cancer in recent years 29,32,33 . To fully clarify the immune value of CD68 in pan-cancer, we next studied the correlation between CD68 expression and a large number of immune checkpoints in pan-cancer and found that high levels of CD68 were significantly and positively related to some key checkpoints, including LAIR1, HAVCR2, LGALS9, and PD1, in most of the 33 tumor types.
Furthermore, CD68 expression was predominantly negatively related to DNA mismatch repair (MMR) markers in most types of cancer. Increasing numbers of studies have found that TMB and MSI are emerging clinical biomarkers for immunotherapy, clinical outcome, and chemotherapy sensitivity in various tumor types 34-36 . The present study also found that CD68 expression was correlated with TMB and MSI in many cancer types, which might provide potential evidence for predicting the efficacy of tumor immunotherapy. The specific mechanisms of CD68 in tumor growth and metastasis remain far from understood. Our study also explored numerous pathways related to the expression of CD68 in pan-cancer, which might help figure out the exact function of CD68 and its downstream signaling pathways in the future. In addition, no small-molecule drugs currently target CD68 specifically in tumor therapy. Finally, we identified a series of FDA-approved targeted medicines and small-molecule drugs with promising efficacy predicted by CD68 levels. These drugs might serve a key role in tumor chemotherapy and be conducive to improving the treatment of tumors. There are several limitations to this study. Firstly, the mRNA expression levels of CD68 were assessed from public databases and only verified on tumor tissue chips, not validated in in vivo and in vitro studies. Secondly, the role of CD68 in tumor immune cell infiltration in pan-cancer was not verified by cell and animal experiments in this study. More studies focusing on the specific signaling pathways of CD68 in pan-cancer need to be explored in the future.
Acknowledgments:
We are grateful to all of those with whom we have had the pleasure to work during this and other related projects. Author contributions: J.Z. and F.L. designed and performed the research and wrote the manuscript. Data curation and validation were performed by S.L. All authors contributed to writing and critically revising the manuscript.
Availability of data and materials:
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Conflict of interest:
None.
Ethical approval and ethical standards:
The study with primary human tissues was approved by the ethics committee of the Xiangya Hospital, Central South University, and the procedures with human samples were performed in accordance with the ethical standards of the ethics committee and the Helsinki Declaration of 1975 and its later amendments.
Elastic $\alpha$-$^{12}$C scattering at low energies in cluster effective field theory
The elastic $\alpha$-$^{12}$C scattering at low energies is studied employing an effective field theory in which the $\alpha$ and $^{12}$C states are treated as elementary-like fields. We discuss the scales of the theory in the stellar energy region where the ${}^{12}$C($\alpha$, $\gamma$)$^{16}$O process occurs, and then obtain an expression for the elastic scattering amplitudes in terms of effective range parameters. Using experimental data of the phase shifts for the $l=0,1,2$ channels at low energies, avoiding the resonance regions, we fix the values of the parameters and find that the phase shifts at low energies are well reproduced using three effective range parameters for each channel. Furthermore, we discuss problems and uncertainties of the present approach when the amplitudes are extrapolated to the stellar energy region.
Introduction
The radiative alpha capture on carbon-12, 12 C(α, γ) 16 O, is one of the fundamental reactions in nuclear astrophysics, determining the ratio of 12 C to 16 O produced in helium burning [1]. The reaction rate, or equivalently the astrophysical S-factor, of the process at the Gamow peak energy, T G = 0.3 MeV, however, cannot be determined experimentally due to the Coulomb barrier. It is necessary to employ a theoretical model and extrapolate the cross section down to T G by fitting the model parameters to available experimental data measured at a few MeV or above. Over the last half century, many experimental and theoretical studies of the process have been carried out. For reviews, see, e.g., Refs. [2,3] and references therein.
In constructing a model for the process, one needs to take account of the excited states of 16 O [2]: in particular, the two excited bound states with l^π_n = 1^-_1 and 2^+_1 just below the α-12 C breakup threshold, at T = −0.045 and −0.24 MeV, respectively, as well as the 1^-_2 and 2^+_2 resonant (second excited) states at T = 2.42 and 2.68 MeV, respectively. The capture reaction to the ground state of 16 O at T G is thus expected to be dominated by E1 and E2 transitions through the subthreshold 1^-_1 and 2^+_1 states, whereas the resonant 1^-_2 and 2^+_2 states play a dominant role in the available experimental data at low energies, typically 1 ≤ T ≤ 3 MeV. Experimental data pertaining to processes for nuclear astrophysics are compiled in the NACRE-II compilation [4], in which the S-factor of the 12 C(α,γ) 16 O reaction is estimated employing a potential model and the reported uncertainty of the process is less than 20 %. Conflicting sets of experimental data for the process at very low energies still persist [5,6], however, and one may need to wait for new measurements at very low energies, T ≤ 1.5 MeV [3].
In the present study, we discuss an alternative theoretical approach, constructing an effective field theory (EFT) for the process, and apply the theory to the study of elastic α-12 C scattering at low energies. EFTs provide a model-independent and systematic method for theoretical calculations. An EFT for a system in question can be built by introducing a scale which separates relevant degrees of freedom at low energies from irrelevant degrees of freedom at high energies. An effective Lagrangian is written down in terms of the relevant degrees of freedom and is expanded perturbatively order by order, by counting the number of derivatives. The irrelevant degrees of freedom are integrated out, and their effect is embedded in the coefficients appearing in the effective Lagrangian. A transition amplitude is thus calculated systematically by writing down Feynman diagrams, whereas the coefficients appearing in the effective Lagrangian must be determined from experiment. For reviews, one may refer to, e.g., Refs. [7,8,9]. Various processes essential in nuclear astrophysics have been investigated by constructing EFTs, for example, p(n, γ)d at BBN energies [10,11], as well as pp fusion [12,13,14,15] and 7 Be(p,γ) 8 B [16,17] in the Sun.
A unique feature of these EFT studies is that the theories allow one to estimate theoretical uncertainties in the extrapolated reaction rates, based on their model-independent and perturbative expansion scheme. For example, better than 1 % accuracy in the estimated reaction rates of p(n, γ)d at BBN energies and of pp fusion in the Sun was obtained in previous studies [11,14]. Thus our main aim in future studies of the 12 C(α,γ) 16 O reaction is to estimate the S-factors with about 5 % theoretical uncertainty.
We treat the α and 12 C states as elementary-like fields; the scales involved in the theory are discussed in the next section. An effective Lagrangian is then written down, an expression for the scattering amplitudes is obtained, and the phase shifts for the l = 0, 1, 2 channels of elastic α-12 C scattering at low energies are studied. The main assumption of the present study, suggested by Teichmann [18], is that we may choose the energies of the resonant states as the large energy scale of the theory, so that, in the limited low-energy region, the Breit-Wigner-type pole structure of the resonant states in the scattering amplitudes can be expanded in terms of the energy and the energy dependence of the amplitudes reduces to that of the effective range expansion. Thus the large energy scales of the theory for the elastic scattering in the l = 0, 1, and 2 channels are the resonance energies, T = 4.89, 2.42, and 2.68 MeV for the 0^+_2, 1^-_2, and 2^+_2 states, respectively. In addition, we do not introduce explicit degrees of freedom for the 1^-_1 and 2^+_1 states: as discussed in detail later, the expression of the scattering amplitudes in terms of the effective range parameters is subject to a restrictive condition in the zero-momentum limit, and we find that it is not easy to incorporate the subthreshold states in the present study.
This article is organized as follows. In section 2, we discuss the scales of the theory and write down an effective Lagrangian for the elementary-like α and 12 C fields. In section 3, the expression for the amplitudes of elastic α-12 C scattering in the l = 0, 1, 2 channels in terms of the effective range parameters is obtained. In section 4, the parameters are fitted using the experimental phase shifts, and, for a qualitative study of the extrapolation, the real part of the denominator of the scattering amplitudes is extrapolated to T G . Finally, conclusions and a discussion of the present work are presented in the final section.
Scales and effective Lagrangian for the system
As mentioned above, we treat the α and 12 C states as elementary-like cluster fields. This treatment is reasonable when the typical momentum scale is smaller than the scale at which high-energy mechanisms become relevant. For the α particle, its excited states should be treated as irrelevant degrees of freedom [19,20]. The first excitation energy of the α particle is E (4) ≃ 20 MeV, and thus the corresponding large momentum scale is Λ H ∼ √(2µ 4 E (4)) ≃ 170 MeV, where µ 4 is the reduced mass of the one- and three-nucleon systems, µ 4 ≃ (3/4) m N , with m N the nucleon mass. For the 12 C state, on the other hand, the first excitation energy is E (12) ≃ 4.439 MeV, and thus the large momentum scale due to E (12) is Λ H ∼ √(2µ 12 E (12)) ≃ 150 MeV, where µ 12 is the reduced mass of the four- and eight-nucleon systems, µ 12 ≃ (8/3) m N . In addition, we need to introduce another large scale due to the Coulomb interaction: the inverse of the Bohr radius, κ = α E Z α Z C µ ≃ 247 MeV, where α E is the fine structure constant, Z α and Z C are the numbers of protons in α and 12 C, respectively, and µ is the reduced mass of the α-12 C system, µ ≃ 3m N . Thus we may choose the large momentum scale of the theory as Λ H ∼ 150 MeV.
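As a quick numerical check of these scale estimates (a back-of-the-envelope sketch, not part of the original paper; m_N and α_E take their standard values, the excitation energies are those quoted above):

```python
# Numerical check of the momentum scales quoted in the text.
import math

m_N   = 938.9          # nucleon mass [MeV]
alpha = 1 / 137.036    # fine-structure constant

# First excitation energies set the breakup scales
mu4,  E4  = 0.75 * m_N, 20.0            # alpha: mu_4 = (3/4) m_N, E^(4) ~ 20 MeV
mu12, E12 = (8.0 / 3.0) * m_N, 4.439    # 12C:   mu_12 = (8/3) m_N, E^(12) ~ 4.439 MeV

Lambda_alpha = math.sqrt(2 * mu4 * E4)      # ~170 MeV
Lambda_C     = math.sqrt(2 * mu12 * E12)    # ~150 MeV

# Inverse Bohr radius of the alpha-12C Coulomb system
mu    = 3.0 * m_N                # reduced mass of the alpha-12C pair
kappa = alpha * 2 * 6 * mu       # Z_alpha = 2, Z_C = 6 -> ~247 MeV

print(f"Lambda_alpha ~ {Lambda_alpha:.0f} MeV")
print(f"Lambda_C     ~ {Lambda_C:.0f} MeV")
print(f"kappa        ~ {kappa:.0f} MeV")
```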
The typical momentum scale Q for the 12 C(α,γ) 16 O process in the stars is estimated from the Gamow peak energy, T G ≃ 0.3 MeV: Q ∼ k = √(2µ T G ) ≃ 41 MeV. The expansion parameter for the process at T G is therefore Q/Λ H ∼ 1/3, and the approximately 5 % theoretical uncertainty mentioned above can be achieved by considering perturbative corrections up to next-to-next-to-leading order. The typical momentum scale, Q ∼ k, for the elastic α-12 C scattering differs from that at T G . We employ the phase shift data from Plaga et al. [21] and Tischhauser et al. [22] to fix the effective range parameters. The reported energies of the α particle for the phase shift data in the lab frame are T α ≃ 1.5-6.6 MeV and 2.6-6.6 MeV 3 , respectively, whereas, as mentioned above, we introduced the resonance energies as the large scales of the process. Thus the lowest momenta in the center-of-mass frame are k low ≃ 80 and 105 MeV for Refs. [21] and [22], respectively, whereas the highest momenta are k high ≃ 166, 117, and 123 MeV for the l = 0, 1, and 2 channels, respectively. Because the large momentum scale of the theory is Λ H ∼ 150 MeV, the series expansion would not converge in the higher-momentum region, though it may in the relatively low-momentum region. The convergence of the effective range expansion for each channel is studied below.
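The Gamow-peak momentum and the lowest center-of-mass momenta of the two data sets follow from k = √(2µT) together with the lab-to-c.m. conversion T = (3/4) T_α; a short numerical restatement (not part of the original paper):

```python
# Typical momenta at the Gamow peak and at the lower ends of the data windows.
import math

m_N = 938.9            # nucleon mass [MeV]
mu  = 3.0 * m_N        # reduced mass of the alpha-12C pair [MeV]

# Gamow-peak energy in the c.m. frame and the corresponding momentum
T_G = 0.3                               # [MeV]
Q = math.sqrt(2 * mu * T_G)             # ~41 MeV
print(f"Q ~ {Q:.0f} MeV, expansion parameter Q/Lambda_H ~ {Q / 150:.2f}")

# lab -> c.m. conversion for the phase-shift data: T = (3/4) T_alpha
def k_cm(T_alpha):
    return math.sqrt(2 * mu * 0.75 * T_alpha)

print(f"k_low (Plaga, T_alpha = 1.5 MeV)       ~ {k_cm(1.5):.0f} MeV")   # ~80 MeV
print(f"k_low (Tischhauser, T_alpha = 2.6 MeV) ~ {k_cm(2.6):.0f} MeV")   # ~105 MeV
```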
An effective Lagrangian for the present study may be written down in terms of the point-like fields φ α and φ C , with masses m α and m C , of α and 12 C, respectively [19,23,24]. D µ is a covariant derivative, and the dots denote higher-order terms. d (l) represent α-12 C composite fields of angular momentum l: d (0) for l = 0; d (1)i for l = 1, where the subscript i labels a state in l = 1; and d (2)ij for l = 2, where d (2)ij = d (2)ji and the subscripts ij label a state in l = 2. C (l) n are coupling constants for the propagation of the α-12 C composite fields in the l channels, and can be related to effective range parameters along with common multiplicative factors 1/y 2 (l) . For the present exploratory study, three effective range parameters, the terms with n = 0, 1, 2, are retained for each partial wave. For the l = 0 state, for example, C (0) 0 is related to the scattering length, C (0) 1 to the effective range, and C (0) 2 to the shape parameter. In addition, y (l) are coupling constants of the α-12 C-d (l) vertices, and O l are projection operators by which the α-12 C system is projected onto the l-th partial-wave states.
3 The energies T α and T for the α-12 C system in the lab and center-of-mass frames are related by T α ≃ (4/3) T .
Scattering amplitudes and phase shifts
The differential cross section of elastic α-12 C scattering (for two spin-0 charged particles) in terms of the phase shifts is given in, e.g., Ref. [25], where f (θ) is the scattering amplitude including both the pure Coulomb part and the Coulomb-modified strong-interaction part, θ is the scattering angle, k is the magnitude of the relative momentum, and η = κ/k. In addition, ω l is the relative Coulomb phase, ω l (= σ l − σ 0 ) = Σ^l_{s=1} arctan(η/s), and δ l are the real scattering phase shifts. The elastic scattering amplitudes for the Coulomb-modified strong-interaction part for the l = 0, 1, 2 channels are calculated from the effective Lagrangian presented above. In Fig. 1, Feynman diagrams are depicted for the dressed composite 16 O propagators consisting of the α and 12 C elementary-like fields, including the Coulomb interaction between the two charged fields. In Fig. 2, a Feynman diagram is depicted for the elastic α-12 C scattering amplitude for each partial-wave state, including the initial- and final-state Coulomb interactions. For a detailed derivation of the amplitudes from the diagrams, the reader may refer to, e.g., Refs. [26,27]; we do not repeat the calculation here.
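The relative Coulomb phase ω_l = σ_l − σ_0 = Σ_{s=1}^{l} arctan(η/s) is a finite sum and can be sketched directly (the η value below is illustrative, taken as κ/k at k = 100 MeV):

```python
# Minimal sketch of the relative Coulomb phase omega_l = sigma_l - sigma_0.
import math

def omega_l(l, eta):
    """omega_l = sum_{s=1}^{l} arctan(eta / s); omega_0 = 0 by construction."""
    return sum(math.atan(eta / s) for s in range(1, l + 1))

eta = 247.0 / 100.0   # eta = kappa / k at k = 100 MeV, illustrative
print([round(omega_l(l, eta), 4) for l in range(3)])
```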
Thus we obtain the scattering amplitudes A l for the l = 0, 1, 2 states in terms of the effective range parameters [19,28], where ψ(x) is the digamma function and γ l , r l , P l are the three effective range parameters for l = 0, 1, 2. The amplitudes A l can also be represented in terms of the phase shifts δ l , which yields the relations between the phase shifts and the effective range parameters, Eqs. (10,11,12), where h(η) = ReH(η).
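The function h(η) = Re H(η) can be evaluated without special-function libraries via the series identity Re ψ(1+iη) = −γ_E + Σ_n η²/(n(n²+η²)); this assumes the standard Coulomb form H(η) = ψ(iη) + 1/(2iη) − ln(iη), whose explicit expression is not written out in the extracted text. The sketch below also checks the large-η behavior h(η) ≈ 1/(12η²) + 1/(120η⁴):

```python
# Sketch: evaluate h(eta) = Re H(eta) via a standard series representation.
import math

def h(eta, nmax=200000):
    """h(eta) = Re psi(i eta) - ln(eta), using
    Re psi(1 + i eta) = -gamma_E + sum_n eta^2 / (n (n^2 + eta^2)),
    and Re psi(i eta) = Re psi(1 + i eta)."""
    gamma_E = 0.5772156649015329  # Euler-Mascheroni constant
    s = sum(eta**2 / (n * (n**2 + eta**2)) for n in range(1, nmax + 1))
    return -gamma_E + s - math.log(eta)

# Compare with the asymptotic expansion in 1/eta (no constant term):
eta = 6.0
approx = 1 / (12 * eta**2) + 1 / (120 * eta**4)
print(h(eta), approx)
```

The absence of a constant term in the 1/η expansion is what makes the h(η) contribution vanish in the zero-momentum limit, as used in the fitting discussion below.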
Fixing the parameters
Before fixing the values of the effective range parameters, we discuss some features of the equations obtained in Eqs. (10,11,12). At low energies the function h(η) appearing in the equations can be expanded in terms of 1/η (= k/κ) as h(η) = 1/(12η^2) + 1/(120η^4) + · · · . One may see that the series expansion converges in the energy region considered below, and that no constant term arises from the h(η) function. In addition, the factor C^2_η multiplying the cotangent terms in Eqs. (10,11,12) becomes vanishingly small at very low energies. Thus the left-hand side of the equations vanishes in the zero-momentum limit, k → 0, and the parameters γ 0 , γ 1 , and γ 2 on the right-hand side of Eqs. (10), (11), and (12) are required to vanish in the limit as well. On the other hand, experimental data do not exist at such very low energies, and the values of the parameters are fixed using existing experimental data at somewhat higher energies. As mentioned above, the experimental data from Plaga et al. [21] and Tischhauser et al. [22], whose lowest energies are T α ≃ 1.5 and 2.6 MeV, respectively, are employed up to the energies of the resonant states, T α ≃ 6.5, 3.2, and 3.6 MeV for l = 0, 1, and 2, respectively. We note that we have to choose γ 2 = 0 in fitting the parameters, because the phase shift for the l = 2 channel is very small, less than two degrees, in the fitting energy range, and it is not easy to obtain a non-vanishing contribution to γ 2 .

Table 1: Fitted values of s-wave effective range parameters using four sets of the experimental data labeled by S0, S1, S2, and S3. See the text for details.

             S0               S1                S2                S3
γ_0 (MeV)    0.058 ± 0.058    0.034 ± 0.003     0.015 ± 0.001    −0.008 ± 0.001
r_0 (fm)     0.270 ± 0.002    0.2693 ± 0.0001   0.2685 ± 0.0001   0.2674 ± 0.0000
P_0 (fm^3)  −0.037 ± 0.005   −0.0372 ± 0.0002  −0.0390 ± 0.0001  −0.0416 ± 0.0000
In addition, due to the zero-momentum-limit feature mentioned above, it is not easy to incorporate the pole structure of the subthreshold states in the amplitudes either, because it makes the γ l terms significantly large. Therefore, we fix the parameters, without including the pole structure of the subthreshold states, from data sets chosen for each of the partial-wave states, l = 0, 1, 2, below.
l = 0 channel
Four sets of the experimental data for the s-wave phase shift are chosen in order to qualitatively study the dependence of the extrapolation to T G on the choice of data sets. The four sets are labeled S0, S1, S2, and S3. S0 denotes a data set of the s-wave phase shift at energies T α = 1.5-6.5 MeV from Table 2 of Plaga et al. [21], while S1, S2, and S3 denote those at energies T α = 2.6-6.5, 2.6-6.0, and 2.6-5.0 MeV, respectively, from Tischhauser et al. [22].
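The fitting of the three effective range parameters to such a data set can be sketched as an ordinary least-squares polynomial fit in k²: after removing the Coulomb function 2κh(η), the quantity G(k) = C²_η k cot δ_0 + 2κh(η) is, by the effective range expansion, a polynomial in k² whose first three coefficients encode γ_0, r_0, P_0. All numbers below are hypothetical, and the sign and normalization conventions of Eq. (10) are not reproduced here.

```python
# Sketch: least-squares fit of three effective-range coefficients,
# treated as a quadratic polynomial in x = k^2. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
k = np.linspace(80.0, 166.0, 40)            # c.m. momenta [MeV], s-wave window
c0, c2, c4 = 0.03, 2.0e-6, -1.5e-11         # hypothetical "true" coefficients
G = c0 + c2 * k**2 + c4 * k**4              # stand-in for the ERE function G(k)
G_noisy = G + rng.normal(0.0, 1e-5, k.size)

# polyfit in x = k^2 returns coefficients highest degree first: [c4, c2, c0]
fit = np.polyfit(k**2, G_noisy, deg=2)
c4_fit, c2_fit, c0_fit = fit
print(c0_fit, c2_fit, c4_fit)
```

In the paper the fit is done channel by channel against the measured phase shifts; the sketch only illustrates why three parameters suffice to describe a smoothly varying G(k) over a limited momentum window.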
In Table 1, the fitted values of the s-wave effective range parameters obtained from the four data sets S0, S1, S2, and S3 are displayed. One can see that the fitted values of γ 0 are sensitive to the choice of the data set, those of r 0 are not, and those of P 0 are in between the two cases. We find almost exact cancellations between the r 0 term and the coefficient of the term proportional to k 2 , 1/(3κ) ≃ 0.2687 fm, from the 2κh(η) term in Eq. (10), and significant cancellations between the P 0 term and the coefficient of the term proportional to k 4 , −1/(15κ 3 ) ≃ −0.0210 fm 3 , from the 2κh(η) term. As discussed in the introduction, we find that the expansion series in terms of the effective range parameters converges well in the energy regions used for the fitting. The coefficients of the k 2n power series, after including the corrections from the 2κh(η) term, become significantly small compared to the scale of the system; e.g., the γ 0 values are a few hundredths of an MeV. This may be due to the suppression factor from the C^2_η term in Eq. (10), which becomes C^2_η ∼ 10 −6 -10 −4 in the energy range T α ≃ 2.0-6.0 MeV.

[Figure 3 caption fragment: Three curves are plotted by using three sets of fitted effective range parameters (labeled by S0, S1, S3) obtained in Table 1. Experimental data labeled by Exp. (I) from Plaga et al. [21] and Exp. (II) from Tischhauser et al. [22] are also displayed.]
In Fig. 3, curves of the s-wave phase shift are plotted using the effective range parameters obtained in Table 1. The experimental data are also included in the figure. We find that the curves reproduce the data well in the energy ranges where the effective range parameters have been fitted.
In Fig. 4, in order to qualitatively study the extrapolation to the Gamow energy, T G = 0.3 MeV in the center-of-mass frame, corresponding to T α ≃ 0.4 MeV in the lab frame, we plot curves of the real part of the denominator of the s-wave scattering amplitude in Eq. (5), using the values of the effective range parameters obtained in Table 1, as functions of T α , where k = √(1.5µT α ). One can see that the fitted curves almost overlap at T α ≃ 3-5 MeV, except for the curve of S0, and that when one extrapolates the curves to lower energies, they scatter. The curve of f s (k) decreases, stays almost the same, or increases at T α ≃ 0.4 MeV, compared to the value of the function at T α ≃ 3 MeV, depending on the choice of the data set, S1, S2, or S3, respectively. Thus we find a significant uncertainty in the extrapolation to the energy T α ≃ 0.4 MeV corresponding to T G . We note that the curve of S3 vanishes at a small value of T α ; this indicates the presence of a resonant state at very low energy, and thus the parameter set S3 should be excluded.

[Figure 4 caption fragment: ... obtained in Table 1. A vertical line at T α = 0.4 MeV is also included.]
l = 1 channel
Three sets of the experimental data for the p-wave phase shift are chosen to fit the effective range parameters, labeled P 0, P 1, and P 2. P 0 denotes a data set of the p-wave phase shift at T α ≃ 1.5-3.1 MeV from Table 2 of Plaga et al. [21], while P 1 and P 2 denote those at T α = 2.6-3.0 MeV and 2.6-3.1 MeV, respectively, from Tischhauser et al. [22]. We note that, as mentioned above, the largest energies of the data sets are chosen below the resonance energy, T α ≃ 3.23 MeV.
In Table 2, the fitted values of the p-wave effective range parameters obtained from the three data sets P 0, P 1, and P 2 are displayed. We find in Table 2 a tendency similar to that found for the fitted values of the s-wave effective range parameters in Table 1: the fitted values of γ 1 are quite sensitive to the choice of the data set, whereas those of r 1 and P 1 are not. We can see significant cancellations between the r 1 (P 1 ) term and the term proportional to k 2 (k 4 ) obtained from the 2κk 2 (1 + η 2 )h(η) term in Eq. (11), where we have (1/3)κ ≃ 0.413 fm −1 corresponding to the r 1 term and −11/(15κ) ≃ −0.591 fm corresponding to the P 1 term. We also find that the series expansion in the effective range parameters converges at very low energies, up to about T α ≃ 1.5 MeV, and that significant cancellations occur between the terms proportional to k 2 and k 4 in the energy range T α ≃ 2-3 MeV.

Table 2: Fitted values of p-wave effective range parameters using three sets of the experimental data labeled by P 0, P 1, and P 2. See the text for details.

                  P 0              P 1              P 2
γ_1 (10^3 MeV^3)  −2.53 ± 1.09     −3.84 ± 1.27     −4.57 ± 0.38
r_1 (fm^-1)        0.406 ± 0.002    0.405 ± 0.002    0.404 ± 0.000
P_1 (fm)          −0.641 ± 0.006   −0.645 ± 0.007   −0.649 ± 0.002
In Fig. 5, curves of the phase shift of elastic p-wave α-12 C scattering are plotted using the fitted effective range parameters in Table 2. The experimental data are also included in the figure. We find that the curves reproduce the data well in the energy ranges where the effective range parameters have been fitted.
In Fig. 6, we plot the real part of the denominator of the p-wave scattering amplitude in Eq. (6), using the values of the effective range parameters obtained in Table 2, as functions of T α . One can see that the values of the function f p (k) are small in the energy range T α = 2.6-3.0 MeV; in that region, as mentioned above, a significant cancellation among the terms of the effective range expansion occurs. The values of the f p (k) function become large, however, when it is extrapolated to T α = 0.4 MeV, due to the relatively large contribution from the γ 1 term compared to the other effective range terms, r 1 and P 1 , once the corrections from the 2κk 2 (1 + η 2 )h(η) term are included. Because the function f p (k) appears in the denominator of the scattering amplitude, this implies that the scattering amplitude is rather suppressed at T G . Thus we cannot qualitatively reproduce the enhancement of the S-factor for the E1 channel, reported, e.g., in Ref. [29], in the extrapolation of the p-wave scattering amplitude to T G .
l = 2 channel
Three sets of the experimental data for the d-wave phase shift are chosen to fit the effective range parameters, labeled D0, D1, and D2. D0 denotes a data set of the d-wave phase shift at T α ≃ 1.47-3.57 MeV from Table 2 of Plaga et al. [21], while D1 and D2 denote those at T α = 2.6-3.0 and 2.6-3.4 MeV, respectively, from Tischhauser et al. [22]. We note that the maximum energies of the data sets are chosen below the resonance energy, T α ≃ 3.57 MeV.

[Figure 5 caption fragment: Three curves are plotted by using three sets of fitted effective range parameters (labeled by P 0, P 1, P 2) obtained in Table 2. Experimental data, Exp. (I) from Plaga et al. [21] and Exp. (II) from Tischhauser et al. [22], are also displayed.]
In Table 3, the fitted values of the d-wave effective range parameters obtained from the three data sets introduced above are displayed. As mentioned before, we have chosen γ 2 = 0 due to the small values of the phase shift in these data sets. We find large error bars on the parameters fitted from the data set D0 compared to those from the data sets D1 and D2. We also find the common tendency of significant cancellations between the r 2 (P 2 ) term and the term proportional to k 2 (k 4 ) obtained from the 2κk 4 (4+η 2 )(1+η 2 )h(η) term in Eq. (12), where we have (1/3)κ 3 ≃ 0.6361 fm −3 and −(51/15)κ ≃ −4.217 fm −1 corresponding to r 2 and P 2 , respectively. As for the convergence of the effective range expansion, we find that it does not converge for these terms: there are large cancellations between the terms proportional to k 2 and k 4 at energies T α ≃ 1.5-2 MeV, and at larger energies the k 4 term becomes dominant and is significantly cancelled against the other terms.
In Fig. 7, curves of the d-wave phase shift are plotted using the fitted values of the effective range parameters obtained in Table 3. The experimental data are also included in the figure. We can see that the curves reproduce the experimental data well in the energy range below the resonance energy, T α ≃ 3.57 MeV, and that the error bars of the Tischhauser et al. data are significantly smaller than those of the Plaga et al. data.
[Figure 6 caption fragment: ... obtained in Table 2. A vertical line at T α = 0.4 MeV is also included.]

In Fig. 8, we plot the real part of the denominator of the d-wave scattering amplitude in Eq. (7), using the values of the effective range parameters in Table 3, as functions of T α . One can see that the curves vanish in the limit k → 0 because of the assumption γ 2 = 0. In addition, the tracks of the extrapolation for the data sets D1 and D2 are quite different from that of D0, whereas, as mentioned before, the curve from the D0 data set should carry a large uncertainty due to the large error bars of the fitted parameters. Furthermore, we find that the curves of D1 and D2 are almost flat from T α ≃ 2 MeV down to T α ≃ 0.4 MeV, so the extrapolated cross section would be almost the same, except for the common factor C^2_η , at T G and at T ≃ 2 MeV. Thus we cannot qualitatively reproduce the enhancement of the S-factor for the E2 channel either, reported, e.g., in Ref. [30], in the extrapolation of the d-wave scattering amplitude to T G .
Discussion and conclusions
In this work, we introduced an EFT for the 12 C(α,γ) 16 O process at T G in which the α and 12 C states are treated as elementary-like cluster fields, and applied the theory to the study of the phase shifts of elastic α-12 C scattering for the l = 0, 1, 2 channels at low energies. The expression of the scattering amplitudes for the l = 0, 1, 2 channels is obtained in terms of the three effective range parameters. The effective range parameters are fitted using the sets of experimental data in the energy ranges below the resonance energies, in which the phase shifts vary smoothly. In the parameter fitting, we find that it is difficult to incorporate the pole structure of the subthreshold 1^-_1 and 2^+_1 states in the amplitudes. Nevertheless, we find that the experimental data at low energies are well reproduced by the curves plotted using the fitted effective range parameters. To qualitatively study the uncertainty of the extrapolation to T G due to the fitting of the parameters to the experimental phase shift data, we extrapolate the real part of the denominator of the scattering amplitudes to T G and find significant uncertainties. Because parameters deduced from the scattering phase shifts for the l = 1 and 2 channels may play important roles in the extrapolation of the S-factors of the radiative capture reaction for the E1 and E2 transitions, we discuss our results for the parameter fitting and the extrapolation of the scattering amplitudes to T G in some detail below.

Table 3: Fitted values of d-wave effective range parameters using three sets of the experimental data labeled by D0, D1, and D2. See the text for details.

              D0             D1               D2
r_2 (fm^-3)   0.37 ± 0.79    0.536 ± 0.001    0.533 ± 0.001
P_2 (fm^-1)  −6.2 ± 4.3     −5.505 ± 0.008   −5.526 ± 0.004
In the parameter fitting for the l = 1 channel, the phase shift data appear entirely as the low-energy tail of the 1^-_2 resonant state, and the tail from the 1^-_1 bound state at T = −0.045 MeV can scarcely be seen in the data. As found above, it is not necessary to include the subthreshold bound state to reproduce the data. In addition, the amplitude extrapolated to T G is largely suppressed. These observations have in fact been pointed out repeatedly in the literature [31,32,33]. Indeed, to estimate the radiative-capture E1 transition cross section at T G , it would be essential to include an explicit degree of freedom for the 1^-_1 state in the theory, possibly along with the 1^-_2 state [34]. Although the tail of the 1^-_1 state is not clearly seen in the elastic scattering data, its significance may be seen in other experimental data, such as the β-delayed α decay of 16 N, 16 N(β − ) 16 O(α) 12 C [35,36,37], whose minimum energy is T = 0.6 MeV, and the radiative E1 capture cross section, whose minimum energy now reaches below T = 1 MeV; see, e.g., Refs. [38,39,40].
In the parameter fitting for the l = 2 channel, the phase shift data at T α = 1.5-3.5 MeV show a downward slope up to the 2^+_2 resonant state appearing at T α = 3.57 MeV. As demonstrated above, it is not difficult to fit the restricted data using the three effective range parameters, but it appears not easy to precisely decompose them into three ingredients: the tail of the subthreshold state, that of the resonant state, and a background. Moreover, as discussed, it is not easy to include the 2^+_1 subthreshold state at T = −0.24 MeV because of the feature of the scattering amplitudes (represented in terms of the effective range parameters) in the present study. In the extrapolation, as seen above, we find that almost

[Figure 8 caption fragment: ... Table 3. A vertical line at T α = 0.4 MeV is also included.]
The impact of systematic assessment for adverse events on unscheduled hospital utilization in patients receiving neoadjuvant or adjuvant chemotherapy: A retrospective multicenter study
Abstract Background This study was conducted to compare the reported adverse event (AE) profiles and unexpected use of medical services during chemotherapy before and after the healthcare reimbursement of AE evaluation in patients with cancer. Patients and Methods Using the electronic medical record database system, patients with breast, lung, gastric, and colorectal cancers receiving neoadjuvant or adjuvant chemotherapy between September 2013 and December 2016 at four centers in Korea were extracted and matched using the 1:1 greedy method: pre-reimbursement group (n = 1084) and post-reimbursement group (n = 1084). Unexpected outpatient department (OPD) visit, emergency room (ER) visit, hospitalization, and chemotherapy completion rates were compared between the groups. Results The baseline characteristics were well balanced between the groups. By chemotherapy cycle, hospitalization (1.8% vs. 2.3%; p = 0.039) and ER visit rates (3.3% vs. 3.9%; p = 0.064) were lower in the post-reimbursement group than in the pre-reimbursement group. In particular, from cycle 2 onward, ER visit and hospitalization rates were significantly lower in the post-reimbursement group than in the pre-reimbursement group (2.6% vs. 3.3%; p = 0.020 and 1.4% vs. 2.0%; p = 0.007, respectively), although no significant differences were observed during cycle 1. The OPD visit rates were similar between the groups, regardless of cycle. The post-reimbursement group had a higher proportion of patients who completed chemotherapy as planned than the pre-reimbursement group (93.5% vs. 90.1%; p = 0.006). The post-reimbursement group had more AEs reported, including alopecia, fatigue, diarrhea, anorexia, and peripheral neuropathy, during cycle 1 than the pre-reimbursement group, which significantly decreased after cycle 2.
Conclusion The introduction of healthcare reimbursement for AE evaluation may help physicians capture and appropriately manage AEs, consequently, decreasing hospital utilization and increasing chemotherapy completion rates.
| INTRODUCTION
Monitoring and assessing adverse events (AEs) in patients receiving systemic anticancer treatment are essential for ensuring patient safety and making clinical decisions, such as treatment delay, dose or schedule modification, and treatment discontinuation. Therefore, assessing AEs is a standard procedure not only in clinical trials but also in routine practice, most commonly performed using the National Cancer Institute (NCI) common terminology criteria for adverse events (CTCAE), which provides the definition and severity grading of AEs. 1 The CTCAE includes AE items derived from objective data, such as laboratory abnormalities, as well as subjective symptoms experienced by patients.
Currently, the assessment and reporting of AEs are usually performed by physicians, but it has been reported that physicians frequently underreport or underestimate the incidence and severity of AEs experienced by patients, in clinical trials as well as in real-world routine clinical practice. [2][3][4] Recognizing this discrepancy in AE reporting between physicians and patients, the use of patient-reported outcomes (PROs) is becoming increasingly popular in AE monitoring in clinical trials. 5 However, the accurate capturing and grading of AEs by physicians are still of paramount importance and, considering its resource intensiveness, an effective way of doing this in routine clinical practice needs to be further developed.
In South Korea, as part of the reorganization of patients' safety-related medical fees, the assessment of AEs by physicians in patients receiving systemic anticancer agents has become a medical service reimbursed by the National Health Insurance (NHI) since September 2015.
This study was conducted to compare the reported AE profiles and unscheduled hospital visits, including inpatient admissions or visits to the outpatient department (OPD)/emergency room (ER), during chemotherapy between before and after the healthcare reimbursement of AE evaluation in patients with cancer receiving neoadjuvant or adjuvant chemotherapy. To minimize bias from cancer-associated symptoms, patients with nonmetastatic cancer undergoing standard neoadjuvant or adjuvant chemotherapy after curative-intent surgery were studied.
KEYWORDS: adverse event, chemotherapy, emergency room visit, hospitalization

| Study populations

Using the electronic medical record (EMR) database system, patients were identified based on the diagnosis of breast, lung, gastric, and colorectal cancers and the administration of neoadjuvant or adjuvant chemotherapy between September 2013 and December 2016 at Asan Medical Center (Seoul, Korea), Seoul National University Bundang Hospital (Seongnam, Korea), Ulsan University Hospital (Ulsan, Korea), and Gangneung Asan Hospital (Gangneung, Korea). Patients who underwent palliative surgery, those who received neoadjuvant or adjuvant concurrent chemoradiation therapy, those who were lost to follow-up for reasons other than AEs, those who participated in clinical trials, and those whose disease progressed during neoadjuvant or adjuvant chemotherapy were excluded.
In Korea, the healthcare system is implemented under the NHI program, which began in 1977 and is a compulsory, universal social insurance program that covers the entire population. The Ministry of Health and Welfare oversees the NHI system and its two fundamental institutions: the National Health Insurance Service (NHIS) and the Health Insurance Review & Assessment Service (HIRA). The NHIS serves as the insurer, and HIRA conducts claims reviews and quality assessment of healthcare services. Through this system, healthcare providers are required to submit claims for the medical services they perform to receive reimbursement from the NHI, and reimbursement is made after review by HIRA. All standard treatments of the neoadjuvant/adjuvant chemotherapy included in this study were covered by the NHI during the study period.
As the assessment of AEs started to be reimbursed by the NHI in September 2015, patients were classified into two groups: the pre-reimbursement (September 2013 to August 2015) and post-reimbursement (January 2016 to December 2016) groups. Exact matching was used along with the 1:1 greedy nearest-neighbor algorithm within specified caliper widths based on age (<60 and ≥60 years), sex, cancer type, chemotherapy regimen, and treatment setting (neoadjuvant or adjuvant). The details of the chemotherapy regimens according to cancer type in the pre-reimbursement and post-reimbursement groups are shown in Table S1.
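The matching procedure described above can be sketched in Python. This is an illustrative reimplementation, not the authors' code: exact matching on categorical covariates plus 1:1 greedy nearest-neighbor pairing within a caliper on a continuous score (here, hypothetically, age; the actual caliper variable is not specified in the text).

```python
from collections import defaultdict

def greedy_match(pre, post, exact_keys, score_key, caliper):
    """1:1 greedy nearest-neighbor matching within exact strata.

    pre/post: lists of patient dicts; exact_keys: covariates that must
    match exactly; score_key: continuous value used for nearest-neighbor
    distance; caliper: maximum allowed |score difference|.
    """
    # Group both cohorts into strata defined by the exact-match covariates
    strata = defaultdict(lambda: ([], []))
    for p in pre:
        strata[tuple(p[k] for k in exact_keys)][0].append(p)
    for p in post:
        strata[tuple(p[k] for k in exact_keys)][1].append(p)

    pairs = []
    for pres, posts in strata.values():
        used = set()
        for a in pres:  # greedy: each pre-patient takes the nearest free partner
            best, best_d = None, caliper
            for i, b in enumerate(posts):
                if i in used:
                    continue
                d = abs(a[score_key] - b[score_key])
                if d <= best_d:
                    best, best_d = i, d
            if best is not None:
                used.add(best)
                pairs.append((a, posts[best]))
    return pairs
```

Greedy matching is order-dependent and does not guarantee a globally optimal pairing, which is why calipers are used to cap the worst-case within-pair distance.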
This study was approved by the Institutional Review Board (IRB) of each participating center, and all information was obtained with appropriate IRB waivers.
| Clinical data and AEs collection
Clinical data regarding baseline characteristics, treatment, and AEs were retrospectively collected. Past and current medical history included hypertension, diabetes mellitus, tuberculosis, hepatitis, congestive heart failure, coronary artery disease, and chronic obstructive pulmonary disease. AEs were evaluated according to the NCI CTCAE.
To assist physicians in capturing and grading AEs and to facilitate claims for reimbursement, most hospitals have introduced a systematic toxicity assessment form (STAF) (Figure S1) containing common chemotherapy-related AE items and severity grading into the EMR system. Besides the AEs in the STAF, all AEs in the medical records written by physicians were also collected.
| Study endpoints and statistical analysis
The primary endpoints were the rates of unexpected utilization of medical services during chemotherapy, including unexpected OPD visit, ER visit, and hospitalization rates. The secondary endpoints included chemotherapy completion rates and dose intensity and dose reduction rates.
Categorical and quantitative data were compared using the chi-square test or Fisher's exact test and the Mann-Whitney U-test, respectively. The unexpected OPD visit, ER visit, and hospitalization rates per patient or per chemotherapy cycle were compared between the groups over the whole chemotherapy period and by treatment period. The treatment periods were divided into the early ("during cycle 1") and late ("since cycle 2") periods. The impact of reimbursement for AE evaluation on unexpected ER visits since cycle 2 was estimated in the subgroup analysis. Two-sided p-values of less than 0.05 were used to denote statistical significance, and all statistical analyses were performed using the Statistical Package for the Social Sciences (version 23.0; IBM Corp.).
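For the 2x2 comparisons of event rates, the Pearson chi-square statistic can be computed directly. The analyses in the study were run in SPSS; this is a minimal illustrative sketch with hypothetical counts, not the study data.

```python
def chi_square_2x2(events_a, n_a, events_b, n_b):
    """Pearson chi-square statistic for comparing two proportions,
    e.g., per-cycle hospitalization rates in two groups."""
    observed = [
        [events_a, n_a - events_a],
        [events_b, n_b - events_b],
    ]
    row = [sum(r) for r in observed]
    col = [events_a + events_b, (n_a - events_a) + (n_b - events_b)]
    total = n_a + n_b
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total  # under independence
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2
```

With 1 degree of freedom, a statistic above 3.84 corresponds to p < 0.05; hypothetical counts of 20/1000 vs. 35/1000 events give a statistic of about 4.21.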
| Patient characteristics
In this study (Figure 1), 2168 patients with breast, lung, gastric, and colorectal cancers who were treated with neoadjuvant or adjuvant chemotherapy were classified into the pre-reimbursement (n = 1084) and post-reimbursement (n = 1084) groups after exact matching. The median age of the patients was 56 years (range, 17-84 years), and 68.7% of the patients were female. The most common tumor type was breast cancer (n = 996, 45.9%), and most patients (n = 2153, 99.3%) had Eastern Cooperative Oncology Group performance status (ECOG PS) scores of 0-1. The STAF was used in 64 patients (5.9%) in the pre-reimbursement group and 949 patients (87.5%) in the post-reimbursement group (p < 0.001). The baseline characteristics of the patients are presented in Table 1. No significant differences in the baseline characteristics were observed between the two groups, except for a higher proportion of patients with an ECOG PS of ≥1 (40.1% vs. 33.3%; p < 0.001) in the post-reimbursement group. Only 15 patients had an ECOG PS of 2-3, and no patients had an ECOG PS of 4. There were also no significant differences in the types of surgery for each cancer type between the two groups (Table S2).

Table 2 summarizes the unexpected utilization rates of medical services during chemotherapy per patient. No significant differences in unexpected OPD visit (12.8% vs. 12.5%; p = 0.897), ER visit (17.6% vs. 15.2%; p = 0.147), and hospitalization (11.2% vs. 10.1%; p = 0.443) rates were observed between the two groups. Interestingly, when we divided the treatment periods into the early (during cycle 1) and late (since cycle 2) periods, no significant differences in the rates of unexpected OPD and ER visits and hospitalization were observed between the groups during cycle 1, but since cycle 2, the post-reimbursement group was less likely to visit the ER than the pre-reimbursement group (10.9% vs. 13.6%; p = 0.057).
In the subgroup analysis, the beneficial effect of reimbursement for AE evaluation on ER visits since cycle 2 was larger in patients with breast cancer (odds ratio (OR), 0.67; p = 0.026), female patients (OR, 0.74; p = 0.047), younger patients aged less than 60 years (OR, 0.63; p = 0.007), patients with earlier-stage (stage 1-2) disease (OR, 0.71; p = 0.028), and married patients (OR, 0.74; p = 0.037) (Figure 2).

Table 3 summarizes the unexpected utilization rates of medical services per chemotherapy cycle. Although no significant difference in OPD visit rates was observed between the two groups (2.7% vs. 2.9%; p = 0.513), the hospitalization rate in the post-reimbursement group was significantly lower than that in the pre-reimbursement group (1.8% vs. 2.3%; p = 0.039), and a decreasing trend in ER visit rates was observed in the post-reimbursement group compared with the pre-reimbursement group (3.3% vs. 3.9%; p = 0.064). Since cycle 2, the ER visit (2.6% vs. 3.3%; p = 0.020) and hospitalization (1.4% vs. 2.0%; p = 0.007) rates in the post-reimbursement group were significantly lower than those in the pre-reimbursement group, although no significant differences in these rates were observed during cycle 1. The OPD visit rates were similar between the two groups, regardless of cycle.
| Completion rates, dose intensity, and dose modification of chemotherapy
The post-reimbursement group had a significantly higher proportion of patients who completed chemotherapy as planned than the pre-reimbursement group (93.5% vs. 90.1%, respectively; p = 0.006) ( Table 4). No significant differences in dose intensity (p = 0.112) and dose modification (p = 0.639 for initial dose reduction from cycle 1 and p = 0.490 for subsequent dose reduction) were observed between the two groups (Table 4).
Similar trends in hematologic AEs were observed between the groups; however, the numerical differences and changes were modest (Figure 3B). Although, after cycle 1, the incidences of all-grade neutropenia (29.1% vs. 24.6%; p = 0.020), thrombocytopenia (5.4% vs. 2.6%; p = 0.001), and alanine aminotransferase elevation (p = 0.027) were higher in the post-reimbursement group than in the pre-reimbursement group, their frequency became lower in the post-reimbursement group (18.5% vs. 22.0% for neutropenia; p = 0.037) or similar between the groups after cycle 2. Likewise, the incidence of grade 3 neutropenia was higher in the post-reimbursement group than in the pre-reimbursement group after cycle 1 (12.7% vs. 8.3%; p = 0.003) but became similar between the two groups after cycle 2 (Figure 3B).
| DISCUSSION
Adverse events during anticancer treatment cover a spectrum of patient symptoms, laboratory values, clinical findings, and radiological examinations. Among them, subjective symptoms are at a higher risk of being underreported by physicians, even when prospectively collected within randomized trials. 2 In a study evaluating the agreement between 1090 patients receiving cytotoxic chemotherapy for breast or non-small cell lung cancer and their physicians in reporting six chemotherapy-related AEs (anorexia, nausea, vomiting, constipation, diarrhea, and hair loss) across three randomized trials, underreporting by physicians ranged from 40.7% to 74.4% among patients who reported toxicity of any severity and from 13.0% to 50.0% among patients who reported "very much" toxicity. 2 A prospective multicenter study involving 604 patients with breast cancer receiving adjuvant chemotherapy outside a clinical trial has also shown that the frequency and severity of chemotherapy-related AEs were consistently greater in patient-reported data than in physician-reported data, with low interrater agreement for most AEs, ranging from 0.10 for anorexia to 0.54 for vomiting (Cohen κ statistic). 4 Interestingly, the discrepancies in AE reporting positively correlated with the number of patients enrolled at each site, suggesting that patient workload affects the discrepancy between physician and patient reporting of AEs. 4 Considering the clinical practice setting, where physicians are challenged by time constraints and high workloads, a better system or tool to facilitate the evaluation of AEs by physicians could decrease these physician-patient discrepancies. 6 Importantly, AEs during chemotherapy, if not managed appropriately, can often interfere with treatment continuation as planned, reduce patients' quality of life, and increase healthcare utilization and costs. 7

Since most chemotherapy-related AEs are predictable and preventable, if physicians correctly identify and assess AEs, they can be reduced during subsequent cycles through appropriate preemptive management. In this context, we hypothesized that NHI coverage for AE evaluation improves the capturing of AEs by physicians in routine clinical practice, which has a positive impact on unplanned acute hospital use and on proceeding with chemotherapy as planned. Indeed, the results of this study demonstrated that the introduction of healthcare reimbursement for AE evaluation resulted in better capturing of AEs, lower ER visit and unscheduled hospitalization rates since cycle 2, and a higher chemotherapy completion rate in patients receiving neoadjuvant or adjuvant chemotherapy for breast, lung, colon, and stomach cancers. While physicians reported more AEs after cycle 1 in the post-reimbursement group than in the pre-reimbursement group, when the AE profiles after cycle 1 and after cycle 2 were compared in both groups, most AEs reported after cycle 1, including alopecia, fatigue, diarrhea, anorexia, and peripheral neuropathy, decreased more after cycle 2 in the post-reimbursement group than in the pre-reimbursement group. This suggests that physicians might have identified AEs better after cycle 1 and delivered more proactive management for cycle 2 in the post-reimbursement group, which can work especially well for nonhematologic AEs based on patients' reporting. This favorable impact was also shown in the unscheduled utilization of medical services: while unscheduled visits during cycle 1 did not differ between the groups, unscheduled ER visits (3.3% vs. 2.6%; p = 0.020) and hospitalization rates (2.0% vs. 1.4%; p = 0.007) since cycle 2 significantly decreased in the post-reimbursement group compared with the pre-reimbursement group.

FIGURE 3 Nonhematologic and hematologic adverse events after cycle 1 (A) and cycle 2 (B) in the pre-reimbursement and post-reimbursement groups (≥5% or major). *p < 0.05. ALT, alanine aminotransferase; AST, aspartate aminotransferase.
Regarding the details of the protocols of the same chemotherapy regimens, which could be a possible factor contributing to the results, the chemotherapy protocols in terms of doses and schedules did not differ between the pre- and post-reimbursement groups because they had to follow the protocols approved by the regulatory authority, South Korea's Ministry of Food and Drug Safety (MFDS), which are based on the latest versions of international and Korean guidelines, including global pivotal trials. Compliance with the approved doses and schedules of chemotherapy is subject to evaluation by the HIRA system in Korea. In addition, since 2011 (before our study period, 2013-2016), the HIRA system has been assessing the quality of cancer care, including adjuvant chemotherapy, in patients who received surgery for the five major cancers (gastric, lung, colorectal, breast, and liver cancers) to reduce the variability in quality between individual healthcare providers, resulting in more stable and consistent provision of healthcare services nationwide. Based on this, we believe that the protocols of the same chemotherapy regimens also did not differ among the four centers.
Studies have reported that proactive approaches to manage chemotherapy toxicity, such as telephone-based support or electronic symptom monitoring, could improve symptom control and quality of life and decrease ER visits, but these approaches required additional healthcare resources, such as nurses' telephone calls or counseling outside regular clinic hours, which are significant barriers for widespread implementation in a real-world setting. 8,9 However, our results demonstrated that through an appropriate healthcare reimbursement system without resource barriers in hospitals, physicians' AE evaluation was improved, which led to favorable healthcare service use in the clinical practice. Of note, this better AE reporting system could be, in part, attributed to the use of a systematic assessment tool containing common AE items into the EMR system in each hospital, whereas traditional data collection typically involved unstructured patient interviews. To further improve the quality and comprehensiveness of AE evaluation, not only better physician reports, but also more patient-involved tools, such as PRO measures, which are increasingly being used in drug development trials, should be widely implemented in routine clinical practice, which could be facilitated through healthcare reimbursement by the NHI. In addition, given the differential beneficial effect of reimbursement on ER visits according to the subgroup (sex, age, cancer type, stage, marriage) in our study (Figure 2), more tailored strategies in AE evaluation and management need to be developed.
This study has the following strengths: it was a large, multicenter study, and it focused on neoadjuvant or adjuvant chemotherapy for curatively resected nonmetastatic breast, lung, colon, and stomach cancers, minimizing potential confounders related to cancer-related symptoms. In addition, since the patients received chemotherapy in routine clinical practice and not in a prospective clinical trial mandating standardized AE reporting, the study results reflect real-world practice. However, this study has some limitations. First, although we suggested intensified symptom management by physicians as a result of improved AE assessment as a mechanism for the clinical benefits, based on the results of other studies, 10,11 this study did not evaluate medications or treatments related to supportive care. Second, we did not perform a cost-utility analysis, which could be a relevant topic for future research to justify further investment from healthcare services. Third, in general, supportive care might have improved over time, which might have affected the incidence and severity of AEs or the unscheduled use of medical services in a comparison between two different periods (September 2013 to August 2015 in the pre-reimbursement group vs. January 2016 to December 2016 in the post-reimbursement group). Specifically, during the study period (2013-2016), the prophylactic use of long-acting G-CSF in patients with breast cancer receiving neoadjuvant/adjuvant AC (anthracycline plus cyclophosphamide)-containing chemotherapy became eligible for reimbursement by the NHI from September 2016 in Korea. However, long-acting G-CSF could already be used in these patients from July 2014 if they agreed to pay the medical expenses not covered by the NHI. Although data regarding the frequency of prophylactic long-acting G-CSF use in the two periods were not available in our study, there were no significant differences in the use of conventional G-CSF (33.5% vs. 33.3%), any-grade neutropenia (40.5% vs. 34.9%; p = 0.101), or febrile neutropenia (14.1% vs. 13.7%; p = 0.854) between the pre-reimbursement and post-reimbursement groups in patients with breast cancer. Otherwise, there were no significant changes in the health insurance system between the two periods of our study.
In conclusion, our analysis showed that the introduction of healthcare reimbursement by the NHI for AE evaluation may have a positive impact on physicians' capturing of AEs, acute hospital utilization, and chemotherapy completion in patients receiving neoadjuvant or adjuvant chemotherapy. Our findings highlight the importance of AE evaluation and the effect of healthcare reimbursement policy on the quality of oncology clinical practice.
Single-channel EEG signal extraction based on DWT, CEEMDAN, and ICA method
In special application scenarios, such as portable anesthesia depth monitoring, portable emotional state recognition and portable sleep monitoring, electroencephalogram (EEG) signal acquisition equipment is required to be convenient and easy to use. It is difficult to remove electrooculogram (EOG) artifacts when the number of EEG acquisition channels is small, especially when the number of observed signals is less than that of the source signals, and the overcomplete problem will arise. The independent component analysis (ICA) algorithm commonly used for artifact removal requires the number of basis vectors to be smaller than the dimension of the input data due to a set of standard orthonormal bases learned during the convergence process, so it cannot be used to solve the overcomplete problem. The empirical mode decomposition method decomposes the signal into several independent intrinsic mode functions so that the number of observed signals is more than that of the source signals, solving the overcomplete problem. However, when using this method to solve overcompleteness, the modal aliasing problem will arise, which is caused by abnormal events such as sharp signals, impulse interference, and noise. Aiming at the above problems, we propose a novel EEG artifact removal method based on the discrete wavelet transform, complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and ICA in this paper. First, the input signals are transformed by the discrete wavelet transform (DWT), and then CEEMDAN is used to solve the overcomplete and mode aliasing problems, meeting the a priori conditions of the ICA algorithm. Finally, the components belonging to EOG artifacts are removed according to the sample entropy value of each independent component. Experiments show that this method can effectively remove EOG artifacts while solving the overcomplete and modal aliasing problems.
KEYWORDS: electroencephalogram, discrete wavelet transform, empirical mode decomposition, independent component analysis, sample entropy

Introduction

EEG (Saeidi et al., 2021) is a non-linear and non-stationary electrophysiological signal of the central nervous system that contains rich information about brain activity. It is an important information source for human brain research, disease diagnosis and rehabilitation engineering. It also has rich rhythmic activity and can be used to characterize the dynamic changes in brain function. EEG signal acquisition equipment is required to be more portable and easier to use with the growing application of brain-computer interfaces (BCIs). The fewer the number of channels, the better in some special application scenarios, such as portable anesthesia depth monitoring, portable emotional state recognition (Wan et al., 2022) and portable sleep monitoring (Kwon et al., 2021), in which only a single channel is needed. However, EEG signals are easily contaminated by physiological and non-physiological artifacts, such as EOG (Miao et al., 2021), electromyogram (EMG) (Meng et al., 2022) and electrocardiogram (ECG) artifacts (Mourad, 2022). As a result, the brain function information in EEG signals is concealed by artifacts, which results in inaccurate classification. EOG artifacts have the highest amplitude and strongest randomness compared with the other two kinds of artifacts. The presence of artifacts makes EEG signals prone to obvious distortion (Jiang et al., 2019;Gu et al., 2021), interferes with the inherent information expression of neuronal electrical activity, weakens the signal-to-noise ratio and increases the difficulty of preprocessing in the recording process (Dora and Biswal, 2020;Liu et al., 2021;Sun et al., 2021).
In recent years, many classical methods, such as average artifact regression analysis (Semlitsch et al., 1986), principal component analysis (PCA) (Vigon et al., 2000), and ICA (Makeig et al., 1996; Jiang et al., 2020; Yuan et al., 2020), have been widely used to remove EOG artifacts from multichannel EEG signals. In the first method, it is assumed that the conductivity between the electrode collecting the EOG signal and the other electrodes remains unchanged. The correlation between the EOG channel and the other channels is then estimated, and the EOG signals are removed from each channel according to the conductivity. In the PCA method, the EEG and EOG signals need to be recorded at the same time during the experiment, and the artifacts are removed by analyzing the principal components. Different from the first method, the ICA method obtains each independent source signal and separates the features only according to their statistical characteristics and the observed signals, when the source signals and transmission channel parameters are unknown (Li et al., 2013).
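The blind source separation idea behind ICA can be illustrated with a minimal FastICA sketch (deflation with the tanh nonlinearity). This is a generic textbook implementation for a toy two-source mixture, not the algorithm configuration used in the paper.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal FastICA (deflation, tanh nonlinearity); rows of X are mixed signals."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the covariance matrix
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    n = Z.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ Z
            g, g_prime = np.tanh(wx), 1 - np.tanh(wx) ** 2
            # Fixed-point update: E[z g(w'z)] - E[g'(w'z)] w
            w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflation: decorrelate from found rows
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-8
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z  # estimated sources (up to order and sign)
```

The recovered components come back in arbitrary order and sign, which is why downstream artifact identification needs a criterion (the paper uses sample entropy) rather than a fixed component index.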
The above methods are all difficult to apply to a single-channel portable BCI system, as they require more EEG channels to achieve a good separation effect. Kumar proposed a method that removes single-channel EOG artifacts by using a wavelet transform soft threshold (Kumar et al., 2008). However, this method requires much prior knowledge and is highly subjective, and directly removing the wavelet coefficients will negatively impact the components of the EEG source signal. Mammone proposed the WT-ICA algorithm to remove EOG artifacts based on both the wavelet transform (WT) and ICA (Mammone et al., 2012). This algorithm meets the ICA a priori condition by wavelet decomposition of the single-channel EEG signal. However, the wavelet transform not only increases the number of observed signals but also decomposes the source signal into several subsource signals, which leads to the overcomplete problem. The empirical mode decomposition (EMD) (Wu and Huang, 2011) proposed by Huang is an adaptive time-frequency decomposition method that can better deal with non-linear and non-stationary signals (Park et al., 2011). However, intermittent phenomena arise in the EMD process due to abnormal events (such as sharp signals, pulse interference and noise), which results in mode aliasing and causes the IMF components to lose their specific physical significance. To address the problem that the ICA method cannot solve the overcomplete problem in single-channel EOG artifact removal, as well as the modal aliasing problem caused when empirical mode decomposition is used to make the number of observed signals greater than that of the source signals, we propose a single-channel EOG artifact removal algorithm (DWT-CEEMDAN-ICA) based on DWT, CEEMDAN and ICA in this paper. First, the source signals are transformed by a discrete wavelet transform and decomposed by the CEEMDAN method so that they meet the a priori condition of the ICA algorithm, which solves the overcomplete and mode aliasing problems.
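The final selection step of the proposed pipeline scores each independent component by sample entropy. A common formulation, SampEn(m, r) = -ln(A/B), where B counts template matches of length m and A of length m+1, can be sketched as follows; this is an illustrative implementation, and the paper's exact m and r parameters are not specified here.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with Chebyshev distance and
    tolerance r = r_factor * std(x); self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        # All overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Regular, predictable components (e.g., slow EOG waves) yield low sample entropy, while desynchronized EEG background activity yields higher values, which is what makes the measure usable as an artifact-component criterion.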
Experiments show the effectiveness and stability of this algorithm compared with other EOG artifact removal algorithms. This paper is organized as follows. Related work is introduced in Section 2, the proposed method is described in detail in Section 3, experiments are presented in Section 4, and some concluding remarks are given in the last section.
Related works

Discrete wavelet transform

The idea of the wavelet transform is to gradually refine the signal at multiple scales through dilation and translation operations so that it can be subdivided according to time in the high-frequency domain and according to frequency in the low-frequency domain. The EEG signal after wavelet transform has better resolution in these two domains, which automatically meets the requirements of time-frequency signal analysis and allows focusing on any detail. The window (basis) function is given by:

ψ_{a,b}(t) = (1/√|a|) ψ((t − b)/a)    (1)

where a and b are the scale displacement and time displacement, respectively.
In the DWT method, the parameters a and b of the wavelet basis function need to be restricted to discrete points, typically the dyadic grid a = 2^j, b = k·2^j. The basis function then becomes:

ψ_{j,k}(t) = 2^{−j/2} ψ(2^{−j} t − k)    (2)

where j and k are the frequency resolution (scale) and time translation, respectively; the DWT is then:

DWT(j, k) = ∫ x(t) ψ*_{j,k}(t) dt    (3)
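A one-level discrete wavelet decomposition with the Haar wavelet (the simplest wavelet basis) shows the split into approximation (low-pass) and detail (high-pass) coefficients and the perfect reconstruction property. This is a didactic sketch; the paper does not state which mother wavelet it uses.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the DWT with the Haar wavelet: approximation and
    detail coefficients, each at half the input length (len(x) even)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)  # low-pass: pairwise scaled sums
    detail = (even - odd) / np.sqrt(2)  # high-pass: pairwise scaled differences
    return approx, detail

def haar_idwt_level(approx, detail):
    """Inverse of one Haar DWT level (perfect reconstruction)."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out
```

Multi-level decomposition simply reapplies `haar_dwt_level` to the approximation coefficients, which is how a single-channel signal is expanded into several sub-band observations.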
Complete empirical mode decomposition
The idea of empirical mode decomposition (EMD) is to decompose the signal step by step according to its fluctuations or trend, producing a series of data sequences with different characteristic scales. Each sequence is called an intrinsic mode function (IMF) (Boda et al., 2021) and meets the following conditions: (1) the numbers of extreme points and zero crossings in the whole data segment are equal or differ by at most one; (2) the mean of the upper and lower envelopes formed by the local maximum and minimum points is zero at any time, i.e., the IMF is locally symmetrical about the time axis. However, due to the large amount of noise, jumping changes of the time scale and boundary effects in the actual signal, mode aliasing will occur in the process of EMD. The EMD decomposition is:

x(t) = Σ_{n=1}^{N} a_n(t) + r_N(t)    (4)

where a_n(t) is the nth-order IMF, r_N(t) is the remainder, and N is the number of IMFs.
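The core of EMD is the sifting step: subtract the mean of the upper and lower extrema envelopes from the signal, and repeat until an IMF is obtained. A single sifting iteration can be sketched as follows; linear interpolation stands in for the cubic splines of standard EMD to keep the sketch dependency-free.

```python
import numpy as np

def sift_once(x):
    """One EMD sifting step: subtract the mean of the upper and lower
    envelopes (linear interpolation instead of cubic splines)."""
    n = np.arange(len(x))
    # Strict local maxima and minima of the interior samples
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    # Anchor envelopes at the endpoints to avoid extrapolation issues
    up_i = np.concatenate(([0], maxima, [len(x) - 1]))
    lo_i = np.concatenate(([0], minima, [len(x) - 1]))
    upper = np.interp(n, up_i, x[up_i])
    lower = np.interp(n, lo_i, x[lo_i])
    mean_env = (upper + lower) / 2
    return x - mean_env, mean_env
```

Repeating this step until the IMF conditions hold extracts the fastest oscillation; the mean envelope carries the slower trend, which is exactly where the endpoint handling and interpolation choices cause the boundary effects mentioned above.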
To solve the mode-aliasing problem of the EMD method, Torres et al. proposed the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm (Xu et al., 2018). White noise is added to the residual, the mean is calculated for each IMF component, and the procedure is iterated step by step.
The method is described as follows:
1: Add Gaussian white noise to the original signal:

x_j(t) = x(t) + σ_0 w_j(t)

where σ_0 is the standard deviation of the noise and w_j(t) is the white noise added in the j-th realization, which follows the N(0, 1) distribution.
2: x_j(t) is decomposed by EMD N times. After the first decomposition, the mean value is taken to obtain the first-stage modal component IMF_1(t), as shown in Equation (6):

IMF_1(t) = (1/N) Σ_{j=1}^{N} M_1[x_j(t)]    (6)

3: Obtain the first-stage residual by Equation (7):

r_1(t) = x(t) − IMF_1(t)    (7)

4: When the number of extreme points of r_1(t) exceeds two, the first-stage modal operator applied to the noise is added to the first-stage residual r_1(t), and EMD is carried out again to obtain the second-stage modal component IMF_2(t):

IMF_2(t) = (1/N) Σ_{j=1}^{N} M_1[r_1(t) + σ_1 M_1[w_j(t)]]    (8)

where σ_1 represents the second-stage standard deviation of the noise and M_k[·] is the operator that extracts the k-th-stage IMF produced by EMD of its argument.
5: Repeat step 4 until the residual can no longer be decomposed; the original signal x(t) is then expressed as shown in Equation (9):

x(t) = Σ_{k=1}^{K} IMF_k(t) + r_K(t)    (9)

where K and k denote the total number and the index of the decomposition stages, respectively. The k-th-stage residual r_k(t) is calculated by Equation (10):

r_k(t) = r_{k−1}(t) − IMF_k(t)    (10)

and the (k+1)-th-stage modal component by Equation (11):

IMF_{k+1}(t) = (1/N) Σ_{j=1}^{N} M_1[r_k(t) + σ_k M_k[w_j(t)]]    (11)

Algorithm 1: CEEMDAN.

EEG signals and EOG artifacts partially coincide in the time domain and in the low-frequency band of the frequency domain. If the artifacts are removed directly from the result of the discrete wavelet transform, part of the EEG signal will be lost, resulting in distortion. In addition, the a priori condition of the ICA method will no longer be met. Therefore, we use Algorithm 1 (the CEEMDAN method) to decompose the wavelet coefficients obtained from the wavelet transform into several IMFs.
Independent component analysis and sample entropy
The original signal collected through the electrode is a linear instantaneous mixture of EEG signals and EOG artifacts, which are independent of each other. Therefore, the ICA method can be used to decompose the original signal into multiple independent component spaces to separate the EEG signals from the EOG artifacts. Let S = [S_1, S_2, ..., S_M] be M mutually statistically independent source signals. X = [x_1, x_2, ..., x_N] is the N-dimensional observation signal generated by linearly mixing S through an unknown matrix A, i.e., X = A × S. With both A and S unknown, the ICA method uses the assumption that the components of S are statistically independent to find a linear separation matrix W that makes the output signal approach S as closely as possible. The FastICA algorithm (Chen et al., 2018) takes the maximum negentropy as the search direction and can sequentially extract independent sources. In addition, it adopts fixed-point iteration, which makes the convergence faster and more robust. Therefore, we use this method to process the IMFs and obtain the source signals S.
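As an illustration of the fixed-point principle, here is a minimal symmetric FastICA sketch in NumPy with the tanh nonlinearity. It is a simplified stand-in for the cited FastICA implementation; the two toy sources, the mixing matrix, and the iteration count are made up:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with the tanh nonlinearity.
    X has shape (n_channels, n_samples); returns estimated sources
    S_est and the overall unmixing matrix (applied to centered X)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    # whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(Xc @ Xc.T / m)
    K = E @ np.diag(d ** -0.5) @ E.T
    Z = K @ Xc
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, row-wise
        W_new = G @ Z.T / m - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
        # symmetric decorrelation: W <- (W W^T)^(-1/2) W
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Z, W @ K

t = np.linspace(0, 4, 4000)
s1 = np.sin(2 * np.pi * 7 * t)               # "EEG-like" toy source
s2 = np.sign(np.sin(2 * np.pi * 1.3 * t))    # "EOG-like" toy source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown mixing matrix
S_est, W_total = fastica(A @ S)
```

Each recovered component should match one of the two sources up to sign and scale, which is the inherent ambiguity of ICA.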
The EEG signal comes from brain bioelectric activity and contains much physiological and pathological information, while the EOG signal only represents eye movement and blinking. Compared with EOG signals, EEG signals have more complex characteristics, and the higher the complexity, the higher the corresponding entropy. Therefore, the components with high entropy can be extracted as EEG signals. Compared with the approximate entropy (Pincus and Goldberger, 1994; Li et al., 2010), the sample entropy gives a better estimate of time-domain statistics (e.g., mean and variance) and can be applied to mixed signals composed of deterministic and random components. Therefore, we use Algorithm 2 (calculate sample entropy) to remove the EOG artifacts in this paper. Algorithm 2 begins as follows:
1: Reconstruct the m-dimensional vectors in phase space:

X_m(i) = [x(i), x(i+1), ..., x(i+m−1)]

where X_m(i) represents the phase-space position of the i-th point.
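A simplified sample-entropy sketch follows. It is an illustration rather than the paper's exact algorithm: the template counts are not length-matched exactly as in the strict SampEn definition, and the defaults m = 2 and r = 0.2·std are common conventions, not values stated in the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r): negative log of the conditional
    probability that sequences matching for m points (within tolerance
    r, Chebyshev distance) also match for m + 1 points.  Self-matches
    are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(mm):
        n = len(x) - mm + 1
        templates = np.array([x[i:i + mm] for i in range(n)])
        count = 0
        for i in range(n - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    B = count_matches(m)      # matches of length m
    A = count_matches(m + 1)  # matches of length m + 1
    return -np.log(A / B)

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 1024)
se_sine = sample_entropy(np.sin(2 * np.pi * 5 * t))   # regular -> low entropy
se_noise = sample_entropy(rng.standard_normal(1024))  # complex -> high entropy
```

Consistent with the discussion above, the noisy (more complex) signal yields a higher sample entropy than the regular sine.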
DWT-CEEMDAN-ICA algorithm
The innovation of this algorithm is that it effectively solves the overcompleteness and mode-aliasing problems of current single-channel EEG artifact removal algorithms and improves the retention of useful EEG information. The proposed algorithm is described in detail as follows:
1: The collected EEG signal is decomposed over 7 layers by a db4 wavelet transform to obtain a low-frequency approximation component A_7 and seven high-frequency detail components D_i (i = 1, ..., 7).
2: These components are single-branch reconstructed and decomposed by CEEMDAN to obtain several IMFs, and FastICA decomposition is then carried out to compute the independent components and their sample entropies.
3: The components whose sample entropy satisfies the threshold discriminant proposed by Gomez-Herrero et al. (2006) are regarded as EOG artifacts and set to zero. Then, the inverse ICA transform is carried out to obtain the EEG signal with the artifacts removed. In the threshold discriminant, ϕ(k) represents the entropy value of the k-th independent component sorted in ascending order; the ICA components corresponding to the first k entropy values are regarded as EOG artifacts, where k is the smallest integer satisfying the discriminant.
4: Repeat steps 2-3 for the remaining wavelet coefficients to obtain all the signals after artifact removal, and then carry out wavelet reconstruction to obtain a complete EEG signal without artifacts. Figure 1 shows the flowchart of the proposed algorithm.
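The zeroing operation in step 3 — rejecting the low-entropy (EOG-like) components before the inverse ICA transform — can be sketched as follows. The entropy values and the number k of rejected components here are hypothetical, since the threshold formula itself is not reproduced in this excerpt:

```python
import numpy as np

def remove_low_entropy_components(components, entropies, k):
    """Zero out the k independent components with the lowest sample
    entropy (treated as EOG artifacts).  `components` has shape
    (n_components, n_samples); returns a cleaned copy."""
    cleaned = components.copy()
    order = np.argsort(entropies)   # ascending entropy
    cleaned[order[:k], :] = 0.0     # lowest-entropy components -> artifacts
    return cleaned

comps = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.0]])
entropies = np.array([0.9, 0.2, 1.5])   # hypothetical entropy values
cleaned = remove_low_entropy_components(comps, entropies, k=1)
# the inverse ICA transform X = A @ cleaned would then reconstruct the
# signal without the artifact components
```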
Experiments
In this section, a series of experiments is conducted to evaluate the efficiency of the proposed algorithm against a number of state-of-the-art EOG artifact removal algorithms: the WT, WT-ICA, and EMD-ICA methods. In the EMD-ICA method, the signal is decomposed to acquire a series of IMFs, which are then decomposed into independent components; finally, the signals are reconstructed by the inverse ICA transformation of these components.
The original signals used in the experiments were collected from the FP1 channel by a single-electrode EEG headset (MindWave) from NeuroSky. The device uses a TGAM EEG module and adopts Bluetooth plus BLE dual-mode transmission. The sampling rate is 512 Hz, and the sampling time is 2 min. First, the ears and forehead of the subject were cleaned to remove grease and keratin and thus reduce EMG artifact components. Then, the subject was asked to close their eyes and rest for 2 min. During the collection, the subject was instructed to remain calm and to blink several times in a natural manner. As a result, the collected signals contain few other artifacts and can be considered, to a good approximation, to contain only EEG signals and EOG artifacts.
The root mean square error (RMSE) and the correlation coefficient R are introduced to evaluate the performance of the algorithms. The smaller the RMSE, the closer the processed signal is to the original signal after artifact removal and the better the removal effect; the larger the correlation coefficient R, the more completely the effective information in the signal is retained.
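These two metrics can be written directly in NumPy (the sample arrays are illustrative):

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two equal-length signals."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.mean((x - y) ** 2))

def corr_r(x, y):
    """Pearson correlation coefficient R between two signals."""
    return np.corrcoef(x, y)[0, 1]

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.8])
```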
Preprocessing
Signal jumps and mechanical noise arise during the collection process, producing a certain number of samples that are logically unreasonable and exceed the normal range of the original EEG signal. These samples degrade the quality of EEG signal analysis. Therefore, it is necessary to check the consistency of the original signals, remove the unreasonable samples, and estimate and fill in the defective samples using the mean of the surrounding samples, restoring the continuity and time-frequency characteristics. The original signals and the signals after the consistency check and interpolation are shown in Figures 2, 3, respectively.
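One simple realization of this consistency check and surrounding-mean interpolation is sketched below; the validity range is a hypothetical choice:

```python
import numpy as np

def repair_outliers(x, lo, hi):
    """Replace samples outside the plausible range [lo, hi] with values
    interpolated from the nearest valid neighbours.  For an isolated
    single-sample defect this reduces exactly to the mean of the two
    surrounding samples."""
    x = np.asarray(x, dtype=float).copy()
    bad = (x < lo) | (x > hi)
    good_idx = np.flatnonzero(~bad)
    x[bad] = np.interp(np.flatnonzero(bad), good_idx, x[good_idx])
    return x

sig = np.array([1.0, 2.0, 500.0, 4.0, 5.0])   # 500 is a collection glitch
fixed = repair_outliers(sig, -50.0, 50.0)     # glitch replaced by (2 + 4) / 2
```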
The frequencies used to analyze the characteristics of EEG signals lie mainly below 64 Hz, so a 0.05-64 Hz bandpass filter and a 50 Hz band-stop filter are applied to remove out-of-band and power-line interference, respectively. Figures 4, 5 show the spectra after bandpass and band-stop filtering, respectively.
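Assuming SciPy's standard filter-design routines, the two preprocessing filters described above could be built as follows; the filter order and notch Q factor are illustrative choices, not values given in the paper:

```python
import numpy as np
from scipy import signal

fs = 512.0  # sampling rate (Hz)

# 0.05-64 Hz band-pass, designed as second-order sections for stability
sos = signal.butter(4, [0.05, 64.0], btype="bandpass", fs=fs, output="sos")
# 50 Hz power-line notch (band-stop)
b_notch, a_notch = signal.iirnotch(50.0, Q=30.0, fs=fs)

# demo: a 10 Hz "EEG" tone contaminated by 50 Hz interference
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = signal.filtfilt(b_notch, a_notch, signal.sosfiltfilt(sos, x))

def band_amp(sig_, f):
    """Spectral amplitude of sig_ at the FFT bin nearest frequency f."""
    spec = np.abs(np.fft.rfft(sig_)) / len(sig_)
    freqs = np.fft.rfftfreq(len(sig_), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]
```

Zero-phase filtering (`filtfilt`/`sosfiltfilt`) is used so the filters do not shift the EEG waveform in time.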
Experimental results and analysis
It can be seen from the preprocessed signals that the artifacts are mainly concentrated on the sampling points in the range of 30,000-40,000. Therefore, the points in this range were used to compare the effectiveness of EOG artifact removal. The retention of effective information is assessed by comparing the correlation coefficients of the sampling points in the range of 50,000-55,000 before and after processing by the DWT-CEEMDAN-ICA algorithm. Figure 6 shows the results of the preprocessed signals after wavelet decomposition. Then, CEEMDAN decomposition is carried out to obtain the components IMF_i (i = 1, ..., 15) for each wavelet coefficient. Taking the high-frequency detail coefficient D5 as an example, Figure 7 shows the decomposition results. All the IMF components are decomposed by the ICA algorithm to obtain the independent components, as shown in Figure 8. Finally, the sample entropy of each ICA component is calculated, as shown in Table 1. From the table, we can see that the more complex the independent component, the higher its sample entropy. Therefore, the components whose sample entropy satisfies the threshold discriminant are regarded as EOG artifacts and set to zero.

Figure: Original signal and EMD-ICA algorithm signal.

Figure: Comparison results between the errors and correlation coefficients of each algorithm.

Figure 9 shows the comparison of D5 between its original value and its value after filtering out the EOG artifacts with the DWT-CEEMDAN-ICA algorithm. We can see from the figure that our proposed algorithm removes EOG artifacts very effectively. However, there is still a gap between the original signals and the signals reconstructed from the D5 coefficient alone. To prevent distortion and retain more of the effective EEG signal, the other wavelet coefficients are also processed by the DWT-CEEMDAN-ICA algorithm and then combined by wavelet reconstruction to obtain the final clean EEG signal without EOG artifacts. The result is shown in Figure 10, which shows that the algorithm removed the EOG artifacts well and fits the EEG signal closely. Figures 11-13 show the results of the other three algorithms on EOG artifact removal. We can see that although the WT algorithm can effectively remove EOG artifacts, it also removes many valuable EEG components, resulting in serious signal distortion. Moreover, this algorithm is highly subjective, as the threshold and basis function need to be selected manually. Compared with the WT algorithm, the WT-ICA algorithm retains more of the original EEG signal, but because of the overcompleteness problem and the subjectivity of judgment, the result differs from run to run, and the fit to the original EEG signal is not good. The EMD-ICA algorithm cannot effectively remove EOG artifacts because of mode aliasing and noise. Figure 14 shows the correlation coefficient R and root mean square error RMSE calculated for the four algorithms over the sampling points in the range of 50,000-55,000.
The algorithm proposed in this paper achieves the largest correlation coefficient and the smallest root mean square error, which shows that it not only solves the overcompleteness and mode-aliasing problems but also effectively removes the EOG artifacts while retaining more of the valuable original information.
Conclusion
Single-channel EEG equipment is restricted by its small number of acquisition channels and the lack of a reference electrode, and existing single-channel removal algorithms suffer from overcompleteness and mode aliasing, so EOG artifacts cannot be removed effectively. We propose a novel method in this paper that integrates the discrete wavelet transform, complete ensemble empirical mode decomposition with adaptive noise, independent component analysis, and the sample entropy algorithm. We carry out a series of experiments to demonstrate its effectiveness. Compared with some existing methods, our method can effectively identify and remove EOG artifacts from the original signals while solving the above problems.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by School of Computer Science and Engineering, Guilin University of Aerospace Technology. The patients/participants provided their written informed consent to participate in this study.
Author contributions
QH, ML, and YL contributed to the design of this work. QH: methodology, validation, formal analysis, and writing - original draft. ML: software, data curation, and data analysis. YL: conceptualization, resources, writing - review and editing, and funding acquisition. All authors contributed to the article and approved the submitted version.
Elastic constants of bcc Cu-Al-Ni alloys
We have measured the adiabatic elastic constants of two Cu-Al-Ni martensitic alloys using ultrasonic methods and we have compared the results to recent neutron-scattering experiments. It is shown that the elastic behavior of Cu-Al-Ni alloys follows the same trends exhibited by other Cu-based alloys; in particular, the TA2 long-wavelength acoustic modes are softer than all other modes.

This article is from Physical Review B 49 (1994): 9969-9972, doi:10.1103/PhysRevB.49.9969. Authors: Lluís Mañosa, M. Jurado, Antoni Planes, Jerel L. Zarestky, Thomas A. Lograsso, and C. Stassis. It is available at the Iowa State University Digital Repository: http://lib.dr.iastate.edu/ameslab_pubs/106

Ll. Mañosa, M. Jurado, and A. Planes
Departament d'Estructura i Constituents de la Matèria, Facultat de Física, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona, Catalonia, Spain
J. Zarestky, T. Lograsso, and C. Stassis
Ames Laboratory and Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011
(Received 7 December 1993)
I. INTRODUCTION
The stability of the bcc structure exhibited by a number of metals and alloys has been the subject of continuous interest for many years. Since the pioneering work of Zener, it has been acknowledged that a large entropy is the stabilizing factor for the bcc phase, since close-packed phases have lower energies. This large entropy has mostly a vibrational origin and is associated with a low transverse acoustic TA2 branch and a low value of the elastic constant C′ = (C11 − C12)/2.
On cooling, many of these bcc metals and alloys undergo a phase transition towards a close-packed structure. The transformation is first order, diffusionless, and principally described by a shear; it is the martensitic transformation.
Typical examples of materials undergoing these transitions can be found in alkali metals, transition metals, and many noble-metal-based alloys. Among them, the Cu-based alloys have received special interest because of their technologically important shape-memory properties, which are associated with the martensitic transformation.
In the last few years, considerable effort has been devoted to understanding the martensitic transformation. Several Landau-type models have been proposed that involve two coupled order parameters: a uniform strain and a phonon mode (shuffle). Computer simulation studies also qualitatively describe the vibrational properties of bcc solids. The development of these models has renewed the effort to determine the vibrational and elastic properties of materials undergoing martensitic transformations.
In this paper we present experimental results on the elastic behavior of Cu-Al-Ni single crystals just above their transition temperatures M_s.

II. EXPERIMENTAL DETAILS

Samples for elastic-constant measurements were cut into a cubic shape (about 10 mm side) using a low-speed diamond saw, with faces parallel to the (110), (1̄10), and (001) planes. The samples were polished flat to surface irregularities of about 2 μm and parallel to better than 10 rad. To remove stresses caused by the cutting process, samples were annealed for one hour at 1273 K and quenched into water at 298 K. The nominal transition temperatures were 260 and 220 K for Cu2.742Al1.105Ni0.152 and Cu2.726Al1.122Ni0.152, respectively. The elastic constants were determined using a pulse-echo ultrasonic method. Both X-cut and Y-cut transducers were used to generate and detect 10 MHz ultrasonic pulses. Acoustic coupling between the sample and transducer was optimized using Dow resin 276-V9 and Nonaq stopcock grease in the temperature ranges 210-350 and 77-270 K, respectively. Ultrasonic-pulse transit times were obtained using the phase-sensitive detection technique (MATEC, MBS-8000).
III. EXPERIMENTAL RESULTS AND DISCUSSION
The velocity of ultrasonic waves has been measured along the [110] direction of the samples. The adiabatic second-order elastic constants at room temperature for the two crystals investigated are shown in Table I. The values correspond to an average over three independent runs, and the error is the maximum deviation from the mean value. We have double-checked the consistency of our data by measuring the velocity of ultrasonic waves along the [100] direction. From these measurements we have obtained C44 = 98.0 GPa, C11 = 137 GPa for Cu2.726Al1.122Ni0.152 and C44 = 96.3 GPa, C11 = 136 GPa for Cu2.742Al1.105Ni0.152. These values coincide, within a 4% scatter, with those obtained from the data in Table I. We have also measured the temperature dependence of the elastic constants close to the nominal martensitic transformation temperature, M_s. Below M_s, the surface relief associated with the appearance of the martensitic domains breaks the acoustic coupling between the sample and transducer, causing the ultrasonic echoes to disappear. In Fig. 1 we have plotted typical examples of the relative change of the elastic constants with temperature for the two samples investigated.
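Each of these constants follows from a measured sound velocity through C = ρv². The sketch below illustrates the [110]-propagation relations for a cubic crystal with made-up velocity and density values (this excerpt does not quote them), chosen only so that the resulting constants are of the same order as those reported:

```python
# For waves propagating along [110] in a cubic crystal, each mode
# velocity v yields one effective elastic constant via C = rho * v**2:
#   longitudinal mode:             CL  = (C11 + C12 + 2*C44) / 2
#   transverse, [001]-polarized:   C44
#   transverse, [-110]-polarized:  C'  = (C11 - C12) / 2
rho = 7100.0                                     # kg/m^3 (illustrative)
v_long, v_t001, v_t110 = 5670.0, 3700.0, 1000.0  # m/s (illustrative)

CL = rho * v_long ** 2 / 1e9    # GPa
C44 = rho * v_t001 ** 2 / 1e9   # GPa
Cp = rho * v_t110 ** 2 / 1e9    # GPa, the shear constant C'
C11 = CL + Cp - C44             # cubic-elasticity identity
C12 = C11 - 2 * Cp
```

The low value of C′ relative to C44 and CL is what makes the TA2 mode soft: its velocity (hence its slope at the origin of the phonon branch) is much smaller than that of the other [110] modes.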
No anomalous behavior is found for CL and C44; they increase as the temperature is reduced. C′ decreases as the temperature drops; that is, the material becomes softer for a (110)[1̄10] shear. This behavior is common to all noble-metal alloys undergoing martensitic transformations. The decrease in C′ as the temperature is reduced is linear, and the corresponding slopes are listed in Table I.
It is worth stressing that ultrasonic velocities could not be measured down to the nominal M_s. Several degrees above M_s, a marked increase in echo attenuation occurred, accompanied by a change in slope of the elastic constant versus temperature curve.
To investigate the origin of these anomalies we have performed high-sensitivity calorimetry on the same samples used for the ultrasonic measurements. A magnified view of the temperature range just above M_s is shown in Fig. 2 for Cu2.726Al1.122Ni0.152; the inset shows the complete thermogram. It is clear that the beginning of the anomalous behavior in C′ (marked with an arrow in Fig. 1) coincides with the first thermal effect detected calorimetrically, corresponding to the transformation of a small amount of material. The transformation of a small fraction (less than 1%) of the sample above M_s is a typical feature of bulk samples subjected to a quench. Internal stresses generated during the quench are retained in the sample and locally increase the transition temperature. A detailed study of this effect has already been reported for Cu-Zn-Al by one of us. It is instructive to compare the elastic behavior of Cu-Al-Ni with that of other Cu-based martensitic alloys. All the elastic constants and their temperature dependence are very similar to the values previously reported for Cu-Al-Pd and Cu-Al-Be.

Figure: Origin of the TA2 branch for (c) Cu-Al-Be (Ref. 11), (d) Cu-Al-Pd (Ref. 16), (e) Cu-Zn-Al (Ref. 19), and (f) Au-Cu-Zn (Ref. 20). The lines are the slopes at the origin computed using the values of C′ measured ultrasonically.

It is of special interest to evaluate C′ at M_s. Values are given in Table I.
They coincide (within 3% error) with the values found for Cu-Al-Be and Cu-Zn-Al. Indeed, the present results for Cu-Al-Ni confirm our previous finding that C′ at the transition temperature always takes similar values. A phenomenological explanation for this will be given elsewhere. We finally compare the present values of C′ with inelastic neutron-scattering experiments carried out on the same crystals at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory. We used a 20′ collimator in order to be able to measure at low wave vectors. In Figs. 3(a) and 3(b) we present the origin of the TA2 branch for the two crystals investigated. The straight line is the slope at the origin computed using the values of C′ given in Table I. A striking feature is that these slopes are lower than the ones obtained by extrapolating the TA2 branch to zero frequency. To check whether this is a common feature of martensitic alloys, we have collected data for the TA2 branch and C′ from the literature and replotted them on the same scales in Figs. 3(c)-3(f). Although in most cases no neutron data exist for q < 0.2, it is clear that, within the combined experimental errors of the neutron and ultrasound experiments, the slope computed from C′ is always lower than the extrapolation of the phonon branch to q = 0. These results show that the long-wavelength transverse acoustic modes are softer than all other modes. Anharmonic effects could be the source of this extra softening. Nevertheless, to our knowledge there is still no theoretical justification for this fact.
To conclude, we have measured the elastic constants of Cu-Al-Ni and their temperature dependence down to the martensitic transformation temperature. We have found that this alloy behaves similarly to other Cu-based martensitic alloys. A comparison of phonon dispersion curves and ultrasonic measurements for a number of noble-metal-based alloys suggests that the long-wavelength acoustic TA2 modes are softer than all other modes.
TABLE I. Elastic constants C_IJ at room temperature, their relative thermal variation Γ_IJ = C_IJ^{-1} dC_IJ/dT, and C′ at the transition temperature M_s; C11 follows from the tabulated data through the identity C11 = CL + C′ − C44. It must be mentioned that the present values of C11 and C44 are very close to the ones reported by Hausch and Török for Cu2.744Al1.104Ni0.148, but they reported C′ = 9.4 GPa, which is larger than our values.
Potential Role of Sirtuin as a Therapeutic Target for Neurodegenerative Diseases
The sirtuins (SIRTs) are protein-modifying enzymes that are distributed ubiquitously in all organisms. SIRT1 is a mammalian homologue of yeast nicotinamide-adenine-dinucleotide-dependent deacetylase silent information regulator 2 (known as Sir2), which is the best-characterized SIRT family member. It regulates longevity in several model organisms and is involved in several processes in mammalian cells including cell survival, differentiation, and metabolism. SIRT1 induction, either by SIRT-activating compounds such as resveratrol, or metabolic conditioning associated with caloric restriction, could have neuroprotective qualities and thus delay the neurodegenerative process, thereby promoting longevity. However, the precise mechanistic liaison between the activation of SIRT and extended healthy aging or delaying age-related diseases in humans has yet to be established.
Introduction
Neurodegenerative disorders including Huntington's disease, Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and Alzheimer's disease (AD) are characterized by irreversibility, a progressive clinical course, and idiopathic degeneration of specific selectively vulnerable neuronal populations. These debilitating neurodegenerative diseases are inherently associated with the accumulation of misfolded proteins that adversely affect neuronal connectivity and plasticity, and trigger cell-death-signaling pathways. 1 However, the precise sequence of the events that underlie disease progression remains to be identified, and this largely explains the absence of methods and effective therapeutic interventions for this group of diseases. While the misfolded proteins typically exhibit loss of function, mislocalization, and tendency toward aggregation, most of these processes are strongly influenced by aging, which is the predominant and unifying risk factor for neurodegenerative diseases.
It is well established that low-calorie diets, known as "caloric restriction" (CR), extend lifespan in a wide variety of organisms including yeast, Caenorhabditis elegans, Drosophila species, and rodents, and it has been proposed that the sirtuins (SIRTs) might at least partly mediate this effect. 2 Thus, activating molecular pathways that slow the process of aging may provide an outstanding strategy for treating and preventing these conditions. This is where SIRTs may come into play, which are nicotinamide adenine dinucleotide (NAD + )-dependent enzymes that have emerged as important regulators of diverse biological processes and are referred to as either SIRTs or silent information regulator 2 (Sir2)-like proteins. They constitute the class III histone deacetylases and are conserved from bacteria to humans. 3 The founding member, yeast Sir2 (ySir2), is essential for maintaining silent chromatin through the deacetylation of histones. Since the discovery of the involvement of SIRT in apoptosis, cell survival, transcription, metabolism, and aging, these activities have been implicated as disease modifiers. This review highlights the role of SIRTs as potential therapeutic targets for developing treatments for neurodegenerative disorders. Although SIRT1 and SIRT2 play important roles in aging and neurodegeneration, very little is known about their role in the central nervous system (CNS). Therefore, following a brief description of the SIRTs in general, this review focuses on SIR1 and SIR2.
The Sirtuins
SIRTs, a family of NAD + -dependent deacetylases and/or adenosine diphosphate (ADP)-ribosyltransferases, are an evolutionarily conserved class of proteins that regulate various cellular functions such as genome maintenance, longevity, metabolism, and tolerance to oxidative stress. [4][5][6] These enzymes were first identified in yeast as silent information regulators, hence the family name. 7 SIRTs regulate cell functions by deacetylating both histone and nonhistone targets. Sir2 in Saccharomyces cerevisiae is the founding member of the SIRT gene family, and its deacetylase activity is required for chromatin silencing at the mating-type loci, telomeres, and the ribosomal DNA locus. Seven distinct Sir2 homologues have been identified in humans (SIRT1-SIRT7), each having distinct cellular targets and diverse cellular localizations. Robust protein deacetylase activity has been reported for SIRT1, SIRT2, SIRT3, and SIRT5, whereas SIRT4, SIRT6, and SIRT7 have no detectable enzymatic activity on a histone peptide substrate. 8 The current consensus suggests that mammalian SIRTs comprise two nuclear (SIRT1, SIRT6), one cytoplasmic (SIRT2), three mitochondrial (SIRT3, SIRT4, and SIRT5), and one nucleolar (SIRT7) protein (Table 1). 9
Sirtuin 1
SIRT1, which is found predominantly in the nucleus, has the highest sequence homology to ySir2. An early insight into one mechanism whereby Sir2 could increase the replicative lifespan of yeast came from the discovery that it acts at the nucleolus, inhibiting ribosomal DNA (rDNA) recombination as well as the formation of extrachromosomal rDNA circles. 10 It is the best-investigated and most well-understood member of the human family of SIRTs in terms of its endogenous function and activity, and is suggested to play an essential role in lifespan extension (on CR), the oxidative stress response [via poly(ADP-ribose) polymerase], and the regulation of forkhead transcription factors (FOXOs) and p53. Other important substrates of SIRT1 include Ku70, peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α), liver X receptor (LXR), and histones H1, H3, and H4, with histone deacetylation causing gene silencing. 11 SIRT1 physically interacts with p53 in the nucleus, an interaction that is enhanced after the induction of DNA damage. Acetylation of p53 results in the activation of p53 target genes such as p21, resulting in cell-cycle arrest, apoptosis, or senescence. Conversely, deacetylation of p53 by SIRT1 decreases p53-mediated transcriptional activation. 12 SIRT1 activity results in the suppression of apoptosis induced by DNA damage or oxidative stress (Fig. 1).
Since nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) exerts an antiapoptotic effect during tumor necrosis factor-α (TNF-α) activation, inhibition of NF-κB-mediated gene activation by SIRT1 sensitizes cells to apoptosis during TNF-α treatment. Ku70 is a subunit of the Ku protein complex, which is involved in the nonhomologous repair of DNA double-strand breaks. SIRT1 and Ku70 physically interact in vivo, and overexpression of SIRT1 decreases the acetylation level of Ku70, thereby promoting the antiapoptotic Bcl-2-associated X protein-Ku70 interaction.
Members of the FOXO family of transcription factors are involved in cellular processes that range from longevity to metabolism.

In mouse embryos, SIRT1 was expressed at high levels in the heart, brain, spinal cord, and dorsal root ganglia. 15 High SIRT1 levels in the embryonic brain suggest that it plays a role in neuronal and/or brain development. This notion is supported by some of the phenotypes associated with SIRT1-knockout mice, in which postnatal survival is infrequent and which show developmental defects such as exencephaly and retinal anomaly. 16 In the adult rat brain, SIRT1 can be found in the hippocampus, cerebellum, and cerebral cortex. The antioxidant vitamin E has been shown to reduce the oxidative damage and the decrease of SIRT1 caused by a high-fat and high-sugar diet, restoring SIRT1 levels. 17 The findings of that study suggest that SIRT1 levels in the brain are affected by oxidative stress and energy homeostasis. There is also recent evidence that SIRT1 deacetylates autophagy genes and stimulates basal rates of autophagy, 18 which has emerged as an important route for the removal of the toxic misfolded protein aggregates that accumulate in neurodegenerative diseases.
Sirtuin 2
The human SIRT2 protein is a closer homologue to the yeast Hst2p than to ySir2. Both proteins are localized in the cytoplasmic compartment, but human SIRT2 is also localized along the microtubule network. 19 SIRT2 has been reported to promote neuronal death. Pharmacological and genetic inhibition of SIRT2 protects neurons against α-synuclein toxicity both in vitro and in flies. 20 In addition to deacetylating a histone H3 peptide acetylated on lysine-14, SIRT2 is capable of deacetylating an acetylated α-tubulin peptide, an ability that Hst2p clearly lacks. Hence, SIRT2 shows a preference for an α-tubulin peptide over a histone peptide, suggesting that SIRT2 has evolved to carry out the deacetylation of tubulin. As tubulin acetylation is implicated in the regulation of cell shape, intracellular transport, cell motility, and cell division, it will be of future interest to address the role of SIRT2 in tubulin deacetylation, as well as in the context of CNS diseases. The SIRT2 gene is found at chromosome 19q13.2, a region that is frequently deleted in human gliomas. Furthermore, the ectopic expression of SIRT2 in a glioma cell line has been shown to decrease colony formation, suggesting a potential tumor-suppressor role for SIRT2. This could be explained by SIRT2 playing an important role in the control of mitotic exit in the cell cycle, where increased SIRT2 activity severely delays cell-cycle progression through mitosis. 21 SIRT2 was very recently described as an oligodendroglial cytoplasmic protein localized to the outer and juxtanodal loops in the myelin sheath, and which decreases cell differentiation through α-tubulin deacetylation, suggesting a potential role in myelinogenesis. 22
Neurodegenerative Diseases
Many neurodegenerative disorders are characterized by conformational changes in proteins that result in misfolding, aggregation, and intra-or extraneuronal accumulation of amyloid fibrils. The variety and complexity of these diseases are related to the different pathological conformations that the proteins involved can assume. Most conformational diseases, such as AD, PD, and ALS, are caused by a combination of genetic and environmental factors, suggesting that spontaneous events can destabilize a misfolding-prone protein or impair the clearance mechanisms, leading to the accumulation of misfolded aggregates. While aging is a major risk factor because it may compromise both the cellular processing and clearance systems, environmental factors affect the probability of disease onset and progression.
The currently available therapeutic strategies are still not effective enough to slow or prevent these diseases; the development of new therapeutic approaches that specifically target the pathogenic proteins is therefore mandatory. Below we describe some representative neurodegenerative disorders that are potential targets of SIRT-related mechanisms.
Alzheimer's Disease
The histopathological hallmarks of AD are the presence of intraneuronal neurofibrillary tangles and the accumulation of extracellular amyloid plaques in the brains of affected individuals. A link between SIRT1 and AD is becoming increasingly evident. NF-κB signaling in microglia is known to be critically involved in neuronal death induced by Aβ peptides. 23 SIRT1 protects against Aβ-induced neurotoxicity by inhibiting NF-κB signaling in microglia. Overexpression of SIRT1 and resveratrol treatment have been shown to markedly reduce Aβ-stimulated NF-κB signaling and to exert a strong neuroprotective effect. This finding concurs with the known role of SIRT1 in modulating NF-κB activity. 24 Short-term CR was shown to substantially decrease the accumulation of Aβ plaques in two AD-prone amyloid precursor protein (APP)/presenilin transgenic mouse lines, and to decrease gliosis, as marked by astrocytic activation. The authors suggest that CR enhances the clearance of brain Aβ by reducing brain insulin as a competing substrate. The overexpression of SIRT1 or pharmacological activation of SIRT1 by NAD+ also promotes α-secretase activity and attenuates the generation of Aβ peptides in embryonic Tg2576 mouse neurons in vitro.
Moreover, in Tg2576 mice, CR resulted in a more than twofold increase in the concentration of brain soluble APPα (a product of α-secretase cleavage of APP) and a statistically significant 30% increase in ADAM10 (A Disintegrin And Metallopeptidase 10, a putative α-secretase) levels in CR animals compared to controls. 25 Other mechanisms could include lower cholesterol and higher glucocorticoid levels in CR mice. 26 In a recent investigation using resveratrol, a well-known CR-mimicking agent, we found that Aβ-induced neurodegeneration was attenuated by mechanisms involving the 5′ adenosine monophosphate-activated protein kinase (AMPK) pathways (unpublished data). It is thus possible that SIRT regulates one or more of the AMPK kinases. 27 Another plausible explanation is the activation of SIRT1 by CR.
Parkinson's Disease
PD is characterized neuropathologically by the selective and progressive degeneration of dopaminergic neurons in the substantia nigra pars compacta, which is accompanied by muscle rigidity, bradykinesia, resting tremor, and postural instability. There is a growing body of evidence that both genetic and environmental factors contribute to the acceleration of dopaminergic neurodegeneration in this neurological disorder. In particular, mitochondrial dysfunction has been considered one of the most important factors involved in the pathogenesis of PD. While misfolding, oligomerization, and aggregation of α-synuclein have been implicated in PD pathology, the precise mechanisms underlying the neurodegeneration remain to be determined.
Okawara et al. 21 recently investigated whether resveratrol exhibits neuroprotective effects on dopaminergic neurons in organotypic midbrain slice cultures subjected to several different types of insult related to PD pathogenesis. They demonstrated that resveratrol, together with another SIRT-activating compound, quercetin, prevents the decrease of dopaminergic neurons induced by the dopaminergic neurotoxin 1-methyl-4-phenylpyridinium. They suggested that resveratrol exerts neuroprotective effects in dopaminergic neurons via either antioxidative or SIRT-activating activity. Moreover, Outeiro et al. 28 recently described the identification and characterization of SIRT2 inhibitors and demonstrated that pharmacological and genetic inhibition of SIRT2 rescues cell cultures from α-synuclein toxicity. However, it is still unclear whether it is the antioxidant or SIRT-activating activity (or both) that underlies this neuroprotective effect of resveratrol.
Amyotrophic Lateral Sclerosis
ALS is an adult-onset neurodegenerative disease characterized by the selective vulnerability of motor neurons in the spinal cord, brainstem, and motor cortex, causing progressive muscle weakness, atrophy, paralysis, and bulbar dysfunction, and leading to death within 3-5 years of disease onset in most cases. The sporadic form of the disease, which accounts for 90% of cases, remains poorly understood. The pathogenesis of ALS is not fully understood in the vast majority of cases, and the mechanisms involved in motor neuron degeneration are multifactorial and complex. There is substantial evidence to support the hypothesis that oxidative stress can underlie motor neuron death. 29 Mitochondrial dysfunction and neuroinflammation have also been implicated in ALS pathogenesis. Peroxisome proliferator-activated receptors (PPARs), and in particular PPAR-γ, may form part of a major signaling pathway involved in neuroinflammation in ALS. 30 The activation or inactivation of PPAR-γ could provide a viable and promising approach to understanding the mechanism of neuroinflammation in ALS. SIRT1 physically interacts with and deacetylates PPAR-γ coactivator-1α (PGC-1α) at multiple lysine sites, consequently increasing PGC-1α activity. These findings suggest that PPAR-γ is an important regulator of neuroinflammation, and a new potential target for the development of therapeutic strategies for ALS. 31 More recent studies have demonstrated that SIRT1 is protective in vitro against the cytotoxic effects of a mutant superoxide dismutase 1 that causes familial ALS. 32
Concluding Remarks
It has been demonstrated that CR is one of the most effective means of slowing the pace of aging and extending lifespan in many organisms, from yeast to mammals. In yeast, the longevity gene induced by CR is Sir2. In mammals, SIRT1, an ortholog of Sir2, controls the metabolism of white adipose tissue. Resveratrol, a polyphenolic compound obtained from grapes and red wine, is the most potent natural product activator of SIRT1. Originally identified through the recognition of the French paradox (a phenomenon whereby individuals with high-fat diets have a low incidence of cardiovascular disease due to the regular consumption of red wine), resveratrol has demonstrated therapeutic efficacies in models of cardiovascular, metabolic, inflammatory, and neurodegenerative diseases, and has shown chemopreventative activity. 33 A full understanding of the effects of SIRT manipulation in mammals necessitates the design and generation of additional transgenic and knockout mice to facilitate further investigations into SIRT biology. These models will be critical to elucidating the relationship between SIRTs, metabolism, and aging. SIRT-based therapies (i.e., small-molecule SIRT activators) hold great promise as potential therapeutic modalities for age-related conditions, and especially for neurodegenerative diseases.
Modulation of the dephasing time for a magnetoplasma in a quantum well
We investigate the femtosecond kinetics of an optically excited 2D magnetoplasma. We calculate the femtosecond dephasing and relaxation kinetics of the laser-pulse-excited magnetoplasma due to bare Coulomb potential scattering, because screening is of minor importance under these conditions. By taking into account four Landau subbands in both the conduction band and the valence band, we are now able to extend our earlier study [Phys. Rev. B 58 (1998), in print; see also cond-mat/9808073] to lower magnetic fields. We can also fix the magnetic field and change the detuning to further investigate the carrier-density dependence of the dephasing time. For both cases, we predict strong modulation of the dephasing time.
Numerous experimental and theoretical studies have been devoted to the problem of transient charge fluctuations induced by femtosecond pulse excitation in semiconductors, which can be studied through nonlinear-optical effects to elucidate many-body phenomena such as time-dependent Coulomb correlations. Most of the experimental studies have been performed without a magnetic field [1][2][3]. The few femtosecond optical studies in the presence of a strong magnetic field focused on low-density magneto-excitons [4][5][6][7]. With strong resonant laser pulses which excite a dense carrier system, with a density above the Mott ionization density, in a strong magnetic field, one can study the relaxation and dephasing kinetics of a magnetoplasma. This problem becomes important as experimental studies of the relaxation and dephasing kinetics in QW's and superlattices are in progress [8].
Recently, we presented a first kinetic study of a femtosecond laser-pulse excited 2D dense non-equilibrium magnetoplasma in a QW in the framework of the semiconductor Bloch equations combined with Coulomb scattering rates [10]. We assumed an additional weak lateral confinement which partially lifts the degeneracy of the Landau levels. We expanded the density matrix of a two-band (i.e., conduction band and valence band) semiconductor in the eigenfunctions of the 2D electron in the presence of the strong magnetic field and the weak parabolic confinement. We formulated the scattering terms for the population distribution functions of the various Landau subbands and for the optically induced polarization components between the Landau subbands in the conduction and valence band in the form of non-Markovian quantum kinetic scattering integrals [3] and in the form of semiclassical Boltzmann-type scattering rates. We calculated the time-resolved (TR) and time-integrated (TI) four-wave mixing (FWM) signals for two 50 fs pulses by taking into account up to three Landau subbands in both the valence band and the conduction band. The carrier frequency of the two delayed pulses is tuned slightly above the unrenormalized energy gap. We simplified the problem by assuming equal effective electron and hole masses, as can be approximately realized in strained QW's. Naturally, unequal effective masses will lead to more complicated quantum-beat structures in the FWM signals and will also modify to some extent the resulting relaxation and dephasing rates. Thus our studies should be seen only as idealized model calculations. The bare Coulomb potential is used in our calculation because screening is of minor importance under these strong confinement conditions (see, e.g., Ref. [9]). Naturally, in the limit of vanishing magnetic field, a Boltzmann kinetics with a bare Coulomb potential is not justified.
We find in our preceding paper [10] that the FWM signals exhibit quantum beats, mainly with twice the cyclotron frequency. Contrary to general expectations, we find no pronounced slowing down of the dephasing with increasing magnetic field. On the contrary, one obtains in some ranges of the magnetic field a decreasing dephasing time because of the increase of the Coulomb matrix elements and of the number of states in a given Landau subband. When the loss of scattering channels exceeds these increasing effects, one gets a slight increase of the dephasing time. However, details of the strongly modulated scattering kinetics depend sensitively on the detuning, the plasma density, and the spectral pulse width relative to the cyclotron frequency.
As discussed in our previous paper, we took only three Landau subbands into account in our calculation. This is mainly because the number of Coulomb scattering matrix elements increases as N^4, with N being the total number of Landau subbands considered. With N = 3 in our previous calculation, the number of form factors is already 81. However, such a low number of Landau subbands limits us to magnetic fields higher than 10 T. More Landau subbands are necessary in order to extend the kinetics to lower magnetic fields or to larger detunings.
In this report, we take into account four Landau subbands in both the conduction band and the valence band. The 256 matrix elements of Coulomb scattering are calculated in the same way as discussed in the Appendix of Ref. [10]. We can therefore investigate the femtosecond dephasing and relaxation kinetics of the magnetoplasma for magnetic fields B > 6 T. We can also fix the magnetic field and tune the laser pulses over a few Landau subband transitions. We find strong modulation of the dephasing time both for variations of the magnetic field and of the detuning. These modulations could not be seen fully in our previous paper due to the small range of available B fields, limited by the three Landau subbands considered.
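The form-factor counts quoted here and in the preceding paragraph (81 for three subbands, 256 for four) are simply the N^4 scaling evaluated at N = 3 and N = 4; as a trivial check:

```python
# Number of Coulomb form factors grows as N**4 with the number of
# Landau subbands N (the values 81 and 256 are quoted in the text).
counts = {n: n**4 for n in (3, 4)}
print(counts)  # -> {3: 81, 4: 256}
```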
The semiconductor Bloch equations are the same as those in our previous paper [10], with ρ_{ν,n,ν′,n′,k} representing the single-particle density matrix with the band indices {ν, ν′} = {c, v} and the corresponding Landau subbands {n, n′}. The diagonal elements describe the carrier distribution functions ρ_{ν,n,ν,n,k} = f_{νnk} of the n-th Landau subband and the wavevector k, and the off-diagonal elements describe the interband polarization components, e.g., ρ_{c,n,v,n,k} = P_{nk} e^{−iωt}. For the assumed e-h symmetry, f_{enk} ≡ f_{hnk} ≡ f_{nk}, and the polarization has only components between subbands of the same quantum number n in the conduction and valence band, which simplifies the problem considerably. The coherent parts of the equations of motion for the distribution functions and the polarization components include Hartree-Fock contributions and can be found in our previous paper, as can the explicit forms of the scattering rates. In this report, however, we only take the Markovian limit. The Landau index n in our present study ranges from 0 to 3. We use the same material parameters of the quantum well as in our previous paper. We perform a numerical study of the Bloch equations in the Boltzmann limit to calculate TR and TI FWM signals in order to study the effective dephasing time. To do so, we use two delayed Gaussian pulses of a width of 50 fs and a variable delay time τ, E(t) = E_0(t) + E_0(t − τ) e^{iϕ}, with the relative phase ϕ = (k_2 − k_1) · x resulting from the different propagation directions k_1 and k_2. We use an adiabatic projection technique with respect to this phase in order to calculate the polarization in the FWM direction with wavevector 2k_2 − k_1, described in detail in Ref. [11]. This technique is suitable for optically thin crystals, where the spatial dependence can be treated adiabatically [12].
The intensity of each pulse is given by ∫_{−∞}^{∞} d E_0(t) dt = χπ, with χ denoting the fraction of a π-pulse, defined without local-field corrections, and d being the optical-dipole matrix element. Differing from our previous paper, where we discussed both the intermediate-density case (χ = 0.1) and the high-density case (χ = 0.3), in this study we focus only on the intermediate-density case with χ fixed to 0.1.
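As an illustration of this pulse parametrization, the sketch below constructs two delayed 50 fs Gaussian pulses with a relative phase and normalizes the area of each so that the pulse-area condition with χ = 0.1 holds. Only χ and the 50 fs width come from the text; the dipole matrix element d and the specific values of τ and ϕ are illustrative placeholders, not parameters of the paper.

```python
import numpy as np

fs = 1e-15
fwhm = 50 * fs                               # pulse width quoted in the text
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # Gaussian sigma from FWHM
chi, d = 0.1, 1.0                            # chi from the text; d illustrative

t = np.linspace(-300 * fs, 300 * fs, 4001)
dt = t[1] - t[0]
envelope = np.exp(-t**2 / (2 * sigma**2))
# Normalize so that the pulse area (integral of d*E0(t) dt) equals chi*pi
E0 = envelope * (chi * np.pi) / (d * envelope.sum() * dt)

tau, phi = 100 * fs, 0.7                     # illustrative delay and phase
E0_delayed = np.interp(t - tau, t, E0, left=0.0, right=0.0)
E_total = E0 + E0_delayed * np.exp(1j * phi)  # total two-pulse field

print(round(np.sum(d * E0) * dt / np.pi, 3))  # -> 0.1
```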
Our main results are plotted in Figs. 1 and 3. In Fig. 1 we plot the effective dephasing time as a function of the magnetic field B for pulses with detuning ∆_0 = 26.4 meV, the same value used in our previous calculation [10]. The effective dephasing time T_eff is obtained from the decay of the TI-FWM signal with the delay time τ, written in the form ∝ exp(−τ/T_eff). The solid curve is our present calculation with 4 Landau subbands and the dashed curve is our earlier one with 3 Landau subbands. We find that they coincide above B = 15 T, in agreement with our discussion that 3 Landau subbands are adequate only for high magnetic fields and that one needs to include more Landau subbands for lower magnetic fields. We further find a modulation of the dephasing time: it first decreases with decreasing magnetic field and increases again when B decreases from 7 T to 6 T. We speculate that more modulations occur at still lower magnetic fields. However, we cannot push our calculation to lower fields because that would require even more Landau subbands.
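Extracting the effective dephasing time from the decay ∝ exp(−τ/T_eff) amounts to a simple exponential fit of the TI-FWM signal versus delay. A minimal sketch (not the authors' code; the synthetic trace uses an assumed T_eff of 120 fs purely for the demonstration):

```python
import numpy as np

tau = np.linspace(0.0, 400.0, 41)    # delay times in fs (illustrative)
T_eff_true = 120.0                   # fs, assumed value for this demo
signal = np.exp(-tau / T_eff_true)   # idealized noise-free TI-FWM signal

# Linear fit of log(signal) vs tau; the slope gives -1/T_eff
slope, intercept = np.polyfit(tau, np.log(signal), 1)
T_eff = -1.0 / slope
print(round(T_eff, 1))               # -> 120.0
```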
This modulation can be well understood along the lines of our previous discussion [10]. For fixed pulses, several effects compete with each other when the magnetic field increases. On the one hand, the number of Landau subbands which contribute to the Coulomb scattering kinetics decreases. In particular, the contributions to the dephasing from the intra- and inter-subband scattering of the higher Landau subbands, as well as the inter-subband scattering between the higher and lower subbands, decrease. For large populations in one subband, Pauli blocking may further reduce the intra-band scattering rates. All these effects (we refer to them as effects I in the following) increase the dephasing time. On the other hand, with increasing B field the degeneracy of the Landau subbands increases and the matrix elements of the Coulomb scattering become larger. Moreover, an increasing degeneracy also increases the scattering rates. Both the increased degeneracy and the increased Coulomb matrix elements (effects II) reduce the dephasing time. When effects I dominate over effects II, one observes an increase of the dephasing time. Otherwise, a decrease of the dephasing time results.
In order to further understand the properties of the dephasing, we change the detuning for a fixed magnetic field of B = 7 T. In Fig. 2 we illustrate the pulse spectra tuned at −20 meV, which is far below the band gap (solid curve), and at 40 meV, which is deep inside the band (dashed curve). We tune the laser pulses from around −35 meV to 40 meV and calculate the effective dephasing time. The resulting dephasing time is plotted in Fig. 3 as a function of the detuning ∆_0. From Fig. 3 one can see strong modulations of the dephasing time. When the laser pulses are tuned far away from the lowest optical transition, the excitation is very small and the dephasing time is independent of the detuning (and the related carrier densities). However, the detuning strongly affects the dephasing time when it is larger than zero. We find that the dephasing time reaches minima when the laser pulses are tuned resonantly at P_0 and P_1. Another minimum is observed when ∆_0 sits between P_0 and P_1 and the pulse excites comparable populations in both subbands.
For a fixed magnetic field, the matrix elements of Coulomb scattering are fixed. The dephasing time is then mainly modulated by the occupation of the Landau subbands as the detuning of the laser pulse varies. When the pulses are resonant with an optical transition, the carriers mainly populate the corresponding Landau subband. This makes the Coulomb scattering more efficient, as the scattering rate increases superlinearly with the carrier density. Therefore one observes a minimum in the dephasing time. It is noted here that the distribution function in this calculation is smaller than 0.5 even after the second pulse. This rules out a dominant contribution of Pauli blocking, which would make the dephasing times longer, as discussed before. The third minimum between P_0 and P_1 in Fig. 3 comes from the fact that the center of the pulse sits just between the two lowest optical transitions and the pulse pumps carriers with both of its tails. In this situation the lowest two Landau subbands both get relatively large excitations, which leads to the fast dephasing. It is noted that, notwithstanding the fact that we plotted the detuning down to −35 meV, the physically meaningful range is only ∆_0 > 0, where the carrier-carrier scattering dominates over other processes.
In conclusion, we have discovered modulations of the effective dephasing time of 2D magnetoplasma by either fixing the detuning and changing the magnetic field or fixing the magnetic field and changing the detuning.
We acknowledge financial support by the DFG within the DFG-Schwerpunkt "Quantenkohärenz in Halbleiter". Interesting discussions with D.S. Chemla and H. Roskos are appreciated.
Dietary Replacement Effect of Fish Meal by Tuna By-Product Meal on Growth and Feed Availability of Red Sea Bream (Pagrus major)
Simple Summary Fish meal is widely used as a feed ingredient in formulated feeds for marine fish species due to its high nutritional value and palatability. However, the increasing cost and limited availability of fish meal highlight the need to look for an alternative protein source for fish meal in fish feeds to achieve sustainable aquaculture. Tuna by-product meal, derived from the tuna canning process, shows promise as a viable substitute for fish meal in fish feeds. This study aimed to investigate the effect of replacing fish meal with tuna by-product meal on the growth of red sea bream. The findings of this study suggested that 40% fish meal replacement with tuna by-product meal is viable without compromising growth, feed consumption, and feed utilization, while simultaneously providing the highest economic return for fish farmers. Abstract The effect of substituting fish meal (FM) with tuna by-product meal (TBM) on the growth and feed availability of red sea bream (Pagrus major) was investigated. Six experimental diets were created to be isonitrogenous (51.5%) and isolipidic (14.5%). The control (Con) diet contained 55% FM. FM substitution in the Con diet was made in increments of 20 percentage points (20, 40, 60, 80, and 100%), with the diets named TBM20, TBM40, TBM60, TBM80, and TBM100, respectively. Juvenile red sea bream were stocked into eighteen 300-L flow-through tanks (50 fish/tank). Red sea bream were hand-fed each diet to satiation for 8 weeks. No statistical differences in weight gain, specific growth rate (SGR), and feed consumption were found among red sea bream fed the Con, TBM20, and TBM40 diets. Furthermore, feed utilization of fish fed the TBM20, TBM40, TBM60, and TBM80 diets was comparable to that of red sea bream fed the Con diet. The biological indices, biochemical composition, and hematological parameters of fish were not statistically altered by dietary FM replacement with TBM. The greatest economic profit index (EPI) was achieved with the TBM40 diet.
In conclusion, replacing 40% of the FM with TBM in the red sea bream diet appears to be the most recommendable approach: it produced no reduction in growth or feed availability while maximizing the EPI for farmers.
Introduction
Red sea bream (P. major) is a representative fish species commonly farmed in Eastern Asia, including the Republic of Korea (hereafter, Korea) and Japan. The annual aquaculture production of red sea bream in Korea has continuously increased from 2755 metric tons in 2013 to 8313 metric tons in 2021 [1]. Carnivorous fish species typically demand high levels of animal-origin protein in their feeds, and the quality of fish feed is largely contingent on its protein sources, which constitute two-thirds of the total feed cost [2]. Fish meal (FM) remains a primary and costly protein source in formulated fish diets due to its high nutritional value and excellent palatability [3,4]. Fish feeds must contain a higher proportion of FM compared to feeds for terrestrial livestock to fulfill the nutritional requirements [5]. Nevertheless, the global production of FM has decreased by an average of 1.7% per annum since 1995, due to the regulation of fisheries and declining fish resources [6], and has been stagnant to date. In addition, a substantial amount of FM has been incorporated in feeds for terrestrial livestock [7]. Consequently, the limited availability of and increasing competition for FM indicate that aquaculture will face a significant bottleneck in the near future as long as the fish feed industry depends on the availability of FM.
Several attempts have been made to evaluate various animal protein sources [8-10] and plant protein sources [8,11-13] as substitutes for FM in red sea bream diets, with substantial achievements. However, the relatively low protein level, imbalanced amino acid (AA) composition, presence of anti-nutritional factors, and poor digestibility of plant protein sources have limited their widespread use in diets, especially for carnivorous fish species [14,15]. Thus, it is necessary for feed nutritionists to search for an animal-origin replacer, free of these issues, for FM in fish feeds.
Fishery by-products are increasingly considered a practical replacer for FM in aquafeeds [16,17]. In fish processing plants, over 60% of fishery by-products, including heads, skin, trimmings, fins, frames, viscera, and roes, are generated as waste, and only 40% of fish products are produced for human consumption [18]. The disposal of large amounts of these by-products can cause highly polluting organic matter, which leads to environmental and economic issues [19,20]. However, fishery by-products represent excellent sources of high-quality protein and lipids, and are rich in micronutrients such as vitamins (A, B2, B3, and D) and minerals (iron, zinc, selenium, and iodine) [21]. Tuna by-product is a type of fishery by-product generated from the canning process of the primary market species of tuna, such as skipjack tuna (Katsuwonus pelamis) and yellowfin tuna (Thunnus albacares) [22,23]. In recent years, the tuna cannery industry has been increasingly exploring the utilization of by-products generated during tuna processing to innovate new products and enhance profit margins [22]. As a result, various feed ingredients derived from tuna by-products, including tuna by-product meal (TBM), tuna silage, and tuna protein hydrolysate, have been developed [22].
Previous studies have reported the potential for substituting FM with TBM in the diets of olive flounder (Paralichthys olivaceus) [23], spotted rose snapper (Lutjanus guttatus) [24], and rockfish (Sebastes schlegeli) [25]. Additionally, Uyan et al. [26] found that tuna muscle by-product powder (TMP) (obtained after the de-boning process of TBM) was an appropriate protein source to replace up to 50% of the FM in a 58.5% FM-based diet without adverse impacts on the growth of red sea bream. However, the diets used in Uyan et al. [26]'s study contained relatively low protein contents (45−46%), lower than the dietary protein requirement (52%) of red sea bream [27], and the use of TMP in fish feeds is very restricted because of its limited supply and high cost [26,28]. Nevertheless, TBM produced from the tuna canning process could be commercially available in Korea, with production of over 30,000 metric tons by Woojin Feed Ind. Co. Ltd. (Incheon Metropolitan City, Republic of Korea) in 2020 [23], although statistical data on the annual production of TBM in Korea were unavailable to date. Despite the commercial importance of TBM in fish feeds, no study evaluating the potential substitution of FM with TBM in the diet of red sea bream has been reported.
This study thus aimed to evaluate the effect of dietary FM replacement with TBM on the growth, feed availability, biochemical composition, and blood chemistry of red sea bream. Additionally, the economic effect of dietary substitution of FM with TBM was investigated.
Experimental Fish and Conditions
Similar sizes of juvenile red sea bream were obtained from a commercial fish farm (Tongyeong-si, Chungcheongnam-do, Republic of Korea) and acclimated in a 5-ton round tank for 2 weeks. During this period, they were provided with a commercial extruded pellet (50% crude protein and 13% crude lipid) (Suhyup Feed, Uiryeong-gun, Gyeongsangnam-do, Republic of Korea). After acclimation, 900 juveniles averaging 8.6 g were allocated into eighteen 300-L flow-through circular tanks (50 fish/tank) in triplicate for the 8-week feeding trial. The tanks were filled with a 1:1 mixture of sand-filtered seawater and underground seawater. Water quality was monitored daily throughout the feeding experiment using a digital multimeter (AZ-8603, AZ Instrument, Taichung, Taiwan). The water temperature, dissolved oxygen, salinity, and pH were recorded at 20.6 ± 1.54 °C (mean ± SD), 7.7 ± 0.27 mg/L, 30.5 ± 0.40 g/L, and 7.5 ± 0.07, respectively. Fish were meticulously hand-fed to visual satiation twice a day (08:00 and 17:00). The experimental conditions were maintained under the natural photoperiod. To maintain adequate water quality, the bottom of each tank underwent daily siphon-cleaning, and deceased fish were promptly removed upon detection.
Experimental Diets
The feed formulations of the experimental feeds are shown in Table 1. The control (Con) diet contained 55% FM (anchovy meal) and 17% soybean meal as the protein sources. Additionally, 17.5% wheat flour and 4% each of fish and soybean oils were included as the carbohydrate and lipid sources, respectively, in the Con diet. TBM was substituted for 20, 40, 60, 80, and 100% of the FM in the Con diet, and the resulting diets were designated TBM20, TBM40, TBM60, TBM80, and TBM100, respectively. All experimental diets were isonitrogenous at 51.5% and isolipidic at 14.5%. The experimental diets were formulated to fulfill the protein and lipid requirements of red sea bream [27]. All feed ingredients were finely pulverized, thoroughly mixed, and pelleted using a laboratory pellet extruder with a 3:1 water ratio. The experimental feeds were dried at 40 °C for a couple of days and stored at −20 °C until use.
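As a back-of-envelope check of the design (under the assumption that the replacement percentages act directly on the 55% FM inclusion of the Con diet), the FM remaining in each diet works out as follows. Note that the matching TBM inclusion is not simply the difference, since the diets were reformulated to stay isonitrogenous and isolipidic.

```python
# FM inclusion left in each diet if the stated replacement percentages
# act on the 55% FM of the Con diet (an assumption for this sketch).
fm_basal = 55.0
fm_left = {f"TBM{p}": round(fm_basal * (1 - p / 100), 1)
           for p in (20, 40, 60, 80, 100)}
print(fm_left)  # -> {'TBM20': 44.0, 'TBM40': 33.0, 'TBM60': 22.0, 'TBM80': 11.0, 'TBM100': 0.0}
```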
Measurement of Biological Indices of Fish
On completion of the feeding trial, all surviving red sea bream in each tank were starved for 24 h and then anesthetized with tricaine methanesulfonate (MS-222) at a concentration of 100 ppm. The total number of fish in each tank was counted and their collective weight was measured. In each tank, ten anesthetized fish were randomly selected to calculate biological indices, including condition factor (CF), viscerosomatic index (VSI), and hepatosomatic index (HSI). Growth performance, feed utilization, and biological indices were calculated as follows [28]: specific growth rate (SGR, %/day) = (Ln final weight of fish − Ln initial weight of fish) × 100/days of feeding trial (56 days); feed efficiency (FE) = [total final weight (g) − total initial weight (g) + total weight of dead fish (g)]/total feed consumption (g); protein efficiency ratio (PER) = weight gain of fish (g/fish)/total protein consumption of fish (g/fish); protein retention (PR, %) = protein gain of fish (g/fish) × 100/total protein consumption of fish (g/fish); CF (g/cm³) = body weight of fish (g) × 100/total length of fish (cm)³; VSI (%) = viscera weight of fish (g) × 100/body weight of fish (g); and HSI (%) = liver weight of fish (g) × 100/body weight of fish (g).
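For illustration, the indices above can be sketched directly from the published formulas. The numeric inputs in the example below are hypothetical and are not data from this trial:

```python
import math

def sgr(initial_w, final_w, days=56):
    """Specific growth rate (%/day): (ln final - ln initial) * 100 / days."""
    return (math.log(final_w) - math.log(initial_w)) * 100 / days

def feed_efficiency(total_final_g, total_initial_g, dead_fish_g, feed_g):
    """FE = [final wt - initial wt + wt of dead fish] / total feed consumed."""
    return (total_final_g - total_initial_g + dead_fish_g) / feed_g

def condition_factor(body_w_g, total_len_cm):
    """CF (g/cm^3) = body weight (g) * 100 / total length (cm)^3."""
    return body_w_g * 100 / total_len_cm ** 3

# Hypothetical fish growing from 8.6 g to 33.0 g over the 56-day trial
print(round(sgr(8.6, 33.0), 2))                # 2.4 (%/day)
print(round(condition_factor(33.0, 12.0), 2))  # 1.91 (g/cm^3)
```

The same pattern extends to PER, PR, VSI, and HSI, which are simple ratios of the measured weights.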
Blood Chemistry of Red Sea Bream
Blood samples were collected from five anesthetized fish from each tank using heparinized syringes after the individual weight measurements for the biological indices. After centrifugation (2720× g) at 4 °C for 10 min, plasma samples were collected and stored at −70 °C in separate aliquots. These samples were later analyzed for aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), total bilirubin (T-BIL), total cholesterol (T-CHO), triglyceride (TG), total protein (TP), and albumin (ALB) levels using an automatic chemistry system (Fuji Dri-Chem NX500i, Fujifilm, Tokyo, Japan).
For the immune assays, blood samples were additionally collected from five anesthetized fish from each tank using syringes after the individual weight measurements. After centrifugation (2720× g) at 4 °C for 10 min, serum samples were collected and stored at −70 °C in separate aliquots. Superoxide dismutase (SOD) activity was determined as the percentage inhibition of the enzyme reaction, with the water-soluble tetrazolium dye (WST-1) as the substrate and xanthine oxidase as the enzyme, using a SOD assay kit (Sigma, 19160, St. Louis, MO, USA) following the standard protocol. After incubation at 37 °C for 20 min, the absorbance of each endpoint assay was measured at 450 nm, the absorbance wavelength of the colored product of the WST-1 reaction with superoxide. The inhibition percentage was normalized per mg protein and expressed as SOD units.
Furthermore, a turbidimetric assay for lysozyme was performed as described by Lange et al. [30]. In short, 100 µL of test serum was added to 1.9 mL of a suspension of Micrococcus lysodeikticus (0.2 mg/mL; Sigma, St. Louis, MO, USA) in 0.05 M sodium phosphate buffer (pH 6.2). The reactions were conducted at 25 °C, and absorbance at 530 nm was measured with a spectrophotometer between 0 and 60 min. One unit of lysozyme activity was defined as the amount of enzyme required to produce a 0.001/min reduction in absorbance.
Analysis of Biochemical Composition of the Experimental Feeds and Fish
The proximate composition of the experimental diets and the whole body of fish was analyzed according to standard protocols [31]. Moisture content was measured by oven drying at 105 °C (6 h for dry samples and 24 h for wet samples). Crude protein content was analyzed using the Kjeldahl method (Kjeltec 2100 Distillation Unit, Foss Tecator, Hoganas, Sweden), while crude lipid content was analyzed using an ether-extraction method (Soxtec™ 2043 Fat Extraction System, Foss Tecator, Sweden). Ash content was determined in a muffle furnace at 550 °C for 4 h.
All amino acids (AA) except tryptophan in the FM, TBM, experimental diets, and whole body of red sea bream were analyzed using the ninhydrin post-column reaction method by ion-exchange chromatography with an AA analyzer (L-8800 Auto-analyzer, Hitachi, Tokyo, Japan). For each sample, 0.2 g was placed in a digestion tube, 10 mL of 6 N HCl was added, and the mixture was hydrolyzed at 110 °C for 24 h under nitrogen gas. The filtrate was concentrated using a reduced-pressure concentrator, adjusted to a volume of 50 mL with 0.2 M sodium citrate buffer, and filtered through a 0.20 µm cellulose acetate syringe filter before analysis. Tryptophan content was measured separately using high-performance liquid chromatography (S1125 HPLC pump system, Sykam GmbH, Eresing, Germany).
Fatty acids (FA) in the experimental diets and whole body of red sea bream were extracted using a 2:1 mixture of chloroform and methanol, as described by Folch et al. [32]. FA methyl esters were prepared by transesterification with 14% BF3-MeOH (Sigma, St. Louis, MO, USA) and analyzed using a gas chromatograph (Trace GC, Thermo, Waltham, MA, USA) equipped with a flame ionization detector. Separation was carried out on an SP™-2560 capillary column (100 m × 0.25 mm I.D., film thickness 0.20 µm; Supelco, Bellefonte, PA, USA).
Analysis of Economic Measurements of the Study
The economic evaluation of this study was conducted in USD. The economic conversion ratio (ECR) and economic profit index (EPI) were calculated according to Bicudo et al. [33] and Montenegro et al. [34]: ECR (USD/kg) = feed consumption of fish (kg/fish)/weight gain of fish (kg/fish) × diet price (USD/kg); and EPI (USD/fish) = [final weight of fish (kg/fish) × selling price of fish (USD/kg)] − [feed consumption of fish (kg/fish) × diet price (USD/kg)]. The prices of feed ingredients and fish were calculated at an exchange rate of USD 1 = KRW 1232 (Korean currency). The price of red sea bream was estimated at 20.29 USD/kg. The price of each experimental diet was computed by multiplying the proportional contribution of each feed ingredient by its cost per kg and summing the resulting values over all ingredients. The price (USD/kg) of each ingredient was as follows: FM = 2.16; TBM = 1.30; fermented soybean meal = 0.70; wheat flour = 0.55; fish oil = 2.76; soybean oil = 1.79; vitamin premix = 8.28; mineral premix = 6.66; choline = 1.30.
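These economic calculations can be sketched as below. The per-fish feed and weight values in the example are hypothetical; only the fish selling price (20.29 USD/kg) and the FM and wheat flour costs come from the text, and the two-ingredient diet is a deliberately incomplete illustration of the cost-weighting step:

```python
def ecr(feed_kg, gain_kg, diet_cost):
    """Economic conversion ratio (USD per kg of weight gain)."""
    return feed_kg / gain_kg * diet_cost

def epi(final_w_kg, fish_price, feed_kg, diet_cost):
    """Economic profit index (USD/fish): revenue minus feed cost."""
    return final_w_kg * fish_price - feed_kg * diet_cost

def diet_price(composition):
    """Diet price as the cost-weighted sum of ingredient proportions.
    composition: {ingredient: (proportion_of_diet, USD_per_kg)}"""
    return sum(p * cost for p, cost in composition.values())

# Hypothetical partial diet: 55% FM at 2.16 USD/kg, 17.5% wheat flour at 0.55
price = diet_price({"FM": (0.55, 2.16), "wheat flour": (0.175, 0.55)})
print(round(price, 3))                         # 1.284 USD/kg
print(round(epi(0.040, 20.29, 0.036, price), 3))  # 0.765 USD/fish
```

A full diet price would sum over all nine listed ingredients in the same way.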
Statistical Analysis
All statistical analyses were carried out using SPSS version 24.0 (SPSS Inc., Chicago, IL, USA). Data were checked for the assumptions of normality and homogeneity of variance using the Shapiro-Wilk and Levene tests, respectively, and no violations were detected (p > 0.05). One-way analysis of variance (ANOVA) and Duncan's multiple range test [35] were used to compare the means of dietary treatments. Percentage data were arcsine-transformed before statistical analysis. When statistical significance was detected (p < 0.05), the data were subjected to orthogonal polynomial contrasts and regression analysis to determine the most suitable model (linear, quadratic, or cubic).
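The arcsine (angular) transform applied to percentage data maps a percentage p to asin(sqrt(p/100)); a minimal sketch (reporting the result in degrees, one common convention):

```python
import math

def arcsine_transform(pct):
    """Arcsine-square-root transform of a percentage, returned in degrees."""
    return math.degrees(math.asin(math.sqrt(pct / 100)))

print(round(arcsine_transform(50), 1))   # 45.0 (the midpoint maps to 45 degrees)
print(round(arcsine_transform(100), 1))  # 90.0
```

The transform stabilizes the variance of proportion data near 0% and 100% before ANOVA.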
Amino Acid and Fatty Acid Profiles of the Experimental Feeds
FM contained relatively high contents of all essential AA (EAA) and non-essential AA (NEAA), except for glycine, compared with TBM (Table 2). As the dietary FM replacement level with TBM increased, the arginine content of the diets increased, while the other EAA, except for phenylalanine and threonine, tended to decrease. TBM contained a relatively high total content of n-3 highly unsaturated FA (∑n-3 HUFA), including docosahexaenoic acid (DHA, C22:6n-3), but lower total monounsaturated FA (∑MUFA) and eicosapentaenoic acid (EPA, C20:5n-3) contents than FM (Table 3). Additionally, as the substitution level of FM by TBM increased, the ∑n-3 HUFA content of the experimental diets tended to increase, whereas the total saturated FA (∑SFA) and ∑MUFA contents tended to decrease.
Performance of Fish
The survival of fish was not significantly (p > 0.9) altered by dietary FM substitution with TBM (Table 4). Red sea bream fed the Con, TBM20, and TBM40 diets showed significantly (p < 0.0001) greater weight gain than fish fed all other diets. Weight gain of fish fed the TBM60 and TBM80 diets was also significantly (p < 0.05) higher than that of fish fed the TBM100 diet. The SGR of red sea bream fed the Con diet was significantly (p < 0.0001) higher than that of fish fed the TBM60, TBM80, and TBM100 diets, but comparable to that of fish fed the TBM20 and TBM40 diets. Additionally, polynomial orthogonal contrasts showed significant linear (p = 0.0206 and p = 0.0236, respectively) and quadratic (p = 0.0001 for both) models between dietary replacement levels of TBM for FM versus weight gain and SGR (Table 5). In regression analysis, quadratic relationships were the most suitable models between dietary substitution levels of TBM for FM versus weight gain (Y = −0.000595X² + 0.009619X + 32.9571, p < 0.0001, R² = 0.8335, Ymax at X = 8.1%) and SGR (Y = −0.000027X² + 0.00043X + 2.8092, p < 0.0001, R² = 0.8263, Ymax at X = 8.0%). The feed consumption (g/fish) of red sea bream fed the Con diet was significantly (p < 0.03) higher than that of red sea bream fed the TBM60, TBM80, and TBM100 diets, but not significantly (p > 0.05) different from that of red sea bream fed the TBM20 and TBM40 diets (Table 6). Polynomial orthogonal contrasts showed a significant linear (p = 0.0009) model between dietary replacement levels of TBM for FM and feed consumption. In regression analysis, a linear relationship was the most suitable model between dietary FM substitution with TBM and feed consumption (Y = −0.030333X + 36.0322, p < 0.0001, R² = 0.5833). Values (means of triplicate ± SE) in the same column sharing the same superscript letter are not significantly different (p > 0.05).
¹ Feed efficiency (FE) = [total final weight (g) − total initial weight (g) + total weight of dead fish (g)]/total feed consumption (g). ² Protein efficiency ratio (PER) = weight gain of fish (g/fish)/total protein consumption of fish (g/fish). ³ Protein retention (PR, %) = protein gain of fish (g/fish) × 100/total protein consumption of fish (g/fish). ⁴ Condition factor (CF, g/cm³) = body weight of fish (g) × 100/total length of fish (cm)³. ⁵ Viscerosomatic index (VSI, %) = viscera weight of fish (g) × 100/body weight of fish (g). ⁶ Hepatosomatic index (HSI, %) = liver weight of fish (g) × 100/body weight of fish (g).
The FE of red sea bream fed the Con, TBM20, TBM40, TBM60, and TBM80 diets was significantly (p < 0.04) higher than that of red sea bream fed the TBM100 diet. Polynomial orthogonal contrasts showed a significant quadratic (p = 0.0040) model between dietary replacement levels of TBM for FM and FE. In regression analysis, a quadratic relationship was the most suitable model between dietary FM substitution with TBM and FE (Y = −0.00001308X² + 0.000559X + 0.9685, p < 0.003, R² = 0.5473, Ymax at X = 21.4%). The PER of red sea bream fed the TBM20 and TBM40 diets was significantly (p < 0.006) higher than that of red sea bream fed the TBM80 and TBM100 diets, but not significantly (p > 0.05) different from that of red sea bream fed the Con and TBM60 diets. The PR of red sea bream fed the TBM20, TBM40, and TBM60 diets was significantly (p < 0.002) higher than that of red sea bream fed the TBM80 and TBM100 diets, but not significantly (p > 0.05) different from that of red sea bream fed the Con diet. Polynomial orthogonal contrasts showed significant linear (p = 0.0051 and p = 0.0022, respectively) and quadratic (p = 0.0014 and p = 0.0001, respectively) models between dietary substitution levels of TBM for FM versus PER and PR. In regression analysis, quadratic relationships were the most suitable models between dietary FM substitution with TBM and PER (Y = −0.00003869X² + 0.002507X + 1.7743, p < 0.0001, R² = 0.6815, Ymax at X = 32.1%) and PR (Y = −0.000935X² + 0.068738X + 28.4619, p < 0.0001, R² = 0.7798, Ymax at X = 36.8%).
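The Ymax values reported in this section follow from the vertex of each fitted downward-opening quadratic, X = −b/(2a); a quick check against the coefficients quoted for weight gain and FE:

```python
def vertex_x(a, b):
    """X that maximizes a downward-opening quadratic Y = a*X^2 + b*X + c."""
    return -b / (2 * a)

# Coefficients reported in the text for weight gain and FE
print(round(vertex_x(-0.000595, 0.009619), 1))    # 8.1 (% FM replacement)
print(round(vertex_x(-0.00001308, 0.000559), 1))  # 21.4
```

Both values reproduce the reported optima, confirming that Ymax here is the vertex of the fitted curve rather than an observed treatment level.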
Biochemical Composition of the Whole Body of Red Sea Bream
The moisture, crude protein, crude lipid, and ash contents were in the ranges of 68.4-69.5%, 16.1-16.5%, 8.4-9.0%, and 4.5-4.8%, respectively (Table 8). None of these parameters was significantly (p > 0.8, p > 0.7, p > 0.7, and p > 0.9, respectively) altered by dietary TBM substitution for FM. The whole-body AA (Table 9) and FA (Table 10) profiles of red sea bream were likewise unaffected (p > 0.05 for all) by dietary TBM substitution for FM.
Economic Analysis of the Study
Diet price and ECR were highest for the Con diet (Table 11). Diet price and ECR decreased with increasing dietary FM replacement with TBM. The EPI of the TBM40 diet was significantly (p < 0.0001) higher than that of the TBM60, TBM80, and TBM100 diets, but not significantly (p > 0.05) different from that of the Con and TBM20 diets. Polynomial orthogonal contrasts showed significant linear (p = 0.0122 and p = 0.0085, respectively) and quadratic (p = 0.0001 for both) models between dietary replacement levels of TBM for FM versus ECR and EPI. In regression analysis, quadratic relationships were the most suitable models between dietary FM replacement with TBM and ECR (Y = 0.000026X² − 0.005403X + 1.5146, p < 0.0001, R² = 0.9014) and EPI (Y = −0.000014X² + 0.00057X + 0.7783, p < 0.0001, R² = 0.7778).
Discussion
The utilization of fishery by-products, including TBM, as a protein source in commercial fish feeds can be an economical and practical solution for mitigating environmental concerns and reducing feed cost [21-23]. There were no discernible differences in the weight gain and SGR of red sea bream fed the Con, TBM20, and TBM40 diets in this study, implying that replacing up to 40% of dietary FM with TBM had no undesirable impact on fish growth. This finding is in accordance with previous studies, in which substitution of FM with TBM up to 50% in feeds for olive flounder [23] and spotted rose snapper [24], and up to 75% for rockfish [25], did not compromise growth performance. Likewise, Uyan et al. [26] demonstrated that substituting up to 50% of FM with TMP did not adversely affect the growth of red sea bream. Previous studies also reported that dietary substitution of tuna liver meal and TBM for FM up to 30% and 75%, respectively, is feasible without negatively affecting the growth of Nile tilapia (Oreochromis niloticus) and abalone (Haliotis discus), respectively [40,41]. Furthermore, a 65:35 blend of tuna viscera and corn meal has been used as an FM alternative in feed for white shrimp (Litopenaeus vannamei), where up to 40% of FM could be substituted without any detrimental effect on growth [42].
All EAA, except for arginine, phenylalanine, and threonine, decreased with increasing replacement of FM by TBM. The arginine, lysine, and valine requirements for the growth of red sea bream have been reported to be 2.37% [36], 1.79% [37], and 0.90% [38] of the diet, respectively. The arginine (2.58-2.72%), lysine (3.46-3.68%), and valine (2.12-2.30%) levels in all experimental diets met these requirements. Unfortunately, the requirements for most of the other EAA remain unknown, making it difficult to clearly explain the effect of a deficiency of each EAA on the growth performance of red sea bream. Although requirements are known for only a few EAA (arginine, lysine, and valine), the decreased ∑EAA content of the experimental diets, especially the TBM60, TBM80, and TBM100 diets, may have partially contributed to the poorer growth of red sea bream.
Marine fish species typically require dietary n-3 HUFA, such as EPA and DHA, for desirable growth and survival [42]. The dietary requirements of EPA and DHA for juvenile red sea bream were estimated to be 1% (6.85% of total FA) and 0.5% (3.42% of total FA) of the diet, respectively, when the other was absent [43]. However, when EPA and DHA were included in feeds at a 1:1 ratio, the requirement for each could be reduced to 0.25% (1.71% of total FA) of the diet. Therefore, the experimental diets appeared to fulfill the dietary requirements of both EPA and DHA for red sea bream in this study.
In regression analysis, feed consumption of red sea bream decreased linearly with increasing dietary FM substitution with TBM, and fish fed the TBM60, TBM80, and TBM100 diets showed significantly lower feed consumption than fish fed the Con diet, implying that the lower feed consumption of the former led to their poorer growth. Likewise, low feed consumption attributed to poor palatability has been reported when large proportions of FM were replaced with various animal protein sources in fish feeds [44-46]. The increase in ash content of the experimental diets from 9.8% to 13.6% with increasing FM substitution with TBM could also explain the poorer growth and feed consumption of red sea bream fed the diets with higher FM substitution in this study. The administration of diets containing high ash content has adverse effects on fish performance, such as poor growth, high mortality, cataracts, and skeletal abnormalities [47-49]. A significant reduction in growth rate resulting from lower feed consumption was also observed in chinook salmon (Oncorhynchus tshawytscha) fed diets containing higher levels of calcium and phosphorus (dietary ash content of 19.3-19.4%) compared to fish fed diets without calcium and phosphorus supplementation (dietary ash content of 3.5-13.6%) in a 105-day feeding trial [50].
The Ymax values for the greatest FE, PER, and PR of red sea bream were estimated at 21.4, 32.1, and 36.8% of FM replacement by TBM in diets, whereas the Ymax values for the greatest weight gain and SGR were estimated at 8.1 and 8.0% of FM replacement by TBM, respectively. The results of the multiple comparisons of growth performance (weight gain and SGR) of red sea bream appeared more consistent with the Ymax values for the greatest FE, PER, and PR than with the Ymax values for weight gain and SGR in the regression analysis.
CF is commonly used as an indicator of the condition, fatness, and wellbeing of fish, and a heavier fish of a given length is generally considered to be in better condition [51]. VSI is an indicator of how lipids are being utilized and is positively correlated with dietary lipid levels [52]. HSI indirectly measures the glycogen and carbohydrate levels accumulated in the liver and is commonly used to evaluate the nutritional condition of fish [52,53]. In this study, dietary substitution of FM with TBM did not significantly alter the CF, VSI, or HSI of red sea bream, implying that fish health status was not affected by dietary FM replacement with TBM. This is consistent with a study [23] in which dietary FM substitution with TBM led to no remarkable changes in the CF, VSI, and HSI of olive flounder. In addition, dietary FM replacement with fermented TBM led to no remarkable changes in the HSI and CF of olive flounder [54].
Plasma parameters are critical indicators of fish health and physiological stress responses [55,56]. The absence of discernible differences in the plasma parameters of red sea bream in this study implied that dietary FM substitution with TBM had no adverse effect on fish health. Likewise, Uyan et al. [26] reported that the plasma parameters of red sea bream were not altered by dietary FM substitution with TMP. Similarly, dietary FM substitution with TBM did not bring about significant differences in the plasma parameters of spotted rose snapper [24] or olive flounder [23]. In contrast to this study, however, Oncul et al. [54] reported that dietary FM replacement with fermented TBM significantly altered the plasma AST and T-CHO of olive flounder.
Innate immunity constitutes a fundamental defense system in fish and is commonly used to evaluate the effects of dietary treatments on fish health and immune function [57,58]. Lysozyme plays a crucial role in protecting against infectious diseases by breaking down the glycosidic bonds in the peptidoglycan of bacterial cell walls, whether gram-positive or gram-negative [58,59]. SOD is a crucial antioxidant enzyme that protects cells against oxidative damage caused by reactive oxygen species [60,61]. Dietary substitution of FM with TBM led to no discernible changes in the lysozyme and SOD activities of fish in this study, implying that dietary FM replacement with TBM had no negative impact on the serum lysozyme activity and SOD of red sea bream. Similarly, previous studies have shown that lysozyme activity and SOD in olive flounder were not altered by dietary FM substitution with either unfermented or fermented TBM [23,54].
The absence of significant differences in the proximate composition and the AA and FA profiles of red sea bream in this study indicated that dietary FM replacement with TBM had no negative impact on the biochemical composition of fish. These findings are consistent with previous studies, in which dietary FM substitution with various animal protein sources did not alter the chemical composition [10,62-64] or AA profiles [25,65,66] of fish. The AA profiles of body proteins appear to be unaffected by diet because body proteins are synthesized according to genetic coding from DNA [67]. Unlike in this study, however, the FA profiles of fish have been changed by FM replacement with animal protein sources in fish feeds [62,68].
The price of the experimental diets and the ECR decreased with increasing FM substitution with TBM in this study. However, the greatest EPI was achieved with the TBM40 diet. EPI is a crucial parameter for assessing economic profitability, as it considers growth performance, feed consumption, feed cost, and fish selling price [33,69]. The remarkably lower EPI of the TBM60, TBM80, and TBM100 diets compared to the TBM40 diet might be attributed to the poorer growth of fish. Likewise, previous studies have shown that the greatest EPI can be achieved by replacing FM with cost-effective protein sources at an appropriate level in fish feeds [33,69-71]. Therefore, 40% FM substitution with TBM in red sea bream feed is anticipated to yield the greatest economic return for fish farmers. The feasibility of 40% FM substitution with TBM in commercial red sea bream feed needs to be tested in a long-term feeding trial.
Conclusions
Up to 40% of FM could be replaced with TBM in the 55% FM-based diet without adverse effects on the growth and feed availability of red sea bream. Furthermore, the greatest EPI was achieved with the TBM40 diet.
Table 1 .
Ingredients and chemical composition of the experimental diets (%, dry matter basis).
Table 2 .
Amino acid profiles (% of the diet) of the experimental diets.
Table 3 .
Fatty acid profiles (% of total fatty acids) of the experimental diets.
Table 4 .
Survival (%), weight gain (g/fish), and specific growth rate (SGR, %/day) of red sea bream fed the experimental diets for 8 weeks. Values (means of triplicate ± SE) in the same column sharing the same superscript letter are not significantly different (p > 0.05). ¹ SGR (%/day) = (Ln final weight of fish − Ln initial weight of fish) × 100/days of feeding trial.
Table 5 .
Relationship between dietary substitution levels of tuna by-product for fish meal versus growth performance (weight gain and SGR), feed availability (feed consumption, FE, PER, and PR), and economic parameters (ECR and EPI).
Table 7 .
Blood chemistry parameters of red sea bream fed the experimental diets for 8 weeks.
Table 8 .
Whole body proximate composition (% of wet weight) of red sea bream fed the experimental diets for 8 weeks.
Table 9 .
Amino acid profiles (% of wet weight) of red sea bream fed the experimental diets for 8 weeks.
"year": 2024,
"sha1": "02be7e0bb60525d5671278c395741eddc15cb057",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/14/5/688/pdf?version=1708606578",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6b6e6726c1610e98a7699fe69d34e0679be5038",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
259763112 | pes2o/s2orc | v3-fos-license | “We Reach to People Through Different Means”: Factors That Influence HIV Counseling and Testing Among Religious People in Lilongwe, Malawi
A descriptive qualitative study among 12 religious leaders and 10 members from seven different religions using in-depth interviews was conducted to explore the factors that influence HIV testing among religious people in the area of Traditional Authority Chitukula, Lilongwe district. Participants and sites were purposively selected and all interviews were recorded using a digital recorder, then transcribed and translated into English. Data were analyzed using a thematic approach. The study showed that prayer has a prominent role in the management of HIV and AIDS. The barriers to HIV testing include a belief in faith healing, the rules of a specific church, and a perception of a low risk of HIV infection among religious people. Interventions that could promote HIV testing include the implementation of HIV self-testing, mobile and or door-to-door HIV services, church-based HIV testing services, and facility-based services that are integrated with other services to preserve privacy. Religious platforms can be leveraged in the delivery of HIV testing services. The promotion of religious-based prevention interventions needs to consider the foundations and beliefs of each religion and be able to contextualize the methods which can be achieved by continuous dialogue and support from religious leaders.
Introduction
Globally, 37.7 million people were living with HIV and AIDS in 2020, and 66% of these resided in sub-Saharan Africa (UNAIDS, 2021). Of the people that were HIV infected, 84% knew their status, which was below the global target of 90% at the time (UNAIDS, 2021). The prevalence of HIV in Malawi was 10.6%, while in Lilongwe, where the study was conducted, the prevalence was 11.5% (Ministry of Health, Malawi, 2018). Malawi has registered progress in the number of people that know their HIV status, which stands at 90.9% among those that are HIV infected (Ministry of Health, Malawi, 2022). Although there has been an increase in HIV testing rates, they fall short of the current target of having 95% of people that are HIV infected knowing their status. Malawi adopted differentiated ways of implementing HIV testing to close the gaps in the targets (Ministry of Health, Malawi, 2014). These strategies include active index testing and assisted HIV self-testing using community-based cadres (Ministry of Health, Malawi, 2014). In the 2020 to 2025 HIV testing policy, Malawi aims at rolling out differentiated HIV services among men, children, female sex workers and their clients, men who have sex with men, refugees, migrant laborers, prisoners, students of higher education institutions and colleges, and people in uniform (Ministry of Health, Malawi, 2020). Faith-based organizations (FBOs) are recognized as partners in the provision of HIV services (Berkley-Patton, Moore, et al., 2013; Berkley-Patton et al., 2019; Derose et al., 2011; Jobson et al., 2019; Nunn et al., 2013; Pichon & Powell, 2015). FBOs are highly accessible, cover a wider population, and can support fragile public health systems (Ochillo et al., 2017; Rakotoniana et al., 2014). FBOs conduct HIV-related activities (Derose et al., 2011; Rakotoniana et al., 2014; Stewart et al., 2016) and also care for HIV-infected people, promote HIV awareness, and encourage HIV testing (Derose et al., 2011). FBOs embed HIV activities within their health
activities or run parallel programs that overlap with other services (Palar et al., 2013). Furthermore, FBOs have jurisdiction over issues of personal behavior, morality, family life, and beliefs through their direct contact with people at various milestones in life (Lesolang, 2014), which remains critical to HIV services. Moreover, FBOs are advantageously positioned to tackle the HIV and AIDS pandemic because they have permanent structures present at the grassroots level in most communities. Agate et al. (2005) argued that the provision of HIV screening services within a church would be effective since most congregants attend church weekly (Agate et al., 2005). This was later confirmed in studies conducted in South Africa and the USA, where older men and people, respectively, were reached with HIV testing through a church platform (Jobson et al., 2019; Kuofie et al., 2019).
The breadth and scope of HIV services offered by FBOs vary, with some organizations offering HIV testing on their premises while others find it more feasible to work with testing agencies as opposed to being a certified testing site (Pichon & Powell, 2015). Support from faith leaders optimizes the uptake of HIV services (Lindgren et al., 2013; Mendel et al., 2015; Rankin et al., 2016; Ransome et al., 2018). After all, they are trusted and remain culturally appropriate venues because they have health services as part of their goals in society (Stewart et al., 2018). In Malawi, 77% of the population are Christians, 15% are Muslims, and the remaining 8% practice traditional African religions, one of which is the Aaron group. The other African religions practiced include Bimbi, Napusenapuse, Chipangano Church, Church Cha Makolo, and African Abraham. However, there are no statistics that indicate the proportion of people that belong to the Aaron religion because of its discrete nature (Kuofie et al., 2019; National Statistical Office, 2015; Trinitapoli, 2011). The dominance of religious affiliation in Malawi makes FBOs key stakeholders in the HIV and AIDS response (Kuofie et al., 2019; National Statistical Office, 2015; Trinitapoli, 2011). Thus, religious groupings are key partners in the fight against HIV and AIDS because they provide HIV services amongst other health programs. The Malawi Government is keen to work with various faith-based organizations to accelerate the uptake of HIV prevention strategies by using faith-based platforms as delivery points, handling issues on faith-based healing, supporting adherence to ART among congregants, and training faith leaders in emerging HIV and AIDS issues (Ministry of Health, Malawi, 2014). The religious fraternity has been key in the response to HIV and AIDS in Malawi, operating under the umbrella of the Malawi Interfaith AIDS Association (Ministry of Health, Malawi, 2014). Although FBOs are involved in HIV testing activities,
their major constraint is financing to manage and run the programs, since they are heavily Government- or donor-dependent (Ministry of Health, Malawi, 2014). Additionally, religious leaders in Malawi are regarded as community leaders or opinion shapers who influence the uptake of services (Rankin et al., 2016). Arguably, a religious leader's knowledge of HIV services influences church members' behavior (Lindgren et al., 2013). Notably, FBOs remain key in the provision of HIV testing services and could be leveraged to provide more support.
With the adoption of the UNAIDS 95-95-95 goals, there is a need to expand the avenues for HIV testing by exploring the roles of religious institutions (UNAIDS, 2014). The suboptimal HIV testing rate impedes the attainment of the UNAIDS 95-95-95 goal because the last two 95s depend on the first 95, which focuses on knowing one's HIV serostatus and can only be achieved through HIV testing (UNAIDS, 2014). Countries are encouraged to scale up testing programs to achieve the target of diagnosing at least 95% of people living with HIV by 2030 (Phillips et al., 2019). Although Malawi had made substantial progress toward the UNAIDS 90-90-90 goals, only 76.8% of the population were aware of their HIV status, which was below the target set at the time we conceptualized the study (Ministry of Health, Malawi, 2018). During this time, it was noted that 39.6% of men and 25.7% of women were not aware of their HIV status. All this was happening at a time when the National HIV and AIDS Prevention Strategy for Malawi (NAC, 2015-2020) advocated for voluntary HTC and the creation of demand for HTC. Recognizing this gap in reaching the HIV testing target, this study explored the factors that influence HIV testing among religious people in the area of Traditional Authority (T/A) Chitukula, Lilongwe district. Specifically, this study asked the following question: What are the factors that influence HIV testing among religious people in T/A Chitukula, Lilongwe district? This information will reinforce FBOs' support for HIV and AIDS services and will inform the local authorities in Lilongwe and beyond on the roles and ways of strengthening religious institutions as platforms for offering HIV services.
Study Design
A descriptive qualitative cross-sectional study was conducted to gather information on the factors that influence HIV testing among religious people in the central region of Malawi. The design enhanced the understanding of the experiences of religious people toward HIV testing, the unique manners in which they encounter HIV testing services, and the variation in their HIV testing experiences and perceptions (Jobin & Turale, 2019; Kim et al., 2017). This method was deemed appropriate because it allowed the gathering of information through the immersion of the researcher in the world in which these phenomena take place (Kelly et al., 2019; Kim et al., 2017).
Study Setting
The study site was in Lilongwe district in the area of T/A Chitukula. The city of Lilongwe is in the Central Region and is the capital of Malawi. Lilongwe district has 18 T/As, and T/A Chitukula is located in the northeast of Lilongwe with both urban and rural sections. The study was conducted in T/A Chitukula because it offers a variety of urban and rural perspectives, including multiple religious groupings. There are three prominent religious groups in T/A Chitukula: Christianity, Islam, and traditional religion. Among Christian denominations, there are Catholics and Protestants. There are 68 Christian denominations and two Islamic groups, Quadria and Sukuti. Lilongwe district also has a traditional African religious grouping known as Aaron. The study was conducted among the following religious groups: Roman Catholic Church, Church of Central African Presbyterian (CCAP), Assemblies of God, Apostolic, Jehovah's Witness, Islam, and Aaron (Gule wamkulu). The Roman Catholic Church and CCAP are among the denominations that offer 43% of health services in Malawi, including HIV and AIDS services.
Sample Size
Participants were drawn purposively (Palinkas et al., 2015) from the prominent religions of Roman Catholic, CCAP, Assemblies of God, Jehovah's Witness, Apostolic Faith Church, Islam, and the traditional Aaron religion. A total of 22 religious people were recruited, comprising 12 leaders and 10 members. An initial sample of 25 participants was deemed enough to yield rich data while soliciting views from both leaders and members (Guest et al., 2006). In the end, only 22 interviews were conducted because the remaining three participants were not available at the agreed times.
Roman Catholic, CCAP, Assemblies of God, and Islam were chosen because of their large followings. The Apostolic Faith and Jehovah's Witness were chosen due to the perceived health beliefs of the two religions, as they restrict their members from accessing health services. The Aaron group was chosen because of its widespread practice in T/A Chitukula and because its stand on HIV testing was not known. However, there are no statistics that indicate the proportion of people who belong to this religion.
Sampling
Participants were sampled following a purposive approach and were selected from the six denominations to reflect variation in views between men and women (Palinkas et al., 2015). The two leaders from Aaron were selected using snowball sampling because it is a closed religion (Browne, 2005). In this study, a closed religion was defined as one that is practiced in secret, with services being conducted at the graveyard. The clerk to the T/A helped to identify the first Aaron leader, known as ''Wakunjira'' (a name given to Aaron religious leaders), because this religion is closely linked to Chewa culture. The places of worship were the entry point for recruiting participants. Religious leaders were first consulted and briefed about the study during worship days. Permission was then given and, thereafter, participants were selected. In addition, a purposive sample of 10 congregants was recruited for IDIs: 2 from Roman Catholic, 2 from CCAP, 2 from Islam, 2 from Assemblies of God, 1 from Jehovah's Witness, and 1 from Apostolic. This was done to ensure that the views of congregants were taken on board.
Data Collection
All current leaders, whether male or female, and followers of the above-mentioned religions were eligible for the study. The leaders were interviewed as representatives of their religions to explore their views, beliefs, and experiences about factors that may enhance or hinder HIV testing. The data collection tool was piloted with two people, one from the Living Waters church and the other from the Anglican church, and corrections were made before the commencement of the actual data collection. Findings from the pilot study were not used in the research but only guided the refinement of the data collection tool. The data collection tool was also reviewed and verified for its content by the research supervisor ALNM, who is a qualitative research expert. Data were collected from June to August 2017 using a pretested semi-structured in-depth interview guide developed based on the study objectives. The guide was translated into Chichewa by a bilingual individual fluent in both Chichewa and English and back-translated into English by another person. The Principal Investigator checked whether the documents gave the intended information.
All the interviews were face-to-face and conducted by the Principal Investigator and two well-trained research assistants. The research assistants were trained on the informed consent form, research ethics, logistics, the in-depth interview guide (probing techniques and taking field notes), and verbatim transcription of data. They assisted in identifying participants and in arranging the time and place for the interviews. The interview guide contained open-ended questions and probes which were used to facilitate the IDIs.
The Principal Investigator is a Public Health Specialist who is well trained in research methods, including qualitative and quantitative approaches and data analysis, as part of her Master of Public Health training courses. She introduced herself to the religious leaders and congregants as a researcher studying toward a Master of Public Health degree and explained the purpose of the study. She assured the participants that refusal to participate in the study would not have any negative consequences on their standing in their church, such as being prohibited from practicing their religion or ex-communication from their respective religions. One of the research assistants holds a Bachelor of Public Health degree and the other holds a Bachelor's degree in Education Sciences. The researchers had religious backgrounds similar to those of some of the study participants and completely different from others in the study area, so they remained neutral by avoiding biased questions during data collection and by reflecting only on what emerged from the data, without adding their own views, during data processing and analysis. Broadly, some of the questions which guided the interviews were as follows:

1. Explain to me in detail the role of prayer in the management of HIV and AIDS.
2. What do you think are the opportunities and challenges of addressing HIV and AIDS in a faith-based context?
3. What do you think are the religious factors that encourage people to go for HIV testing?
4. What do you think are the religious factors that discourage people from going for HIV testing?
Information was collected on personal factors that stimulate the faith community to seek HIV testing, religious beliefs associated with HIV testing, knowledge of HIV and AIDS, and how faith leaders deliver HIV and AIDS services to congregants. All data were digitally recorded, and the researchers compiled field notes as well. The initial contact with potential study participants was at their place of worship during designated service days. However, participants were allowed to choose a place and time convenient to them for the interview. Most participants chose their homes as the place for the interviews. The interviews were conducted in a private room where only the participant and the interviewer were present. Data were kept on a secure computer with a password known only to the research team, and the field notes were kept in a lockable cabinet.
To maximize the integrity of our findings, the researchers reflected on their perspectives regarding religion and HIV and AIDS to avoid influencing both the data collection and the interpretation (Jennings, 2012). All key findings were summarized at the end of each interview as a form of member checking to enhance the credibility of the findings; dependability was achieved through multiple discussions amongst the researchers, while the description of the context where the study was done maximized the transferability of the findings to similar settings (Leung, 2015). Interviews ranged from 30 to 40 minutes.
Data Analysis
All data were transcribed verbatim by the research team. The transcripts were then translated into English by two transcribers. Each transcript was audited by the PI against the original audiotape. There was a systematic quality check of the transcripts whereby every 7th and 20th minute of each transcript was checked against the audio. The auditing served as a quality control measure for the transcription and translation. Data were coded both inductively, from the data, and deductively, from the objectives and the literature, and analyzed using thematic analysis (Braun & Clarke, 2006). The data were analyzed manually; transcripts were read and reread numerous times by the researchers. The use of multidisciplinary team members improved the reliability of the codes, as they were discussed before being used and any areas of discrepancy were discussed for resolution (Burla et al., 2008). The thematic analysis approach was applied as follows: we familiarized ourselves with the data by repeatedly reading the transcripts and listening to the recordings, and this was followed by the generation of initial codes. The analysis team members compared their understandings of the contents in terms of major themes, points of agreement, and areas of disagreement. Codes were then sorted into potential themes by considering how different codes may combine to form a theme or fit under an overarching theme. We reviewed and refined the themes; as a result, some themes were collapsed into each other because there was not enough data to support them, others were broken down into separate themes, and new themes were created for codes that did not belong to any existing theme. The themes were defined, refined, and examined for differences and similarities. The refined themes were verified against the audio data. We achieved inter-coder reliability by comparing codes iteratively amongst researchers and discussing areas of discrepancy to consensus (Burla et al., 2008).
Demographic Characteristics of Religious Leaders
Of the 12 leaders who were interviewed, only 1 was female. Their ages ranged from 23 to 84 years, and two had no education. Only one was Catholic and was not married (Table 1).
Demographic Characteristics of Followers
Of the 10 followers who were interviewed, 4 were not married, 3 were males, their ages ranged from 18 to 67 years, and only 2 were not educated (Table 2).
The results from the interviews are presented in three main themes: (a) role of religion in HIV testing and counseling, (b) religious-related barriers to HIV testing and counseling, and (c) interventions for the promotion of HIV testing among religious people.

Role of Religion in HIV Testing and Counseling. Most participants stated that the role of religion in HIV counseling and testing is to encourage their followers to take the test and to teach people its benefits. Knowledge of HIV testing helps the followers to make informed decisions about having a test. Religious leaders in the study viewed their institutions as platforms through which information may be passed to congregants.
The role of religion is to encourage believers to go for HIV testing. They should not deceive themselves that because they are believers, church elders, or pastors, they cannot contract HIV. It is everyone's responsibility to get tested, know their status, and live by the advice given at the hospital. (Religion Group 5, male leader, age 43)

Religious leaders admitted that they are putting minimal effort into encouraging their members to take an HIV test during their services, unlike in the past when the epidemic was new. The assumption that congregants are now used to the messaging on HIV testing influenced the level of emphasis placed by religious leaders.
As a church, I should say that we don't talk much about HIV and AIDS the way we used to do in the past. What I noted is that when a problem becomes chronic we tend to get used to it and it no longer becomes an issue. (Religion Group 4, Male leader, age 23)

In the past, the government would prompt religious institutions to share messages on HIV testing, which is not the case at the moment. This has resulted in religious institutions exerting minimal effort on HIV testing matters within their institutions.
…to say the truth, we have been quiet for a long time without telling people about HIV and AIDS, and this is because there has not been any new information on the disease… To say the truth, if one is reminded of an issue it becomes easier to take steps. (Religion Group 2, Male leader, age 50)

Notably, some sectors regard HIV and AIDS issues as personal matters while others believe in prayer for everything, and in both instances HIV testing would not be promoted. It was apparent in the discussions that the most acceptable means of HIV prevention for religious people are abstinence for unmarried people and faithfulness for married people. This is because, according to most participants, sex outside of marriage is considered a significant sin regardless of condom use.
HIV can be prevented in several ways but we, as religious people, teach our followers that they can prevent HIV by being faithful to their partners and, for those not married, by abstinence. Let me emphasize here that the church will not allow in any way the use of condoms! (Religion Group 4, Male leader, age 23)

In contrast, some religions allow the use of condoms among married couples as a form of family planning or in cases where there are infidelity issues. The following extracts indicate this finding.
My religion allows the use of condoms because it's one of the family planning methods. In Islam, family planning is there.

Although most respondents expressed that the use of condoms as a prevention strategy is not allowed by their religion, there were variations between religions, and even within the same religion between religious leaders and followers. Leaders adhere to their religion's stance that condoms are not allowed, while some members viewed it as a personal responsibility (Table 3).
Role of Prayer in HIV and AIDS Management. Religious people believe that prayer can cure any disease, including AIDS, and would prioritize prayer over everything. In such cases, congregants will be encouraged to pray over every ailment and may be barred from seeking health services.
We (the religious group) don't advocate for HIV testing, as such, we don't broadcast anything about that. We believe in Jesus Christ as a healer. Read Mathews 6 verse 8 which says: Jesus met Peter's father-in-law who was suffering from malaria and he just touched his head and he was healed. (Religion Group 5, Male leader number 2, age 42)

Curative benefits of prayer were also expressed in the ability to reduce the effects of the virus and boost immunity in return, as expressed below:

On HIV and AIDS, prayer plays a very unique role because a person is advised to repent every sin which he has committed.

Participants reiterated that prayer yields emotional healing because it brings hope to the hopeless and facilitates the act of caring for one who is sick.
Another role of prayer is that it gives hope to someone who has HIV so that he stops being worried about his status.
(Religion Group 3, Female Leader, age 48)

Alternatively, some religious people did not believe in faith healing and insisted that God created drugs for every illness.
In our religion, we don't believe in faith healing. Because our teaching says that our God cannot create a disease without its drug, because the people that develop drugs do so with the gift of wisdom that comes from God. (Religion Group 2, Male leader number 1, age 26)
Religious-Related Barriers to HIV Testing and Counseling
Belief in Faith Healing. Faith healing deters religious people from seeking HIV testing because they believe that God heals them, hence rendering a test irrelevant. Furthermore, the belief in faith healing delays health-seeking when one is ill, because religious people will first consult a faith healer and only approach a health facility if the initial consultation fails.
You know what? Just near that building (pointing at a building nearby), my sister and her husband stayed there. The two were strong Pentecostal believers in faith healing, such that when my brother-in-law fell sick, their pastor was just praying for him, but when the situation got worse we forced my sister to take her husband to the hospital. (Religion Group 1, Female member, age 41)

Some churches prohibit their followers from seeking medical assistance from a hospital, including HIV testing services. Members of such religious groups risk being excommunicated if they attend health services because that would entail a lack of faith in their beliefs.
When a person is sick, first of all, we pray for him, and if he is still sick we request permission from our leaders to go to the hospital. They do allow us. My priests may ex-communicate me from church if I am found taking a test at the hospital without their permission. (Religion Group 7, male follower, age 23)

Perceptions of Risk. Other religious people have a low HIV-risk perception of themselves and attribute it to the availability of spiritual men or prophets in their religion who reveal what someone did in secret. Therefore, they fear adultery and premarital sex because people will know about it.
…in our church, some people are gifted with supernatural powers and can know what one does in private, such that when a member commits adultery, they can know what was done. Their supernatural powers prevent the youths from committing adultery because they will know about it and will report it to the church. (Religion Group 7, Male follower, age 23)
Interventions for Promotion of HIV Testing Among Religious People
There is no specific HIV testing strategy preferred by all religious people. Participants highlighted the need to have multiple ways of reaching members with HIV testing services. These methods include self-testing, mobile or door-to-door testing, having HIV testing centers within the religious facility, and limiting testing to hospital facilities only. The leaders stated that they reach people using varying methods and could not state one method, but encouraged the promotion of several interventions or strategies.
We reach people through different means; as religious people we cannot force people to follow a specific way. There are many HIV testing platforms and we should let our members use any one of them while respecting people's privacy. (Religion Group 4, Male Leader, age 23)

However, the main concern was privacy during the testing process. To safeguard privacy, some religious people stated that self-testing would promote HIV testing and was preferred because a hospital-based test does not offer privacy, such that when one has tested HIV positive, other people know because of the facial expressions displayed after the test.
So a good way of testing is an oral test whereby they explain to you how to do it at the hospital and you are given a choice whether to do it at the hospital or carry it home and do it yourself. It takes about 20 minutes for you to see the results. (Religion Group 5, Female Member number 1, age 43)

Others prefer mobile or door-to-door HIV testing because it is convenient, especially in rural settings where people reside far from health facilities and cannot afford transport.
Banda Kamanga et al.
There are remote areas where transportation costs are expensive; mobile clinics will be beneficial in these areas for people to take tests. (Religion Group 2, Male leader number 2, age 50)

Also, other religious people suggested having HIV testing sites within the places of worship. They argued that if HIV testing is conducted at places of worship, their leaders will act as role models; as a result, many people will be tested.
The best way to get more people tested is to engage our leaders and conduct the testing within the religious premises. I remember this worked very well in Mangochi where a certain organization came in the company of our religious leader to run HIV testing services and it resulted in more people taking HIV testing. (Religion Group 2, Male leader number 1, age 26)

Participants recommended that the optimal days to reach most people would be the designated days when services are conducted. It was also stated that worship-day gatherings were an opportunity to reach most people in attendance. Furthermore, religious leaders are influential to their followers, thereby optimizing the uptake of HIV testing when they are engaged in the services; thus, they are an important entry point for testing messages and services. Some religions also have structures for the management of HIV and AIDS services.
In our religion, we have structures at many levels. We use these structures to teach people about HIV and AIDS and other health issues in general. (Religion Group 4, Male leader, age 23)

Others preferred hospital-based HIV testing but argued that such services should be integrated with other services, rather than being run vertically, to enhance privacy.
It is important to do multiple tests at once. For instance, when they are conducting, let's say, a malaria test, they should also test for HIV and give results at the same time and in the same room. HIV testing services should not be separated from other services. People die because they don't want to get ARVs from the Lighthouse (publicly) and at times they hire someone to get the drugs for them. (Religion Group 5, Female member number 1, age 43)
Discussion
Our study shows that the religious sector has a role in the implementation of HIV and AIDS services, including creating awareness and promoting prevention. Prayer has a central place among religious people, who believe that it can cure, boost immunity, and offer emotional healing among HIV-infected people. The religion-related barriers to HIV testing include the belief in faith healing, which is embedded within the rules of specific institutions, and a perception of low risk of HIV infection among religious people. Interventions for promoting HIV testing in the religious sector include HIV self-testing, mobile and/or door-to-door HIV services, church-based testing services, and facility-based services that are integrated with other services to preserve privacy.
One of our findings is that the role of the church is to create awareness, which cements what was asserted earlier, that religious platforms can reach more people with HIV testing messages and services (MacCarthy et al., 2015; Nunn et al., 2013). Religious settings are popular and, if actively involved, will reach the masses with HIV and AIDS services (Ochillo et al., 2017). Additionally, religious leaders have positional power to influence their congregants' uptake of interventions (Lindgren et al., 2013). Similarly, as reiterated in our study, religious leaders were comfortable with raising awareness and encouraging congregants on HIV testing because such activities are congruent with their mission statements (Derose et al., 2011). It has been argued that involvement in and dissemination of HIV-related activities requires a passionate leader, with support from lay workers as necessary (Derose et al., 2011). Thus, religious leaders can influence the uptake of HIV testing among their congregants.
The role of religious leaders in HIV and AIDS could be strengthened by continued training (Rakotoniana et al., 2014; Stewart et al., 2018) that is tailored according to their needs (Stewart et al., 2018). Continuous and sustained capacity building will translate into continued involvement of religious leaders in HIV and AIDS matters (Anugwom & Anugwom, 2018), which was also alluded to in our study. Conversely, in other settings, sharing HIV services is challenging for religious leaders, hence the need for support and strengthening of skills (Stewart et al., 2016). Religious leaders ought to be trained in the specific roles and responsibilities of HIV testing services.
Religious institutions are platforms where love is advocated for and expressed to HIV-infected people, thus offering emotional healing, as was reiterated in our study (Bluthenthal et al., 2012; Derose et al., 2011; Stewart et al., 2018). It has been argued that the incorporation of health issues in spiritual care enhances coping with the condition among those infected (Stewart et al., 2018) and improves spiritual wellbeing, which is associated with attendance at HIV services (Yates et al., 2018). Contrary to our findings, in other areas HIV and AIDS are often viewed through a moral lens, which inevitably leads to stigma and discrimination (Anugwom & Anugwom, 2018). This is because stigma, or fear of it, limits the congregants' and church leaders' involvement in HIV and AIDS services within a congregation (Mendel et al., 2015). In other cases, stigma arises because of the belief that HIV infection is a punishment from God (Zou et al., 2009). Religious institutions promote love as part of their services, an attribute that can be extended to HIV-infected congregants.
Teaching congregants about abstinence, as expressed in our study, is consistent with other studies that assert that such teaching remains congruent with religious missions (Derose et al., 2011), and it has been argued that religious groups treat aspects like abstinence as synonymous with good habits and morals (Lindgren et al., 2013). Although churches use external support for HIV testing services, they have retained control over teaching congregants about abstinence and condom use, which illustrates their motive to ensure that such messages remain aligned with their doctrines (Derose et al., 2011). In an earlier study, despite Pentecostal youths being exposed to both faith and secular orientations toward HIV prevention strategies, they upheld the faith-based one, which underscores the importance of their belief system in interacting with HIV services (Mpofu et al., 2014) and emphasizes the importance of leveraging religious platforms for the provision of HIV services.
Our study advocates for the establishment of HIV testing centers within church platforms. Implementation of HIV services within a church is feasible and effective (Berkley-Patton et al., 2010; Berkley-Patton et al., 2012; Berkley-Patton, Moore, et al., 2013; Derose et al., 2011), reaches more men (Jobson et al., 2019), and could use existing platforms within the church to optimize the delivery of HIV services (Berkley-Patton, Thompson, et al., 2013) by church leaders (Berkley-Patton et al., 2016). We argue that closing the remaining gap in HIV testing will require using religious platforms to accelerate the closure. It has been asserted that church-based HIV testing could avert stigma and discrimination and remains acceptable since it upholds some cultural context because of the trust that exists (Stewart et al., 2016). Implementation of church-based HIV testing will require training and strengthening the skills of the providers of the services if they are laypeople (Stewart et al., 2016). The services could latch on to the already existing health ministries within religious groups (Stewart et al., 2018). The church is an available platform that can ably implement HIV testing services.
The silence on the use of condoms in our study is common among religious sectors because condom use is against most of their beliefs, as they fear that unmarried congregants would be partaking in premarital sex (Anugwom & Anugwom, 2018; Barmania & Aljunid, 2016; Derose et al., 2011; Trinitapoli, 2011). The silence could also stem from the uneasiness that comes with talking about sexual issues (Lindgren et al., 2013), and from the dilemma faced by religious leaders in wanting to uphold their religious values against the use of condoms while being cognizant of secular demands and the divergent views held by followers (Ochillo et al., 2017). Our findings on the limited promotion of condom use concur with what was raised in earlier studies in Malawi, where seemingly most religious groups discourage premarital sex (Muula, 2010), with religious leaders strongly stressing the importance of abstinence and fidelity in marriage for both men and women (Rankin et al., 2016). Religious leaders fear that an acceptance of condoms would promote infidelity and undermine the message of abstinence, which is core to religious beliefs (Muula, 2010; Ochillo et al., 2017). An earlier study stated that some religious leaders would encourage condom use to their members when privately consulted (Ochillo et al., 2017), while others recommended condoms when one is at risk of contracting HIV (Rakotoniana et al., 2014). Despite youths being aware of the non-promotion of condom use in churches, some still use them in the vein of practicing safer sex (Ochillo et al., 2017). Youths who are more aligned to a religious group are also less likely to use condoms in a sexual encounter, which heightens their level of risk (Skovdal et al., 2011). We contend that there is a need to refine the messaging around condom use in the religious sector to achieve an understanding that highlights the risk element of each decision youths make. Furthermore, as much as religious leaders teach their followers the religious doctrines, the followers seem to make independent decisions that create a level of risk that is not targeted with interventions. The study reveals that members supported the use of condoms, contrary to the stand of their religions, which is consistent with an earlier study conducted in Malawi among FBOs (National Statistical Office, 2019; Rankin et al., 2008). These similar findings underscore the importance of continuous engagement with faith and religious leaders for an effective plan for the prevention of HIV. The use of condoms may be challenging for FBOs to implement because it contradicts their beliefs and virtues.
Previous studies have shown that faith healing plays a role in HIV care in Malawi and offers a life free from worry about HIV issues, which was reiterated in our study (Manglos & Trinitapoli, 2011). The difference in our study is that, in some religions, the belief in faith healing bars followers from seeking any medical services, including an HIV test. The belief that prayer will cure HIV, as stated in our study, extends what was reported earlier in Malawi, where congregants were encouraged to believe that the closer they are to God, the less likely they are to contract the virus (Lindgren et al., 2013). In some instances, the belief in faith healing is a critical barrier to compliance with HIV-prevention strategies and the uptake of antiretrovirals.
The low perception of risk reported among religious people in our study is congruent with earlier studies (Stewart et al., 2019; Williams et al., 2011). Furthermore, our finding that HIV testing is influenced by low-risk perception, faith healing, and religious doctrines is congruent with a previous study which found that HIV testing is affected by risk perception, illness, level of stigma and discrimination, and anonymity of testing services (Kaai et al., 2012). The low-risk perception among religious people needs to change, with tailored messages emphasizing that HIV affects anyone, including religious people. Contrary to our findings on low-risk perception, other pastors believe that their youth congregants are engaged in risky sexual behaviors and could not be at a lower risk of contracting HIV (Stewart et al., 2019).
Door-to-door (Croxford et al., 2020; Mulubwa et al., 2019) and HIV self-testing (Hatzold et al., 2020; Hlongwa et al., 2020) approaches, reiterated in our study, have been advocated in earlier studies, and religious people could notably benefit from them as well. Although door-to-door HIV testing was cost-effective (Mangenah et al., 2020), its costs could be further reduced by using the platforms and volunteers within a church, offsetting the costs of running a door-to-door HIV testing approach.
Limitations of the Study
The nature of our study does not allow for generalization; therefore, the views may apply only to those selected. However, the results provide insights on the perceptions of religious leaders and their congregants regarding HIV testing. The views collected were perceptions and may not be honest responses, as participants may have stated what is socially acceptable; however, participants were encouraged to talk openly and assured that their identity would remain anonymous, even in the reports. Future studies should consider using mixed-methods approaches to maximize the results gathered from various religious institutions among larger samples. Additionally, there is a need to expand to other districts or Traditional Authorities to maximize the scope of the results.
Strengths of the Study
The strength of this study lies in its sampled participants, who were drawn from the various religions common in the area, providing a broad scope of responses. The variation in responses across religious beliefs gives policymakers a range of perspectives to consider as they interact with religious circles.
Conclusion
Efforts to close the gaps in HIV testing and reach the vital few people remaining will require embracing all variations and implementing strategies congruent with the beliefs that influence uptake. Religious platforms are an avenue for implementing HIV testing and for education on HIV and AIDS, and they need to be used more to attain the first 90 of the UNAIDS goals. The promotion of religious-based prevention interventions needs to consider the foundations and beliefs of each religion and to contextualize the methods, which can be achieved by continuous dialogue and support from religious leaders. There is a need for more research on the implementation strategies that may be used to promote the uptake of HIV testing within religious settings.
Table 1. Demographic Characteristics of Leaders.

Table 2. Demographic Characteristics of Followers.

Table 3. Leaders' and Members' Perceptions of Condom Use.

"… against God, and after that, we pray for him/her till the virus becomes weak and we continue with prayer till the virus is eliminated." (Religion Group 7, male leader, age 45)

"Prayer just boosts the immune system but does not cure the virus; it does not give adverse effects." (Religion Group 4, male follower, age 20)

"HIV-positive people are there in our religion, although they are not many, because our full-time preaching is about abstinence and faithfulness, not using condoms." (Male leader, age 45)

"What I said earlier on not using a condom is the stand of the church, but it is the person's choice to choose what can help him/her." (Male member, age 23)
On the spectrum of QCD-like theories and the conformal window
We report on the spectrum of the SU(3) gauge theory with twelve flavours in the fundamental representation of the gauge group. We isolate distinctive features of the hadronic phase, the one proper to QCD at zero temperature, and the so-called conformal phase. The latter should emerge at sufficiently large Nf and before the loss of asymptotic freedom. In particular, we analyse available lattice data for the spectrum of Nf=12 and include a comparison with results for Nf=16; the latter theory, predicted by the perturbative beta-function to develop an IRFP and therefore be in the conformal phase, can serve as a paradigm for the study of theories in the conformal window. Our analysis suggests that the theory with twelve flavours is in the conformal window, possibly close to its lower boundary.
Introduction
The study in [1] for the theory with N f = 12 flavours supports a scenario as depicted in Fig. 1, where the end-point of the chiral phase boundary signals the opening of the conformal window and N f = 12 is inside the conformal window. While conformality for the same theory has been reported by other groups [2,3], contrasting views were presented as well [4].
Any massless theory within the conformal window has exact chiral symmetry and develops an infrared fixed point (IRFP) at which the theory is conformal. Everywhere in the parameter space of the theory, except at the fixed point, observables will show only remnants of conformality. These remnants, joined with the realization of exact chiral symmetry, lead to features of the spectrum distinct from QCD. In particular, adimensional mass ratios are robust indicators of patterns of symmetries [1,6].
The aim of our study, of which this proceeding is a preliminary account, is to isolate those features and use them in the comparison of N f = 12 lattice results with typical lattice QCD results. The set of data used here consists of the ensembles partly analysed in [1], where more statistics has been collected for some points at the lattice bare couplings β L = 6/g^2 = 3.8, 3.9 and 4.0, on the weak-coupling side of the bulk phase transition [1]. Data are at masses am = 0.07, 0.06 with volume 16^3 × 24, am = 0.05 with volume 24^4 and am = 0.025 with volume 32^4. The action used is the tree-level Symanzik-improved gauge action with Asqtad staggered fermions. We also include the largest volume data reported in [4] at one value of the bare lattice coupling β L = 2.2 and with a different improved lattice action; tree-level Symanzik improvement and two steps of stout-smearing in the staggered fermion matrix. Whenever instructive, we compare with results for the theory with N f = 16 fundamental flavours from [5]. A study of N f = 16 with the same action as N f = 12 in [1] is in progress [7]. We defer to future work a more refined estimate of finite volume effects and a finite size scaling analysis.
In section 2 we analyse the mass ratio m π /m ρ , the Edinburgh plot and additional ratios useful to discriminate between a QCD phase and a conformal phase. In section 3 we consider another indicator of chiral symmetry restoration, the splitting between the vector and the axial ground states. Finally, in section 4 we analyse another key relation between the would-be Goldstone boson and the chiral order parameter. We conclude in section 5.

Figure 3: Edinburgh plot: N f = 12 data from [4] (red squares), N f = 12 data from this work at β L = 3.8, 3.9 (blue circles), N f = 16 data from [5] (magenta diamonds). The QCD physical point (black star, leftmost) and the heavy quark limit (free theory) point (black star, rightmost) are shown.
The Edinburgh plot and mass ratios.
One first significant spectrum observable is the ratio m π /m ρ , between the mass of the lightest pseudoscalar state (pion) m π and the mass of the lightest vector state (rho) m ρ . In QCD at zero temperature, chiral symmetry is spontaneously broken and the pion is the (pseudo)Goldstone boson of the broken symmetry, implying that its mass will behave as m π ∼ √ m. Instead, the vector mass contains a constant term and a leading correction linear in the quark mass, thus m ρ ∼ m 0 + bm. In the chirally broken phase, modulo lattice artefacts, one should thus expect their ratio to behave as m π /m ρ ∼ √ am -as a function of the bare lattice quark mass am. Within the conformal window chiral symmetry is restored. The lightest pseudoscalar state is not anymore a Goldstone boson, and there is no mass gap. At the IRFP and at infinite volume, the quark mass dependence of all hadron masses in the spectrum is governed by conformal symmetry: at leading order in the quark mass expansion all masses follow a power-law with common exponent determined by the anomalous dimension of the fermion mass operator at the IRFP. Hence we expect a constant ratio. Away from the IRFP, for sufficiently light quarks and finite lattice volumes, the universal power-law dependence receives corrections, due to the fact that the theory is interacting but no longer conformal. Hence, the pseudoscalar-vector mass ratio is constant at the IRFP at infinite volume, and approximately constant in its surroundings and at finite volume, as it is the situation explored here.
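The two scaling regimes just described can be summarized side by side. The identification δ = 1/(1 + γ*) of the common conformal exponent with the mass anomalous dimension at the IRFP is the standard hyperscaling relation, supplied here for context rather than taken from this section:

```latex
% Chirally broken (QCD-like) phase: the pion is a Goldstone boson
m_\pi \sim \sqrt{m}\,, \qquad m_\rho \sim m_0 + b\,m
\quad\Longrightarrow\quad \frac{m_\pi}{m_\rho} \sim \sqrt{am} \;\to\; 0
\ \text{as } m \to 0 .

% Conformal window (near the IRFP): no Goldstone boson, one common exponent
m_H \sim m^{\delta}\,, \qquad \delta = \frac{1}{1+\gamma^*}
\ \text{(all states } H\text{)}
\quad\Longrightarrow\quad \frac{m_\pi}{m_\rho} \approx \text{const}.
```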
Figure 4: Ratio (am π ) 2 /(am) as a function of (m π /m ρ ) 2 for QCD data (quenched staggered) reported by the MILC collaboration [8]. Notice that quenching should not affect the leading behaviour.

Figure 5: Ratio (am π ) 2 /(am) as a function of (m π /m ρ ) 2 for the N f = 12 data at β L = 3.8 (green diamonds), 3.9 (red squares) and 4.0 (black circles).

In practice, the task remains the one of discriminating between a lattice mass ratio that goes to zero as √(am) and a ratio that remains constant and O(1) over a significant range of masses. An important caveat concerns the extraction of mass eigenstates from correlators within the conformal window: correlators in the vicinity of the IRFP will follow a power-law decay at leading order, corrected by mass contributions. However, for sufficiently large quark masses and away from the IRFP one expects lattice correlators to decay exponentially as in QCD, possibly with subleading conformal corrections. Given this caveat, we have analysed all correlators in this work assuming
the standard multi-exponential time dependence. The pseudoscalar staggered correlator could be fitted with two hyperbolic cosine functions (fundamental and excited state) without the parity-odd (oscillating) component. All other staggered correlators could be fitted with a hyperbolic cosine with an oscillating component at intermediate and late times, with the addition of an excited state at early times. The largest uncertainties of the present analysis are related to the nucleon, for which more statistics and better smearing are needed. Temporal extents longer than t = 24, 32 would also facilitate the analysis of correlators at the lightest masses. Fig. 2 shows that the mass ratio for all existing N f = 12 data is approximately constant over a wide range of bare quark masses, as should be expected for a chirally symmetric theory. Obviously, on the basis of these numerical evidences, we cannot exclude that a change of trend will occur at even lower masses. Searching for combined evidences seems to be the optimal strategy for this case, following the line adopted in [1].
The Edinburgh plot, widely used in lattice QCD studies, is constructed in terms of adimensional ratios of masses and offers a powerful way to combine results of lattice calculations performed at different lattice spacings. Fig. 3 shows the Edinburgh plot for all existing data with N f = 12. For an instructive comparison, we also show lattice results for the N f = 16 theory from [5]; the latter is already known to be in the conformal window by perturbative arguments. The physical point of QCD (leftmost side of figure) corresponds to m π /m ρ ≈ 0.18 and m N /m ρ ≈ 1.21. On the other side of the figure, a useful theoretical limit is the heavy quark mass limit (rightmost side of figure), where all masses in the spectrum are given by the sum of their valence quark masses, so that m π /m ρ = 1 and m N /m ρ = 3/2. This limit is also equivalent to the free theory limit. A QCD scenario will draw a curve in this figure that extrapolates to the physical point for decreasing quark masses. What we observe in Fig. 3 suggests instead a behaviour that is to be expected for theories in the conformal window. The two mass ratios are "stuck" in a tiny corner, despite the fact that quark masses in the reported data vary over a rather wide range: N f = 12 bare masses from [1] and [4] vary from am = 0.01 to am = 0.07 at various lattice couplings. N f = 16 bare masses from [5] vary from am = 0.025 to am = 0.15. All data in the Edinburgh plot are also away from the heavy quark limit and all have m π /m ρ ∼ 0.8, showing that all simulated masses are sufficiently light and fermions are dynamical. Fig. 3 also importantly suggests that all existing data for the N f = 12 spectrum cover the same dynamical region; a comparison is therefore justified.
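The heavy-quark (free-theory) corner of the Edinburgh plot follows from simple valence counting, as a trivial arithmetic check shows (the quark mass value below is arbitrary, since the ratios are mass-independent):

```python
# In the heavy quark limit every hadron mass is the sum of its valence
# quark masses, fixing the rightmost reference point of the Edinburgh plot.
m_q = 1.0            # arbitrary units; the ratios do not depend on m_q
m_pi = 2 * m_q       # quark-antiquark bound state
m_rho = 2 * m_q      # quark-antiquark bound state
m_N = 3 * m_q        # three-quark bound state

print(m_pi / m_rho)  # -> 1.0
print(m_N / m_rho)   # -> 1.5
```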
Another interesting insight, useful to discriminate between a QCD-like and a conformal behaviour, can be gained through Figs. 4 and 5. At fixed lattice spacing, one can study the ratio (am π ) 2 /(am) as a function of (m π /m ρ ) 2 . This ratio behaves as a constant in QCD to a good approximation, so that parallel horizontal lines are drawn at different lattice couplings. Fig. 4 also shows that the ratio increases with decreasing β L , a signal that the β function for this theory is negative. Within the conformal window the behaviour should be quite different, and in fact analogous to what is observed in Fig. 5: the ratio should behave as (am π ) 2 /(am) ∼ (am) 2δ /(am) ∼ (am) 2δ−1 , with 0.5 < δ ≤ 1. Separate constant lines at different lattice spacings are thus no longer observed and data are concentrated around one value of m π /m ρ . Ideally, taking the same value of m π /m ρ at two different lattice spacings, an ordering opposite to the QCD-like case would suggest a positive β function. The latter is true on the strong-coupling side of the IRFP. A few points in Fig. 5 have a sufficiently close value of m π /m ρ and indeed show the inverted ordering proper of a positive β function. This is another way to look at the results for the β function reported in [1].
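The two behaviours of the ratio can be illustrated with a short numerical sketch. The constants A, B and the exponent δ = 0.7 are illustrative choices, not fitted values from the lattice data:

```python
# Quark-mass dependence of (a m_pi)^2 / (a m) in the two scenarios.
am = [0.01 * k for k in range(1, 8)]   # bare quark masses 0.01 ... 0.07

# QCD-like phase: (a m_pi)^2 = B * (a m), so the ratio is flat in am.
B = 4.0
ratio_qcd = [B * x / x for x in am]

# Conformal window: (a m_pi)^2 = A * (a m)**(2*delta), so the ratio
# scales as (a m)**(2*delta - 1) and grows with am whenever delta > 0.5.
A, delta = 3.0, 0.7
ratio_conf = [A * x**(2 * delta) / x for x in am]

print(all(abs(r - B) < 1e-9 for r in ratio_qcd))  # flat line: True
print(ratio_conf[0] < ratio_conf[-1])             # rising ratio: True
```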
The vector and axial-vector mass splitting
In [1] it was shown that the lightest pseudoscalar and vector masses at various simulated couplings were well fitted by a power-law with close exponents in the range 0.6-0.7, thus excluding the Goldstone nature of the pion and showing that data are away from the heavy quark regime. As a word of caution, we add that the accuracies of the spectrum data and fits are not comparable, as of today, with those achieved by the fits to the chiral condensate in [1]. The latter have been shown to be at infinite volume within statistical uncertainties. For the spectrum, finite volume effects are expected to be present and of the order of about 10%.
The vector and axial-vector mass splitting is another indicator of the restoration of chiral symmetry. In Fig. 6 we show our data for β L = 3.9 and 4.0, and data from [4] for the lightest vector ρ and axial-vector a 1 . The best fits with zero intercept and free exponent are also reported. Best fit values of the exponents are δ a 1 = 0.67(4), δ ρ = 0.68(3) at β L = 3.9, δ a 1 = 0.68(7), δ ρ = 0.67(3) at β L = 4.0, and δ a 1 = 0.79(9), δ ρ = 0.72(2) for the data from [4]. All exponents lie around 0.7. A power-law fit with free intercept favours a slightly negative intercept with non-unit exponent, thus disfavouring a chirally broken scenario. The goodness of power-law fits with zero intercept for both vector and axial states suggests their degeneracy in the chiral limit, thus a restored chiral symmetry. Unfortunately, the N f = 16 results in [5] seem to be still affected by rather large finite volume effects, and for this reason we omit them here. We are currently simulating N f = 16 with the same lattice action as N f = 12 [1] in order to meaningfully compare the two theories.
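A power-law fit with zero intercept amounts to a straight-line fit in log-log space; a minimal self-contained sketch with synthetic data follows. The constants are invented, with δ = 0.68 chosen to mimic the fitted exponents quoted above, and real lattice data would of course carry statistical errors:

```python
import math

# Exponent of a zero-intercept power law m = A * (am)**delta, read off
# from two mass points in log-log space:
#   delta = (ln m2 - ln m1) / (ln am2 - ln am1).
A_true, delta_true = 2.1, 0.68          # illustrative constants
am = [0.025, 0.05, 0.06, 0.07]          # bare masses, as in the text
m_rho = [A_true * x**delta_true for x in am]

delta_fit = (math.log(m_rho[-1]) - math.log(m_rho[0])) / \
            (math.log(am[-1]) - math.log(am[0]))
print(round(delta_fit, 2))  # recovers 0.68 for this noiseless data
```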
Goldstone boson and chiral order parameter
It was observed in [1] that an additional powerful discriminator between exact and spontaneously broken chiral symmetry is provided by the relation between the would-be Goldstone boson mass and the chiral condensate. The observation was based on the theoretical analysis of [9]. Here, we analyse all available N f = 12 data in light of these theoretical premises. In a chirally symmetric phase the behaviour of m π 2 as a function of the chiral condensate is as illustrated in Fig. 7 (upper curve): it has positive curvature, it extrapolates to zero, and the curvature is due to non-zero anomalous dimensions. Importantly, the curvature is opposite to that induced by finite volume effects. Hence, it provides at the same time a powerful indicator of the presence of finite volume corrections for lattice data. In a chirally broken phase the curvature is opposite and it extrapolates to a negative value, Fig. 7 (lower curve). In Fig. 8 we collect our data (right) and data from [4] (left). Both data sets clearly show a positive curvature, and best fits to a power-law (am π ) 2 = A (a 3 ⟨ψ̄ψ⟩) 2δ χ with zero intercept do not suggest qualitative differences between data sets. We obtain a best fit exponent δ χ = 0.66(2) for joint data sets at β L = 3.9 and 4.0 (see also [1]) and δ χ = 0.727(5) for the data from [4].
Conclusions
Figure 8: (am π ) 2 as a function of (a 3 ⟨ψ̄ψ⟩) 2 for all existing N f = 12 data: (right) updated data from [1] for β L = 3.9 (red square) and β L = 4.0 (black circle), (left) data from [4] for β L = 2.2 (largest volumes only).

We have reported on the spectrum of the SU(3) gauge theory with twelve flavours in the fundamental representation. In particular, we have isolated a few signatures that are useful to discriminate between the hadronic phase, i.e. QCD at zero temperature, and the conformal phase proper of theories within the conformal window. In this analysis we have assumed that all our lattice data, even if inside the conformal window, are away from the IRFP and correlators follow a leading exponential decay law; this seems to be supported by the obtained results. Further investigations at weaker coupling, lighter masses (and longer temporal extents) would provide a useful piece of additional information. We have analysed the ratio m π /m ρ , the Edinburgh plot and the ρ − a 1 mass splitting for all existing data. Points in the Edinburgh plot stick to a region m π /m ρ ∼ 0.8, despite covering a wide range of bare lattice fermion masses. Fits to the ρ − a 1 splitting favour the degeneracy of the two states in the chiral limit. Finally, we have re-proposed the relation between the Goldstone boson mass and the chiral order parameter as a powerful indicator of restored chiral symmetry. All existing data for the N f = 12 theory seem to consistently favour chiral symmetry restoration and an almost universal power-law behaviour for all massive states, which is to be expected inside the conformal window.
Pig as a reservoir of CRISPR type TST4 Salmonella enterica serovar Typhimurium monophasic variant during 2009–2017 in China
ABSTRACT CRISPR-based typing was performed to subtype isolates of S. Typhimurium and its monophasic variant Salmonella 4,[5],12:i:- from humans and animals between 2009 and 2017 in China. CRISPR typing classified all isolates into two lineages and four sub-lineages. All isolates from Lineage II and Lineage IB-1 were Salmonella Typhimurium. All Salmonella 4,[5],12:i:- isolates were distributed in Lineage IA and Lineage IB-2, and all belonged to ST34 by MLST typing. Only Lineage IB-2 contained ST34 isolates from both Salmonella Typhimurium and Salmonella 4,[5],12:i:-. Among the ST34 isolates, TST4 was identified as the most common CRISPR type, representing 86.5% of Salmonella 4,[5],12:i:- and 14.5% of Salmonella Typhimurium isolates, mainly from pigs and humans. This study demonstrated that TST4-ST34 isolates were predominant in Salmonella 4,[5],12:i:-, and that the pig was the main reservoir for Salmonella 4,[5],12:i:- in China, with the potential to transmit to humans through pig production.
Salmonella enterica serovar Typhimurium (Salmonella Typhimurium) is one of the most important zoonotic pathogens causing food-borne gastroenteritis across the world [1,2]. Human infections with Salmonella Typhimurium are typically associated with contaminated food of animal origin [3]. Recently, a Salmonella Typhimurium monophasic variant (Salmonella 4,[5],12:i:-) has been increasingly isolated from husbandry animals, foods, and humans [4]. Among the common serovars associated with human salmonellosis cases in Europe, the monophasic Salmonella Typhimurium ranked third after Salmonella Enteritidis and Salmonella Typhimurium in 2017 [5]. In the USA, Salmonella 4,[5],12:i:- was confirmed to be the serotype with the greatest increase from 1972 to 2016, and remained among the top 5 serotypes for human salmonellosis between 2011 and 2016 [6]. However, few reports have described the prevalence of Salmonella 4,[5],12:i:- in human salmonellosis in China. In 2015, 13 foodborne isolates of Salmonella 4,[5],12:i:- were first reported in Guangdong province. A recent study pointed out that Salmonella 4,[5],12:i:- has increased to become the second most frequently encountered serotype in patients in Henan province, China [7]. Comparative analysis of genome sequences and biological properties has revealed that deletion or mutation of the fljB gene causes loss of phase 2 flagellin expression in the monophasic variant [8], and MLST typing is not an efficient tool to differentiate the two serotypes [9]. Therefore, new efforts are needed to demonstrate the genetic and phenotypic differences between Salmonella Typhimurium and its monophasic variant, together with the prevalence characteristics of both serotypes. It is also important to understand the phylogenetic relationship between the two serotypes in order to develop new eradication strategies.
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) typing has been used as a high-resolution typing method for a broad range of bacteria. Thus far, CRISPR typing has been widely used to subtype Salmonella isolates belonging to identical serotypes, including Salmonella Typhimurium, Salmonella Enteritidis, and Salmonella Pullorum [10][11][12][13][14]. Such studies have demonstrated that CRISPR typing is efficient in discriminating isolates from different sources and time periods. Further, the arrangement and microevolution of CRISPR spacers allows typing and subtyping to be performed in a single step. In the present study, we used CRISPR typing to identify genotypic relationships among 173 isolates of Salmonella Typhimurium and Salmonella 4,[5],12:i:- obtained from different hosts during 2009-2017 in China. Our findings demonstrate the presence of a predominant CRISPR type shared by these two serotypes in both humans and pigs, and reveal the pig as a main reservoir for Salmonella 4,[5],12:i:-, which can also infect humans.
We used CRISPR typing to genotype 173 isolates of Salmonella Typhimurium (62) and its monophasic variant Salmonella 4,[5],12:i:- (111) obtained from different sources during 2009-2017 in China (Supplementary Table S1). Animal-origin Salmonella Typhimurium and Salmonella 4,[5],12:i:- isolates were collected from commercial farms, slaughterhouses, and retail markets, while human isolates were collected from diarrhea patients in hospitals. Identification of Salmonella 4,[5],12:i:- was performed by slide agglutination with somatic (O) and flagellar (H) antiserum, combined with a multiplex-PCR approach targeting the fliB-fliA intergenic region and the fljB gene, respectively [8]. CRISPR typing was performed as previously described [15]. Among the 173 isolates, 67 unique spacers were detected in the two CRISPR loci, with 31 in CRISPR1 and 36 in CRISPR2. Based on the spacer arrangement, 30 different alleles were observed in CRISPR1, and 16 different alleles in CRISPR2 (Figure 1(A)). With the combination of CRISPR1 and CRISPR2 arrays, a total of 34 different Typhimurium CRISPR types were identified and named using a number suffix to TST as previously indicated (Figure 1(A,B)). Cluster analysis using UPGMA revealed that only seven of the 34 TSTs were shared between isolates from different hosts. TST4, a combination of CRISPR1 allele 7 and CRISPR2 allele 6, was found to be the most frequent CRISPR type, shared by 55% (96/173) of the isolates (Figure 1(B)). TST4 and TST17 were common among isolates from pigs, humans and chicken, which indicates potential transmission between animals and humans. TST20, TST27, TST30, TST31, and TST33 were only detected in isolates of poultry origin and were distant from TSTs of other origins. Two isolates collected from cattle belonged to TST9. This revealed that CRISPR types also reflected the source of isolates [16]. TST4 isolates were observed in six out of nine provinces, demonstrating its predominant prevalence in China (Supplementary Table S1).
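The combinatorial step of the typing scheme (reducing each isolate to its ordered spacer arrangements at the two loci and assigning one TST per distinct pair of CRISPR1 and CRISPR2 alleles) can be sketched as follows; the spacer names and isolates are invented for illustration and are not the study's data:

```python
# Each isolate is represented by its (CRISPR1 allele, CRISPR2 allele) pair,
# where an allele is the ordered tuple of spacers at that locus; distinct
# pairs are then enumerated as TST types.
isolates = {
    "pig_01":   (("s1", "s2", "s7"), ("s40", "s41")),
    "human_03": (("s1", "s2", "s7"), ("s40", "s41")),  # same pair -> same TST
    "chick_09": (("s1", "s5"),       ("s40", "s44")),
}

tst_of_pair = {}
assignment = {}
for name, pair in isolates.items():
    if pair not in tst_of_pair:
        tst_of_pair[pair] = f"TST{len(tst_of_pair) + 1}"
    assignment[name] = tst_of_pair[pair]

print(assignment["pig_01"] == assignment["human_03"])  # shared type: True
print(len(tst_of_pair))                                # 2 distinct types
```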
The second most common CRISPR type TST5 was detected in three provinces with only 11 isolates. In Jiangsu province, 13 CRISPR types were detected in 38 isolates from Yangzhou city, but only 4 CRISPR types in 27 isolates from Huaian city. These findings reflected that CRISPR types were closely related to different regions.
Compared with MLST typing of the 173 isolates, CRISPR typing divided the 122 ST34 isolates into 14 TSTs (Figure 1(A,B)), which confirmed that CRISPR typing has stronger discriminatory power than MLST [17]. As shown in Figure 1(B), Salmonella 4,[5],12:i:- has become more frequently transmitted to humans through contaminated food than Salmonella Typhimurium. Whole genome sequencing analysis of Salmonella Typhimurium and its monophasic variant from Denmark demonstrated that ST34 was the main MLST type in the monophasic variant isolates, which was also shared by Salmonella Typhimurium isolated from humans, food, and veterinary samples [9]. In the present study, we not only confirmed that ST34 is predominant among Salmonella 4,[5],12:i:- isolates, but also demonstrated that TST4 is the main CRISPR type shared by both serotypes among these ST34 isolates, which were mainly from swine or pork meat (Figure 1(B)). Apart from TST4-ST34, which was shared by both serotypes, TST5-ST34 and TST6-ST34 were also shared by the two serotypes isolated from both pigs and humans (Figure 1(B)).
According to data reported by the European Food Safety Authority (EFSA) and the European Centre for Disease Prevention and Control (ECDC), Salmonella Typhimurium ranked second after Salmonella Enteritidis, followed by its monophasic variant serovar, among serovars associated with human salmonellosis cases in Europe during 2017 [5]. In addition, 39.4% of Salmonella Typhimurium isolates and 81.4% of its monophasic variant isolates from human cases showed multi-drug resistance (MDR), a much higher prevalence of resistance than the 28.6% observed among all Salmonella isolates from human salmonellosis [5]. Thus, Salmonella Typhimurium and its monophasic variant are considered a serious epidemic threat to public health with apparent worldwide distribution. Thus far, although swine or pork has been considered the main source of infection in many countries, the genetic relationship between Salmonella Typhimurium and Salmonella 4,[5],12:i:- is not well understood. To identify the genetic relationship between the two serotypes, a phylogenetic tree was constructed based on the 34 identified TSTs (Figure 1(C)) using BioNumerics 7.5. As shown in Figure 1(C), they are divided into two main lineages and four sub-lineages. Interestingly, Lineage IA was found to be composed of TSTs specific to Salmonella 4,[5],12:i:- isolates, while Lineages II and IB-1 were exclusively composed of TSTs specific to Salmonella Typhimurium strains, mainly of the ST19 type. Notably, only Lineage IB-2 contained TSTs shared by both Salmonella Typhimurium and Salmonella 4,[5],12:i:-. Although there was a low number of isolates with TSTs specific to Salmonella Typhimurium or Salmonella 4,[5],12:i:-, this diversity reflects evolutionary divergence between the two serotypes. Both Salmonella Typhimurium and Salmonella 4,[5],12:i:- were observed in TST4, TST5 and TST6, which confirmed a close genetic relationship between these two serotypes.
In addition, CRISPR typing showed higher discriminatory power than PFGE and MLVA, and it could correctly identify all major lineages defined by whole genome single nucleotide polymorphism typing (WGST) of Salmonella Enteritidis isolates [18]. However, CRISPR typing could not efficiently delineate outbreak clusters, which could be resolved by WGST in further study.
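The notion of discriminatory power can be made quantitative with the Simpson-based Hunter-Gaston index, a standard measure for typing schemes; its use here, and the type counts below, are illustrative additions rather than an analysis performed in the study:

```python
# Hunter-Gaston discriminatory index:
#   D = 1 - (1 / (N(N-1))) * sum_j n_j (n_j - 1),
# the probability that two randomly drawn isolates receive different types.
def hunter_gaston(counts):
    n = sum(counts)
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Invented counts for 173 isolates: a coarse scheme with 2 types versus a
# finer scheme splitting the same set into 13 types.
mlst_counts = [122, 51]
crispr_counts = [96, 11] + [6] * 11

print(hunter_gaston(crispr_counts) > hunter_gaston(mlst_counts))  # -> True
```

A finer partition of the same isolate set always yields a larger index, which is the sense in which one scheme "has stronger discriminatory power" than another.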
In conclusion, CRISPR typing has been widely used as a high-resolution typing method based on the fact that genetic diversity of CRISPR sequences can provide valuable insights into microevolution and evolutionary trajectories of bacterial isolates including Salmonella.
In the present study, we demonstrated that TST4-ST34 strains were predominant among Salmonella 4,[5],12:i:- isolates and shared by some Salmonella Typhimurium isolates obtained from humans, pigs, and chicken. Furthermore, the pig was found to be the main reservoir for these TST4-ST34 isolates, suggesting that the monophasic variant might be produced via mutation of Salmonella Typhimurium in pigs. The prevalence of TST4-ST34 Salmonella 4,[5],12:i:- strains in animals should be considered a matter of public health concern, and monitored by the government to prevent transmission to humans.
Disclosure statement
No potential conflict of interest was reported by the authors.
Aortic Valve–Sparing Surgical Treatment of Supravalvar Aortic Stenosis in a 65-Year-Old Adult
Supravalvar aortic stenosis (SVAS) is a rare congenital cardiac disease that usually co-occurs with Williams syndrome. In the adult population, a few SVAS cases have been reported in patients affected by homozygous familial hypercholesterolemia. However, because of the rarity of this disease entity, there is no standard surgical treatment for SVAS. Here, we present a case of successful surgical treatment using an autologous excised aortic patch in a 65-year-old patient with SVAS.
Case report
Supravalvar aortic stenosis (SVAS) is a rare cardiac anomaly. It usually occurs in combination with Williams syndrome-with a typical facial appearance and mental retardation [1]-but can also present in adult patients affected by homozygous familial hypercholesterolemia (HFH) [2,3]. Because of the rarity of this disease, the surgical technique for SVAS is not standardized and has evolved from a plain patch technique to simple sliding aortoplasty [4,5]. In adult patients with SVAS, conventional surgical treatment is difficult to apply due to reduced flexibility and atherosclerotic changes of the vasculature. We performed autologous excised aortic patch aortoplasty and ascending aorta replacement sparing the aortic valve in a 65-year-old adult patient with SVAS who did not have either Williams syndrome or HFH.
A 65-year-old female patient with a history of transient ischemic attack, hypertension, dyslipidemia, and paroxysmal atrial fibrillation had SVAS. She took medication for dyslipidemia, hypertension, and atrial fibrillation. Her blood cholesterol level was 230 mg/dL, and no other family member had dyslipidemia. Preoperative echocardiography showed SVAS with a peak velocity of 4.5 m/sec and mild aortic regurgitation with an ejection fraction of 65%. Computed tomography showed severe focal stenosis at the aortic root with diffuse soft tissue thickening and calcification, with a diameter of 14×10 mm (Fig. 1).
The operative approach was through a median sternotomy. Cardiopulmonary bypass was instituted with a cannula for arterial return in the ascending aorta and a single venous cannula in the right atrium. The aortic cross-clamping point was decided after manual palpation of the area of calcification. Cardiac arrest was achieved using cold antegrade cardioplegic solution. The aorta was transected several millimeters distal to the point of stenosis. The calcified ascending aorta was removed to a few millimeters below the ascending aorta cross-clamping site. After a careful inspection of the stenotic segment of the sinotubular junction, as well as the conditions of the coronary opening and the aortic valve, we meticulously excised the stenotic calcified tissue, taking care not to damage other tissues. Even though the intimal defect of the sinus portion appeared serious (Fig. 2B), it was not especially remarkable because the aortic wall was thickened. Since the stenotic tissue was close to the coronary opening and aortic valve commissure, the procedure was time-consuming. After the removal of stenotic tissue, an incision was made in the non-coronary sinus of the proximal aorta. Autologous healthy aortic tissue from the previously excised ascending aorta was used for patch aortoplasty. After a saline test to detect possible aortic regurgitation, ascending aorta replacement was performed (Fig. 2). After surgery, the patient had an uneventful postoperative course with antiarrhythmic medication and electrocardiogram monitoring. Follow-up echocardiography and computed tomography showed decreased SVAS, with a peak velocity of 2.3 m/sec and an increased sinotubular junction diameter of 21×21 mm (Fig. 1B). The pathologic report of the aortic tissue was simply atherosclerosis with calcification. The patient did not have either HFH or Williams syndrome.
The patient provided written informed consent for the publication of clinical details and images.
Discussion
SVAS is a rare cardiac disease that is often progressive in childhood, and scant data are available on its outcomes in the adult population [6]. In particular, only anecdotal reports exist of SVAS in patients older than 60 years [2]. Irrespective of whether its origin is congenital or acquired, the surgical goal is to enlarge the aortic root and to maintain aortic valve function. Because this patient showed extensive calcification inside the aortic root and ascending aorta, as well as aortic regurgitation associated with old age, we could not rule out the possibility of performing a Bentall operation.
Fig. 2. After removal of the stenotic tissue, an incision was made into the non-coronary sinus of the proximal aorta (B). Patch aortoplasty using autologous healthy aortic tissue from the previously excised ascending aorta was performed (C). Finally, ascending aorta graft interposition was performed (D).
However, after careful observation and meticulous removal of the calcified lesions, we were able to preserve the coronary opening and aortic valve. Since it appeared that removal of the calcified tissue itself was not sufficient to decrease the pressure gradient, we decided to perform an additional patch aortoplasty using the autologous excised ascending aorta. From our experience, we knew that sliding aortoplasty is a good surgical option for handling SVAS [4,5]. However, as this procedure is not suitable for adult patients with stiff aortic tissue, we used a synthetic graft for ascending aorta replacement and performed a modified procedure using patch aortoplasty with autologous aortic tissue. With regard to the choice of patch material, a Dacron patch is stronger than other tissue types (e.g., pericardium) for preventing aneurysm formation. However, because using a Dacron patch would have caused difficulties in handling needle-hole bleeding and resulted in an uneven reconstruction of the aortic wall in terms of its ability to endure aortic pressure, we decided to use autologous healthy aortic tissue that had been removed for the ascending aorta replacement. Using autologous aortic tissue not only avoided the need for foreign material, but also had the advantage of enabling easy handling during the suturing procedure; furthermore, it may prevent future aneurysm formation, which is a possible complication of using pericardial tissue. In conclusion, we were able to treat SVAS in an older patient safely using modified patch aortoplasty without aortic valve replacement. | 2020-06-11T09:02:59.764Z | 2020-06-05T00:00:00.000 | {
"year": 2020,
"sha1": "d49869f5387898e287b3f0d939a8181024095cc8",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5090/kjtcs.2020.53.3.144",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5aeb67c7a747cc15dd4cefc904d934db2efff552",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265492987 | pes2o/s2orc | v3-fos-license | Buffering Capacity of Various Commercial and Homemade Foods in the Context of Gastric Canine Digestion
Simple Summary
Knowing and predicting the buffering capacity of a food is of high importance in the context of gastric digestion and health. The aim of this study was to analyze the buffering capacity and the amount of HCl needed to acidify a food, both as indicators of the acidity and gastric digestion of commercial and homemade dog foods, in relation to their nutrient composition. The study developed prediction equations to estimate the buffering capacity using a set of 30 complete dog foods: ten each of commercial dry, commercial wet, and homemade dog food. To the best of our knowledge, this is the first study to evaluate canine food for buffering capacity.
Abstract
The buffering capacity (BC) of food may act as a key regulatory parameter of canine gastric digestion by influencing the activity of gastric enzymes, the solubility of dietary ingredients, the gastric breakdown of food nutrients, and, subsequently, the absorption of nutrients. To analyse a possible effect of food on gastric pH, the BC of wet, dry, and homemade dog food was quantified via an acid titration method until a pH below 2 was achieved. Wet food had the highest BC; between dry and homemade food, there was no significant difference. Using multiple regression analyses, we were able to establish associations between the nutrient composition and the BC of the dog food. Crude protein content was the most important factor influencing the BC and the HCl used per gram of dry matter (DM) (p < 0.001), whereas the initial pH only tended to have an influence. The ash content also tended to affect the HCl used per gram of DM, and the DM content had a significant (p < 0.05) influence on the BC per gram of DM. The excessively high ash content found in wet food could be a risk factor for gastric dilatation-volvulus syndrome because it could lead to an insufficient pH drop in the stomach.
Our data indicate large differences in the BC of typical dog food; so, estimating the BC using the equations developed herein could help to design individualized dog diets, in particular for dogs with health problems such as gastric hypoacidity, gastric reflux, or gastritis. However, more research about the influence of dog-food BC on gastric pH in vivo is needed.
Introduction
Buffering capacity (BC) is an important physicochemical property of food that expresses the resistance of the food to a change in pH upon the addition of an acid or a base [1,2], thus having a direct influence on the absorption of hydrogen ions and, therefore, on the regulation of gastric juice pH [3]. In the context of gastric digestion, food BC may play a key regulatory role by influencing the gastric pH, especially by modulating the activity of gastric enzymes, the solubility of dietary ingredients, the gastric breakdown of food nutrients, and, therefore, the absorption of nutrients across the gastrointestinal tract [4]. Furthermore, the gastric pH acts as a barrier against foodborne pathogens [5] and may affect the gastrointestinal microbiota [6].
Animals 2023, 13, 3662
Therefore, the BC of food and dietary ingredients has frequently been the focus of research in the context of human gastric digestion [2,7-9]. For example, Salaün et al. (2005) analysed the BC of dairy products and found that it depends on the composition of minerals and proteins [7]. Al-Dabbas et al. (2010) measured the BCs of legumes, almonds, lettuce stem, carob, liquorice root, and raw cow milk, reporting a strong positive correlation of BC with protein and with aspartic and glutamic acid contents [8]. Mennah-Govela et al. (2020) measured the BCs of thirty commercially available foods, identifying the protein content and the initial pH of the food as the most important determinants of BC [2]. In more recent research, Ebert et al. (2021) identified ash, selected minerals, and amino acids with a pKa in the pH range relevant to foodstuffs (e.g., aspartic and glutamic acid) as key factors influencing overall food BC; in that study, wet texturized plant proteins were analysed for their BC, and the results were compared to the BC of pork meat [9].
To the authors' knowledge, there are no studies dealing with the BC of canine food or dietary ingredients. This is surprising, taking into account the importance of BC not only for canine gastric digestion but also for canine health, such as gastric hyperacidity, gastroesophageal reflux, gastric hypoacidity or dilatation-volvulus syndrome [10], food allergenicity [11], and dental health [12]. In addition, the design of oral canine drugs and pharmaceuticals requires knowledge of the food's BC [13]. It should also be mentioned that a dog's diet is very complex, extending from meat, meat products, offal, and bones to dairy, eggs, and marine food products, and further to plant-based ingredients, such as grains, legumes, oilseeds, and vegetables, as well as various mineral supplements and feed additives [14]. These dietary ingredients are commonly fed as a commercially available complete food, either as dry kibbles or as wet food, but also as a homemade diet. Therefore, both the ingredients and the manufacturing method may affect the BC of the dog food. Knowledge of the BC and pH of canine food may help in predicting the effects of the diet on digestion and gut health in dogs. The aim of this study was to analyze and compare the BCs of a variety of commercial dry and wet dog foods as well as homemade dog food in relation to their manufacturing method and nutrient composition. Our hypothesis was that, besides the protein content of the dog food, the manufacturing method would also have a major influence on its BC in the context of gastric digestion.
Dog Foods Used in the Experiment
In this experiment, a total of 30 complete dog-food samples were investigated, including randomly collected commercially available dry and canned foods as well as homemade dog food, with 10 foods per manufacturing method. The tested samples were dog foods prepared for healthy adult dogs (see Supplementary Materials).
The commercial dog foods were purchased from different local supermarkets and pet stores to cover the variability of products with different producers, ingredients, and nutrient compositions. Dog food intended for particular nutritional purposes was not used. For compliance with nutritional standards, the producers of the commercial complete dog foods had to be members of the "Industrieverband Heimtier (IHV) e.V." or the "Österreichische Heimtierfuttermittel Vereinigung (ÖHTV)". Membership in one of these associations obliges the companies to produce their dog food according to the Fediaf Guidelines [15], both in terms of declaration and the current recommended nutrient levels for complete dog food.
The homemade dog foods used common ingredients and were formulated to provide enough energy and nutrients for an adult, inactive dog using an Excel® calculation program ("CarnivoreDiet"©, A. Lucke, Vetmeduni, Vienna, Austria) according to the nutrient requirements of dogs (National Research Council 2006). The meat used for the homemade diets was bought frozen and already minced in local pet stores. The vegetables were bought fresh in a local supermarket. The homemade dog-food diets were prepared and mixed on the same day as the analysis of the BC took place. The meat was cooked; only the green tripe was used raw in some homemade diets. The amounts of ingredients used to formulate the homemade food were calculated to meet the nutrient recommendations of an adult dog with a 7 kg body weight and were weighed with a scale (ME4002®, Mettler Toledo, USA). Mineral supplements were added before mixing to ensure a well-balanced diet.
Sample Preparation
For the sample preparation for BC measurement, 150 g of commercial dog food or the entire homemade dog food was mixed in a knife mill (Grindomix® GM 200, Retsch, Haan, Germany). Canned dog food was homogenized in five 10-second bursts at a speed of 5000 rotations per minute (rpm), dry dog food in ten 10-second bursts at 5000 rpm, and homemade dog food in three 30-second bursts at 5000 rpm. Different mixing times were necessary to obtain comparable textures across the dog food types. The texture of the dog food was not measured; comparability was judged visually.
Afterwards, 5 g of homogenized food and 15 g of deionized water (B30, Adrona, Riga, Latvia) were weighed (ME4002®, Mettler Toledo, Columbus, USA) and mixed in a 100 mL beaker. The final weight of each sample was 20 g. The remainder of the homogenized food and an aliquot of at least 100 g of unhomogenized dry and wet dog food were frozen for further analyses; homemade dog food was frozen only in homogenized form.
For the measurement of the BC, a 0.16 M HCl solution was needed. Therefore, an ampoule containing 0.1 mol of HCl for the preparation of volumetric solutions (ROTI® VOLUM, Carl Roth GmbH + Co. KG, Karlsruhe, Germany) was diluted with distilled water to a total volume of 625 mL.
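The dilution arithmetic behind this solution can be checked in one line; this is an illustrative sketch, not part of the study's protocol:

```python
def molarity(moles, volume_ml):
    """Concentration in mol/L from an amount of substance and the final volume in mL."""
    return moles / (volume_ml / 1000.0)

# 0.1 mol of HCl brought to a total volume of 625 mL yields the required 0.16 M
conc = molarity(0.1, 625.0)
```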
Measurement of the Buffering Capacity
The measurement of the BC was done by the acid titration method described previously [2]. In brief, aliquots of 0.5-2 mL of 0.16 M HCl were added to the sample until an endpoint of pH < 2 was reached [2]. The pH measurements were carried out with a portable pH meter with a DHS electrode (pH 7®, XS Instruments, Carpi, Italy). The electrode was calibrated at room temperature using a standard buffer solution (Technical Buffer Solution, Mettler Toledo, Greifensee, Switzerland) and had to reach an accuracy of 95 to 105% before the pH was measured.
Before starting the titration, the pH values of the undiluted wet and homemade dog food were measured. Afterwards, the initial pH of the food samples (5 g of food and 15 g of deionized water) of dry, wet, and homemade food was measured. To do so, the samples were stirred with a magnetic stirrer (MR Hei-Standard®, Heidolph Instruments, Schwabach, Germany) for a total of 5 min at 250 rpm. Then, 0.5 mL of 0.16 M HCl was added, and the samples were stirred for 30 s. After each addition of 0.5 mL of 0.16 M HCl, the pH value was measured. This procedure was repeated until the pH of the sample was below 2. To speed up the titration, the amount of 0.16 M HCl added per step was increased to 1 mL once 7 mL had been added cumulatively, and to 2 mL once 30 mL of hydrochloric acid had been added cumulatively.
For quality control, a duplicate sample was measured for each dog food and each pH measurement was performed in triplicate.
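The stepwise addition schedule above (0.5 mL steps to 7 mL cumulative, 1 mL steps to 30 mL, then 2 mL steps) can be sketched as follows. This is a minimal illustration of the schedule only: the real stopping criterion is pH < 2, which is replaced here by an assumed cap on the cumulative volume.

```python
def hcl_addition_schedule(volume_cap_ml=36.0):
    """Cumulative 0.16 M HCl volumes (mL) following the titration schedule:
    0.5 mL steps up to 7 mL cumulative, 1 mL steps up to 30 mL cumulative,
    then 2 mL steps; pH would be measured after every addition."""
    total, volumes = 0.0, []
    while total < volume_cap_ml:
        if total < 7.0:
            total += 0.5
        elif total < 30.0:
            total += 1.0
        else:
            total += 2.0
        volumes.append(round(total, 1))
    return volumes

steps = hcl_addition_schedule()
```

In practice the loop would terminate as soon as the measured pH drops below 2 rather than at a fixed volume.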
Nutrient-Composition Analysis
All food samples were analysed for dry matter (DM), ash, crude protein (CP), ether extract (EE), acid detergent fibre (ADF), and neutral detergent fibre (NDF) according to the guidelines of the Association of German Agricultural Analytic and Research Institutes [16]. The DM concentration was determined by oven-drying the samples at 103 °C for at least 4 h (method 3.1). The ash concentration was analyzed by combustion in a muffle furnace overnight at 580 °C (method 8.1). Ether extract was determined using the Soxhlet extraction system (method 5.1.2) and CP using the Kjeldahl method (method 4.1.1). A Fibretherm FT12 (Gerhardt GmbH and Co. KG, Königswinter, Germany) was used to obtain neutral detergent fibre, assayed with a heat-stable α-amylase and expressed exclusive of residual ash (method 6.5.1). The nonfibre carbohydrates (NFC) were calculated as NFC = 100 − (NDF + CP + EE + ash). The homemade and wet-food samples had to be freeze-dried due to their high water content: the samples were deep-frozen overnight at −20 °C and then dried for 24 h under high-vacuum conditions (Lyovapor L-200, Büchi Labortechnik GmbH, Essen, Germany). After this process, the cooked and wet food samples were also dried overnight at 103 °C.
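The NFC-by-difference calculation is a one-liner; the composition values below are hypothetical, not taken from the analysed foods:

```python
def nonfibre_carbohydrates(ndf, cp, ee, ash):
    """NFC (% of DM) by difference: NFC = 100 - (NDF + CP + EE + ash)."""
    return 100.0 - (ndf + cp + ee + ash)

# hypothetical dry-food composition, all values in % of DM
nfc = nonfibre_carbohydrates(ndf=15.0, cp=28.0, ee=14.0, ash=7.0)  # 36.0
```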
Calculations
All calculations were done with Excel (Microsoft Excel, Microsoft Corporation®, Redmond, USA). First, the average of each triplicate pH measurement was calculated; the same was done for the duplicate of each sample. For further calculations, the mean of each sample and its duplicate was used.
The calculation of the buffering capacity was based on acid titration curves [17]:
Total buffering capacity = total acid added / ∆pH    (1)
Based on the total buffering capacity, the buffering capacity per gram of DM of the dog food was calculated. Likewise, the HCl use per g of DM was calculated from the total HCl use.
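Equation (1) and the per-gram-of-DM scaling can be applied directly to a titration result. A minimal sketch with hypothetical sample values (5 g of food, an assumed 25% DM, titrated from pH 6.8 to 1.9):

```python
def total_buffering_capacity(initial_ph, final_ph, total_acid_ml):
    """Equation (1): total acid added divided by the resulting pH change."""
    return total_acid_ml / (initial_ph - final_ph)

def per_gram_dm(value, sample_g, dm_fraction):
    """Scale a whole-sample value to 'per gram of dry matter'."""
    return value / (sample_g * dm_fraction)

# hypothetical wet-food titration
total_hcl_ml = 16.5  # mL of 0.16 M HCl needed to reach pH < 2
bc_total = total_buffering_capacity(6.8, 1.9, total_hcl_ml)
bc_per_g_dm = per_gram_dm(bc_total, sample_g=5.0, dm_fraction=0.25)
hcl_per_g_dm = per_gram_dm(total_hcl_ml, sample_g=5.0, dm_fraction=0.25)
```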
Statistical Analysis
For the statistical analysis of the data, SAS (version 9.4, SAS Institute, Cary, NC, USA) was used. First, the data were tested for normality using the UNIVARIATE procedure of SAS. The homogeneity of the variances was tested graphically after checking the data for outliers using Cook's D in SAS. Then, an analysis of variance (ANOVA) was performed using the MIXED procedure of SAS. The factor food type was defined as a fixed effect in the model statement and the independent food sample, nested within the food type, as a random effect. The Kenward-Roger method was used to approximate the degrees of freedom. A Tukey adjustment was applied to compare the means.
A multiple regression analysis was performed with the backward elimination procedure to evaluate the influence of different dietary factors on HCl use per g of DM and on BC per g of DM with PROC REG of SAS. The variance inflation factor (VIF) was computed to prevent multicollinearity among the predictors. The fit of the model was assessed using R² and the root mean square error (RMSE). p < 0.05 was considered significant and 0.05 ≤ p < 0.10 a tendency.
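The VIF screening described above (computed in SAS by the authors) can be sketched with numpy. The data below are synthetic, and the VIF cutoff mentioned in the comment is a common rule of thumb, not a value taken from the paper:

```python
import numpy as np

def ols_r2(X, y):
    """R^2 of a least-squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def vif(X):
    """Variance inflation factor of each predictor: 1 / (1 - R^2_j),
    where R^2_j comes from regressing column j on the other columns."""
    return [1.0 / (1.0 - ols_r2(np.delete(X, j, axis=1), X[:, j]))
            for j in range(X.shape[1])]

# synthetic predictors: ash strongly correlated with crude protein
rng = np.random.default_rng(0)
cp = rng.uniform(15.0, 45.0, 30)               # CP, % of DM
ash = 0.2 * cp + rng.normal(0.0, 0.5, 30)      # ash tracking CP
nfc = rng.uniform(20.0, 50.0, 30)              # NFC, independent of both
vifs = vif(np.column_stack([cp, ash, nfc]))
# a VIF well above ~5-10 flags multicollinearity (rule of thumb)
```

In a backward-elimination workflow, a predictor with an inflated VIF would be a candidate for removal before the p-value-based elimination steps.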
Buffering Capacity of Different Feed Types
The results of the ANOVA are shown in Table 2. The undiluted pH of homemade dog food was significantly lower than that of wet food (p < 0.001). The initial pH differed significantly among all three food types (p < 0.001): wet food had the highest initial pH (estimated mean 6.77 ± 0.08), followed by homemade food (6.31 ± 0.08) and dry food (5.62 ± 0.08). Dry food had a significantly higher total BC (8.22 ± 0.34) than wet and homemade food (p < 0.001). Per gram of dry matter, however, wet food had a significantly higher BC/g of DM (2.72 ± 0.14) than the other food types (p < 0.001); dry food had a mean BC/g of DM of 1.83 and homemade food of 1.86. Matching the BC/g of DM result, wet food also required significantly more HCl per gram of dry matter (HCl/g DM; mean 13.20 mL) than the other food types (p < 0.001); dry food had a mean HCl/g of DM of 6.69 mL and homemade food of 8.15 mL. Within a row of Table 2, means with different letters differ according to the Tukey test (p < 0.05). ¹ Undiluted pH is the pH of the food without any addition of water.
Associations between Nutrients and Variables of Buffer Capacity
Linear regression graphs were generated to evaluate the effect of nutrients on the HCl used per gram of DM to reach pH < 2, as well as on the measured BC of the food per gram of DM. Figure 1a-c shows linear, positive associations between the nutrient composition of the dog food and the HCl used per gram of DM. The data in Figure 1a revealed that 83% of the variance in HCl/g of DM could be predicted from the percentage of crude protein in the DM of the dog food. According to this regression analysis, for each 1% CP in the diet, each g of DM of food ingested would require 0.34 mL HCl to reach a pH < 2. Ash (R² = 0.57) was also a good predictor of HCl use per g of DM: for each 1% of ash, 1.16 mL HCl per g of DM of food was needed to reach pH < 2 (Figure 1b). In contrast, the NFC content (R² = 0.60) had a negative effect on the amount of HCl needed (Figure 1c). EE and ADF had no relevant effect on the HCl used per gram of DM (data not shown).
Figure 1d shows a second-degree polynomial regression between the HCl used per gram of DM and the initial pH. The coefficient of determination is 0.63. The initial pH correlated positively with the amount of HCl for wet and homemade food, but less so for dry food; the initial pH of dry food does not seem to affect the amount of HCl per g of DM required to lower the pH below 2.
Figure 2a-c shows the linear regressions between the nutrient composition of the dog food and the BC per gram of DM. Figure 2a shows that 72% of the variance in BC/g of DM could be predicted from the percentage of CP in the DM of the dog food. Increasing the protein content by 1% led to an increase in the BC/g of DM of 0.06. Ash (R² = 0.60) likewise led to an increase in the BC/g of DM of 0.2 per 1% increase in ash content (Figure 2b), whereas the NFC content (R² = 0.50) in the DM of the dog food had a negative effect on the BC/g of DM (Figure 2c).
Figure 2d shows a second-degree polynomial regression between the BC per gram of DM and the initial pH. The coefficient of determination (R²) is 0.40. The initial pH correlated positively with the BC/g of DM of wet and homemade food, but less so for dry food; the initial pH of the dry food does not seem to affect the BC/g of DM.
Figure 3 shows the linear regression between the undiluted pH and the initial pH, compared with the ideal line (y = x). The graph shows the effect of the addition of water to the dog food: the lower the pH of the food, the greater the effect of the added water, as indicated by the increasing distance of the regression line from the ideal line.
Multiple Regression
Figure 3 shows the linear regression between undiluted pH and initial pH, a pared with the ideal line (y = x).The graphic shows the effect of the addition of w the dog food; the lower the pH of the food, the higher the effect of the addition of as indicated by the increasing distance of the regression line from the ideal line.
Mulitple Regression
A multiple regression was performed to evaluate the joint influence of, and discriminate among, the potential factors affecting the HCl use per gram of DM and the BC per gram of DM of the food required to reach a pH < 2. The results of the multiple regression are shown in Tables 3 and 4. The most significant positive influence on the HCl use per gram of DM was the percentage of crude protein (p = 0.0002) in the dog food, followed by the amount of crude ash (p = 0.0591) and the initial pH of the sample (p = 0.0769) (Table 3).
The most significant influence on the BC per gram of DM was the percentage of crude protein (p < 0.001) in the dog food, which positively affected the BC corrected per gram of food DM. In contrast, the DM content of the food (p = 0.0155) and its initial pH (p = 0.0850) both negatively affected the food's BC corrected per gram of DM (Table 4).
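The multiple-regression step can be illustrated with ordinary least squares. The sketch below uses synthetic data only (the study's raw per-sample measurements are not published here), with slopes loosely mimicking the reported per-unit effects of CP and ash on BC/g of DM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative predictors for 30 foods: CP (% DM), ash (% DM), initial pH.
# All values are synthetic; only the slope magnitudes echo the paper.
n = 30
cp = rng.uniform(20, 50, n)
ash = rng.uniform(5, 12, n)
ph0 = rng.uniform(5.0, 7.0, n)

# Synthetic response: 0.06 BC units per % CP, 0.2 per % ash, plus noise.
bc = 0.06 * cp + 0.2 * ash - 0.1 * ph0 + rng.normal(0, 0.05, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), cp, ash, ph0])
coef, *_ = np.linalg.lstsq(X, bc, rcond=None)

intercept, b_cp, b_ash, b_ph = coef
print(b_cp, b_ash)  # recovered slopes land near the planted 0.06 and 0.2
```

With well-separated predictors and modest noise, the fitted coefficients recover the planted slopes closely, which is the sense in which the multiple regression "discriminates among" the factors.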
Discussion
The aim of this study was to analyze the BC and the amount of HCl needed to acidify the food, as an indication of the gastric acidity required by commercial and homemade dog food, in relation to their nutrient compositions. To this end, the buffering capacities of 30 complete dog foods (ten each of commercial dry, commercial wet, and homemade dog food) were measured, and the parameters that influence the buffering capacity were evaluated. To the best of our knowledge, this is the first study to evaluate canine food for BC.
In our study, we observed differences in initial pH, BC/g of DM, and HCl use/g of DM among the food types. Dry food had the lowest initial pH (mean 5.62), followed by homemade food (mean 6.31); wet food had the highest (mean 6.77). A possible explanation for the high initial pH of wet food is its significantly higher CP content compared with dry food; one study found that feedstuffs with a high protein content have an initial pH around neutrality [18]. Homemade food did not differ significantly from dry food in protein content in the DM but had a significantly higher initial pH. Giger-Reverdin emphasized in her study that feedstuffs which retained water had lower initial pH values than feedstuffs of the same protein content with a lower water-holding capacity [18]. The difference between these two food types could also be explained by their different DM contents and by the preservatives or palatants, such as phosphoric acid, citric acid, and mixed tocopherols, that are added to dry food to enhance palatability [19] or inhibit the growth of pathogens [20]. In this context, it is also worth mentioning that a low pH in dog food could increase the risk of caries because it lowers the pH of the saliva [4,12].
There were also differences in BC per gram of DM among the food types. Wet food had the highest BC per gram of DM (mean 2.72), followed by homemade food (mean 1.86); dry food had the lowest (mean 1.83). It is no surprise that wet food had the highest BC per gram of DM, because it also had the highest CP content. There was no significant difference in BC per gram of DM between dry and homemade food, owing to the similar CP content of these two food types.
Wet food required the most HCl per gram of DM (mean 13.2 mL), followed by homemade food (mean 8.15 mL); dry food required the least (mean 6.69 mL). According to our multiple regression, ash (p = 0.059) tended to affect HCl use, and wet food had a significantly higher ash content than the other food types. This finding, combined with the high CP content of wet food, could explain its high HCl use.
For every food type, there was a wide range of BC values within the type. This could be explained by differences in nutrient composition, but individual ingredients could also influence the initial pH, the BC per gram of DM, and the HCl use per gram of DM; measuring the BC of ingredients commonly used in dog food would be necessary to test this. In view of these results, the question arises whether the different food types influence gastric pH in dogs differently. No studies have measured the gastric pH of dogs during digestion while also reporting the BC of the food used. However, our data make it possible to calculate the amount of gastric acid needed, and the time required, to reach a gastric pH below 2. Dogs are estimated to have a postprandial gastric acid secretion of around 1.5 mL per kg body mass per minute ([21], p. 42). For instance, for a 10 kg dog fed a meal of 70 g of food DM, taking the mean acidity values of each food type, wet food would require 924 mL of gastric acid and 62 min to reach a gastric pH under 2; in comparison, homemade food would require 570 mL of gastric acid and 47 min, and dry food 468 mL of gastric acid and 32 min.
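The worked example above can be reproduced from the reported means; a minimal sketch (the 1.5 mL/kg/min secretion rate and the mean HCl demands are taken from the text; the text's rounding of the times may differ slightly from this arithmetic):

```python
SECRETION_RATE = 1.5   # mL gastric acid per kg body mass per minute ([21], p. 42)
BODY_MASS_KG = 10.0    # example dog from the text
MEAL_DM_G = 70.0       # g of food dry matter in the meal

# Mean HCl demand per g of DM for each food type (mL/g, study means).
hcl_per_g_dm = {"wet": 13.2, "homemade": 8.15, "dry": 6.69}

# Total gastric acid needed to bring the whole meal below pH 2.
acid_ml = {food: demand * MEAL_DM_G for food, demand in hcl_per_g_dm.items()}

# Time to secrete that volume, shown for wet food; the other food types
# follow the same formula.
wet_minutes = acid_ml["wet"] / (SECRETION_RATE * BODY_MASS_KG)
print(acid_ml, round(wet_minutes))
```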
It is important to note that the calculated values cannot be applied one to one to the in vivo conditions in the stomach, since gastric pH is influenced by many other factors as well. Nevertheless, this information could be useful when choosing food for dogs with gastric hypoacidity or gastritis, where only a small amount of gastric acid should be needed to reach the appropriate pH in the stomach. The current dietary recommendation for dogs with gastritis is to restrict CP to the minimum requirement in dogs' nutrition, in order to avoid increased gastric juice secretion ([21], pp. 274-275). For dogs with hypoacidity, diets based on easily digestible proteins, with fat as the energy source, are recommended ([21], pp. 274-275).
Significant differences in nutrient composition were found among the food types. Wet food had the highest CP content (mean 44.1% DM), significantly higher than homemade (mean 29.1% DM) and dry food (mean 26.2% DM). The difference in CP content between dry and wet dog food can be explained by the different demands of the manufacturing processes: a high level of starch is needed in dry food to maintain the durability of the kibble [22], whereas the production process of wet food does not require starch, so it mostly contains meat and animal by-products [23]. The lower CP content of homemade food compared with wet food reflects the decision to feed rations with a protein content just sufficient to cover the dogs' CP requirements, meeting the energy requirements mainly via carbohydrates and fats.
The highest EE content was in the wet food (mean 23.9% DM), followed by homemade (mean 18.8% DM) and dry food (mean 12.3% DM). Surprisingly, the measured EE content was on average 13.3% lower than declared by the manufacturers for wet food and 7.9% lower for dry food. These differences could be caused by the varying fat content of the different parts of the carcass used in the dog food.
The different NFC contents among the food types can likewise be explained by the manufacturing process. NFC indicates the amount of cell contents, mostly starch, sugar, and pectin, in a diet, which explains the high NFC of dry food (mean 39.9% DM) and homemade food (mean 36.3% DM).
In our study, wet food had a mean ash content of 10.4% DM, dry food 7.25% DM, and homemade food 6.3% DM. Interestingly, the measured ash content of wet food was on average 8.3% higher than declared by the manufacturers. These results are quite similar to another study, which measured the average ash content of dry food at 7.34% DM and of wet food at 10.1% DM [24]. The high ash content of wet food in particular could be a risk factor for gastric dilatation-volvulus syndrome, because it could lead to an insufficient pH drop in the stomach, which promotes the growth of gas-producing bacteria [25]. A possible explanation for the high ash content of wet food is higher concentrations of Na and K [23]: NaCl is often added to wet food to increase acceptance and palatability, and Na alginate, K alginate, or K carrageenan are added as gelling agents and thickeners [23]. Yet, as observed in this study, ash significantly increases the BC of wet food; our data therefore suggest that the use of gelling agents in wet food needs additional evaluation.
Regression analyses were used to identify factors influencing the BC/g of DM and the HCl use/g of DM. In our study, the CP content was the most important factor influencing the buffering capacity of dog food: in the multiple regression, CP had a strong, significant positive influence on both the BC/g of DM and the HCl use/g of DM (both p < 0.001). Unfortunately, we have no information on the exact protein composition of the tested foods. Other studies have already shown that different protein sources (e.g., plant-based or of animal origin) and different parts of the carcass (e.g., chicken skin or meat) have different BC values [9,26]. Other studies have likewise described protein as one of the main factors affecting the BC of food. For example, Mennah-Govela et al. measured the BC of thirty commercially available, ready-to-eat food products (e.g., milk, canned chicken, and tofu) and found that the protein content correlated with the total BC (R² = 0.67) and the total acid added (R² = 0.82) [2]. These results are similar to ours, where the crude protein content correlated with the BC/g of DM (R² = 0.72) and the HCl/g of DM (R² = 0.83). Another study, which measured the BC of feedstuffs commonly used in ruminant diets, concluded that the BC was high when the crude protein was high (>15%) and decreased as the protein content decreased [27]. In our study, wet food had the highest crude protein content in DM (44.1%, compared with 26.2% for dry and 29.1% for homemade food), and its BC/g of DM and HCl use/g of DM were also significantly higher than those of the other food types. Further studies that analysed the BC of common pig and poultry feed ingredients also concluded that the protein content is an important factor [3,28].
The ash content of the food tended to affect the amount of HCl used/g of DM (p = 0.0591), but a significant effect of ash on the BC/g of DM was not confirmed in the multiple regression. In the simple linear regressions, R² was 0.60 between the BC/g of DM and ash, and 0.57 between the HCl use/g of DM and ash. Jasaitis et al. analysed the BCs of fifty-two feeds representing common ingredients of ruminant diets and found a significant correlation between BC and ash content (p < 0.001) [29]. The difference between the results of the two studies could be due to a different cation-anion ratio in the ash of dog food versus ruminant feed; anions also interact differently with different minerals, so different mineral compositions could likewise explain the different results [7].
The initial pH only tended to influence the responses, negatively for the BC/g of DM (p = 0.0850) and positively for the HCl use/g of DM (p = 0.0769). This contrasts with another study, in which the initial pH had a significant negative influence on the BC (p < 0.05) but no significant influence on the total acid added (p > 0.05) [2].
Dry food had a mean DM content of 90.6%, higher than wet food (22.7%) and homemade food (26.4%). The uncorrected BC of dry food (8.22) was also higher than that of wet food (3.02) and homemade food (2.40). These values are explained by the measurement and calculation method: for the BC analysis, 5 g samples of each food were used, and the DM content of each food was not taken into account. When the BC was corrected for DM, however, wet food (mean 2.72) had a significantly higher BC/g of DM than dry food (1.83) and homemade food (1.86). This shows why it is so important to compare different feeds on a dry-matter basis; otherwise, the results would be misinterpreted. Moreover, the multiple regression showed a significant negative influence (p < 0.05) of the DM content of the tested food on the BC/g of DM, meaning that the more water the food contains per gram of DM, the higher its BC/g of DM.
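The DM correction can be reproduced approximately from the reported group means; a minimal sketch (the 5 g as-fed sample size is taken from the Methods; small deviations from the published per-gram values are expected because group means rather than per-sample data are used here):

```python
SAMPLE_G = 5.0  # g of food (as fed) titrated per BC measurement

# (uncorrected BC, DM fraction): study means per food type.
foods = {
    "dry":      (8.22, 0.906),
    "wet":      (3.02, 0.227),
    "homemade": (2.40, 0.264),
}

# BC per gram of DM = measured BC divided by the grams of DM in the sample.
bc_per_g_dm = {name: bc / (SAMPLE_G * dm) for name, (bc, dm) in foods.items()}
print(bc_per_g_dm)
```

Computed this way, the ranking flips exactly as the text describes: dry food has the highest uncorrected BC but the lowest BC/g of DM, and wet food the reverse.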
According to the research by Mennah-Govela et al., particle size had a significant influence for foods with a protein content above 19% [2], a CP content relevant to dog food; a smaller particle size resulted in a higher buffering capacity [2]. This effect could not be evaluated in our study, because the samples were blended to obtain a comparable texture across the different dog-food types. However, particle size could matter for the in vivo BC of dog food: kibble size differed among dry-food brands, and in homemade diets it could matter whether minced meat or meat cut into pieces is fed. The possible influence of particle size on the BC of different dog-food types needs further research.
Conclusions
In our study, the CP concentration of the food was the most important factor influencing the BC and HCl use, whereas the initial pH had only a weak influence. A possible influence of the particle size of dog food on the BC needs further research. The BC can be estimated from the protein and ash content of the food using the equations developed in this study; a high protein and ash content indicates a high BC. However, more research on the influence of dog-food BC on gastric pH in vivo is needed. In conclusion, the BC of food is a potentially useful parameter, as it can provide important information about the effect of food on digestion and can help in selecting the right diet for a dog's gastric digestion and health condition.
Figure 1.
Figure 1. Effect of crude protein content, ash content, NFC content, and initial pH on the amount of HCl per gram of DM of wet, dry, and homemade food required to lower the pH < 2; (a) CP in % of DM; (b) ash in % of DM; (c) NFC in % of DM; (d) initial pH.
Figure 2.
Figure 2. Effect of crude protein content, ash content, NFC content, and initial pH on buffering capacity per gram of DM of wet, dry, and homemade food; (a) crude protein in % of DM; (b) ash in % of DM; (c) NFC in % of DM; (d) initial pH.
Figure 3.
Figure 3. Relationship between the undiluted pH of wet and homemade food and its initial pH (after dilution with distilled water). The bold line indicates the ideal line of this relationship (y = x).
Table 1 .
Nutrient composition of the different dog food types tested in this study [% DM unless otherwise stated]. Within a row, means with different letters differ according to the Tukey test (p < 0.05).
Table 2 .
Undiluted and initial pH, and buffering capacity of different dog food types.
Table 3 .
Effects of several dietary factors on the amount of HCl/g of DM, as evaluated by multiple regression analysis.
Table 4 .
Effects of several dietary factors on BC/g of DM as evaluated by multiple regression analysis.
"year": 2023,
"sha1": "df9e409a64e1332bd60f06482cd7a362720a8151",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/13/23/3662/pdf?version=1701137592",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8106d0e6cc4944c28f91796783c5afe117c31aad",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Effects of ascorbic acid and erythorbic acid on melanosis and quality in different shrimp species
Introduction
Shrimps are easily degradable products due to microbial spoilage and melanosis (Martinez-Alvarez et al., 2005). After fishing, colour changes occur in the shell segments of the shrimps, especially in the carapace, under the effect of environmental factors (sun, temperature, etc.). Besides environmental factors, late removal of the head after catching and absent or insufficient cooling of the material contribute to this formation. This colour change is called "melanosis" or "blackspot" (Erkan et al., 2007) and is one of the most important problems of the shrimp industry. In the formation of melanosis, phenols are oxidized to quinones by the enzyme polyphenol oxidase; this is followed by non-enzymatic polymerization of the quinones, which produces high-molecular-weight, dark or black pigments (Montero et al., 2005). Although these pigments are not hazardous to human health, they are not preferred by consumers because they cause a bad appearance (Montero et al., 2004). Researchers have conducted various studies to prevent melanosis with different inhibitors. Sulphites have been used as the major inhibitors of melanosis worldwide; however, because they frequently cause allergic reactions and health problems in humans, natural alternatives to the chemical compounds used to prevent melanosis are being investigated (Benjakul et al., 2006).
Melanosis inhibitors are grouped according to their mode of action: acidifiers, chelating agents, reducing agents, and enzyme inhibitors. Ascorbic acid acts as an oxygen scavenger, reducing molecular oxygen; its inhibition mechanism is the reduction of ortho-quinones to diphenols, and it further delays blackening by oxidizing to dehydroascorbic acid (Golan-Goldhirsh et al., 1984). Erythorbic acid is the stereoisomer of L-ascorbic acid and is used as an antioxidant in various processed foods (Clark et al., 2009). Erythorbic acid was found to be effective in preventing the blackening of apple slices when used with 1% citric acid (Sapers & Ziolkowski, 1987). There are studies on the use of erythorbic acid on fruits and vegetables to prevent browning (Sapers & Ziolkowski, 1987; Sapers et al., 1989; Sapers et al., 1990; Osuga et al., 1994; Santerre et al., 1988), but there have been no studies on its use in shrimps.
On the other hand, the development of melanosis in shrimps is reported to differ between species; this difference is attributed to substrate level, enzyme concentration, and enzyme activity (Montero et al., 2001; Simpson et al., 1987). The severity of melanosis formation in crustaceans varies with species due to differences in substrate and enzyme concentration (Benjakul et al., 2005; Nirmal & Benjakul, 2012). Therefore, this study aimed to investigate the effect of ascorbic acid and erythorbic acid on the development of melanosis in different shrimp species. For this purpose, combinations of these reducing agents with sulphite, known as the best melanosis-inhibiting agent, were tested alongside sulphite alone.
Material
In this study, shrimps (Aristaeomorpha foliacea, Plesionika edwardsi and Melicertus hathor) caught from the Gulf of Antalya, Turkey, were used as material. The shrimps were obtained directly from fishermen immediately after catching. A. foliacea and P. edwardsi were caught with commercial trawlers, with trawl shots made at depths of 200-400 m; M. hathor was caught with shrimp nets at depths of 10-50 m. Twenty kg of each shrimp species was obtained. The average carapace lengths of A. foliacea, P. edwardsi and M. hathor were 46.31 mm, 18.24 mm and 35.2 mm, respectively. The shrimps were transferred to the laboratory in a cold carrying bag with crushed ice immediately after landing.
Treatments
Upon arrival at the laboratory, the shrimps were divided into 10 groups, and nine different solutions were prepared; the concentrations of the solutions were determined in preliminary experiments. After preparation of the solutions, the shrimps were immersed in them at 15°C (1:2, shrimp to solution) for 5 min. The treated shrimps were drained on paper towel for 5 minutes, then placed on styrofoam plates and stored at +4°C. Melanosis development was examined at 24-hour intervals during storage; the L*, a*, b* colour values of the same samples were measured and quality control analyses were performed every 24 hours.
Melanosis measurement
The development of melanosis was evaluated by five experienced panellists using the scale developed by Otwell and Marshall (1986). The panel comprised staff members of the Fisheries Faculty (three females and two males), aged between 25 and 50, who had experience in evaluating shrimp quality and were accustomed to consuming shrimp. Shrimps dipped in solutions containing antimelanotic agents were evaluated daily by the panellists, and each sample was coded with random letters before the panel started.
The values on the scale developed by Otwell and Marshall (1986) are expressed as follows: 0 = absent; 2 = slight, noticeable on some shrimp; 4 = slight, noticeable on most shrimp; 6 = moderate, noticeable on most shrimp; 8 = heavy, noticeable on most shrimp; 10 = heavy, totally unacceptable.
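For record-keeping, the scale maps naturally onto a small lookup table; a trivial sketch (the score descriptions are copied from the scale above; the averaging helper is illustrative, not part of the published method):

```python
# Otwell and Marshall (1986) melanosis scale as a lookup table.
MELANOSIS_SCALE = {
    0: "absent",
    2: "slight, noticeable on some shrimp",
    4: "slight, noticeable on most shrimp",
    6: "moderate, noticeable on most shrimp",
    8: "heavy, noticeable on most shrimp",
    10: "heavy, totally unacceptable",
}

def mean_panel_score(scores):
    """Average the scores given by the panellists for one sample."""
    for s in scores:
        if s not in MELANOSIS_SCALE:
            raise ValueError(f"score {s} is not on the scale")
    return sum(scores) / len(scores)

print(mean_panel_score([2, 4, 4, 2, 4]))
```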
Total volatile basic nitrogen (TVB-N)
A 10 g homogenized sample was placed in the flask, and 1 g of magnesium oxide and 1-2 drops of silicone defoamer were added. The samples were distilled and the distillate was collected in a flask containing 10 mL of 0.1 N HCl. After distillation, the content was titrated with 0.1 N NaOH using Tashiro indicator (Schormüller, 1968).
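With the 10 g sample and 0.1 N reagents stated above, the back-titration converts to TVB-N in mg of nitrogen per 100 g by the standard relation; a hedged sketch (no blank correction is applied, since none is specified in the text):

```python
# Sketch of the TVB-N calculation for the back-titration described above.
# Assumes the 10 mL of 0.1 N HCl in the receiver flask and the 10 g sample
# stated in the text; blank correction omitted (not specified in the study).

def tvbn_mg_per_100g(naoh_ml, hcl_ml=10.0, normality=0.1, sample_g=10.0):
    """TVB-N from the NaOH volume needed to titrate the unreacted HCl."""
    bound_hcl_ml = hcl_ml - naoh_ml           # HCl neutralised by volatile bases
    mg_n = bound_hcl_ml * normality * 14.007  # mg of nitrogen trapped
    return mg_n * 100.0 / sample_g            # express per 100 g of sample

print(tvbn_mg_per_100g(8.6))  # e.g. 8.6 mL of 0.1 N NaOH consumed
```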
Trimethylamine nitrogen (TMA-N)
A 10 g sample was blended with 90 mL of 5% trichloroacetic acid (TCA) using an Ultra-Turrax homogenizer (IKA Labortechnik, Staufen, Germany) and filtered. A 4 mL aliquot was transferred into test tubes, and 1 mL of formaldehyde (20%), 10 mL of anhydrous toluene and 3 mL of KOH (50%) solution were added. The tubes were shaken, and a 5 mL toluene layer was pipetted off, to which 5 mL of picric acid (0.02%) was added. The supernatant was then transferred to a spectrophotometric cell, and the absorbance at 410 nm was measured with a UV-Vis spectrophotometer (Shimadzu UV-160A). In parallel, a series of standards was prepared and measured (Schormüller, 1968).
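Quantification from the 410 nm readings relies on the standard series mentioned above; a minimal sketch of a linear standard-curve fit and interpolation (the standard concentrations and absorbances below are illustrative, not the study's data):

```python
import numpy as np

# Illustrative TMA-N standard series: concentration (mg N) vs absorbance
# at 410 nm. The real standards and readings are not given in the text.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absorbance = np.array([0.00, 0.11, 0.22, 0.44, 0.88])

# Fit a straight line through the standards (Beer-Lambert behaviour assumed).
slope, intercept = np.polyfit(absorbance, conc, 1)

def tma_n(sample_abs):
    """Convert a sample absorbance to mg TMA-N via the standard curve."""
    return slope * sample_abs + intercept

print(tma_n(0.33))
```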
Colour measurements
Colour values were measured using a CR-400 Minolta Chroma-meter (Minolta, Osaka, Japan). Before use, the device was calibrated with a white standard magnesium oxide plate. Colour was measured at 3 different parts of each shrimp (carapace, body and tail) and the results are given as mean values. The L* (brightness), a* (redness) and b* (yellowness) values were measured for all samples.
Statistical analysis
The analyses of the homogenized samples were carried out in duplicate and the experiments were performed with two replications. Analysis of variance was applied to the results obtained from the trial plan, the different treatments were compared with Duncan's multiple comparison test, and the results were evaluated statistically (Sokal & Rohlf, 2012).
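The omnibus ANOVA step can be sketched with SciPy; note that Duncan's multiple range test is not available in SciPy, so only the one-way ANOVA is shown here, and the group values are invented for illustration:

```python
from scipy import stats

# Hypothetical melanosis scores for three treatment groups (illustrative only)
control = [8.0, 8.4, 7.9, 8.2]
sulphite = [2.1, 2.4, 2.0, 2.2]
combo = [2.0, 2.2, 1.9, 2.1]

# One-way ANOVA across the treatment groups
f_stat, p_value = stats.f_oneway(control, sulphite, combo)
if p_value < 0.01:
    # A post hoc multiple-comparison test (Duncan's in the paper, not
    # sketched here) would then locate which groups differ.
    pass
```

With clearly separated group means such as these, the ANOVA rejects the null hypothesis of equal means at p < 0.01.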
Development of melanosis
Melanosis values increased significantly (p<0.01) with storage time and reached the highest value on the fourth day. There was a significant difference (p<0.01) between the treatment groups in terms of melanosis values (Table 1).
The combination of ascorbic acid or erythorbic acid with metabisulphite was found to be more effective than either agent applied alone, while the sulphite combinations had effects similar to sulphite alone. Melanosis scores were higher in the control group than in the other groups. When used alone, erythorbic acid and ascorbic acid were not very effective in preventing melanosis in shrimps; however, they were effective when used together with sulphite. When erythorbic acid and ascorbic acid were compared in terms of efficacy, erythorbic acid was the more effective of the two. Concentration also had a significant effect on preventing melanosis development: the 4% concentration of the reducing agents was more effective than 2%. When the shrimp species were compared, the lowest melanosis scores were determined for P. edwardsi (p<0.01), followed by A. foliacea and M. hathor.
Table 1 Effects of treatment methods, shrimp species and storage time interaction on melanosis scores in shrimps 1-2 . 1 Means within the same factor and the same column with different letters (a, b, c, d) are different (p<0.01). 2 Each number represents the average value of each parameter for all samples of the same treatment. 3 E2 = Erythorbic acid (2%); E4 = Erythorbic acid (4%); A2 = Ascorbic acid (2%); A4 = Ascorbic acid (4%); S = Sodium metabisulphite; E2S = E2 + S; E4S = E4 + S; A2S = A2 + S; A4S = A4 + S; C = Control. 4 SE = Standard error. 5 Each number represents the average value of each parameter for all samples of the same shrimp species. 6 Each number represents the average value of each parameter for all samples with the same storage time.
Quality changes
TVB-N values increased with increasing storage time (p<0.01) and reached the highest value on day 4 (Table 2). No significant difference was found between the groups in terms of TVB-N values (p>0.01); since all groups and all storage days were evaluated together in the variance analysis, the differences between treatments were insignificant. On the last day of storage, lower TVB-N results were obtained with the use of E2, sulphite and the sulphite combinations for A. foliacea; A4, A2S and A4S for P. edwardsi; and A2 and A4 for M. hathor. When the shrimp species were compared, the highest TVB-N value was determined for P. edwardsi, while the TVB-N values of A. foliacea and M. hathor were lower (p<0.01). No significant difference was observed between the TVB-N values of A. foliacea and M. hathor. Notably, the TVB-N values of P. edwardsi exceeded the consumption limit value on the 4th day of storage.
The TMA-N value of the control group was significantly higher than those of the treatment groups (p<0.01), while there was no significant difference in TMA-N content between the treatment groups (Table 2). On the last day of storage, the A2, A4, A2S and A4S treatments resulted in lower TMA-N values in A. foliacea, while the E2S, E4S, A2S and A4S treatments for P. edwardsi and the E4S and A4S treatments for M. hathor were found to be more effective. When the shrimp species were compared, the lowest (p<0.01) TMA-N values were determined in M. hathor, and the TMA-N values of A. foliacea and P. edwardsi were higher (p<0.01). No significant difference was observed between the TMA-N values of A. foliacea and P. edwardsi (p>0.01).
In this study, the colour values (L*, a*, b*) of the shrimp samples were measured. There was no significant difference between the groups (p>0.01) in terms of L* values, which indicate brightness or lightness (Table 2). The a* values were lowest (p<0.01) in the control group shrimps, whereas higher values were found in the other treatment groups (Table 2). The highest b* values were observed in the groups where ascorbic acid and erythorbic acid were used alone, and the lowest b* values were determined in the metabisulphite-treated group (Table 2). When the shrimp species were compared, the highest a* and b* values (p<0.01) were found in A. foliacea, while the highest L* value was found in P. edwardsi; the lowest a* and b* values (p<0.01) were found in M. hathor, and the lowest L* values in A. foliacea. The L* and b* values showed a significant increase (p<0.01) with storage time and reached their highest values on the 4th day. In contrast, the a* values decreased with storage time and showed their lowest values on the 2nd and 4th days.
Discussion
Metabisulphite was determined to be the most effective treatment for inhibiting melanosis. Bisulphites act as competitive inhibitors by binding the sulfhydryl groups at the active site of the enzyme polyphenol oxidase. In addition, bisulphite inhibition depends on the reaction of sulphites with quinones, resulting in the formation of irreversibly inhibited sulphoquinone forms of polyphenol oxidase (Kim et al., 2000). This explains why the metabisulphite treatment performed best.
Reducing agents such as ascorbic acid and erythorbic acid are reported to be the best alternatives to sulphite. They have been used to prevent blackening in vegetables and fruits. Sliced oyster mushrooms treated with the chemical preservatives sodium erythorbate and citric acid and stored in MAP at 2°C showed delayed loss of firmness, weight loss and colour change (Ventura-Aguilar et al., 2017). Ascorbic acid (1 to 1.5%) was found to be very effective in considerably reducing enzymatic browning in apple slices (El-Shimi, 1993). Sodium erythorbate and its combinations with sodium acid sulphate and citric acid were the most effective in inhibiting browning in sliced potato (Mosneaguta et al., 2012). To our knowledge, this is the first time these reducing agents have been used on shrimp; for this reason, the results could not be compared with those of previous studies. In our study, ascorbic acid and erythorbic acid were effective when combined with sodium metabisulphite. This result suggests that the use of these reducing agents can reduce the need for sulphite. The concentrations of ascorbic acid and erythorbic acid were also effective in preventing melanosis. Further studies may produce better results with different concentrations.
Melanosis formation varies according to species. Factors such as the moulting cycle, harvesting, transport, and capture methods stimulate the defence mechanism and promote the formation of melanosis (Gonçalves & Oliveira, 2016). Moreover, it has been stated that in crustaceans the intensity of melanosis, the point of onset and the rate of spread differ among species. Some authors attribute this difference to differences in substrate and enzyme concentrations (Benjakul et al., 2005; Nirmal and Benjakul, 2012). In some species, PPO activity is faster than in others. PPO activity in deepwater pink shrimp was found to be faster than in white shrimp, and melanosis spread was slower in black tiger shrimp (Montero et al., 2001). The same authors reported that this difference in melanosis development may be due to habitat differences. In our study, melanosis in P. edwardsi developed very little; in some treatment groups, no melanosis was observed at all until the 2nd day. Plesionika edwardsi is a marine species with a wide distribution at low latitudes, found at depths between 54 and 700 m. The P. edwardsi used in our research were caught by trawl at a depth of 200-400 m. On the other hand, the highest melanosis scores were observed in M. hathor. In our study, M. hathor were caught off the Aksu stream and Beşgöz creek in the Gulf of Antalya at depths of 10-50 m. Although studies have been conducted on the prevention of melanosis in other shrimp species, no such studies have been done for the three species (Aristaeomorpha foliacea, Plesionika edwardsi and Melicertus hathor) used in our study. Therefore, our study provides information on melanosis development in these shrimp species and sheds light on future studies with them.
In a previous study, it was reported that treatment with 50 g/kg sulphite together with citric acid and chelates inhibited melanosis in shrimp (Parapenaeus longirostris) for at least one week during cold storage. In another study, farmed tiger prawns (Marsupenaeus japonicus) were treated with 4-hexylresorcinol (0.1% and 0.05%) in combination with organic acids (citric, ascorbic, and acetic) and the chelating agents EDTA (ethylenediaminetetraacetic acid) and disodium dihydrogen pyrophosphate. Prawns with no additive and prawns treated with a commercial sulphite formulation were used as controls. At the end of the study, it was found that prawns treated with the sulphite-based formula presented the lowest melanosis scores for up to 8 days (Martinez-Alvarez et al., 2005). The results of our study differed, with shorter acceptability times. The differences between our results and those in the literature may be due to differences in the shrimp species and antimelanotic agents used, as well as to differences in the perception of the panellists in the sensory analysis.
TVB-N is one of the most commonly used chemical indicators for determining the quality of seafood products. High TVB-N values (mg per 100 g meat) were reported to occur at an advanced stage of deterioration in fresh and frozen seafood (Ludorf & Meyer, 1973). The highest acceptable amount of TVB-N for shrimp has been reported as 30 mg per 100 g (Shamshad et al., 1990; Mendes et al., 2005). According to the results of our study, the TVB-N limit values were not exceeded on day 0 and day 2 in any species, but were exceeded in some species and some treatment groups on day 4. Measurement of TMA-N content is important because it indicates the level of microbial degradation in fresh seafood products. The acceptability limit of TMA-N in shrimp was reported to be 5 mg per 100 g (Shamshad et al., 1990; Cobb et al., 1973; Zeng et al., 2005; Okpala, 2004), although some researchers have suggested different limits for different shrimp species. The TMA-N values of the treatment groups did not exceed the limit values, except at the end of storage for some treatments. The higher TVB-N and TMA-N values in the control group compared to the treatment groups show that ascorbic acid and erythorbic acid are effective in maintaining shrimp quality. In a study in which organic acids were applied to shrimp (Penaeus japonicus), the initial TVB-N value of 18.29 mg per 100 g exceeded the limit value in the control and acetic acid treated groups during storage at +4°C. In another study, the TVB-N value of shrimp (Palaemon adspersus) was determined to be 44.64 mg per 100 g at the end of the 5th day of cold storage, and it was reported that the raw shrimp could be stored for 2 days under refrigeration (Erdem & Bilgin, 2004). In deepwater pink shrimp (Parapenaeus longirostris) and narwal shrimp (Parapandalus narval), the initial TVB-N value of 29 mg per 100 g reached 35.09 mg per 100 g and 35.85 mg per 100 g, respectively, at the end of storage.
It is thought that in our study the low initial TVB-N content and the differences with other studies may be caused by shrimp type, shrimp catching method and catching area, antimelanotic agent type, applied concentration and application method.
In the colour analysis, an increase in L* indicates an increase in brightness or whiteness, while a decrease in L* indicates an increase in darkness; the a* value indicates redness and the b* value indicates yellowness. The low a* value in the control group is thought to be due to the bleaching effect of the immersion solutions. The lowest b* value, determined in the sulphite-treated group, means that the addition of sodium metabisulphite decreases yellowness. The lowest a* value was determined for M. hathor and the highest a* value for A. foliacea, reflecting the intense red colour of A. foliacea, called "red shrimp", and the pale appearance of M. hathor. The development of melanosis caused a decrease in the L* value (brightness) and a decrease in the a* value (redness) of the shrimp.
Conclusions
Although the antimelanotic effects of erythorbic acid and ascorbic acid were not as strong as that of sodium metabisulphite, they were as effective as sulphite alone when combined with sodium metabisulphite. It has also been shown that the use of these reducing agents (ascorbic and erythorbic acid) can reduce the need for sodium metabisulphite. Ascorbic acid, erythorbic acid and these shrimp species were used for the first time in this study. These results not only identify ascorbic and erythorbic acids as new melanosis inhibitors for shrimp, but also provide information on the development of melanosis in these shrimp species.
Coevolution of COVID-19 research and China’s policies
Background In the era of evidence-based policy-making (EBPM), scientific outputs and public policy should engage with each other in a more interactive and coherent way. Notably, this is becoming increasingly critical in preparing for public health emergencies. Methods To explore the coevolution dynamics between science and policy (SAP), this study explored the changes in, and development of, COVID-19 research in the early period of the COVID-19 outbreak in China, from 30 December 2019 to 26 June 2020. In this study, VOSviewer was adopted to calculate the link strength of items extracted from scientific publications, and machine learning clustering analysis of scientific publications was carried out to explore dynamic trends in scientific research. Trends in relevant policies that corresponded to changing trends in scientific research were then traced. Results The study observes a salient change in research content as follows: an earlier focus on “children and pregnant patients”, “common symptoms”, “nucleic acid test”, and “non-Chinese medicine” was gradually replaced with a focus on “aged patients”, “pregnant patients”, “severe symptoms and asymptomatic infection”, “antibody assay”, and “Chinese medicine”. “Mental health” is persistent throughout China’s COVID-19 research. Further, our research reveals a correlation between the evolution of COVID-19 policies and the dynamic development of COVID-19 research. The average issuance time of relevant COVID-19 policies in China is 8.36 days after the launching of related research. Conclusions In the early stage of the outbreak in China, the formulation of research-driven-COVID-19 policies and related scientific research followed a similar dynamic trend, which is clearly a manifestation of a coevolution model (CEM). The results of this study apply more broadly to the formulation of policies during public health emergencies, and provide the foundation for future EBPM research. 
Supplementary Information The online version contains supplementary material available at 10.1186/s12961-021-00770-6.
us through unprecedented times. 1 Moreover, scientific research paves the way for effective policy-making as well. Policy acts as a crucial tool and priority behaviour [28] for social public management. This fact partially explains the explosion of papers during the COVID-19 pandemic. In the face of a public health emergency, policies need to coordinate efforts to combat COVID-19; thus scientific research is included. Consequently, the dynamics between science and policy (SAP) can be understood using a coevolution model (CEM).
The concept of coevolution was originally proposed by Ehrlich and Raven [18]. Four policy-making models were proposed by Zwanenburg and Millstone [92], among which CEM is the only nonlinear model. The CEM reflects two-way feedback and continuous adjustment between SAP until a symbiotic equilibrium is reached under the influence of social and other factors, rather than simply a two-way influence. The CEM is regarded as the best model for effectively capturing the interactions between SAP-making [17,22,73]. Policies provide application-oriented research directions for science [13], accelerate the utilization of discoveries [57], and facilitate optimal resource allocation. The policy aids in constructing a denoising mechanism under the CEM to constantly screen for more appropriate scientific evidence to adopt. A new perspective on the relationship between SAP that differs from a more subjective model, the technocratic and decisionist models, has been shaped under CEM [53,55]. Adjusting scientific evidence for policy-making under CEM is consistent with the goal of achieving evidence-based policy-making (EBPM). EBPM involves the design of policy based on evidence, which embeds scientific evidence throughout the process from policy formulation to evaluation to ensure that policies are scientific, effective, and reasonable [40,64]. Thus, the dynamics between SAP under CEM meet the goal of EBPM effort [53] and provide a starting point to deeply explore EBPM [85].
One of the main challenges currently encountered during the construction of evidence-based policy is how to effectively transfer scientific evidence in policy-making [4,32,64]. The final quality of policy is determined based on the efficiency of elaborately refined scientific evidence operation in all processes constituting policy-making. Scientific evidence adopted in policy is invisible and unavailable to summarize at length to most people other than policy-maker themselves, which increases the difficulty in measuring the efficiency of scientific evidence utilization. Therefore, most of the current studies on the efficiency of scientific evidence operation dwell on the theoretical level [78,87]. Major trends of scientific research and the corresponding policy changes in the early phase of COVID-19 were concluded, laying the basis for capturing the traces of scientific evidence in other policies without information source. It is also one of the innovative points of this study.
China was selected as a representative case for investigation for the following reasons. First, China was among the first countries to identify the COVID-19 epidemic. For instance, Weible et al. [77] reported that countries with early outbreaks, such as China and Italy, provided an opportunity for other countries to detect the pandemic and assess early policy responses. Second, China grounds its policy-making process in scientific research on COVID-19, but relevant research for mainland China is lacking. Professor Gao Fu, the Director-General of the Chinese Center for Disease Control and Prevention, stated at the seventh academic conference of the academic divisions of the Chinese Academy of Sciences that the main objective of scientific research in the early period of the COVID-19 outbreak in China was to offer a more reasonable reference and judgement for policy-making. Yin et al. [85] recently published an article in Science revealing the coevolution between SAP-making during the COVID-19 pandemic in 114 countries other than mainland China. Atkinson et al. [3] considered the dynamics of the United Kingdom policy response to the COVID-19 pandemic and explored how COVID-19 policy-making shares links with scientific research in the United Kingdom in order to capture real-time information. Therefore, the current study is complementary to CEM studies on COVID-19. Third, this study provides a way to measure the dynamics between policy and scientific research when the policies lack references to the original scientific findings. Unlike other countries or regions, most policies in China do not publish reference sources and are not indexed in databases such as Overton, increasing the difficulty encountered in determining the association of Chinese policies with other information.
To investigate the relationship between SAP without reference sources, dynamic trends in scientific research were analysed and identified via machine learning clustering analysis of scientific publications. Trends in relevant policies that correspond to changing trends in scientific research were then traced. This method is in contrast to the use of Altmetric data [33], analogy of numbers between publication and citation [73,85], and field investigation of policy-making processes in departments involved in policy-making [41,86] to analyse the policy-science evolutionary relationship. There are two other innovative points of this study.
The time scope of this research is limited to the early stage of COVID-19 in China. This was a period of high uncertainty. With increasing awareness about the virus, scientific research, medical defence and control, and policy-making all needed to continue making ongoing adjustments. With the deep insight and more focus on the virus in the later stage, the need to update scientific research and policy-making decreased gradually (please refer to Additional file 1: Table S1), and thus the interactions between SAP were less frequent and evident than those at the earlier stage. The white paper "Fighting Covid-19: China in Action" issued by the State Council of China reported that nationwide virus control was then being conducted on an ongoing basis from 29 April 2020. However, sporadic cases such as the epidemic outbreak in Beijing in May 2020 had been reported in mainland China. Consequently, the time scope of research cut-off was before 26 June 2020, roughly half a year after China's official disclosure of COVID-19. Second, the earlier the epidemic is brought under control, the fewer losses society must suffer. The outcomes of prevention and control of the epidemic in the early phase played a more vital role in its overall development than in other stages, making this study informative for early prevention and control of emergencies.
The developmental trends of China's COVID-19 research were analysed from the perspective of bibliometrics. Scientific research is a process of problem-solving and resolving disputes [21,36]; therefore, trends of scientific research indirectly reflect whether a scientific consensus has been reached or whether a solution has been developed for a particular problem. Such connections can be revealed when one studies the length of time for which the particular topics remain popular. To clarify the dynamic trends of research, a total of 16 statistically valid time intervals were used. This study focused on the period from 30 December 2019 2 (the initial announcement of the pandemic) to 26 June 2020, corresponding to a total of 18 intervals. Since the first COVID-related publication from China in PubMed was published on 24 January 2020, the total number of valid intervals is 16 (Additional file 1: Table S2). By calculating the variations in the co-occurrence of items over a range of time intervals, the research confirmed the items with a significant change and conducted a cluster analysis. It was found that in the later stage of the epidemic in China, the trend of research on "children and pregnant patients", "common symptoms", "nucleic acid test", and "non-Chinese medicine" began to decline. In contrast, research on "aged patients", "severe symptoms and asymptomatic infection", "antibody assay", and "Chinese medicine" began to rise. Mental health is a long-term "hot" issue in China's COVID-19 research. The formulation of relevant COVID-19 policies in China is constantly evolving, and this is partially in response to these dynamic variations in COVID-19 research. In the early stage of the outbreak in China, the formulation of COVID-19 policies followed a rapidly progressing CEM. It also signifies China's efforts to build EBPM during public health emergencies.
Data collection
The research data for this study were derived from PubMed. The alternate names of COVID-19 provided by the Dimension database 3 were used as search terms. 4 The types of literature to be searched were limited to articles and reviews. As of 26 June 2020, the PubMed database had collected a total of 16 739 publications on COVID-19 research, among which 3708 papers were from China. The scheme used in this study adopted an expanding overlapping aggregation/overlapping time series approach [9, 39, 43]. A total of 16 statistically valid time intervals were used, following the examples provided by Petropoulos and Makridakis [60] and Roosa et al. [62]. Specifically, the time intervals were from 30 December 2019 to 28 January 2020, from 30 December 2019 to 7 February 2020, from 30 December 2019 to 17 February 2020, and so on, cumulatively increasing in 10-day increments until 26 June 2020. The specific intervals and corresponding dates are presented in detail in Additional file 1: Table S2. This strategy of collecting data in overlapping time series can reduce the influence of randomness on the variability of data in short periods and thus can present more stable trends than is possible with the use of continuous time series [9, 43]. Simultaneously, to avoid any bias introduced by the selection of the duration of the time interval, a robustness test (see details in Additional file 1: Appendix S3) was conducted. It was found that the variation significance values of co-occurrence keywords calculated under the 10-day interval scheme and under the 20-day interval scheme were highly correlated, with a correlation (R) value ranging from 0.77 to 0.93 (for six overlapping time intervals). These correlation ranges indicate that the variation significance of co-occurrence keywords can be effectively demonstrated by using different time interval schemes.
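The expanding-window scheme described above (a fixed start date with the end date advancing in 10-day increments) can be generated as follows; the dates are taken from the text, and the helper name is our own:

```python
from datetime import date, timedelta

def expanding_intervals(start, first_end, last_end, step_days=10):
    """Expanding overlapping windows: all windows share `start`; the end
    date advances by `step_days` until it passes `last_end`."""
    intervals, end = [], first_end
    while end <= last_end:
        intervals.append((start, end))
        end += timedelta(days=step_days)
    return intervals

# The 16 statistically valid intervals, as in Additional file 1: Table S2
intervals = expanding_intervals(date(2019, 12, 30), date(2020, 1, 28),
                                date(2020, 6, 26))
```

Because every window shares the same start date, consecutive windows overlap almost entirely, which is what smooths out short-period randomness.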
In particular, the 10-day interval scheme is able not only to reveal more significant variations in the number of COVID-19 papers, but also to reasonably avoid the random fluctuations to which an excessively short interval would be susceptible.
Data processing
All selected literature was imported into VOSviewer (version 1.6.16) in MEDLINE format for co-occurrence analysis. When setting the analysis conditions, the minimum number of occurrences of a keyword was set as 2, indicating that the keyword appeared in at least two documents. The data on the total link strength of co-occurrence items in different time intervals were extracted to calculate the differences in co-occurrence items. It not only recorded the occurrence frequency of a given item, but also reflected the link strengths of other items appearing at the same time as the given item [16].
The data on the total link strengths of corresponding items varied significantly because of the significant differences in the numbers of papers published during different intervals of different durations. According to the results exported from VOSviewer, the maximum total link strength in the first interval was 73, while that in the last interval was 39 814. To make the subsequent analysis more consistent, it was necessary to first normalize the total link strengths by transforming their values into percentages. 5
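The normalization described here (converting each item's total link strength into a percentage of the interval's grand total, per footnote 5) amounts to the following; the function name and the raw strengths are illustrative:

```python
def normalize_link_strengths(strengths):
    """Convert raw total link strengths to percentages of the interval total."""
    grand_total = sum(strengths.values())
    return {item: 100.0 * s / grand_total for item, s in strengths.items()}

# Invented raw total link strengths for one interval
percentages = normalize_link_strengths(
    {"covid-19": 40, "pneumonia": 25, "child": 10, "wuhan": 25})
```

After this transformation, the values for every interval sum to 100, making intervals of very different sizes directly comparable.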
Identification of items with significant variations
Given that the number of co-occurring items varied in different intervals, 6 data imputation was conducted to determine the missing percentages of the total link strengths of co-occurrence items to facilitate subsequent analysis. The missing data fell into the category of missing not at random; thus this study adopted minimum value imputation as its method [42]. Specifically, the minimum percentages of the total link strength of co-occurrence items in the 16 intervals were extracted separately. Then, data were randomly selected from the range constituted by the 16 minimum percentages for imputation with a 90% confidence interval. After imputation, a one-sample t-test was conducted on the percentage of the total link strength of each co-occurrence item in one interval relative to all prior intervals. This was done according to the formula t = (X̄ − μ0)/SE, where X̄ denotes the average percentage of the total link strength of the co-occurrence item in all prior intervals, μ0 denotes the percentage of the total link strength of the co-occurrence item in the current interval, and SE denotes the standard error of the percentages of the total link strength of the co-occurrence item across all prior intervals.
According to the rules of the one-sample t-test, 7 significance analysis could not be conducted on the data of the first and second intervals; thus, ultimately, 14 groups of t-values were obtained. Next, Student's left-tailed t-distribution test was conducted on the t-values to calculate the significance of the data variations of co-occurrence items across different intervals. Any co-occurrence item having a p-value less than 0.05 was considered significantly changed in that time interval. To avoid any differences in the significance of data variations caused by random imputation, this study performed 10 random imputation iterations on the entire data set. The final imputation and t-test results are the averages of the 10 random imputation iterations. According to the t-test results, the 14 intervals (starting with the third interval) showed, respectively, 25, 28, 37, 45, 78, 28, 31, 36, 40, 42, 49, 54, 51, and 56 co-occurrence items (162 in total after deducting redundancy) with significant variations in at least one interval.
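A minimal sketch of the per-item test, assuming the percentages have already been imputed; the function name is our own, and using n − 1 degrees of freedom for the prior intervals is an assumption, since the paper does not state it. The left tail flags intervals in which the current percentage significantly exceeds the prior average:

```python
from statistics import mean, stdev
from scipy.stats import t as t_dist

def left_tailed_one_sample_t(prior_percentages, current_percentage):
    """t = (mean(prior) - current) / SE(prior), with a left-tailed p-value.

    A small p-value means the current interval's percentage lies well
    above the average of all prior intervals (t is strongly negative).
    """
    n = len(prior_percentages)
    se = stdev(prior_percentages) / n ** 0.5
    t_stat = (mean(prior_percentages) - current_percentage) / se
    return t_stat, t_dist.cdf(t_stat, df=n - 1)
```

For example, an item hovering around 1% of total link strength in prior intervals that jumps to 2% in the current interval yields a strongly negative t and a p-value far below 0.05.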
Classification of items
The interrelationships of the 162 co-occurrence items that showed significant variations were explored by classifying them through hierarchical clustering. In light of the significant differences among items in their percentages of total link strength, Z-score transformation was first performed on the data for each item across the 15 intervals. Further, scikit-learn 8 was used to analyse the transformed data and produce a dendrogram [59]. The dissimilarities among co-occurrence items were calculated using average linkage and the Euclidean distance metric. According to the exported dendrogram (for more details, please refer to Additional file 2: Cluster mapR1), the 162 items were classified into seven major clusters based on their variation trends in research "heat". Figure 1 shows the variations in research heat of all items in each cluster across different intervals, as measured using Z-scores. A high amplitude in Fig. 1 represents a steady increase in research focus rather than an instance of constant high research focus. These amplitude variations are referred to as "heat variations" in the remainder of this paper. Among them, clusters 1-3 consisted of 80 words presenting a continuous increase in the later stage, while the heat variations of the 85 words in clusters 4-7 gradually decreased in the later stage.
5 The percentage of the total link strength of a given co-occurrence item equals the total link strength of the given co-occurrence item divided by the sum of the total link strengths of all co-occurrence items. 6 The numbers of co-occurrence items in each of the 16
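The clustering step (Z-score per item, then average linkage with Euclidean distance) can be reproduced with SciPy's hierarchy module, used here as a stand-in for the scikit-learn pipeline described in the paper; the toy trend matrix is invented:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

def cluster_heat_trends(trend_matrix, n_clusters):
    """Rows = items, columns = intervals (percentage of total link strength).

    Each row is Z-score-standardized so items are grouped by the *shape*
    of their trend rather than its magnitude, then clustered with
    average linkage and the Euclidean distance metric.
    """
    z = zscore(trend_matrix, axis=1)
    tree = linkage(z, method="average", metric="euclidean")
    return fcluster(tree, t=n_clusters, criterion="maxclust")

# Two rising and two falling toy trends should split into two clusters
toy = np.array([[0.0, 1.0, 2.0, 3.0],
                [0.1, 1.1, 2.0, 3.1],
                [3.0, 2.0, 1.0, 0.0],
                [3.1, 2.1, 1.0, 0.1]])
labels = cluster_heat_trends(toy, n_clusters=2)
```

Because of the row-wise Z-scoring, the two rising items land in one cluster and the two falling items in another, regardless of their absolute levels.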
Stage 1: analysis results of trend variations
Items that showed significant variations were grouped based on available classification, and five representative groups were selected to analyse their heat trend variations as follows:
Patients: a shift from children and pregnant women to aged patients
According to the data, the focus on child-related items such as "child", "child, preschool", and "infant" peaked twice, in the second and seventh intervals, while the terms "infant, newborn", "pregnancy", "pregnancy complications, infectious", and "caesarean section", corresponding to pregnant women, peaked only in the seventh interval (Fig. 2). Children and pregnant patients present a special set of potential problems, such as a longer incubation period [79], with most recovering within 1-2 weeks after onset [38]. Pregnant women are susceptible to respiratory pathogens due to changes in immune mechanisms and physiological adaptations during pregnancy [47]. In the early stage, owing to the severity of the epidemic, the greatest concern was the large number of patients, covering these special populations; during the middle stage, however, the focus shifted to the patients belonging to the remaining key populations, whose situations affected the overall prevention and control of the epidemic. For example, the inability of some paediatric patients to describe their route of infection presented difficulties for later prevention and screening [67].
The heat variations of the items related to the abovementioned patient types declined in the later stages as the epidemic eased,9 while those related to aged patients increased; the heat variation of the item "aged 80 and over" rose gradually. Compared with younger people, the aged population experienced more severe symptoms [29] and higher mortality rates [48,80] due to underlying or previous diseases. Considering that most of the existing patients in the later stage of the epidemic in China are severe cases and that aged patients recover more slowly after infection [72], aged patients may become the main patient population among late-stage surviving cases.
Clinical characteristics: a shift from common symptoms to severe symptoms and asymptomatic infection
The focus on severe-illness-related items such as "L-lactate dehydrogenase", "cytokine release syndrome", "interleukin-6", "critical illness", "hospital mortality", and "C-reactive protein" increased in the eighth or ninth interval (Fig. 3). The items "leukocyte count", "lymphocyte count", "lymphocytes", and "neutrophils" (the neutrophil-lymphocyte ratio) are not specific to severe disease, but the clinical characteristics of severe cases present differently from those of common cases. These severe-disease-related items were also found to be the most relevant to studies in the later stages [51,61,83]. In contrast to the heat variations of terms relevant to severe cases, the symptoms or indicators of common cases, such as "myalgia", "diarrhoea", "fatigue", "sputum", and "respiratory sounds", began to decrease after the third to seventh intervals, respectively. The clinical characteristics of common cases reached a consensus in the early stages, whereas the indicators of severe symptoms are in constant turnover due to the complexity of severe disease treatment [46]. Treatment for critical patients still has much room for improvement, as the mortality rate remains high, and thus most studies are constantly updating the critically ill indicators [31,37,90].
Notably, the heat variation of asymptomatic infection has continued to increase after the sixth and 11th intervals. Although the trends of asymptomatic-disease and asymptomatic-infection items present differently, their heat variations are complementary. Asymptomatic-disease-related research is a latecomer: the initial prevention and control of COVID-19 focused on symptomatic patients, but as studies continued, most patients were found to suffer from mild symptoms or to be asymptomatic [91]. Asymptomatic patients create new pressure for outbreak prevention and control, and whether they are infectious remains controversial [23,25].
Virus testing: a shift from nucleic acid tests to a combination of nucleic acid tests and antibody assays
Nucleic acid test results were questioned by many academics in the early stage, owing to the issues of false negatives [82,84] and sensitivity [71]. Therefore, the terms "false negative reactions" and "sensitivity and specificity" increased steeply after the second and third intervals, respectively. The problem of false negatives was gradually resolved with the improvement of technology; however, the academic community still emphasized the sensitivity and specificity of the assays [10], with the result that the heat variation of "sensitivity and specificity" lasted longer than that of "false negative reactions". In the late stage of the epidemic, when the numbers of suspected and confirmed cases in China declined, antibody assays helped to assure a safe reopening of the economy. At the same time, with the emergence of asymptomatic and imported cases, antibody testing became an important approach to mapping previous infections; therefore, later research on virus testing shifted to antibody testing. The increase in the vital indicators "immunoglobulin G" and "immunoglobulin M" continued after the ninth interval (Fig. 4).
Drug research: a shift from non-Chinese medicine to Chinese medicine
As the demand for drugs gradually decreased with the mitigation of the epidemic in China, most drug-related items show a downward trend in the late stage of research, such as "drug therapy, combination" and the existing drugs "chloroquine", "indoles", "lopinavir", and "ritonavir". Moreover, researchers' perceptions of the effects of some drugs changed as research improved. Early drug adoption relied on severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) treatments, such as ritonavir [58], but several medicines, such as chloroquine or hydroxychloroquine, were later found to be associated with severe side effects [6].

[Fig. 3: Heat variation trends of items related to symptoms across different intervals]
Of note, the heat variation of Chinese medicine-related items continues unabated. The heat of "drugs, Chinese herbal" started to increase after the 13th interval, although it had declined in the mid-term, and the focus on "medicine, Chinese traditional" increased progressively after the fourth interval and then stabilized. During COVID-19 treatment, a variety of Chinese herbal medicines not only showed obvious efficacy when combined with Western medicine but also alleviated the side effects brought by Western medicine. One example is the use of Qingfei Paidu decoction combined with Western medicines such as lopinavir and interferon α2b injection [89]. Lianhua Qingwen capsules exhibited a higher safety profile than the antiviral oseltamivir in the treatment of critical and severe cases of influenza A virus subtype H1N1 in children [52] (Fig. 5).
Long-term attention to mental health
The public's mental health suffered from COVID-19 and may take longer to recover than physical health. China began to deploy mental health surveys in January in order to determine the impact of COVID-19 on public mental health [88]. Figure 6 demonstrates that increases in the heat of different research topics related to mental health appeared successively over time; thus, overall, mental health maintained a long-term research focus. Research on mental health in China paid close attention to medical staff and people at the epicentre [49] in the early stage of the pandemic, and attention shifted to the public at large in the middle and late stages [69].
Stage 2: analysis results of policy-making
According to data from China's National Health Commission, 148 policies related to COVID-19 were issued between 30 December 2019 and 26 June 2020.10 This study compared the onset of the period of maximum heat in each cluster with the issuance date of the corresponding national policy/guidance programme for each of the five trend changes (Table 1; see the list of policies in Additional file 1: Appendix S5). By calculating the differences between the two, an average interval of 8.36 days was found between the issuance of a policy and the maximum-heat period of research on related topics. The increases in interest for the various clusters followed the release dates of relevant policies and guidelines, which may reflect that policy plays a certain leading role in related research topics. For example, the Diagnosis and Treatment Protocol of COVID-19 (Trial Version 7), released on 4 March 2020, specifies clinical early-warning indicators of severe and critical cases for the first time; these indicators include cytokines, interleukin-6, and C-reactive protein. The significance of items related to these three indicators began rising extensively after 18 March. Moreover, some particular policy outcomes impacted scientific research, as reported in some studies. For example, when referring to the Prevention and Control Protocol of COVID-19 (Trial Version 6), Ge et al. [26] added that aerosol transmission is a potential transmission route. Liu et al. [50] conducted an experimental study of patient inclusion criteria based on the Diagnosis and Treatment Protocol of COVID-19 (Trial Version 5).
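The 8.36-day figure is a simple average of date differences between each policy's issuance and the onset of the related cluster's maximum-heat period. A minimal illustration follows; the first date pair is taken from the example in the text (protocol released on 4 March, related heat rising after 18 March), while the second pair is purely hypothetical:

```python
from datetime import date

def mean_lag_days(policy_dates, heat_onset_dates):
    """Average number of days between policy issuance and the onset of
    the maximum-heat period of research on the related topic."""
    lags = [(h - p).days for p, h in zip(policy_dates, heat_onset_dates)]
    return sum(lags) / len(lags)

# first pair from the text; second pair is an illustrative placeholder
policies = [date(2020, 3, 4), date(2020, 2, 19)]
heat_onsets = [date(2020, 3, 18), date(2020, 2, 24)]
print(mean_lag_days(policies, heat_onsets))  # → 9.5
```

A positive mean lag indicates research heat peaking after policy release, the direction the paper reads as policy leading research on those topics.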
The progress of scientific research is also of vital significance for policy-making [28,86] and promotes the adaptive adjustment of policies [81]. In the first 2 months after announcing the pandemic situation, the Chinese government adjusted its diagnosis and treatment protocol seven times and revised its protocol for prevention and control six times. The data indicate that the increased focus on some items preceded the release time of relevant policies or guidelines (Table 2), and some of the studies examined herein explicitly suggested that their research results should be used to guide policy. For example, Gao et al. [24] proposed in papers published on 4 February and 19 February, respectively, that chloroquine is an effective treatment for COVID-19, and suggested that chloroquine should be included in the diagnosis and treatment protocol. The Diagnosis and Treatment Protocol of COVID-19 (Trial Versions 6 and 7), released on 19 February and 4 March, respectively, both included chloroquine as a recommended drug. Later, with the further enrichment of clinical trial data on chloroquine, the "Notice on Adjusting the Usage and Dosage of Chloroquine Phosphate in Treating COVID-19 on a Trial Basis", published on 28 February 2020, further adjusted the usage and dosage recommendations for chloroquine. Moreover, "masks", "infectious disease transmission, patient-to-professional", "patient discharge", and "telemedicine" also showed similar time differences between the stage of rising heat of an item and the release of relevant policies or guidelines (see Table 2). Thus, the research foci identified in this study may affect the formulation of COVID-19 policies and guidelines in China.

[Table 1: Stages of rising heat for various clusters in China's COVID-19 research and release dates of relevant policies/guidelines. Columns: Cluster; Waves; Release dates of relevant policies or guidelines (a); Stages of rising heat for clusters of research topics (b); Interval between the first date of (a) and that of (b). Note: the number of days in the last column is the average of the date intervals between the two waves of each cluster; for the terms related to children, pregnant women, and Chinese medicine, whose rising-heat periods fall within the period of concentrated promulgation of related policies/programmes, only the time difference for the first wave of related policies/programmes is counted.]
Discussion
Notably, this study did not trace the evolution of scientific research supported by policy, owing to the lack of solid references establishing connections between policies and research output. However, it was observed that the trends of scientific research and related policy changes were closely intertwined in the early stage of COVID-19. This result is based on the calculation of the coincident periods between heat variations in the main themes of scientific research and the centralized enactment of relevant policies. Such a similar dynamic trend is a manifestation of the two-way feedback and adjustment emphasized by the CEM. Furthermore, the available evidence indicates that early COVID-19 scientific research and related policies influenced each other. Consequently, the CEM of policy-making was reflected in the early stage of the outbreak in China, consistent with the policy documents published by government agencies and think tanks from 114 countries as presented by Yin et al. [85]. This result is opposed to the opinion proposed by Khazragui and Hudson [44] and Haunschild and Bornmann [33] that only a single piece of research has a decisive influence on policy. Policy-making and scientific research are both gradual processes. Scientific research is the practice of resolving scientific disputes [14] and aims to approach the truth [2]. Given that policies are usually a step behind technological development, it is necessary for them to be constantly updated [54] or adaptively adjusted in response to emerging issues [81]. Policy-makers rely on the participation of other parties concerned with the dynamic adjustment of policies, and scientists within the same sociotechnological circles need to maintain continuous interactions with policies [17,70].
Importantly, the mission of modern science is not only to create new knowledge, but also to use existing and new scientific knowledge to solve societal problems [63]; thus, policy-making processes concerning or depending on science and technology need to be guided by scientific knowledge [27,77]. The Global Preparedness Monitoring Board calls for responsible leaders to act decisively based on science, evidence and best practices, and the interest of the people. Zhang et al. [89] also argued that the findings of academia pertaining to public health emergencies may offer a key reference for health policy-making. The Chinese government addressed this during the formulation and revision of the diagnosis and treatment protocol and other protocols for COVID-19, wherein the available medical research evidence was fully considered. Moreover, it scientifically and prudently maximized consensus-building in the press conferences of the Joint Prevention and Control Mechanism of the State Council.11 This is additional proof of the existence of the CEM in Chinese COVID-19 policy formulation.
The coevolution between SAP in the early phase of the COVID-19 pandemic in China also shows individual characteristics that distinguish it from those of other countries. This can be discussed from the following two perspectives: First, although most of China's COVID-19 policies are promulgated by the government, the role played by scientific experts in the formulation of some policies is elevated from that of adviser to decisionist. All 148 new Chinese policies related to COVID-19 were promulgated and coordinated by government agencies. This result is in sharp contrast to the conclusion by Yin et al. [85], who reported a low likelihood of the national government citing scientific papers in formulating COVID-19-related policy. About 36.49% of all policies were enacted by the Joint Prevention and Control Mechanism of the State Council in Response to the Novel Coronavirus Pneumonia. The mechanism was launched by the State Council on 20 January 2020 to respond to the severe and specific infectious pneumonia epidemic, with the establishment of a Scientific Research Group, including teams belonging to Nanshan Zhong, Lanjuan Li, and Chen Wang. 12 In addition to the medical treatment on the front line, these teams incorporated some effective clinical experience into the treatment protocol, which provided scientific and reasonable judgements and recommendations for the prevention and control of the pandemic, and further revised and improved relevant prevention and control measures. Of note, the Scientific Research Group was able to enact certain medical policies without the involvement of politicians. For example, on 25 February 2020, the Scientific Research Group of the Joint Prevention and Control Mechanism of the State Council in Response to the Novel Coronavirus Pneumonia issued a "Notice on Regulating Medical Institutions to Conduct Clinical Studies on the Drug Treatment of Novel Coronavirus Pneumonia". 
Therefore, in China, the COVID-19 policy-making approach adopted an "advisers advise and decide" model in the early period, which differs from the "advisers advise and ministers decide" model used in COVID-19 policy-making in the United Kingdom [3]. Moreover, the "decisionist model" of China's industrial policy-making was discussed by Chen et al. [11]. The "advisers advise and decide" model may be the fruit of the establishment in recent years of an advisory system for the formulation of major science and technology policies [53]. Although most Chinese policies are initiated or coordinated by the government [68], China has attempted to adjust its policy-setting agenda to be more scientific [75]. Two examples are as follows: the establishment of the National Science and Technology Decision-making Advisory Committee in 2017 [19], and the attempt to develop the National Outlines for Medium and Long-term Planning for Scientific and Technological Development (2021-2035) with the joint participation of the Chinese Academy of Sciences, the Chinese Academy of Engineering, and the Chinese Academy of Social Sciences in the "strategy consultation" mechanism [74].
Second, in China, early policy-making on COVID-19 gave more importance to prior clinical practice, unlike other countries that relied more on statistical modelling. The results of clinical or preventive practice became the main reference for the development of the Diagnosis and Treatment Protocol and the Prevention and Control Protocol of COVID-19 in China. The development of COVID-19-related policy in China embraced a model of policy-making while practicing: whether findings from scientific research should be included in policy documents was based on the results of preliminary clinical trials, with revisions reserved for later if new results emerged. The concept that policy-making goes hand in hand with practice is distinct from the traditional notion, in some countries, of relying on modelling in the policy-making process, as, for example, during the 2009 swine flu epidemic in the United Kingdom [5]. Modelling remained an important reference for policy formulation in Europe and the United States during the COVID-19 outbreak [12]. In contrast, clinical practice provides more accurate reference information than modelling and takes into account differences in effects across various subjects or settings. Modelling provides reference information at a relatively fast rate and eliminates the need for, and risk of, conducting clinical investigations in patients under conditions of considerable uncertainty in the early period [35,65]; Boden and McKendrick [7] considered modelling the most ethical method. Modelling results were also taken into consideration in the initial phase after weighing the pros and cons between practice and modelling, as in the case of hydroxychloroquine: about 100 drugs were selected via computer simulation screening and related methods for in vivo experiments on activity against the novel coronavirus, and based on multiple rounds of screening, the Scientific Research Group concentrated on a few drugs such as hydroxychloroquine.
Hydroxychloroquine was recommended in the Diagnosis and Treatment Protocol of COVID-19 (Trial Version 6) according to the results of initial clinical trials; clinical trial results for more than 100 patients were accumulated before the drug was listed in the treatment protocol.14 However, the dose, drug regimens, and target patients of hydroxychloroquine standardized in the Diagnosis and Treatment Protocol of COVID-19 (Trial Version 7) were further adjusted based on additional clinical studies, which showed that an overdose of chloroquine may damage the heart and retina. Consequently, in the earliest stage, the candidate methods for treatment and prevention depended on modelling results; nonetheless, formal enrolment in policy was principally judged by practical results.
Beyond the differences from other countries, the coevolution of SAP under emergencies also presents features that distinguish it from other domains: science tends to exert a rapid and direct influence on policy formulation in the early period of public health emergencies. The model of policy-making under peacetime is no longer applicable in times of war, such as in the COVID-19 scenario [3,77]. The high frequency of COVID-19-related policy enactment in the early stages of the pandemic in China exhibited a rapid coevolution between SAP, demonstrating that the response to the emergency needs to run at a faster pace than pandemic development [3]. Importantly, this differs from the formulation of climate change policies, where it has proven difficult to reach a consensus between scientists and government officials even after much discussion [41,76]. Edmondson [17] reported that the interactions between SAP with respect to the CEM are also affected by external factors such as catastrophic events. Considering the urgency and unknown nature of the epidemic, a rapid response is essential to plan for and mitigate further impact [56]. Furthermore, the process of science-policy interaction during a public health emergency may be simplified, with SAP tending to form a direct relationship with each other.
Van Zwanenberg and Millstone [92] addressed the concept of the coevolution of SAP and mentioned that the interaction between SAP might be influenced by cultural, political, and other contextual factors. Gormley [27] argued that policy could be formed after public opinion on scientific issues was shaped by media coverage. The process of SAP interaction may slow as the number of influencing factors that need to be considered increases. In the early period of COVID-19, the government mainly focused on exploring and controlling the pandemic; as a result, research, prevention, and control under important scientific guidance became the priority. Other themes, such as economic recovery and social functioning, began to appear on the policy-making agenda only after the pandemic was under control. Notably, this difference is reflected in the COVID-19-related policies of the early period versus the late period (Additional file 1: Appendix S5). It also shows that scientific information was regarded as principal evidence in the early-period policy-making of public health emergencies, even in science- and technology-based emergencies. The influence of scientific information on policy consultation may shift from direct and fast in the early period to conditional and uncertain in the later stage, as other factors such as culture, economy, and politics also start playing their roles. As a result, the role played by science is destined to diminish to a supportive position.
Some controversial aspects of policy advisory science can also be observed. For instance, Weible et al. [77] mentioned that scientific research contributed to inform and legitimize decision-making, which could be used to obscure the government's and policy-makers' responsibility for policy responses and outcomes. Furthermore, Durnová [15] argued that emotions are a crucial part of policy-making. However, the role of emotions and their impact on legitimizing decisions and achieving desired outcomes are likely to be overlooked if excessive attention is focused on the role of scientific research.
Strengths and limitations
This study presents a manifestation of the CEM for SAP, as well as an effective approach to measuring the interaction between SAP when policies lack references to the original scientific findings. It not only favours the construction of unbiased approaches to extract scientific evidence for framing future policies, but may also help bridge the gap that exists between SAP, which has been discussed extensively in the health and public health literature [8]. This research also serves as a complementary case on the dynamic relationship between SAP at the global scale during the COVID-19 outbreak. However, the other direction of the mutual interaction between the two could be further explored by combing scientific paper citation information, interviewing experts, and other methods. Multiple types of scientific information are available for reference during policy formulation: the influence of scientific research on policy-making is not limited to scientific papers, but includes interactions between scientists and policymakers as well as clinical trials [66,86]. For example, clinical trial results also act as important scientific reference information and may be cited before they are published; this may also be a reason why Chinese policies do not offer scientific references. Scientific information other than scientific papers needs to be obtained in future work. Moreover, it was also noticed in this study that policy plays a certain role in leading research questions on COVID-19, with examples presented in part 3.2; how and to what extent policies prompt scientific research questions could be a direction for future research. The timescale of this study was about 6 months; future studies may find it profitable to use a longer timescale. A base time interval of 10 days was adopted to investigate variations in research trends, but there may be better division schemes.
Conclusions
Considering that scientific research is a process of solving problems and resolving disputes, the following findings can be drawn from this study. First, a similar dynamic trend was reflected between scientific research and related policy in the early period of the COVID-19 outbreak in China, wherein an average interval of 8.36 days was observed between policy disclosure and the intensive publication period of related research. Second, issues such as aged patients, asymptomatic infections, critical care, and antibody testing may be of concern in the later stages of prevention and control; mental health has remained under-researched and thus a significant challenge in China's fight against COVID-19, and the sensitivity and specificity of nucleic acid tests still require improvement. Third, the application of Chinese medicine in the treatment of COVID-19 is gaining more recognition in China, and the involvement of herbal medicine in the treatment of COVID-19 has been further refined.
In their review of the existing global COVID-19 literature, Haghani et al. [30] discovered that most studies have focused on drug safety, with clinical characteristics, treatment, mental health, and nucleic acid and antibody test research being particularly prominent among global studies [1]. This reflects the fact that these issues tend to be common problems faced globally that prompt the strengthening of international collaboration [45], resulting in an unprecedented intensity of international collaborative research during COVID-19.15 However, the specific circumstances of the epidemic vary from country to country, and each country has accordingly developed its own countermeasures, such as traditional Chinese medicine in China. The research trends of COVID-19 research in the United States collected in this study did not include Chinese medicine. Differences in pharmacology between the Chinese and Western medical systems cannot be avoided; thus, studies on Chinese herbal medicine and COVID-19 appear more frequently in Chinese journals than in English-language journals [20].
Efficacy and Patient Tolerability of Omidenepag Isopropyl in the Treatment of Glaucoma and Ocular Hypertension
Abstract Current therapeutic approaches for glaucoma aim to reduce intraocular pressure (IOP), which is the only available and reliable strategy proven to control the risk of disease development and progression. Omidenepag isopropyl (OMDI) is a novel topical ocular hypotensive agent that was launched onto the market for the treatment of glaucoma and ocular hypertension (OHT). After topical instillation and during corneal penetration, OMDI is converted into the active metabolite omidenepag (OMD), which behaves as a non-prostaglandin, selective E-prostanoid subtype 2 (EP2) receptor agonist. The topical administration of 0.002% OMDI once-daily (QD) possesses a 20–35% IOP-lowering effect, comparable to that of prostaglandin analogs targeting F-prostanoid (FP) receptor QD, which are the current first-line for pharmaceutical reduction of IOP. However, the mechanism of action and adverse events (AEs) of OMDI are different from those of FP receptor agonists. OMDI reduces IOP by enhancing both conventional trabecular and uveoscleral outflow facilities without complications of prostaglandin-associated periorbitopathy (PAP) seen with FP receptor agonists. Moreover, OMDI was also effective and well-tolerated in non-/poor responders to latanoprost and showed a stable IOP-lowering effect for one year, and its concomitant use with timolol enhanced the IOP-lowering effect. OMDI demonstrated acceptable safety and tolerability with good adherence and can be used in almost every patient. However, OMDI has some AEs such as conjunctival hyperemia, corneal thickening, macular edema/cystoid macular edema and ocular inflammation. Moreover, OMDI is contraindicated in patients who are allergic to the product, in aphakic or pseudophakic eyes, and in combination with tafluprost eye drops. If used appropriately in the right patients, OMDI could be an effective treatment option for glaucoma and OHT as a first-line alternative to FP agonists. 
Here, we summarize the results of clinical studies of OMDI and discuss its efficacy and patient tolerability in glaucoma and OHT in this review.
Introduction
The number of blind and visually impaired people due to glaucoma has increased markedly, and this upward trend will continue with the growth and aging of populations, making glaucoma an important target for prevention and treatment. [1][2][3][4] Current therapeutic approaches for glaucoma aim to reduce intraocular pressure (IOP), which is the only available and reliable strategy shown to control the risk of disease development and progression. [5][6][7][8] In 1977, low-concentration topical administration of prostaglandin F2α (PGF2α) or prostaglandin E2 (PGE2) was demonstrated to decrease IOP by stimulating the F-prostanoid (FP) receptor or E-prostanoid (EP) receptor, respectively. 9 Prostaglandin analogs targeting FP receptors to lower IOP were then developed to achieve better penetration into the anterior chamber and fewer side effects. 10,11 Today, four prostaglandin derivatives (latanoprost, tafluprost, travoprost, and bimatoprost) targeting the FP receptor, or FP receptor agonists, have been released. They are the most commonly used once-daily eye drops across the world and the first-line pharmaceutical treatment for glaucoma because of their greatest IOP-lowering effect of all glaucoma eye drops, efficacy in all forms of glaucoma, suppression of diurnal IOP variation, additive effects in combination with other types of glaucoma eye drops, few systemic side effects, and good patient adherence. 12,13 FP receptor agonists decrease IOP primarily by increasing aqueous humor drainage via the uveoscleral outflow pathway, stimulating Gq-protein-mediated increases in intracellular calcium concentrations and various signaling cascades in the ciliary body and trabecular meshwork. [12][13][14] However, despite the high efficacy and tolerability of FP receptor agonists administered once a day, there are some unmet clinical needs.
First, in addition to common ocular side effects such as conjunctival hyperemia and eye irritation, the long-term or unilateral use of FP receptor agonists frequently induces distinctive local side effects, collectively named prostaglandin-associated periorbitopathy (PAP), including hyperpigmentation of the iris and around the eyelids, eyelash growth, blepharochalasis involution, periorbital fat loss, enophthalmos, deepening of the upper eyelid sulcus (DUES), and hardening of the eyelids and ptosis. 13,15 These periorbital tissue changes can hinder the long-term management of glaucoma by making IOP measurements difficult, increasing surgical failures of trabeculectomy, and reducing treatment adherence due to cosmetic problems. [15][16][17] Therefore, PAP is a well-known clinical and cosmetic concern in patients receiving FP receptor agonists. 15,16 Second, a minority of patients do not respond to FP receptor agonists, showing less than a 10% or 15% IOP reduction (non-/poor responders to FP agonists); 28% of Japanese patients in a retrospective study and 52% of predominantly Caucasian patients in a prospective randomized study were non-/poor responders to latanoprost. 18,19 Therefore, an effective, well-tolerated alternative monotherapy with a novel mechanism of action would be clinically beneficial for glaucoma patients with PAP and for non-/poor responders to latanoprost or other FP agonists. In other words, the current developmental goal for glaucoma ophthalmic solutions is a therapy that matches the IOP-lowering effect of latanoprost throughout the day with a once-daily drop, can be used in combination with other eye drops, has fewer side effects, and shows less inter-individual variability in efficacy.
Pharmacological Profile of OMDI
OMDI, a selective EP2 receptor agonist with a non-prostaglandin structure, was co-developed by Ube Industries Ltd. (Tokyo, Japan) and Santen Pharmaceutical Co. Ltd. (Osaka, Japan). FP receptor agonists, the current first-line and most prescribed glaucoma eye drops, have inherent problems with non-/poor responders and local ocular side effects. [15][16][17][18][19] Moreover, beta-blockers, another class of first-line ocular hypotensive agents, are not as effective as FP receptor agonists in lowering IOP and carry a risk of systemic side effects. 36 These limitations motivated the development of a new first-line ophthalmic agent for glaucoma and OHT with a novel mechanism of action and with efficacy and safety comparable to or higher than those of an FP receptor agonist.
organizational changes seen with FP agonist use. [39][40][41] Moreover, OMD has different effects on 3D human orbital fibroblast organoids compared with bimatoprost acid, which indicates that it may not induce deepening of the upper eyelid sulcus (DUES). 42 Antiglaucoma drugs such as OMD and bimatoprost acid modulate the structures and physical properties of Grave's orbitopathy-related human orbital fibroblast 3D organoids in different manners by modifying the gene expression of ECM, ECM regulatory factors, and inflammatory cytokines. 43 The EP2 receptor is a Gs-coupled transmembrane receptor found in the ciliary body and trabecular meshwork. EP2 receptor agonists decrease IOP by increasing both trabecular and uveoscleral outflow, stimulating Gs-protein-mediated elevations in intracellular adenosine 3′,5′-cyclic monophosphate (cAMP) levels and various signaling cascades in the ciliary body and trabecular meshwork. 13,20,29 In monkeys with laser-induced OHT, treatment with 0.002% OMDI ophthalmic solution for 1 week resulted in a significant 71% increase in the aqueous outflow facility (thought to represent the conventional, pressure-dependent trabecular outflow facility) and a significant 176% increase in uveoscleral outflow compared with the vehicle control (p < 0.05 for each) ( Figure 2). 24 No significant difference in aqueous humor production was seen between 0.002% OMDI ophthalmic solution and vehicle. 24 Therefore, OMDI is thought to lower IOP by promoting aqueous humor outflow via a dual mechanism of action, increasing both the trabecular outflow facility and uveoscleral outflow as a result of EP2 receptor stimulation. 23,24
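The dual-mechanism account can be made concrete with the modified Goldmann equation, IOP = (F − U)/C + Pv, a standard aqueous humor dynamics relation. In the sketch below, the baseline values of F, U, C, and Pv are illustrative assumptions of ours, not data from the cited study; only the percentage changes (+71% trabecular facility, +176% uveoscleral outflow) come from the monkey experiment described above.

```python
def goldmann_iop(F, U, C, Pv):
    """Modified Goldmann equation: IOP = (F - U) / C + Pv, where
    F  = aqueous humor formation rate (uL/min),
    U  = uveoscleral (pressure-independent) outflow (uL/min),
    C  = trabecular outflow facility (uL/min/mmHg),
    Pv = episcleral venous pressure (mmHg)."""
    return (F - U) / C + Pv

# Hypothetical baseline values, chosen only for illustration:
F, U, C, Pv = 2.5, 0.7, 0.20, 9.0
baseline_iop = goldmann_iop(F, U, C, Pv)               # 18.0 mmHg

# Apply the reported treatment effects: facility +71%, uveoscleral outflow +176%
treated_iop = goldmann_iop(F, U * 2.76, C * 1.71, Pv)  # ~10.7 mmHg
print(round(baseline_iop, 1), round(treated_iop, 1))
```

The point of the sketch is qualitative: raising either C or U lowers the computed IOP, so a drug that increases both acts through two additive routes.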
Another recent study with human trabecular meshwork (TM) cells, monkey Schlemm's canal endothelial (SCE) cells, and porcine ciliary muscle showed that the IOP-lowering effect of OMD through the conventional outflow pathway is exerted by increasing outflow facility via modulation of TM cell fibrosis and SCE cell permeability. 44 In monkeys without OHT, a single topical administration of 0.0006% OMDI ophthalmic solution in combination with 0.5% timolol maleate (timolol), 1% brinzolamide, 0.01% netarsudil mesylate, 0.4% ripasudil hydrochloride hydrate, or 0.15% brimonidine tartrate resulted in an IOP significantly lower than that achieved with monotherapy with any of these drugs alone (p < 0.05 for each). 45 In other words, because of its unique mechanism of action, OMDI has additive IOP-lowering effects when administered in combination with other classes of antiglaucoma eye drops such as β-adrenergic antagonists, carbonic anhydrase inhibitors, Rho-associated coiled-coil containing protein kinase inhibitors, and α2-adrenergic agonists. 45 The pharmacokinetic properties of 0.002% OMDI ophthalmic solution in humans were investigated in a phase I trial of seven Japanese and seven Caucasian healthy volunteers. 34 Following treatment with 0.002% OMDI ophthalmic solution QD, OMDI was rapidly absorbed in both groups, and the maximum plasma concentration (C max ) of the active metabolite (OMD), almost 30-40 pg/mL, was reached after 10-15 min. The half-life (t ½ ) of OMD was approximately 30 min, and plasma concentrations were below the limit of quantification 4 hours after eye drop administration. 34 Moreover, OMD pharmacokinetic parameters were similar between Japanese and Caucasian subjects, and there was no accumulation of OMD after 7 days of dosing. 34 Table 1 summarizes the twelve clinical trials registered in the US National Library of Medicine for DE-117, the OMDI ophthalmic solution (searched on 2022/1/11).
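A back-of-the-envelope check shows those pharmacokinetic figures are internally consistent: with a ~30 min half-life, plasma OMD has fallen through roughly 7.5 half-lives by the 4-hour mark. The sketch assumes simple first-order elimination and uses illustrative mid-range values, not patient data.

```python
def plasma_conc(c_max, t_half_min, minutes_after_cmax):
    """First-order decay from the peak: C(t) = Cmax * (1/2) ** (t / t_half)."""
    return c_max * 0.5 ** (minutes_after_cmax / t_half_min)

# Assumed mid-range values from the phase I summary above:
# Cmax ~35 pg/mL reached ~15 min post-dose, t1/2 ~30 min.
c_max, t_max_min, t_half = 35.0, 15.0, 30.0

# Plasma OMD 4 hours (240 min) after instillation, i.e. 225 min past the peak:
c_4h = plasma_conc(c_max, t_half, 240 - t_max_min)
print(round(c_4h, 2))  # ~0.19 pg/mL after ~7.5 half-lives, consistent with
                       # concentrations below the limit of quantification at 4 h
```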
These are being or have been conducted in the US, Japan, India, South Korea, Singapore, and Taiwan. Currently, nine have been completed and the results of seven have been described in clinical articles. Seven are phase III clinical trials of 0.002% OMDI ophthalmic solution: two of these, SPECTRUM 3 (NCT03691649) and SPECTRUM 4 (NCT03691662), are in patients with glaucoma or OHT; two others, RENGE (NCT02822729) and PEONY (NCT02981446), are in patients with open-angle glaucoma (OAG) or OHT; another, AYAME (NCT02623738), is in patients with primary open-angle glaucoma (POAG) or OHT; and the remaining two, FUJI (NCT02822742) and SPECTRUM 5 (NCT03697811), are in patients with POAG or OHT who are non-/poor responders to latanoprost. Table 2 summarizes the IOP-lowering effect of OMDI ophthalmic solutions in published articles related to the clinical trials. To prepare for the phase III clinical trial program, three randomized, masked, controlled, parallel-group, multicenter, dose-finding studies (NCT01868126, NCT02179008, and NCT02623738) were conducted consecutively to determine the optimal concentration of OMDI ophthalmic solution in patients with POAG or OHT. The 0.002% OMDI ophthalmic solution QD was found to be the optimal dose, demonstrating stable IOP-lowering effects with good tolerance. 25 Moreover, a recent phase II clinical trial (SPECTRUM 6) supported that 0.002% OMDI ophthalmic solution was more favorable administered QD than twice daily (BID) in terms of the benefit-risk profile, because the BID arm had a roughly three-fold higher incidence of local tolerability issues (BID: 41.7%; QD: 14.0%) without a significant difference in IOP reduction. 35
A series of three clinical trials conducted in Japan has confirmed the IOP-lowering effects of OMDI: (1) the non-inferiority of OMDI to latanoprost (AYAME), (2) the long-term efficacy of OMDI and its additive effect in combination with timolol (RENGE), and (3) the efficacy of OMDI in non-/poor responders to latanoprost (FUJI). A phase III, randomized, single-masked, active-controlled, parallel-group, multicenter, non-inferiority study (AYAME) compared the efficacies of 0.002% OMDI and 0.005% latanoprost ophthalmic solutions QD in patients with POAG or OHT for 4 weeks after a washout period (n = 190). 26 At week 4, the mean diurnal IOP had decreased significantly, by 5.93 mmHg from a baseline of 23.78 mmHg (24.9% reduction) in the OMDI group and by 6.56 mmHg from a baseline of 23.40 mmHg (28.0% reduction) in the latanoprost group (Table 2). The difference in mean diurnal IOP reduction from baseline to week 4 between the two arms was 0.63 (95% CI: 0.01-1.26) mmHg in favor of latanoprost. Because this difference was within the prespecified non-inferiority margin (1.5 mmHg), the IOP reduction achieved with OMDI was judged non-inferior to that achieved with latanoprost. Therefore, OMDI was shown to be the
first ophthalmic solution in more than 20 years to show non-inferiority to latanoprost. 46 The phase III, 3-month PEONY study, with a larger sample size (n = 370) and a design similar to that of the AYAME study, has been completed in India, South Korea, Singapore, and Taiwan, and its results await publication. The phase III RENGE study (n = 125) also investigated the long-term, 12-month efficacy of 0.002% OMDI ophthalmic solution QD in patients with OAG or OHT divided into three cohorts. 28 OMDI significantly decreased IOP over 52 weeks in a high-baseline-IOP group (baseline diurnal IOP 22-34 mmHg), a low-baseline-IOP group (baseline diurnal IOP 16-22 mmHg), and a concomitant-timolol group (baseline diurnal IOP 22-34 mmHg) (p < 0.001 for each). Moreover, combination therapy with OMDI and timolol produced additional IOP-lowering effects, confirming recent basic research results. 45 In the phase III FUJI study (n = 24), 0.002% OMDI QD was also effective in patients who had failed to respond to latanoprost: 4 weeks of treatment with OMDI was well tolerated and significantly reduced the mean diurnal IOP from baseline in patients who had shown under 15% IOP reduction after 8 weeks of latanoprost treatment (non-/poor responders to latanoprost). 27 Therefore, OMDI showed good IOP results both in newly treated patients and in those switching treatment. A further phase III study of non-/poor responders to latanoprost (SPECTRUM 5), with a larger sample size over a longer period, is ongoing in the US. Figure 3 shows, as a bubble chart, the relationships between baseline mean diurnal IOP and the mean diurnal IOP reduction or reduction rate from baseline with 0.002% OMDI QD treatment in published articles related to the clinical trials. OMDI lowered mean diurnal IOP by 3.7-7.4 mmHg (20-30%) from baseline.
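The AYAME non-inferiority conclusion reduces to an arithmetic check of the confidence interval against the prespecified margin. A hedged sketch of that logic follows (the trial's actual statistical analysis is of course more involved than this comparison):

```python
def non_inferior(ci_upper_mmhg, margin_mmhg=1.5):
    """Non-inferiority is met when the upper 95% CI bound of the mean diurnal
    IOP difference (in favor of the comparator) stays below the margin."""
    return ci_upper_mmhg < margin_mmhg

# AYAME week-4 values quoted above:
omdi_reduction_rate = 5.93 / 23.78         # ~0.249 -> the reported 24.9%
latanoprost_reduction_rate = 6.56 / 23.40  # ~0.280 -> the reported 28.0%
print(non_inferior(1.26))  # True: upper CI bound 1.26 mmHg < 1.5 mmHg margin
```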
Since OMDI is thought to lower IOP by increasing both trabecular and uveoscleral outflow through EP2 receptor stimulation, its IOP-lowering effects may be influenced by baseline IOP. 23,24,44 There appears to be a positive correlation between baseline IOP and IOP reduction or reduction rate in Figure 3. One retrospective study supports this idea: patients with normal-tension glaucoma (NTG) and a mean baseline IOP of 15.7 mmHg showed a significant reduction in IOP with topical OMDI, but the IOP reduction was only 12.0%-12.8%. 47 However, this remains uncertain because of the scarcity of randomized clinical trials in eyes with baseline IOP below 21 mmHg, such as those with NTG. Therefore, further studies are warranted.
Moreover, a prospective, open-label, experimental study demonstrated that 0.002% OMDI ophthalmic solution QD provided stable 24-hour IOP reduction throughout the day and night in patients with OAG or OHT, suggesting that OMDI could be useful as a first-line treatment for new patients. 48 Another clinically important aspect of the IOP-lowering effect of OMDI is the presence of non-/poor responders, defined as patients with IOP reductions of less than 10% at both 1-2 months and 3-4 months after initiation of treatment. 47,49 In a retrospective study of NTG patients receiving QD OMDI, the frequency of non-responders was 22.4%, which may be higher than that reported for FP receptor agonist treatment. 47,49 Additional validation is needed to clarify this; in the meantime, careful follow-up is recommended after starting treatment.
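The non-/poor responder definition above is purely arithmetic and can be restated as a small helper. This is a sketch of ours; the thresholds follow the definition in the text, and the example values are hypothetical.

```python
def is_non_responder(baseline_iop, iop_month_1_2, iop_month_3_4, threshold=0.10):
    """Non-/poor responder: IOP reduction below 10% of baseline at BOTH the
    1-2 month and the 3-4 month follow-up visits."""
    reduction_early = (baseline_iop - iop_month_1_2) / baseline_iop
    reduction_late = (baseline_iop - iop_month_3_4) / baseline_iop
    return reduction_early < threshold and reduction_late < threshold

# Hypothetical NTG-like example: baseline 16 mmHg, both follow-ups at 15 mmHg
print(is_non_responder(16.0, 15.0, 15.0))  # True: only ~6% reduction at both visits
```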
DovePress
Clinical Ophthalmology 2022:16
occurred in 5.0% (n = 18). ME/CME, which was the most common reason for study discontinuation in the RENGE study, occurred in 4.4% (n = 16). 28 Among subjective symptoms, eye pain occurred in 3.6% (n = 13), photophobia in 3.3% (n = 12), and blurred vision in 1.7% (n = 6). Anterior chamber cells were identified in 1.4% (n = 5) of subjects, while iritis with anterior chamber cells was observed in only 0.8% (n = 3). These frequencies may be underestimates, as not all clinical trials reported the presence or absence of every AE; nevertheless, they provide a useful overview of the relatively frequent AEs that can occur with once-daily topical 0.002% OMDI. Clinically, OMDI-related conjunctival hyperemia is relatively frequent, and concomitant OMDI and timolol treatment increases its incidence. 28 However, provided an adequate explanation is given before prescribing, it should not be a major concern, because it is consistently mild, weaker than that induced by 0.4% ripasudil, and usually settles with time, as it does with FP agonists. 50,51 Moreover, the average increase in corneal thickness with OMDI is mild, less than 20 µm, which is within the limits of the diurnal variation in corneal thickness and in most cases has no clinical significance for visual acuity or IOP measurements. 28 However, corneal thickening could be a problem for patients with a thinner cornea, for example due to a history of LASIK, as some may complain of decreased uncorrected visual acuity with OMDI treatment. In addition, the drug should be discontinued or changed if subjective symptoms are severe, or if complications such as ME/CME or iritis that may lead to vision loss occur. Notably, in the RENGE study, all cases of OMDI-related ME/CME occurred in eyes with intraocular lens (IOL) implants, and the incidence of ME/CME in pseudophakic eyes was almost half. 28 Therefore, OMDI has been contraindicated in patients with pseudophakia or aphakia. 28,52
Moreover, OMDI should be administered cautiously to patients with iritis or uveitis, who may be at increased risk of CME. 52,53 OMDI is also contraindicated in patients who are allergic to any of its components or who are receiving tafluprost, an FP agonist, because of the possibility of ocular inflammation. 52,53 It is therefore advisable to avoid the concomitant use of OMDI with other FP agonist eye drops. 52 Moreover, since there are no clinical reports on the concomitant use of OMDI with glaucoma eye drops other than timolol, OMDI should be used with caution in such combinations until its safety is confirmed. 52 Future clinical studies will be needed to determine the efficacy and AEs of OMDI in combination with other glaucoma ophthalmic solutions, and additional information should come from the post-marketing surveillance of OMDI. Overall, OMDI is usually well tolerated and should support good treatment adherence because of its QD dosing regimen, as is the case with FP agonists. 25,34,36,54 In fact, OMDI treatment compliance rates and dosing adherence over 1 or 3 months were reported to be high (≥75%) in a clinical trial. 25 A post-marketing retrospective study of 0.002% OMDI ophthalmic solution also demonstrated good early persistence of OMDI, with initial OMDI monotherapy not differing greatly from initial latanoprost monotherapy. 55 The OMDI discontinuation rate by 3 months was 22% in total, most often because of insufficient IOP-lowering efficacy (11%), followed by conjunctival hyperemia (4%) and visual acuity disturbance (2%). 55 Therefore, clinicians should be alert to early failure of OMDI treatment, and patients should be informed of the risks and benefits of OMDI to increase the likelihood of long-term adherence to self-medication with eye drops.
Additionally, it is clinically important that OMDI has little or no ocular PAP side effects, including eyelid pigmentation, eyelash growth, and DUES, because it has no binding affinity for the FP receptor. 15,56 Previously, grading or subjective measurement of PAP had been attempted for individual PAP components such as conjunctival hyperemia, eyelash changes, and DUES. However, these measurements seem too complex for real-world use and do not take into account the underlying mechanisms or the resulting difficulty of measuring IOP. 15,57-59 Therefore, we recently constructed an in-house PAP grading system (Shimane University PAP Grading System, SU-PAP) that considers the mechanisms involved in the development of cosmetic PAP (ie, superficial and deep) and the effect of PAP on glaucoma management (ie, difficult IOP measurements). 15 SU-PAP classifies PAP into four grades, allowing its progression to be assessed (Figure 4). Moreover, in previous clinical trials, none of the eyes treated with OMDI showed PAP, [25][26][27][28]35 and patients switched from conventional PGF 2α analogs (FP agonists) to OMDI showed significant improvements in PAP signs, including DUES, over time without significant IOP changes. [60][61][62] The long-term adherence of OMDI treatment is currently unknown. However, since OMDI does not induce PAP, it may not only be useful for treating patients with PAP or those concerned about it, but may also achieve better long-term adherence than FP agonists.
Case of OMDI-Related Macular Edema
The mechanism of the ME/CME and iritis caused by OMDI has not yet been elucidated. However, considering that PGE 2 can cause ocular inflammatory reactions similar to those caused by PGF 2α , and that all OMDI-related ME/CME cases in the clinical trial occurred in IOL-implanted eyes, EP2 stimulation may cause these events when the blood-aqueous or blood-retinal barrier of the treated eye has been damaged. 13,53,[63][64][65] Thus, the effectiveness and safety of OMDI in secondary glaucoma subtypes, in which the eyes have undergone ocular surgery or are affected by other retinal diseases, should be closely evaluated. We experienced a case of ME/CME associated with the inadvertent use of OMDI in a patient with glaucoma after uncomplicated cataract surgery, which was successfully treated with a sub-Tenon injection of the corticosteroid triamcinolone (STTA). Consent to publish the case report was obtained. This report does not contain any personal information that could lead to the identification of the patient.
A 64-year-old Japanese woman was referred to the Matsue Red Cross Hospital with a complaint of blurred vision in the right eye. Her general clinical history was unremarkable. She had been taking 2% carteolol, a nonselective β-adrenoceptor antagonist ophthalmic solution, QD for ocular hypertension and had undergone uneventful cataract surgery with posterior chamber IOL implantation in her right eye 3 years before the onset of symptoms. She had additionally been prescribed and was using 0.002% OMDI ophthalmic solution QD for 5 months before onset. At the initial examination, the best-corrected visual acuity (BCVA) was 20/25 and the IOP was 10 mmHg. The anterior and posterior chambers were clear, and the anterior chamber flare was 17.9 pc/msec. Fundoscopy and spectral domain optical coherence tomography (SD-OCT) revealed CME with serous retinal detachment (SRD) (Figure 5A). Because the patient wanted prompt improvement of her symptoms, OMDI was discontinued and STTA 20 mg was given on the same day. Two months later, her subjective symptoms had improved, and the BCVA had recovered to 20/20. The IOP was 12 mmHg, and the flare was 10.7 pc/msec. SD-OCT showed resolution of the CME with SRD (Figure 5B). The case has been recurrence-free for 3 months, and the IOP has been controlled at 10-12 mmHg.
Figure 5 (A) Five months after starting 0.002% OMDI QD treatment, CME with SRD in the right pseudophakic eye was shown on SD-OCT. There was no pathology other than the use of OMDI in an IOL-implanted eye that could explain the CME. OMDI was discontinued and STTA administered. (B) Two months later, resolution of the CME with SRD was observed. The horizontal and vertical red arrows indicate the cross-sectional line scans in 5A and 5B.
Abbreviations: OMDI, omidenepag isopropyl; QD, quaque die; CME, cystoid macular edema; SRD, serous retinal detachment; SD-OCT, spectral domain optical coherence tomography; IOL, intraocular lens; STTA, sub-Tenon triamcinolone injection.
In the RENGE study, all cases of OMDI-related ME/CME were mild or moderate in severity; after discontinuation of OMDI, standard treatment with topical nonsteroidal anti-inflammatory drugs (NSAIDs) or steroids was successful, and the time from onset of ME/CME to recovery was reported to be 89.5 days. 13,28 It is well known that topical administration of FP agonists after cataract surgery can lead to CME, which is usually treated with discontinuation of the eye drops plus corticosteroid and NSAID eye drops, although recovery can take 1 month or more. 53,[63][64][65] One case report demonstrated that STTA produced early resolution of CME associated with prostaglandin treatment after cataract surgery. 63 We therefore selected STTA for the treatment of OMDI-related ME/CME, which resulted in relatively early improvement of the ME/CME and subjective symptoms. Moreover, since the CME decreased relatively early after sub-Tenon corticosteroid injection and the flare value also decreased, inflammation may have been involved in the development of the OMDI-related ME/CME.
Conclusion
OMDI is the world's first commercially available selective EP2 receptor agonist, promoting aqueous humor outflow via both the trabecular and uveoscleral outflow pathways. Once-daily topical 0.002% OMDI has IOP-lowering effects non-inferior to those of latanoprost QD, without PAP complications. OMDI was also effective and well tolerated in non-/poor responders to latanoprost in the short term. Furthermore, it showed a stable IOP-lowering effect over the long term, and its concomitant use with timolol enhanced the IOP-lowering effect. Moreover, OMDI demonstrated acceptable safety and tolerability with good adherence and can be used in almost every patient. However, OMDI shows some AEs, such as conjunctival hyperemia, corneal thickening, ME/CME, and ocular inflammation. In particular, since there are no clinical reports on the concomitant use of OMDI with glaucoma eye drops other than timolol, caution is recommended until its safety is confirmed. Additionally, OMDI is contraindicated in patients who are allergic to the product, in aphakic or pseudophakic eyes, and in combination with tafluprost eye drops. If used appropriately in the right patients, OMDI could be an effective treatment option for glaucoma and OHT as a first-line alternative to FP agonists.
Author Contributions
All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
Funding
There is no funding to report.
Disclosure
Santen Pharmaceutical Co., Ltd. provided information on the status of the latest clinical trials of OMDI and assisted in the preparation of Figures 1 and 2, but was not involved in the writing or review of the manuscript. The authors report no conflicts of interest in this work.
"year": 2022,
"sha1": "c02966c7f6d6698c5bfd993d59fd89e5b646e891",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=80277",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "964f0ae910ca04664959441586f74467f1381e63",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Towards Knowledge Based Risk Management Approach in Software Projects
Introduction
All projects involve risk; a zero-risk project is not worth pursuing. Furthermore, due to the uniqueness of software projects, uncertainty about final results will always accompany software development. While risks cannot be removed from software development, software engineers should instead learn to manage them better (Arshad et al., 2009; Batista Webster et al., 2005; Gilliam, 2004). Risk management and planning require organizational experience, as they are strongly grounded in the experience and knowledge acquired in former projects. The greater the project manager's experience, the better his ability to identify risks, estimate their occurrence likelihood and impact, and define an appropriate risk response plan. Risk knowledge therefore cannot remain in an individual dimension; rather, it must be made available to the organization, which needs it to learn and enhance its performance in facing risks. If this does not occur, project managers can inadvertently repeat past mistakes simply because they do not know or do not remember the mitigation actions successfully applied in the past, or they are unable to foresee the risks caused by certain project restrictions and characteristics. Risk knowledge has to be packaged and stored throughout project execution for future reuse. Risk management methodologies are usually based on the use of questionnaires for risk identification and templates for investigating critical issues. Such artefacts are often not related to each other, so there is usually no documented cause-effect relation between issues, risks, and mitigation actions. Furthermore, today's methodologies do not explicitly take into account the need to collect experience systematically in order to reuse it in future projects.
To address these problems, this work proposes a framework based on the Experience Factory Organization (EFO) model (Basili et al., 1994; Basili et al., 2007; Schneider & Hunnius, 2003) and the use of the Quality Improvement Paradigm (QIP) (Basili, 1989). The framework is also specialized within one of the largest firms in the current Italian software market; for privacy reasons, we will refer to it from here on as "FIRM". Finally, to evaluate the proposal quantitatively, two empirical investigations were carried out: a post-mortem analysis and a case study. Both were carried out in the FIRM context and involve legacy system transformation projects. The first empirical investigation involved 7 already-executed projects, while the second involved 5 ongoing projects. The research questions we ask are: Does the proposed knowledge-based framework lead to more effective risk management than that obtained without using it? Does the proposed knowledge-based framework lead to more precise risk management than that obtained without using it? The rest of the paper is organized as follows: section 2 provides a brief overview of the main research activities presented in the literature dealing with the same topics; section 3 presents the proposed framework, and section 4 its specialization in the FIRM context; section 5 describes the empirical studies we executed; results and discussions are presented in section 6. Finally, conclusions are drawn in section 7.
Related works
Efficient risk management methodologies must be devised and implemented in order to avoid, minimize, or transfer risks to external entities. For this reason, risk management should be a mature process integrated with all other enterprise processes (Kànel et al., 2010). Unfortunately, risk analysis is rarely fully integrated with project management in software engineering. While Boehm (Boehm, 1989) laid the foundations and Charette (Charette, 1990) outlined the applications, few formal risk methodologies tailored for the software development industry have been widely developed and used. Today's risk methodologies are usually based on the identification, decomposition, and analysis of events that can determine negative impacts on projects (Farias et al., 2003; Chatterjee & Ramesh, 1999; Gemmer, 1997; Costa et al., 2007). Different approaches can be adopted to deal with the key risk factors: some risk management activities and strategies are described in (Arshad et al., 2009; Hefner, 1994; Donaldson & Siegel, 2007). In (Hefner, 1994), the authors propose a methodology based on the use of capability and maturity models, combined with the analysis of risk and value creation factors, to reduce risk levels. In (Donaldson & Siegel, 2007), the authors propose a five-step process for incorporating risk assessment and risk-derived resource allocation recommendations into project plan development. Furthermore, in (Kontio, 2001; Hefner, 1994) the Riskit approach is presented: a risk management process that provides accurate and timely information on the risks in a project and, at the same time, defines and implements cost-efficient actions to manage them. Other assessment methods for risk and hazard analysis (Petroski, 1994; Croll et al., 1997; Stratton et al., 1998) rely on people making judgments based on their experience.
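Many of the approaches above share a quantitative core: ranking risks by exposure, the product of occurrence likelihood and impact, in the Boehm-style formulation. A minimal sketch follows; the risk names and values are hypothetical illustrations of ours, not taken from any cited methodology.

```python
def risk_exposure(probability, impact_person_days):
    """Risk exposure: RE = P(unsatisfactory outcome) * loss if it occurs."""
    return probability * impact_person_days

# Hypothetical risks for a legacy-transformation project (illustrative values):
risks = {
    "key developer leaves":          (0.2, 60),
    "requirements churn":            (0.5, 30),
    "legacy data migration failure": (0.1, 90),
}
ranked = sorted(risks, key=lambda name: risk_exposure(*risks[name]), reverse=True)
print(ranked[0])  # "requirements churn": RE = 15 outranks the rarer, bigger risks
```

The ranking illustrates why a frequent, moderate-impact risk can deserve earlier mitigation than a rare catastrophic one.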
For safety systems, a detailed knowledge of what can go wrong is an essential prerequisite to any meaningful prediction regarding the causes and effects of system failures. In (Petroski, 1994), Petroski takes this argument further by stating that teaching the history of engineering failures should be a core requirement in any engineering syllabus and carry the same importance as the teaching of modern technology. Without an understanding of history or direct experience with a given application, more is unknown and hence risks are higher (Croll et al., 1997). For this reason there is great interest in techniques and tools for storing and sharing risk knowledge. Nevertheless, most of today's known risk management methodologies fail to do so: they use no mechanism other than human memory to address these needs. In (Dhlamini et al., 2009) the SEI SRM risk management framework for software risk management is presented. This approach is based on the adoption of three groups of practices supporting experience sharing and communication in the enterprise.
In this sense the proposed framework can be considered a complementary infrastructure for collecting and reusing risk-related knowledge. Thus it can be used jointly with all the existing methodologies, which it contributes to enhancing.
Proposed framework
The proposed framework is made up of two main components: a conceptual architecture and a risk knowledge package structure for collecting and sharing risk knowledge.
Conceptual architecture
The conceptual architecture (Figure 1) is based on two well-known approaches: EFO (Schneider, 2003) and the QIP (Basili, 1989; Kànel et al., 2010). EFO is an organizational approach for constructing, representing and organizing enterprise knowledge by allowing stakeholders to convert tacit into explicit knowledge. It distinguishes project responsibilities from those related to collection, analysis, packaging, and experience transfer activities. In doing so, it identifies two different organizational units: the Project Organization (PO) and the Experience Factory (EF). The first uses experience packages for developing new software solutions and the second provides specific knowledge ready to be applied. To support these two infrastructures the QIP is used. It is based on the idea that process improvement can be accomplished only if the organisation is able to learn from previous experiences. During project execution measures are collected, and data are analysed and packaged for future use. In this sense the QIP can be seen as organized in cyclic phases (Characterize, Choose Process, Execute, Analyze and Package) that, used within the organization, perform and optimize the process of knowledge collection, packaging and transfer.
• CHARACTERIZE: it deals with the characterization of the project, the description of its goals, the project strategy to adopt and project planning. Such information is gathered through focused assessment questionnaires, which may have different abstraction levels (i.e. HLQ = High-Level Questionnaire, FLQ = Functional-Level Questionnaire). The collected information is interpreted using the Knowledge Base, which suggests the appropriate actions to undertake in order to manage project risks.
• CHOOSE PROCESS: on the basis of the characterization of the project and of the goals that have been set, choose the appropriate processes for improvement, using the knowledge packages if present, together with supporting methods and tools, making sure that they are consistent with the goals that have been set.
• EXECUTE: it deals with the execution of the project plan and includes all the activities to perform during the project. In these activities, project and risk management knowledge is produced through the project artefacts (e-contents), i.e. project documents, code, diagrams, etc., and through the identified risks together with the adopted mitigation actions (RRP - Risk Response Plan). They are stored in the E-Project Repository.
• ANALYZE: this phase continuously collects, analyses and generalises the information related to executed/closed projects. After the closure of a project, this phase implies the comparison between planned and actual results, and the analysis and generalization of strengths and weaknesses, risks occurred, response plans used and their effectiveness.
• PACKAGE: this phase packages experiences in the form of new, or updated and refined, models and other forms of structured knowledge gained from this and prior projects, and stores them in an experience base in order to make them available for future projects.
The proposed architecture supports the synergic integration between PO and EF.
Such integration makes knowledge acquisition and reuse process incremental according to the QIP cycle that determines the improvement of the entire organization.
Structure of a knowledge package on the risk
The EFO model is independent of the way knowledge is represented. Nevertheless, its specialization in an operative context requires it to be tailored by using a specific knowledge representation approach. Knowledge can be collected from several different sources: document templates, spreadsheets for data collection and analysis, project documents, etc. In this work, an innovative approach for knowledge packaging has been defined. It is based on the use of decision tables (Ho et al., 2005; Vanthienen et al., 1998; Maes & Van Dijk, 1998). In particular, a set of decision tables has been used first to formalize knowledge and then to make it available for consultation. Knowledge here means: project attributes, the relations among the attributes, the risks identified during project execution and the consequent list of mitigation actions. According to the decision table structure, an example of how the tables have been used for representing knowledge on risk is provided (Figure 2). In the CONDITION quadrant there are project attributes (for example cost, time, available personnel, communication, etc.); in the CONDITIONAL STATES quadrant there are the possible riskiness values of the project characteristics (based on scales of different types, at least ordinal, depending on the attribute they relate to); in the ACTION quadrant there are the risk drivers which must be taken into account (for example schedule, scarceness of personnel, etc.) together with the possible mitigation actions to carry out (for example increasing the number of human resources allocated to the project, defining a new date for project conclusion, etc.). Finally, in the RULES quadrant there are the relationships between project characteristics, risk drivers and the corresponding mitigation actions to carry out.
Fig. 2. An example of decision table oriented to risk management
This structure allows formalizing the manager's experience in risk management and, at the same time, verifying the effectiveness of the mitigation actions (Risk Response Plan). It also allows extending and updating previously acquired knowledge by adding, removing or modifying project attributes, risk drivers and mitigation actions. For example, if a mitigation action proves to be ineffective it can be deleted from the knowledge package; the project characterization, in the ANALYZE and PACKAGE phases, can be enriched by adding new project attributes (i.e. context parameters).
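As an illustration, the decision-table consultation described above can be sketched in code. All attribute names, risk drivers and mitigation actions below are hypothetical placeholders, not entries of the actual FIRM knowledge base.

```python
# Minimal sketch of a risk-oriented decision table. All attribute names,
# risk drivers and mitigation actions are hypothetical examples.

# CONDITION quadrant: project attributes assessed on an L/M/H riskiness scale.
CONDITIONS = ("cost", "schedule", "personnel")

# RULES quadrant: maps a tuple of conditional states to the ACTION quadrant,
# i.e. the risk drivers to watch and the mitigation actions to carry out.
RULES = {
    ("H", "H", "L"): {
        "risk_drivers": ["tight schedule", "scarce personnel"],
        "mitigation_actions": ["allocate extra staff", "renegotiate deadline"],
    },
    ("M", "L", "M"): {
        "risk_drivers": ["cost overrun"],
        "mitigation_actions": ["tighten cost reporting"],
    },
}

def consult(table, assessment):
    """Return risk drivers and mitigation actions for an assessed project,
    or None when no rule matches (no prior experience was packaged)."""
    key = tuple(assessment[c] for c in CONDITIONS)
    return table.get(key)

project = {"cost": "H", "schedule": "H", "personnel": "L"}
advice = consult(RULES, project)
print(advice["risk_drivers"])   # ['tight schedule', 'scarce personnel']
```

A dictionary keyed by condition-state tuples keeps the sketch close to the quadrant structure of Figure 2: updating the knowledge package (adding, removing or refining a rule) is a plain dictionary edit.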
Framework specialization
In order to obtain information about the FIRM context for formalizing the questionnaires and, consequently, the structure of the decision tables, we carried out interviews with 50 FIRM project managers (according to the risk questionnaire in (Costa et al., 2007)). They deal with projects executed over a period of seven years. The collected data were analyzed to identify the suitable questions for risk investigation, the related risk drivers and the mitigation actions. All this information was formalized as decision tables and used to populate the risk knowledge base. The steps followed were:
• The data collected through the interviews were analyzed in order to extract the risks that occurred during project execution;
• Common risks were identified and their abstraction led us to define Risk Drivers (RD);
• Each identified risk was related to the mitigation actions (MA) that were executed effectively;
• The most suitable questions to detect risks were identified and then related to the risks;
• Questions, risks and mitigation actions were classified into the relevant functional areas (Communications, Procurement, Cost, Quality, Resource, Schedule, and Scope).
The products of these activities were:
• two assessment questionnaires used to identify potential risk drivers;
• a knowledge base made of a set of decision tables formalizing the relationships between functional areas, risk drivers and mitigation actions.
Assessment questionnaires
To identify project risks, the risk management process usually employs assessment questionnaires during the Risk Evaluation activity. Each questionnaire is made up of questions that support the project manager in discovering potential risks. Typically, risk managers are supported through two different kinds of assessment questionnaires, whose aim is to characterize the project by analyzing the different project management functional areas in order to assess, point out and further manage the risks affecting a project. In the FIRM context, two types of questionnaires were used (example in Figure 3):
• High-Level Questionnaire (HLQ): assesses the general aspects of the project; its aim is to characterize the project as a whole.
• Functional-Level Questionnaire (FLQ): a more specific questionnaire that points out specific issues related to the project (i.e. potential risks to mitigate); there is one specialized section for each project management functional area.
The questions of the questionnaires are answered using a Low (L), Medium (M), High (H) scale.
The project manager starts with the HLQ to highlight the general aspects of the project, and then uses one or more of the FLQ sections to discover the critical risk drivers and the mitigation actions related to a particular project management function (i.e. the FLQs support the RRP definition). A generalization of the relationships between the HLQ, the project management functional areas assessed within the FLQ, and the RD is shown in Figure 5.
It is important to underline that the use of assessment questionnaires is widespread in industrial contexts; typically, however, the relations between the different questionnaires, and between questionnaire results and the consequent choice of mitigation actions, remain tacit knowledge of the risk manager. Thus, even when risk investigation is supported by assessment questionnaires, it is usually quite subjective. This implies the need for a risk knowledge package collecting the individual knowledge/experience previously acquired by managers during project execution.
The following section presents the knowledge base (i.e. a set of decision tables).
Knowledge base
A set of decision tables has been used to formalize the relations between the HLQ and the FLQ, and to guide the project manager during the risk investigation activities. This set can be considered an experience base. In particular, the following decision tables were introduced:
• 1 decision table for the HLQ: it relates the general aspects of the project to more specific issues, namely the functional areas of project management that need further investigation (Figure 4);
• 1 decision table for each functional area of the FLQ: it relates the specific issues of the functional area to the critical risk drivers and the consequent mitigation actions (Figure 5).
Scenario of consulting activity
The project manager answers the HLQ; each question included in the HLQ corresponds to a condition (project attribute) of the related decision table, which interprets these responses and extracts the applicable actions. These actions indicate the functional areas of project management that need further investigation, and therefore guide the project manager in answering the corresponding sections of the FLQ. Each section of the FLQ corresponds to a specific decision table, and each selected question corresponds to a condition (specific issue) of that table, which interprets the responses and extracts the actions. These actions are the risk drivers and the corresponding mitigation actions to carry out. The project manager can then use the extracted issues, risk drivers and mitigation actions to build the final Risk Response Plan (Figure 6). For example, according to Figure 4, one of the tuples corresponding to HLQ answers is (Cost, Size, Effort, Technology, Schedule, Structure, Impact) = (M, H, H, L, L, H, L); for this tuple "Communication" is one of the critical areas to investigate. In Figure 5, the Communication area is investigated and one of the tuples obtained from the related FLQ is (Type of project organization, Relationship of the organizational units in the project effort, Preparation and commitment to project status reporting) = (M, L, M). For this tuple, the two RD corresponding to rows 1 and 12 of the decision table in Figure 5 are selected and the two MA corresponding to rows 2 and 13 are suggested.
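The two-stage consultation just described (HLQ table, then critical functional areas, then FLQ tables, then risk drivers and mitigation actions) can be sketched as follows. The table entries mirror the shape of the Figure 4/Figure 5 example, but all values are illustrative assumptions, not the real FIRM tables.

```python
# Sketch of the consulting scenario: the HLQ decision table maps general
# project answers to the functional areas needing investigation; each
# area's FLQ table maps specific answers to risk drivers (RD) and
# mitigation actions (MA). All entries are hypothetical.

HLQ_TABLE = {
    # (cost, size, effort, technology, schedule, structure, impact)
    ("M", "H", "H", "L", "L", "H", "L"): ["Communication"],
}

FLQ_TABLES = {
    "Communication": {
        # (org type, unit relationships, status-reporting commitment)
        ("M", "L", "M"): [
            ("RD: weak inter-unit communication", "MA: schedule weekly syncs"),
            ("RD: poor status reporting", "MA: introduce a reporting template"),
        ],
    },
}

def build_rrp(hlq_answers, flq_answers):
    """Assemble a Risk Response Plan by chaining the HLQ and FLQ tables."""
    rrp = []
    for area in HLQ_TABLE.get(hlq_answers, []):
        rrp.extend(FLQ_TABLES[area].get(flq_answers[area], []))
    return rrp

plan = build_rrp(("M", "H", "H", "L", "L", "H", "L"),
                 {"Communication": ("M", "L", "M")})
for rd, ma in plan:          # prints two RD -> MA pairs
    print(rd, "->", ma)
```

The point of the chaining is that the HLQ table's action quadrant is itself an index into further tables, so the manager's tacit "which section do I investigate next?" step becomes an explicit, reusable lookup.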
Empirical investigation
The proposed framework has been investigated through two different types of empirical investigation: post-mortem analysis and case study. Post-mortem analysis can be defined as "a series of steps aimed at examining the lessons to be learnt from products, processes and resources to benefit on-going and future projects. Post-mortems enable individual learning to be converted into team and organizational learning" (Myllyaho et al., 2004). Case studies (Yin, 2003; Kitchenham et al., 1995), instead, are investigations of real projects carried out in an industrial setting. Consequently, all variables are defined a priori, but the level of control is low. Case studies are strongly influenced by the context of the enterprise providing the experimental environment. Also, the independent variables of the study may change due to management decisions or as a consequence of the natural evolution of the process variables considered during project execution. Generally, a case study is carried out to investigate a phenomenon within a specific range of time. A case study can be used as a means to evaluate the efficiency of a possible innovation, or as a comparative study which evaluates and compares the results of applying an innovative method, technique or tool against the one already in use within the enterprise. Both the post-mortem analysis and the case study were executed on industrial project data of a large software firm. The goal of this firm is to embed risk assessment/treatment in its primary processes in order to support project execution with the experience acquired in former projects. Therefore FIRM, jointly with the Department of Informatics of Bari, has introduced the approach for highlighting and managing the risks that occurred.
To execute the post-mortem analysis, also called simulation, 54 FIRM projects were analyzed, all executed over a period of seven years, and seven of them, considered homogeneous in terms of duration, project size and development-team experience, were selected. Furthermore, to execute the case study, 5 projects were analyzed in itinere in order to directly evaluate the appropriateness of the proposed framework. Both investigations aim at evaluating the proposed approach with respect to the same factors, in the same context and from the same viewpoint. For these reasons the experiment definition and the metric model adopted, explained in the following, are the same.
Experiment definition
The aims of the empirical investigation are to verify whether risk management resulting from the application of the Proposed Approach (PA) is more effective and precise than risk management carried out using traditional Management Support (MS), i.e. the traditional risk management.
Effectiveness means the ability to undertake mitigation actions that, for each expected risk, prevent the risk from degenerating into one or more problems, while Precision is the ability to foresee all the occurred risks. The research goals are thus formalized as follows:
RG1. Analyze the proposed approach for the purpose of comparing it to risk management obtained by only using management support, with respect to Effectiveness, from the viewpoint of the FIRM risk manager, in the context of industrial FIRM projects.
RG2. Analyze the proposed approach for the purpose of comparing it to risk management obtained by only using management support, with respect to Precision, from the viewpoint of the FIRM risk manager, in the context of industrial FIRM projects.
The null hypotheses state that there is no statistically significant difference in effectiveness (respectively, precision) between PA and MS. The independent variable represents the two treatments: risk management using the proposed approach (PA) and risk management using only management support (MS). The dependent variables are the quality characteristics of the research goals, i.e. effectiveness and precision. Both variables were operatively quantified using the metrics presented in the next paragraph.
Metric model
The following metrics were used to quantitatively assess the research goals. They are based on counts such as NER (Number of Expected Risks), NMR (Number of Managed Risks), NOP (Number of Occurred Problems) and NUR (Number of Unexpected Risks); Figure 7 shows the relationships among the metrics in terms of sets.
Fig. 7. Relationships between metrics
Note that Effectiveness can be equal to zero or, at maximum, equal to 100%:
• it tends to 100% when all the Expected Risks are well managed, in particular when NOP tends to zero;
• it tends to 0% when none of the Expected Risks is well managed, in particular when NOP tends to NMR.
Effectiveness therefore means the capability to manage the risks and to put the related mitigation actions to use so that they do not become problems during project execution; for this reason Effectiveness is greater the smaller the number of expected risks that become problems. Precision can tend to zero or, at maximum, tend to 100%:
• it tends to 100% when all the possible risks were detected, in particular when NUR tends to 0;
• it tends to 0% as NUR increases, in particular when the number of UR is much greater than NER.
In fact, Precision means the capability to foresee all the risks that can occur during project execution. At the beginning of each project, and iteratively during project execution, a manager points out a set of Expected Risks. Part of this set, composed of the most critical and relevant risks for the project, will be managed, while the remaining ones will not. In general terms, a risk is managed when a risk response plan is developed for it. The action strategy defined by the manager for each MR is in some cases successful and in other cases the risk turns into an OP; the latter case indicates the ineffectiveness of the action strategy adopted during project execution. Finally, it is also possible that some problems (UP) raised during project execution are related to UR.
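Since the exact formulas are not reproduced in this excerpt, the sketch below assumes definitions chosen to match the limiting behaviour stated above: Effectiveness = (NMR − NOP)/NMR and Precision = NER/(NER + NUR), both expressed as percentages. These formulas are our assumption, not a quotation of the paper.

```python
# Assumed effectiveness/precision metrics, matching the stated limits:
# effectiveness -> 100% when NOP -> 0 and -> 0% when NOP -> NMR;
# precision -> 100% when NUR -> 0 and -> 0% as NUR grows far beyond NER.

def effectiveness(nmr, nop):
    """NMR: managed risks; NOP: managed risks that became problems."""
    return 100.0 * (nmr - nop) / nmr

def precision(ner, nur):
    """NER: expected risks; NUR: unexpected risks met during execution."""
    return 100.0 * ner / (ner + nur)

assert effectiveness(10, 0) == 100.0   # no managed risk became a problem
assert effectiveness(10, 10) == 0.0    # every managed risk became a problem
assert precision(8, 0) == 100.0        # every occurred risk was foreseen
assert precision(8, 24) == 25.0        # many unforeseen risks
```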
Data analysis
The proposed approach was validated through the post-mortem analysis and the case study. In both, statistical analyses were carried out according to the experimental design. First, descriptive statistics were used to interpret the data graphically; the data collected during the experimentation were synthesized through descriptive statistics. Then the data were analysed through hypothesis testing, where the initial hypotheses were statistically validated with respect to a significance level. The dependent variables were tested in order to investigate the significance of the differences observed in the collected values. The results of the data analysis are given in the next paragraphs: the first (6.1) refers to the post-mortem analysis and the second (6.2) to the case study.
Post mortem analysis
This investigation aims at evaluating PA effectiveness and precision by observing the behaviour of the model on legacy data related to projects already executed with the traditional approach. The data set includes 14 observations, 2 for each project. The Wilcoxon test (Wilcoxon, 1945) is the non-parametric alternative to the t-test for dependent samples. Since normality conditions were not always met, the non-parametric test was chosen; we used the Shapiro-Wilk W test to verify whether normality conditions were satisfied. The test points out a significant difference in Effectiveness between the two approaches. Therefore the null hypothesis can be rejected and we can conclude that the proposed approach is more effective than traditional risk management. Figure 9 shows the median values of precision of PA and MS. Table 2 reports the p-level obtained by applying the Wilcoxon test to the Precision of the two approaches. The test points out a significant difference between the two approaches. Therefore the null hypothesis can be rejected and we can conclude that the proposed approach is more precise in risk management.
Table 2. P-level value of the Wilcoxon test for Precision
Experimental group: Precision; p-level: 0.0253; result: reject H0(Precision) and accept H1(Precision).

6.2 Case study data analysis

This kind of investigation evaluates PA effectiveness and precision, compared with MS, measuring them "on the field" during the execution of some processes. For this purpose, 5 projects conducted with both approaches were selected. As for the post-mortem analysis, the collected values did not appear to be normally distributed, and thus the Wilcoxon non-parametric test was used for hypothesis testing; the α-value was fixed at 5%. The test points out a significant difference in Effectiveness between the two approaches. Therefore the null hypothesis can be rejected and we can conclude that the proposed approach is more effective than the manager approach. The case study allows rejecting the null hypothesis with an error probability lower than in the post-mortem analysis. Figure 11 shows the median values of precision of PA and MS. Table 4 reports the p-level obtained by applying the Wilcoxon test to the Precision of the two approaches. Also in this case there is a statistically significant difference in Precision between the two approaches; therefore the null hypothesis can be rejected, concluding that the proposed approach is more precise than the manager approach.
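The paired comparison used in both analyses is the Wilcoxon signed-rank test. The sketch below is a stdlib-only illustration of the exact (permutation-based) version of the test on made-up PA/MS scores; the project data themselves are not reproduced here, and in practice a library routine such as scipy.stats.wilcoxon would typically be used.

```python
from itertools import product

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples
    (stdlib-only sketch; feasible here because n is small)."""
    d = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                   # average ranks over ties
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    # Null distribution: every pattern of difference signs is equally likely.
    stats = [sum(r for r, s in zip(ranks, signs) if s)
             for signs in product([False, True], repeat=n)]
    mean = sum(ranks) / 2
    extreme = sum(1 for s in stats if abs(s - mean) >= abs(w_plus - mean))
    return w_plus, extreme / len(stats)

pa = [92, 88, 95, 90, 85, 93, 89]   # hypothetical PA scores, one per project
ms = [70, 75, 80, 72, 68, 77, 74]   # hypothetical MS scores
w, p = wilcoxon_signed_rank(pa, ms)
print(w, p)   # -> 28.0 0.015625
```

With n = 7 pairs the null distribution has only 2^7 = 128 sign patterns, so exhaustive enumeration is exact; when PA beats MS on every project, W+ takes its maximum and the smallest attainable two-sided p-value is 2/128.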
Lessons learnt
An additional analysis of the experimental data allowed us to make some general qualitative considerations, completing the comparison between PA and MS.
To make these analyses we considered the issue areas listed in the FLQ (Figure 3), e.g. Customer Relationship. We chose the FLQ issue areas because, given the number of collected data, we judged this level of detail the most suitable. According to the post-mortem data, the critical areas (the areas characterized by the highest number of problems) were: Resource Management, Quality Management, and Scope Management. Resource Management consists of human resources and infrastructure management; infrastructure management requires the identification and acquisition of all the equipment necessary to carry out the project. Quality Management consists of planning, constructing and evaluating product and service quality; this function requires, in particular, the planning and conducting of quality assurance reviews, i.e. reviews aimed at evaluating the quality of the process. Finally, Scope Management consists of defining the product or service expected by the customer (product scope) and the corresponding work necessary to achieve it (project scope), and of monitoring changes during project execution. For the critical areas we observed that MS finds a lower NER than PA. Moreover, while in MS only a small part of the NER is managed, in PA all the NER are managed. In addition, in PA the NUR is lower than in MS, which could be a consequence of the better capability of PA to find risks and to manage them. These observations could explain the quantitative post-mortem Precision results. In line with the post-mortem data, we found the same critical issue areas in the case study, but with a decrease in PA criticality. This reduction could confirm that the EF-based approach tends to improve the capability to manage risk in the critical areas thanks to the past experience acquired in those areas.
In fact, the largest numbers of experiences, data and practices are usually related to the most critical areas. In line with this consideration, we observed that the reduction of occurred problems in PA is a consequence of the increased efficacy of the mitigation actions.
Conclusions
This paper proposes a knowledge-based risk management framework able to collect, formalize and reuse the knowledge acquired during the execution of past projects. As instruments supporting the methodology, an appropriate set of assessment questionnaires and decision tables has been proposed. The innovative use of decision tables made it possible to capture risk knowledge during the entire project lifecycle and to improve the knowledge collected in the Knowledge Base. Thanks to the knowledge expressed through decision tables, the proposed approach allows combining the results of each questionnaire to evaluate the effects and the possible mitigation actions. In other words, it allows expressing: the relations between generic and specific issues; the relations between issues, risks and the actions to undertake to mitigate the risks as they occur. To evaluate the proposed approach, the framework was transferred to and investigated in an industrial context through two different types of empirical investigation: post-mortem analysis and case study. The research goals aimed at assessing whether the proposed approach supports risk management in software processes more effectively and precisely than traditional Management Support. Data analysis pointed out a statistically significant difference between the proposed approach and the traditional one with respect to both effectiveness and precision. Such empirical results confirm that better structured risk knowledge, customizable according to the context, helps a manager to achieve more accurate risk management. Moreover, we observed that the proposed approach obtained better results especially in the critical areas, such as Resource Management, Quality Management and Scope Management. Obviously, in order to generalize the validity of the proposed approach, further studies extended to other contexts are needed.
For this reason, the authors intend to replicate the empirical investigations.
Geometry of deadbeat synchronization
The deadbeat synchronization of identical discrete-time nonlinear systems is studied from a geometric point of view. An array of deadbeat observers coupled via a deadbeat interconnection is shown to achieve synchronization in a finite number of steps provided that a compatibility condition is satisfied between the observer and the interconnection. As an illustration of the theory, an example is provided where an array of third order observers achieves deadbeat synchronization.
Introduction
Two or more dynamical systems are said to synchronize when their solutions converge to a common trajectory. The generality of this definition allows many seemingly different cases to make examples of synchronization [1]. One such example is the following pair of discrete-time linear systems

x_1(k+1) = A x_1(k),                          (1a)
x_2(k+1) = A x_2(k) + L (y_1(k) - y_2(k)),    (1b)

where y_i = C x_i and all the eigenvalues of the matrix A - LC are in the open unit disc. The second system (1b) is the classic linear observer [2] and its construction readily yields x_1(k) - x_2(k) -> 0 as k -> ∞ regardless of the initial conditions. Another linear example of synchronization is the following array of systems with rather simple dynamics

y_i(k+1) = Σ_j γ_ij y_j(k),   i = 1, 2, . . . , q,    (2)

where [γ_ij] ∈ R^{q×q} is a (connected) coupling matrix. That is, [γ_ij] satisfies: (i) the entries of each row sum up to unity, which implies that λ = 1 is an eigenvalue with the eigenvector [1 1 . . . 1]^T, and (ii) all the remaining eigenvalues are in the open unit disc. In this case the solutions y_i(k) converge to a fixed point in space and the systems are said to reach consensus [3,4]. At first sight the arrays (1) and (2) may not seem to be related, but they are in fact the two limiting cases of the following general structure

x_i(k+1) = A x_i(k) + L ( Σ_j γ_ij y_j(k) - y_i(k) ),   y_i = C x_i.    (3)

Note that (3) boils down to (1) for [γ_ij] = [[1, 0], [1, 0]] and to (2) for A = C = L = I. An effective way to study the synchronization behavior of an array is through understanding the smaller pieces that it is made of [5,6,7]. So if the array (3) is what we are trying to understand, then it is worthwhile to focus on its two limiting cases: the array (1) and the array (2). In this line of thinking, the very first question that one is tempted to ask is the following. Given that the system (1b) is an observer for the system (1a) and that the array (2) reaches consensus, does the array (3) synchronize? However, a counterexample is easy to construct and this naive guess has to be abandoned.
Having dispensed with the first question, we point our attention, among a number of possibilities, to the following. Given that the system (1b) is a deadbeat observer [8,9,10] for the system (1a) and that the array (2) reaches consensus in a finite number of steps [11,12], does the array (3) synchronize? This guess turns out to be more fruitful than the first one. In fact, not only is synchronization achieved in this case, it is achieved in deadbeat fashion, that is, in a finite number of steps. Motivated by this simple observation on linear systems, we aim in this paper to establish sufficient conditions that guarantee deadbeat synchronization in an array of coupled identical discrete-time nonlinear systems. What we particularly study here is the synchronization behavior of an array of deadbeat observers coupled through a fixed interconnection scheme which itself, if considered separately as the righthand side of an array, enjoys deadbeat synchronization. We show that deadbeat synchronization is achieved under a compatibility condition between the observer and the interconnection.
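The linear deadbeat observer underlying this observation can be illustrated numerically. The sketch below uses an assumed toy pair (A, C) with an output-injection gain L chosen so that A - LC is nilpotent; the observer error then vanishes after at most n = 2 steps, which is the deadbeat property discussed above.

```python
# Deadbeat observation: with A - L*C nilpotent, the observer error
# e(k+1) = (A - L*C) e(k) vanishes after finitely many steps (here n = 2).
# The matrices below are an assumed toy example, not taken from the paper.

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1.0, 1.0],
     [0.0, 1.0]]
C = [1.0, 0.0]       # scalar output y = C x
L = [2.0, 1.0]       # makes A - L*C = [[-1, 1], [-1, 1]], which is nilpotent

x1 = [3.0, -2.0]     # plant state (system 1a)
x2 = [0.0, 0.0]      # observer state (system 1b)
for k in range(3):
    y1 = sum(c * x for c, x in zip(C, x1))
    y2 = sum(c * x for c, x in zip(C, x2))
    x1 = mat_vec(A, x1)
    x2 = [a + l * (y1 - y2) for a, l in zip(mat_vec(A, x2), L)]
    err = [a - b for a, b in zip(x1, x2)]
    print(k, err)    # k=0: [-5.0, -5.0]; from k=1 on the error is [0.0, 0.0]
```

The choice of L places both eigenvalues of A - LC at the origin, so (A - LC)^2 = 0 and the error is exactly zero after two steps, regardless of the initial conditions.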
The literature contains few results on deadbeat synchronization. Motivated by possible applications in secure communications, one of the earliest results on the subject is presented in [13], where conditions are given for the synchronization of two systems of Lur'e type coupled via a scalar output signal. Later, certain improvements to this work were reported, for instance, in [14,15,16], where synchronization still requires that the number of systems be two and the output be scalar. To the best of our knowledge, the problem of deadbeat synchronization has not yet been considered in a general setting where the number of (identical, nonlinear) individual systems is arbitrary and the output signals, through which the systems are coupled, are not necessarily scalar. The contribution of this paper is hence intended to be a better understanding of the mechanism behind synchronization in discrete time from the deadbeat point of view.
The remainder of the paper is organized as follows. The next section contains some preliminary material. Section 3 presents the construction of the deadbeat observer, which is of geometric nature [17,18] and is a special case of what is presented in [10]. The reader will find an illustrative example following this construction. As mentioned earlier, the observer together with the system being observed makes a particular case of synchronization where there are only two systems. To reach a natural generalization of this scenario we take two mental steps. First, we remove the distinction between the two systems by allowing each to observe the other. In other words, we dispense with the drive system (leader)-response system (follower) hierarchy. Second, having removed the distinction between the observer and the observee, we allow the number of systems involved to be arbitrary. At that point a method is required to couple this array of observers. Therefore we introduce in Section 4 what we call the deadbeat interconnection, which is basically a nonlinear generalization of the time-invariant map (coupling matrix) that appears in linear deadbeat consensus. In Section 5 we bring together the observer construction of Section 3 and the interconnection scheme of Section 4 to define the array of coupled observers. There we establish the deadbeat synchronization of this array under a compatibility condition that concerns both the observer and the interconnection. In Section 6 we provide a nonlinear example where an array of third-order deadbeat observers is shown to achieve deadbeat synchronization. Certain issues are discussed in Section 7.
Preliminaries
The set of nonnegative integers is denoted by N and the set of rational numbers by Q. A vector of all ones is denoted by 1. The m × m identity matrix is denoted by I_m, or sometimes simply by I when m is either obvious or immaterial. The symbol ⊗ denotes the Kronecker product. The null space of a matrix M ∈ R^(m×n) is denoted by N(M), and M⊥ denotes a real matrix whose columns form a basis for N(M). For square M we let M^0 = I. Given a map h : X → Y, h^(−1) denotes the inverse map in the general sense that, for y ∈ Y, h^(−1)(y) is the set of all x ∈ X satisfying h(x) = y; that is, we will not need h to be bijective when talking about its inverse. For x_1, x_2, . . . , x_q ∈ R^n we write x = [x_1^T x_2^T . . . x_q^T]^T ∈ R^(qn), and for f : R^n → R^n we let fx := [f(x_1)^T f(x_2)^T . . . f(x_q)^T]^T. We sometimes use "*" as a placeholder for "don't care."
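As a quick numeric illustration of this notation (a sketch with made-up matrices): a basis M⊥ for the null space N(M) can be obtained from the SVD, and the Kronecker mixed-product identity (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), used repeatedly later, can be checked directly.

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Return a matrix whose columns form a basis for N(M) (written M-perp above)."""
    _, s, Vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return Vt[rank:].T          # rows of Vt past the rank span the null space

# Made-up rank-1 matrix: its null space is 2-dimensional.
M = np.array([[1., 2., 3.],
              [2., 4., 6.]])
B = null_basis(M)
assert B.shape == (3, 2) and np.allclose(M @ B, 0)

# Mixed-product property of the Kronecker product: (A (x) B)(C (x) D) = (AC) (x) (BD).
A = np.array([[0., 1.], [2., 3.]])
Bm = A + 1.0
Cm = A + 2.0
D = A + 3.0
assert np.allclose(np.kron(A, Bm) @ np.kron(Cm, D), np.kron(A @ Cm, Bm @ D))
```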
Deadbeat observer
This section is dedicated to the description of the nonlinear deadbeat observer. Later, in Section 5, when we establish the conditions for deadbeat synchronization of an array of coupled observers, the construction presented here will be of key importance. Unlike for linear systems, there is no standard deadbeat observer construction for nonlinear systems; even the definition of a deadbeat observer may not be unique. The definition and the construction that we present in this section are adopted from [10]. The section ends with an illustration of the construction.
Definition
Consider the following discrete-time system

x+ = f(x), y = h(x) (4)

where x ∈ X ⊂ R^n is the state, x+ is the state at the next time instant, and y ∈ h(X) =: Y ⊂ R^m is the output or the measurement. The solution of the system (4) at time k ∈ N, starting at the initial condition x(0) ∈ X, is denoted by x(k). Now consider the following array

x+ = f(x), y = h(x) (5a)
x̂+ = g(x̂, y) (5b)

The solution of the system (5b) at time k ∈ N, starting at the initial condition x̂(0) ∈ X, is denoted by x̂(k). Note that x̂(k) depends also on x(0). We now use (5) to define the deadbeat observer.
Definition 1 Given g : X × Y → X, the system (5b) is said to be a deadbeat observer for the system (4) if there exists an integer p ≥ 1 such that, for all initial conditions, the solutions of the array (5) satisfy x̂(k) = x(k) for all k ≥ p. The integer p is then called a deadbeat horizon.
Construction
To be used in the construction of the observer we define certain sets associated with the system (4); see (7). We make the following two assumptions to guarantee that the observer construction will work. We note that these conditions are only sufficient; for less restrictive assumptions see [10].
Assumption 1 The map f : X → X is bijective.
The following result tells us how to design a deadbeat observer under these assumptions.
Theorem 1 Suppose Assumptions 1-2 hold. Then the system (6) is a deadbeat observer for the system (4) with deadbeat horizon p.
Theorem 1 will later become a corollary of our main result (Theorem 4); hence we omit the proof. Let us now give an example of this construction (6).
Illustration
Consider the system (4) with the map f given in [19], where it is reported to exhibit chaotic behavior for certain values of the real numbers a and b. Let us now construct a deadbeat observer taking h(x) = ξ3. Note first that f is bijective (for b nonzero) and admits an explicit inverse. Since h(x) = ξ3 we can write, by (7), the sets (8) and (9). The intersection of the sets (8) and (9) is a singleton, which means that Assumption 2 is satisfied with p = 3. The dynamics of the deadbeat observer then follow from (6). For the parameter choice a = 1 and b = 1/3, Fig. 1 shows the simulation results for the initial conditions x(0) = (1, −1, 1) and x̂(0) = (0, 2, 1). The description of the deadbeat observer is now complete. As mentioned earlier, we are headed towards understanding the collective behavior of an arbitrary number of identical observers that are interacting. To proceed, we first need to be precise about what we mean by interacting. To this end, we introduce in the next section the so-called deadbeat interconnection. This interconnection scheme will be invoked in Section 5 to characterize the coupling of the array whose synchronization we will study.
Deadbeat interconnection
Here we provide a generalization of the case where the linear array (2) reaches consensus in a finite number of steps, which happens when the characteristic polynomial of the q × q coupling matrix [γij] is d(s) = s^(q−1)(s − 1). When referring to such a [γij] we will use the term deadbeat coupling matrix. A primitive example for q = 4 is a coupling matrix whose powers satisfy [γij]^k = [γij]^3 for all k ≥ 3. This means that the solutions of the array (2) satisfy x_i(k) = x_j(k) for all k ≥ 3 and all i, j; that is, convergence is exact. Now we give the generalization.
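To make this concrete, here is a small numeric sketch (the matrix below is a hypothetical deadbeat coupling matrix, not the example displayed in the text): a directed chain for q = 4 whose characteristic polynomial is s^3(s − 1), so the array (2) reaches consensus exactly after 3 steps.

```python
import numpy as np

# Hypothetical deadbeat coupling matrix for q = 4: agent 1 holds its value
# and each remaining agent copies its predecessor.  The matrix is lower
# triangular with diagonal (1, 0, 0, 0), so d(s) = s^3 (s - 1).
G = np.array([[1., 0., 0., 0.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])

# One eigenvalue at 1, the rest at the origin.
assert np.allclose(np.sort(np.abs(np.linalg.eigvals(G))), [0, 0, 0, 1])

# The powers become constant after q - 1 = 3 steps: G^3 = G^4 = 1 l^T.
assert np.allclose(np.linalg.matrix_power(G, 3), np.linalg.matrix_power(G, 4))

x = np.array([3.0, -1.0, 7.0, 2.0])    # arbitrary initial values x_i(0)
for k in range(3):
    x = G @ x                          # x_i(k+1) = sum_j gamma_ij x_j(k)

# Exact consensus: every agent carries x_1(0) = 3 after 3 steps.
assert np.allclose(x, 3.0)
```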
Definition 2 A map (γ1, γ2, . . . , γq) = γ : Y^q → Y^q is said to be a deadbeat interconnection if the following conditions simultaneously hold.
• There exists an integer r ≥ 1 such that, for all initial conditions, the solutions of the array y+ = γ(y) satisfy y_i(k) = y_j(k) for all k ≥ r and all i, j. The integer r is then called a deadbeat horizon.
Some examples are in order. Let G ∈ R^(q×q) be a deadbeat coupling matrix and Y = R^m. Then the linear map (11), namely y → (G ⊗ Q)y with y = (y1, y2, . . . , yq), is a deadbeat interconnection for any Q ∈ R^(m×m). Note for Q = I that the array (2) makes a special case of this construction. Not all linear deadbeat interconnections must have the structure (11), though. For instance, for Y = R^2 and q = 2, the map y → Γy, with Γ ∈ R^(4×4) as in (12), can be shown to be a deadbeat interconnection for which no G, Q ∈ R^(2×2) exist that yield G ⊗ Q = Γ. Our last example is nonlinear. Let [γij] ∈ R^(q×q) be a deadbeat coupling matrix and Y = R. Then the map γ = (γ1, γ2, . . . , γq) given in (13) is a deadbeat interconnection.
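A quick numeric check of the Kronecker-structured interconnection y → (G ⊗ Q)y, with a hypothetical 3 × 3 deadbeat coupling matrix G (directed chain, r = 2) and an arbitrary Q: since (G ⊗ Q)^k = G^k ⊗ Q^k and G^2 = 1ℓ^T, after 2 steps all components agree.

```python
import numpy as np

# Hypothetical data: G is a 3x3 deadbeat coupling matrix (char. poly s^2(s-1),
# left eigenvector l = e1), Q an arbitrary 2x2 matrix.
G = np.array([[1., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
Q = np.array([[0.5, 1.0],
              [0.2, -0.3]])

Gamma = np.kron(G, Q)                  # the linear interconnection y -> (G (x) Q) y

y = np.array([1.0, 2.0, -1.0, 0.5, 3.0, -2.0])   # stacked y_1, y_2, y_3 in R^2
for k in range(2):                                # r = 2 steps
    y = Gamma @ y

# All components agree, and equal Q^2 sum_j l_j y_j(0) = Q^2 y_1(0) here.
assert np.allclose(y[0:2], y[2:4]) and np.allclose(y[2:4], y[4:6])
assert np.allclose(y[0:2], np.linalg.matrix_power(Q, 2) @ np.array([1.0, 2.0]))
```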
Recall that in the standard observer setting, where there are only two systems in the array, namely the response system (the observer) and the drive system (the system being observed), the driving signal for the observer is the output of the system being observed. In the general setting, where the array contains an arbitrary number of systems, the driving signal of each system (observer) is a function of the outputs of all the systems in the array. An array of q systems means there are q such functions. The deadbeat interconnection (γ1, γ2, . . . , γq) defined in this section is nothing but a particular collection of those coupling functions. Having defined the deadbeat observer and the deadbeat interconnection, we now proceed to study the array formed by bringing the two together.
Deadbeat synchronization
This section is where we finally gather the conditions that yield deadbeat synchronization of an array of coupled identical observers. Let us begin by defining the phenomenon under investigation. Definition 3 Given the maps g : X × Y → X, h : X → Y, and (γ1, γ2, . . . , γq) = γ : Y^q → Y^q, the following array

x_i+ = g(x_i, γ_i(y)), y_i = h(x_i), i ∈ {1, 2, . . . , q} (14)

is said to achieve deadbeat synchronization if there exists an integer τ ≥ 1 such that, for all initial conditions, the solutions satisfy x_i(k) = x_j(k) for all k ≥ τ and all i, j. The integer τ is then called a deadbeat horizon.
This definition lets us state formally the problem to which we propose a solution in this paper: under what conditions on the triple (g, h, γ) does the array (14) achieve deadbeat synchronization? Instead of directly listing the assumptions and establishing the main result, we prefer first to present the linear case, mentioned in the introduction in the slightly less general form (3), that motivated all the analysis in this paper. We take the conditions on which this linear result is founded as justification for some of the assumptions we will make.
Theorem 2 Let A ∈ R^(n×n), C ∈ R^(m×n), and L ∈ R^(n×m) be such that A − LC is nilpotent. Let [γij] ∈ R^(q×q) be a deadbeat coupling matrix. Then the array (15) achieves deadbeat synchronization.
Proof. The array (15) can be written in the stacked form (16). Since G is a deadbeat coupling matrix we have G^r = 1ℓ^T, where ℓ ∈ R^q is the left eigenvector of G for the eigenvalue λ = 1 satisfying ℓ^T 1 = 1. That all the remaining eigenvalues are at the origin allows us to find a transformation matrix V ∈ R^(q×q) bringing G to the form diag(1, J), where J is strictly upper triangular. Without loss of generality we assume that J is in Jordan form. That is, J = diag(J1, J2, . . . , Jσ), where each (Jordan) block Jα, α ∈ {1, 2, . . . , σ}, is strictly upper triangular. Then the size of the largest Jordan block is no greater than r × r. Employing the coordinate change z := (V^(−1) ⊗ I_n)x we can write the dynamics in the new coordinates (17). Since J is block diagonal with blocks strictly upper triangular, the nilpotent part N is also block diagonal, N = diag(N1, N2, . . . , Nσ), with each block Nα block upper triangular with diagonal blocks A − LC, for all α ∈ {1, 2, . . . , σ}. Note that the size of the largest of the blocks Nα is no greater than rn × rn, and this largest block must vanish in rp steps because (A − LC)^p = 0. By the time the largest block has vanished, the other blocks must have also vanished. We deduce therefore that N^(rp) = 0. Equations (16) and (17) then yield x_i(k) = x_j(k) for all k ≥ rp. In other words, all the solutions x_i(·) converge (in deadbeat fashion) to a common trajectory.
Let us go through the conditions required for Theorem 2 in order to generate their nonlinear counterparts. One condition is that the matrix A − LC is nilpotent. This is equivalent to the system (1b) being a deadbeat observer for the system (1a), which suggests for the nonlinear case that each individual system of the array be a deadbeat observer. Recall that, under Assumptions 1-2, we know how to construct a deadbeat observer; see Theorem 1. Another condition in Theorem 2 is that [γij] is a deadbeat coupling matrix. This translates into the requirement that the map y → ([γij] ⊗ Q)y be a deadbeat interconnection.
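The linear mechanism of Theorem 2 can be reproduced numerically. The displayed array (15) is not shown in this excerpt; the sketch below assumes the standard coupled-observer form x_i+ = (A − LC)x_i + L Σ_j γij C x_j, with small made-up matrices for which p = 2 and r = 2, so synchronization should be exact at rp = 4 steps.

```python
import numpy as np

# Illustrative data (not from the paper): n = 2, m = 1.
A = np.array([[1., 1.],
              [0., 1.]])
C = np.array([[1., 0.]])
L = np.array([[2.],
              [1.]])
M = A - L @ C                          # deadbeat observer gain: M^2 = 0, so p = 2
assert np.allclose(np.linalg.matrix_power(M, 2), 0)

# Hypothetical deadbeat coupling matrix for q = 3 (directed chain): G^2 = 1 l^T, r = 2.
G = np.array([[1., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])

# Stacked dynamics of the coupled array under the assumed form
#   x_i^+ = (A - LC) x_i + L * sum_j gamma_ij C x_j,
# i.e.  x^+ = (I (x) (A - LC) + G (x) LC) x.
Phi = np.kron(np.eye(3), M) + np.kron(G, L @ C)

x = np.array([1., 2., 3., 4., 5., 6.])   # arbitrary initial conditions
for k in range(4):                        # r * p = 4 steps
    x = Phi @ x

# All three agents agree exactly after rp steps.
assert np.allclose(x[0:2], x[2:4]) and np.allclose(x[2:4], x[4:6])
```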
Hence the following assumption, which we will later need for the main theorem. Assumption 3 The map γ : Y^q → Y^q is a deadbeat interconnection.
Consider now, in the light of Theorem 1 and under Assumptions 1-3, the array (18) obtained by coupling q copies of the observer (6) through the interconnection γ. Does the array (18) achieve deadbeat synchronization? The answer to this question is negative, which is why we will eventually need a fourth assumption. A counterexample, which is indeed linear, is as follows.
We take X = R^4 and Y = R^2. Let f(x) = Ax and h(x) = Cx. Assumption 1 is satisfied since A is nonsingular. Assumption 2 is also satisfied (with p = 2), where H can be computed accordingly; note that (A − AHC)^2 = 0. As for the deadbeat interconnection, we take γ(y) = Γy, where Γ ∈ R^(4×4) is as in (12), which is known to satisfy Assumption 3 without admitting matrices G, Q ∈ R^(2×2) that realize G ⊗ Q = Γ. (We especially want to emphasize here that the special structure (11) of the interconnection assumed in Theorem 2 is not merely for demonstrational convenience. The structure (11) does indeed play a role in achieving synchronization.) Hence the triple (f, h, γ) = (A, C, Γ) satisfies Assumptions 1-3. Under our parameter choice the array (18) takes a linear form x+ = Φx, which does not achieve deadbeat synchronization. For if it did, the matrix Φ ∈ R^(8×8) would need to have at least (q − 1)n = 4 of its eigenvalues at the origin. However, the characteristic polynomial of Φ turns out to be d(s) = s^8 − 3.5s^7 − 1.5s^6 + 11.5s^5 − 2.5s^4 − 8s^3 − 2s^2.
This example shows that Assumptions 1-3 are not sufficient and that additional conditions are needed for the deadbeat synchronization of the array (18). Let us now provide (in Definition 4) one such condition. First, however, we need to introduce some notation associated with the triple (f, h, γ).
Definition 4
The triple (f, h, γ) is said to be compatible if, for all σ ≥ 1 and where p is as in Assumption 2.
Here is our last assumption.
We need some sort of justification for this assumption. When studying nonlinear systems, a first step towards forming an opinion on whether an assumption is too restrictive for the goal to be achieved is to see what it boils down to for linear systems. For this purpose we point out that, for the linear array (15), compatibility is implied by Assumptions 1-3 whenever the matrix Q is nonsingular. The next theorem formalizes this.
Theorem 3 Let A ∈ R^(n×n) be nonsingular and C ∈ R^(m×n) be such that there exists L ∈ R^(n×m) satisfying (A − LC)^p = 0. Then, given Q ∈ R^(m×m) nonsingular and a deadbeat coupling matrix G ∈ R^(q×q), the triple (A, C, G ⊗ Q) is compatible.
Proof. Since G is a deadbeat coupling matrix, for some integer r ≤ q − 1 we have G^r = G^(r+1) = 1ℓ^T, where ℓ ∈ R^q is the left eigenvector of G for the eigenvalue λ = 1 satisfying ℓ^T 1 = 1. That Q is nonsingular allows us to write Y_σ = N((G^σ − 1ℓ^T) ⊗ I_m). Given σ, let {v1, v2, . . . , vα} be a basis for N(G^σ − 1ℓ^T) and {e1, e2, . . . , em} be a basis for R^m. Then the set ∪_{i,j} {v_i ⊗ e_j} makes a basis for Y_σ. Now, given x ∈ R^(qn), suppose (I_q ⊗ CA^k)x ∈ Y_σ for all k = 0, 1, . . . , p − 1, so that each such vector can be expanded in this basis with scalars α_kij. Since A is nonsingular, the existence of some L satisfying (A − LC)^p = 0 implies that [C^T A^T C^T . . . (A^T)^(p−1) C^T] has full row rank. Hence, for each k = 0, 1, . . . , p − 1, we can find M_k ∈ R^(m×m) with the required property, and the compatibility condition follows.
The next theorem is our main result, which says that if a number of identical deadbeat observers are coupled through a deadbeat interconnection, then the array achieves synchronization in a finite number of steps provided that the observer and the interconnection satisfy the compatibility condition of Definition 4.
Theorem 4 Suppose Assumptions 1-4 hold. Then the array (18) achieves deadbeat synchronization with deadbeat horizon τ = rp.
To prove our claim we suppose that it holds for some k. Let x, z be such that Then we can write for all k and α. Note that it is enough to establish this for α = 1. Again we employ induction. Suppose (20) holds with α = 1 for some k. Then Now, by (19) and (20) which completes the demonstration of Lemma 1.
Note that the array (18) leads to the following system (21) in X^q, where the lifted maps f and h act componentwise. The next two results concern the system (21). Lemma 2 Consider the system (21). For all k ≥ p − 1 the solution x(k) =: x_k satisfies the stated relation for all α ∈ {1, 2, . . . , p}.
Proof of Theorem 4. Consider the system (21). Given some k ≥ p and σ ≥ 1, suppose hx_(k−α) ∈ Y_σ for all α ∈ {1, 2, . . . , p}. Then by Lemma 3 we have the corresponding memberships for all α ∈ {1, 2, . . . , p}. By compatibility, therefore, hx_k ∈ Y_(σ−1). Hence we established (23). Now note that Y_(σ−1) ⊂ Y_σ by definition and Y_r = Y_q, where r ≥ 1 satisfies γ^r Y^q = Y_0 because γ is a deadbeat interconnection. In the light of these facts, (23) implies hx_(τ−α) ∈ Y_1 for all α ∈ {1, 2, . . . , p}, where τ := rp. Invoking Lemma 3 once again we have (24). The interpretation of this for the array (18) holds for all α ∈ {1, 2, . . . , p}. Note that Assumption 2 and the first property listed in Lemma 1 imply x = h^(−1)(hx) ∩ [x]^+_(p−2) for all x ∈ X. Then we can write the resulting chain of equalities, where for the last step we used the property [x]^+_(−1) = X. By (24) we can then write the same for the solutions. Recall that f is bijective. Hence x_i(τ) = x_j(τ). This equality emerges from arbitrary initial conditions. The time-invariance of the array (18) therefore implies that x_i(k) = x_j(k) for all k ≥ τ and all i, j.
is itself a deadbeat interconnection (over X q ) under the assumptions of Theorem 4.
Remark 5 Observe that Theorem 1 directly follows from Theorem 4 by letting the interconnection γ : Y^2 → Y^2 be such that γ1(y) = γ2(y) = y1.
An example
As an illustration of Theorem 4, we now provide an example where an array of nonlinear observers achieves deadbeat synchronization. The pair (f, h) below is borrowed from [10].
with (a, b, c) = x ∈ X = R^3 and Y = R. The map f is bijective, hence Assumption 1 holds, and the relevant set intersection is a singleton. (We refer the reader to [10] for derivations.) Therefore Assumption 2 is satisfied with deadbeat horizon p = 3. Regarding the deadbeat interconnection γ, our choice is the one given in (13), with [γij] ∈ R^(q×q) a deadbeat coupling matrix. Hence Assumption 3 also holds. The question now is whether the triple (f, h, γ) is compatible or not.
Then by (26) we can write Hence the result.
Notes
The deadbeat interconnection considered in this paper is fixed. A possible relaxation is suggested by the proof of Theorem 4. Namely, deadbeat synchronization would still be achieved with a time-varying interconnection γ(k, y) provided that the sets Y_σ stayed fixed and the relation γ(k, Y_σ) ⊂ Y_(σ−1) was satisfied at all times k. This is very closely related to what we mentioned in Remark 2 regarding the linear array (15). Further generalization in this direction nevertheless seems not to be an easy task.
A practical design problem is how to construct a deadbeat interconnection compatible with a given (f, h) pair. A primitive solution to this problem is to select an interconnection whose connected graph is a (directed) tree. In that case, in an array of q systems, q − 1 of the systems would each be driven by exactly one other system, i.e., for each i ∈ {1, 2, . . . , q − 1} we would have γ_i(y) = y_j for some j ≠ i, and one system (the root) would be driven by no one, i.e., γ_q(y) = y_q. However, when one starts considering interconnection schemes that include cycles in their graphs, the problem seems to lack an obvious systematic solution.
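The tree-structured interconnection just described can be sketched numerically (the tree below is hypothetical): each non-root system copies the output of its parent, and all outputs agree after a number of steps equal to the depth of the tree.

```python
import numpy as np

# Hypothetical directed tree on q = 5 systems (0-indexed): system 4 is the
# root and is driven by no one; every other system copies its parent's output.
parent = [4, 4, 0, 0, 4]             # parent[i] = j  means  gamma_i(y) = y_j

def gamma(y):
    """Tree interconnection: each system takes its parent's output."""
    return np.array([y[parent[i]] for i in range(len(y))])

y = np.array([10.0, -3.0, 7.0, 1.0, 5.0])   # arbitrary initial outputs
for k in range(2):                           # depth of this tree is 2
    y = gamma(y)

# All outputs agree with the root's initial value after "depth" steps.
assert np.allclose(y, 5.0)
```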
"year": 2012,
"sha1": "6e67d48d6e8c3d2e75d5aeafaa4697b547ec0f6e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6e67d48d6e8c3d2e75d5aeafaa4697b547ec0f6e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Background Severe mental disorders have become a topic of increasing research interest because of their serious consequences for quality of life and functioning. This study examines the self-care ability of patients with severe mental disorders in Beijing and its influencing factors, based on a questionnaire survey conducted in 2019. Methods Proportionate stratified sampling was used to select representative patients as samples. The demographic characteristics of patients were obtained from the Management Information System for Severe Mental Disorders and from the questionnaires. Self-care ability was measured with a self-designed scale. Descriptive statistics, t-tests, and multiple linear regression were used to analyze the data. Results We surveyed 662 people and found that deficient self-care ability is common among patients with severe mental disorders. Self-care ability was positively correlated with education level and with having the guardian as sole caregiver, and negatively correlated with age, course of disease, and physical disease (P < 0.05). By dimension, daily basic activities were positively correlated with education level and negatively correlated with physical disease (P < 0.05); housework ability was positively correlated with gender, education level, and medication adherence, and negatively correlated with source of income and physical disease (P < 0.05); social function was positively correlated with education level, having the guardian as sole caregiver, and medication adherence, and negatively correlated with age, source of income, course of disease, and physical disease (P < 0.05). Conclusion The self-care ability of patients with severe mental disorders is affected by many factors, including patient characteristics and social factors. Targeted interventions are therefore needed to help patients restore their self-care ability, which requires the joint efforts of the government and the whole society.
INTRODUCTION
The World Health Organization defines severe mental disorder (SMD) as a group of conditions that include moderate to severe depression, bipolar disorder, and schizophrenia and other psychotic disorders (1). Patients usually have moderate to severe impairment of work or non-work activities, as well as impairments in social function and basic daily mobility (2). To make matters worse, SMD is often accompanied by physical diseases such as cancer, cardiovascular disease, diabetes, stroke, tuberculosis, and AIDS (3). The WHO reports that around 1 in 9 people in settings affected by conflict have a moderate or severe mental disorder, and that SMD patients die 10 to 20 years earlier than the general population (4). In addition, depressive disorders are listed among the top 10 causes of DALYs (5). Because of the disease, patients often face obstacles to personal wellbeing, social relationships, and work productivity (6). The WHO estimates that depression and anxiety cost the global economy US$1 trillion each year (7). Mental, neurological, and substance use disorders also account for 10% of the global burden of disease and 30% of the non-fatal disease burden (4). In China, there are six types of SMD: schizophrenia, bipolar disorder, paranoid disorder, schizoaffective disorder, mental disorders in epilepsy, and mental retardation (8). Studies have estimated that between 2012 and 2030 the loss of productivity due to mental illness in China will reach US$900 million (9). Improving the self-care capacity of people with mental disorders is therefore a necessary measure to promote population health and reduce the burden on patients, families, and society.
To better help people with mental disorders recover, the WHO launched the Comprehensive Mental Health Action Plan 2013-2020 in 2013 (10). This plan supports the establishment of organizations that help people with mental disorders and psychosocial disabilities. It also calls for multi-sectoral collaboration to offer assistance at different stages of the life course, such as educational opportunities, employment, and participation in community activities. In China, the national "686 Program" (also called the "Central Government Support for the Local Management and Treatment of Serious Mental Illness Project") provided treatment and assistance for patients from poor families and implemented "unlocking" actions for patients locked at home, to increase the rate of patients' recovery and return to society (11). In response to the call to promote the self-care ability of SMD patients, a series of policies focusing on community rehabilitation have been issued across the country. Jilin, Jiangsu, Jiangxi, and other provinces provide rehabilitation services such as labor-skill and social-ability training for patients through community pilots. Such community-based models for helping patients with mental disorders recover have also been implemented in the United States, Britain, France, Japan, and other countries (12,13). Hunan Province, drawing on the experience of the United States (14), introduced a more humane form of care called the "clubhouse model." This model regards patients as members and helps them actively integrate into society by establishing an open, relaxed, and positive rehabilitation environment.
Beijing, as the capital of China, also attaches great importance to the rehabilitation of people with mental disorders. According to the 2020 Annual Report on the Monitoring of Severe Mental Illness, 81,347 SMD patients are registered in Beijing (15), and the cumulative number of patients has been increasing in recent years. The Implementation Program for Precision Care for Persons with Disabilities in Beijing (2018-2020) proposes a "1+6" plan system, which requires communities to have the infrastructure to provide rehabilitation training, self-care training, and social-adaptability counseling. In 2018, Beijing launched a pilot construction of a psychosocial service system to help patients recover their physical and mental health more comprehensively with the help of the community. Because of the financial and care burdens caused by severe mental disorders, many measures have been implemented. Improving patients' self-care ability would benefit not only the patients and their families but would also be of great significance for maintaining social stability. However, only a few publications have examined the self-care ability of SMD patients in China. Li et al. (16) investigated the self-care ability of patients with mental disorders in three medical institutions and found that patients in hosting centers and rehabilitation centers had stronger self-care ability. Chen et al. (17) investigated the self-care ability and social support of elderly patients in the community and found that the elderly had impaired daily-life function and lacked social support. In the past ten years, there has also been little research abroad on the self-care ability of SMD patients.
Given the need for further research, this study investigates the current state of self-care ability among SMD patients and the factors affecting it, in the hope of informing more targeted assistance for patients and providing a basis for improving relevant policies.
Sample and Data
In Beijing, when SMD patients are diagnosed by psychiatric hospitals, their information is recorded in the Beijing Municipal Management Information System for Patients with Severe Mental Disorders so that assistance can be provided. We used this system to select a representative sample for a cross-sectional study of SMD patients in Beijing. The demographic characteristics and self-care ability of patients were obtained from the system and from questionnaires (the questionnaire can be found in Multimedia Appendix 1 and the raw data in Appendix 2). To ensure the quality of the investigation, we recruited medical students with a social-medicine background as investigators. Investigators were trained for 1 day on the application of the scale and on data collection before the investigation commenced. They were required to explain the purpose of the study to ensure that all respondents participated voluntarily and signed informed consent. This study passed the ethical review conducted by the Medical Ethics Committee of Capital Medical University. Questionnaires were distributed one-on-one and answered anonymously; guardians were interviewed face-to-face; and the completed questionnaires were collected by the investigators.
The samples were obtained by proportionate stratified sampling in the SMD patient management system using the following steps. First, the total sample size and the sampling districts were determined. Four districts were selected according to the functional areas of Beijing (the core functional area of the capital, the urban fringe sustainable-development area, the urban development zone, and the ecological conservation area), plus Tongzhou District (the deputy city center). As a result, XiCheng, ChaoYang, ChangPing, TongZhou, and MiYun were selected; the selected districts are shown in Figure 1. Second, the sample size for each district was calculated based on the proportion of patients in that district, and each sample district was divided into urban and rural areas, with 1 street and 1 township randomly selected from the two areas, respectively. Third, the sample size for each street/township was determined based on the ratio of the number of patients in each street/township; then, to make the sample representative, patients were grouped by gender and age, and the number of patients in each group was calculated based on the gender-age ratio in Beijing in 2017. Finally, patients were selected randomly according to the calculated number of samples in each group. Because the guardian is the patient's main caregiver, with a better understanding of the patient's self-care ability and a more accurate evaluation of it, the questionnaires were answered by guardians.
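The proportionate allocation described above can be sketched as follows (the district patient counts here are made up for illustration, not the study's real figures; largest-remainder rounding keeps the allocations summing to the total):

```python
import random

def proportionate_allocation(strata_sizes, total_sample):
    """Allocate a total sample across strata in proportion to stratum size."""
    total = sum(strata_sizes.values())
    quotas = {k: total_sample * v / total for k, v in strata_sizes.items()}
    alloc = {k: int(q) for k, q in quotas.items()}          # floor of each quota
    remainder = total_sample - sum(alloc.values())
    # Largest-remainder rounding: give the leftover units to the strata
    # whose quotas have the largest fractional parts.
    for k in sorted(quotas, key=lambda k: quotas[k] - int(quotas[k]),
                    reverse=True)[:remainder]:
        alloc[k] += 1
    return alloc

# Hypothetical registered-patient counts per sampled district.
districts = {"XiCheng": 9000, "ChaoYang": 15000, "ChangPing": 8000,
             "TongZhou": 7000, "MiYun": 4000}
alloc = proportionate_allocation(districts, 930)
assert sum(alloc.values()) == 930

# Within a stratum, respondents are then drawn by simple random sampling.
stratum_ids = list(range(districts["MiYun"]))
sampled = random.sample(stratum_ids, alloc["MiYun"])
assert len(sampled) == alloc["MiYun"]
```

The same allocation step would then be repeated within each district for streets/townships and for the gender-age groups.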
The inclusion criteria for survey subjects were as follows: (1) the patient has been registered in the management system and diagnosed with one of the six types of severe mental disorders (schizophrenia, bipolar disorder, paranoid disorder, schizoaffective disorder, mental disorders in epilepsy, and mental retardation); (2) the guardian is the primary caregiver of the patient and has sound cognitive function; (3) the patient and guardian volunteered to participate in the survey. The exclusion criterion was: patients who had not been profiled in the management system for SMD patients. Nine hundred and thirty questionnaires were distributed in the five districts of Beijing, of which 910 were returned, a response rate of 97.9%. After deleting records with missing values and illogical responses, the final sample consisted of 662 people.
Instruments
In this research, a self-designed questionnaire was used to evaluate the self-care ability of patients. Several steps were taken to select the items for the questionnaire. First, the questionnaire was drafted with reference to the Activities of Daily Living (ADL) scale, which has been used in many previous studies and shown to have good reliability and validity (18,19), and to the factors that might affect self-care ability. Second, the questionnaire was revised after expert evaluation. Third, a pre-survey was conducted with a small sample to further refine the instrument. The final questionnaire has two parts. The first part covers demographic characteristics. Previous studies have shown that factors such as age, course of illness, and gender have an impact on mental health (20,21). Taş (22) showed that the mean self-care ability score of individuals who were single, had no children, or had a family member with mental disease was significantly lower. Accordingly, information on the patient's gender, age, household registration, education level, etc., was collected from the system and the questionnaires. Guardians rated the patient's medication adherence and mental stability on Likert scales and provided information about the patient's diagnosed physical diseases through the questionnaire. The second part evaluates self-care ability with reference to the ADL scale. The scale consists of 18 items subdivided into 3 dimensions: daily basic activities (9 items, including meals, dressing, bathing, etc.), household activities (3 items: sweeping, cooking, and laundry), and social activities (6 items, including shopping, making phone calls, managing finances, etc.) (23,24).
Each item is rated as "the patient cannot do it at all," "the patient needs help from others to do it," or "the patient can do it completely unaided," assigned scores of 1 to 3. The total score thus ranges from 18 to 54; the higher the score, the better the self-care ability.
Cronbach's α was used to assess reliability and the Kaiser-Meyer-Olkin (KMO) measure was used to assess validity. Cronbach's α was 0.940 and the KMO was 0.929, demonstrating high reliability and good construct validity.
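For reference, Cronbach's α for a respondents × items score matrix follows the textbook formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). A generic sketch of that formula (not the study's SPSS procedure), using population variances throughout:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows (one score per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    with k the number of items and population variances throughout.
    """
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```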
To make mean values comparable across dimensions, scores were standardized with 100 as the full score. Descriptive statistics were used to report sample characteristics: frequencies and percentages for count data, and means and standard deviations (SD) for measurement data. To examine the impact of demographic characteristics and other factors on patients' self-care ability, the statistical analysis proceeded in two steps. First, t-tests were used to compare self-care ability across dichotomous independent variables (such as gender) and ANOVA was used for variables with more than two categories (such as education level). Second, multiple linear regression was performed. Since the regression results did not require comparison across dimensions, unstandardized scores were used in the regression to improve accuracy. Self-care ability comprises three dimensions, daily basic activities, household activities, and social activities, so a multiple linear regression was fitted for each dimension. All statistical analyses were performed in SPSS 26.0, with statistical significance set at the conventional 0.05 level.
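The paper does not state the exact standardization formula; a natural reading is a linear percent-of-maximum rescaling per dimension. Under that assumption, a dimension score would be rescaled as:

```python
def standardize(raw_score, n_items, max_per_item=3):
    """Rescale a dimension's raw score so that the full score equals 100.

    Assumes the standardization is percent-of-maximum:
    standardized = raw / (n_items * max_per_item) * 100.
    """
    return raw_score / (n_items * max_per_item) * 100

# Example (hypothetical scores): the daily-basic-activities dimension
# has 9 items, so a raw score of 27 maps to the full score of 100.
```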
Description of the Basic Characteristics of the Sample
The basic characteristics of all respondents in this study are shown in Table 1.
Variables Assignments and Univariate Analysis Results
The variable assignments and the univariate analyses of factors affecting self-care ability are shown in Table 2. Patients' self-care ability was impaired to varying degrees: scores for basic daily activities were high, whereas scores for household activities and social function were low, indicating that these two abilities were seriously impaired. The results also indicated that age, education, course of disease, and physical disease were statistically significant in all dimensions. Mental stability, defined here as mood fluctuations being controllable and not affecting daily life, was statistically significant only for total ability; gender was significant only for the household activities dimension; and medication adherence was significant only for the social function dimension. In terms of disease type, patients with schizophrenia and bipolar disorder had better overall self-care ability and better basic daily activity ability than patients with other mental illnesses, and bipolar patients had better social functioning than patients with other mental illnesses.
Multiple Linear Regression Analysis Results
Multiple linear regression was used to analyze the impact of demographic characteristics, disease-related characteristics, and medication adherence on self-care ability. The results are shown in Table 3. Patients who were young, highly educated, had a short course of illness, had no physical illness, and were cared for by a single guardian had high overall self-care ability. Patients with a high level of education and no physical disease had greater ability to perform basic daily activities. Patients who were female, highly educated, relied on wages as a source of income, had no physical disease, and had good medication adherence were more able to perform household activities. Patients who were younger, highly educated, relied on wages as a source of income, had a short course of illness and good medication adherence, had no physical disease, and were cared for by a single guardian had greater social functioning. In general, education level and physical disease affected all dimensions of self-care. The regression coefficients for each dimension were significant at the 0.05 level.
DISCUSSION
The situation of people with severe mental disorders is poor in many countries. Berlim et al. (25) reported that the quality of life of patients with mental disease is poor in Brazil, and similar findings have been reported in the United States, Germany, and South Africa (26)(27)(28). This study likewise found that the self-care ability of patients with severe mental disorders in Beijing was impaired. Most patients were older, had a long course of illness, and nearly half had a comorbid physical disease; these respondent characteristics are similar to those reported by Fleury et al. (29) and Shumye et al. (30). The average scores for basic daily activities were above 90, indicating that patients generally retain a strong ability to eat, dress, and perform other simple behaviors. The weakening of household activities and social function was more serious, with average scores below 80, much lower than for basic daily activities. Although this study did not explore the reasons for this weakening, previous literature has. Some scholars have suggested that one reason is a lack of perseverance and enthusiasm for life, leaving patients unable or unwilling to participate actively in household and social activities (31,32). In addition, prejudice and discrimination are common in society, and patients internalize a sense of stigma (33) that deprives them of social identity (34), resulting in more serious damage to social function. It is therefore necessary to popularize mental health knowledge across society, both to raise awareness of mental illness prevention and to change people's prejudice against mental disorders. This research also found that patients with schizophrenia and bipolar disorder have better self-care ability than those with other psychiatric disorders, and Lan et al.
(35) also indicated that patients with these two disorders have better treatment compliance, suggesting that self-care ability can be improved by increasing compliance. Mental health knowledge should therefore be further disseminated so that patients become subjectively willing to accept treatment, while follow-up visits by community physicians objectively promote treatment. This study found that some demographic characteristics significantly affect patients' self-care ability. First, education level affects all dimensions of self-care ability, including daily basic activities, household activities, and social function: the higher the level of education, the stronger the self-care ability. Caron (36) has pointed out that patients with a high level of education have greater awareness of their disease and are more willing to accept treatment and cooperate with community rehabilitation activities. Other factors also affect self-care ability but act on different dimensions. For example, gender affects only household ability, while age affects only social ability. Women are often the main contributors to domestic activities in traditional Chinese families, so female patients have greater household ability (36). Older patients have poorer memory and comprehension, which reduces their willingness and ability to participate in social activities; this inference is consistent with a study (20) that found age to be a factor influencing quality of life among older people in Songkhla. Classified management can therefore be implemented for patients. For those with higher education, the goal is to restore self-care ability to the normal level, because such patients show better treatment compliance and are easier to rehabilitate.
For the elderly and other vulnerable groups, the goal is to help them improve their self-care ability as much as possible. The community should visit patients regularly to understand their physical and mental status and medication use. Welfare institutions such as nursing homes should pay particular attention to the mental health of the elderly, who are more likely to develop psychological problems due to a lack of family companionship.
Another finding of this study is that the longer the course of the disease, the weaker the patient's household and social abilities. Mental disorders, as chronic diseases, are characterized by a long course and frequent recurrence (37). Over long-term treatment, patients tend to lose confidence and become negative, gradually shifting from cooperating with treatment to resisting it. In addition, previous studies have shown that mental disorders are often accompanied by physical diseases (38)(39)(40). Impaired physical and mental health prevents patients from engaging in household activities and complex social activities (41). Clinical research therefore needs to pay attention to complications and promote the recovery of overall function through combined therapies. Moreover, health care should be reorganized to facilitate the treatment and recovery of people with comorbid mental and physical disorders.
Adherence to drug treatment helps improve symptoms. Ansari et al. (42) found that improving medication compliance is of great significance for the management of schizophrenia, yet research has shown that the medication compliance of patients with mental disorders is generally poor (43)(44)(45)(46). Interventions to improve medication adherence among SMD patients are therefore urgently needed. Sun et al. (47) pointed out a significant correlation between economic status and medication compliance, and the Community Free-Medication Service (CFMS) policy was implemented in Beijing in 2013 to reduce the financial burden of taking medicine. Existing research also suggests that family and community play an important role in improving medication compliance (48,49). Health education should therefore be carried out for patients and families, and the quality of physicians and community services should be improved, using case management and medication consultation to improve patients' medication compliance.
The main contributions of this study are as follows. First, the sample was selected in the community rather than in psychiatric hospitals, unlike most previous clinical studies; with deinstitutionalization, SMD patients are gradually returning to the community, where the focus of care should now lie. Second, the study described the overall situation of mental disorders, and the participants covered all categories of SMD rather than patients with a single type of disorder (such as bipolar disorder or depression), making the results more comprehensive. In addition, compared with the developed countries examined in previous studies, such as the United States, Britain, and Germany (50)(51)(52), this article provides empirical evidence from a different research context: China is a populous country at a low-to-middle level of economic development, and unlike other developing countries such as India (53), its political system, and hence its medical and health care system, has its own particularities. This study also has limitations. First, the questionnaire was answered by guardians, who through years of caregiving know the patient's self-care ability best; the evaluation would be more comprehensive if physicians' opinions were added. Second, owing to limited research time and resources, this study used a simple cross-sectional design, which makes it difficult to establish a causal relationship between mental disorders and impaired self-care ability. Moreover, mental disorders often have complications that can also damage self-care ability; which diseases impair self-care ability requires more in-depth research. Finally, patients with severely impaired self-care ability may have been difficult to survey, so the survey results may deviate from the actual situation.
In future work, the respondents in this survey could be followed up regularly to further explore the relationship between mental disorders and self-care ability.
CONCLUSION
A cross-sectional study was used to explore the self-care ability of SMD patients. The results showed that most patients had impaired self-care ability and that it is influenced by many factors. It is suggested to strengthen assistance to vulnerable groups and to pay attention to psychological interventions for patients. The impairment of self-care ability caused by complications also deserves attention, and more meticulous, comprehensive treatment plans are needed. Family and community interventions can be used to improve patients' medication compliance and help patients recover their self-care ability.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The protocol of this study was approved by the Medical Ethics Committee of Capital Medical University (No: Z2020SY123). Participation was voluntary, written informed consent was obtained from all respondents, and all data were collected anonymously.

AUTHOR CONTRIBUTIONS

[...] the statistical analysis. JZ, YC, and CC wrote sections of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.
Functional analysis of human T lymphotropic virus type 2 Tax proteins
Background

The Tax proteins encoded by human T lymphotropic virus type 1 (HTLV-1) and type 2 (HTLV-2) are transcriptional activators of both the viral long terminal repeat (LTR) and cellular promoters via the CREB and NFkB pathways. In contrast to HTLV-1, HTLV-2 has been classified into four distinct genetic subtypes, A, B, C and D, defined by phylogenetic analysis of their nucleotide sequences and the size and amino acid sequence of their Tax proteins. In the present study we analysed and compared the transactivating activities of three Tax 2A proteins and one Tax 2B protein using LTR and NFkB reporter assays.

Results

We found that, with the exception of the prototype Tax 2A Mo protein, the other two Tax 2A proteins failed to transactivate either the viral LTR or NFkB promoter in Jurkat and 293T cells. Loss of activity was not associated with either expression levels or an alteration in subcellular distribution, as all Tax 2 proteins were predominantly located in the cytoplasm of transfected cells. Analysis of the sequences of the two inactive Tax 2A proteins relative to Mo indicated that one had six amino acid changes spanning the protein and the other had a single change in the central region of the protein. Mutations present at the amino and the extreme carboxy termini of Mo resulted in the loss of LTR but not NFkB activation, whereas those occurring in the central region of the protein appeared to abolish transactivation of both promoters. Analysis of the transactivation phenotypes of Tax 1, Tax 2A Mo and Tax 2B containing mutations identified in the present study, or previously characterised Tax mutations, showed that the domains required for LTR and NFkB activation are very similar but not identical in all three Tax proteins.
Conclusion

Our results suggest that the loss of activity of two Tax 2A proteins derived from different isolates is associated with amino acid changes relative to Mo in domains required for the activation of the CREB, or CREB and NFkB, pathways, and that these domains are very similar but not identical in Tax 2B and Tax 1. The loss of Tax function in 2A viruses may have implications for their biological and pathogenic properties.
Background
HTLV-1 and HTLV-2 are closely related human retroviruses which have a preferential in vivo tropism for CD4 + and CD8 + T lymphocytes respectively. HTLV-1 is the causative agent of adult T cell leukaemia (ATL) and a neurodegenerative disorder, tropical spastic paraparesis or HTLV-1 associated myelopathy (TSP/HAM) [1][2][3][4]. In contrast, the role of HTLV-2 in human disease is less clearly defined; however increasing evidence suggests that infection may also be associated with rare lympho-proliferative and neurological disorders [5][6][7].
In addition to the essential retroviral proteins Gag, Pol and Env, HTLV encodes a number of regulatory and accessory proteins that modulate viral gene expression and play important roles in viral pathogenesis. The most widely studied of these is the transactivating protein Tax [8]. Tax is known to alter cellular signalling pathways by interacting with a number of cellular transcription factors including activating transcription factor/cAMP response element-binding protein (ATF/CREB) and NFkB. Specifically, Tax enhances transcription of the viral genome by interacting with CREB/ATF, which increases its affinity for conserved binding sites within the LTR and cellular promoters. With respect to the NFkB pathway, cytoplasmic Tax acts by binding IKKγ, which induces the phosphorylation and degradation of IkB-α, the inhibitor of NFkB, thereby allowing the NFkB complex to migrate to the nucleus and induce gene expression.
The different subtypes of HTLV-1 encode Tax proteins (Tax 1) of equal lengths. In contrast, HTLV-2 has four distinct genetic subtypes, A, B, C and D, defined by phylogenetic analysis of their nucleotide sequences and the size and amino acid sequence of their Tax proteins. The Tax proteins of HTLV-2 (Tax 2) vary in length, with Tax 2B and -2C having similar lengths to Tax 1, 356 and 353 amino acids respectively, although the C-terminal sequences of these proteins are divergent [9,10]. Tax 2A lacks a 25 amino acid C terminal sequence having a stop codon which truncates the protein at amino acid 331. HTLV-2D encodes a Tax protein of 344 amino acids that as yet remains uncharacterised [11]. Studies comparing the relative transactivation functions of Tax 1 and Tax 2 indicate that, with the exception of Tax 2A, there are no significant differences in transactivation activities via CREB and NFκB pathways between the Tax proteins of these two viruses and suggest that Tax 2B may have the same pathogenic potential as Tax 1 [12].
Several studies have identified functional domains in Tax 1 that are required for NFkB and LTR activation. These regions include activation domains at the amino and carboxy termini, a CREB binding domain and a zinc binding domain within the first 60 amino acids [13,14]. Tax 1 and Tax 2 contain nuclear localization signals (NLS) at the amino terminus, between amino acids 1-60 [15] and 1-40 [16] respectively, and nuclear export signals (NES) located between amino acids 188 and 202 [17,18]. Using mutations previously characterised in Tax 1, Tax 2A was found to contain functional domains similar but not identical to those of Tax 1 [19]. Various studies have reported that Tax 1 shuttles between the nucleus and the cytoplasm and, depending on the cell line, is predominantly located in the nucleus [20,21]. A recent study has shown that, in contrast to Tax 1, Tax 2A and Tax 2B are predominantly found in the cytoplasm of either an HTLV-2 infected cell line or cells transfected with Tax 2 expression plasmids [22]. Using chimeric plasmids containing domains from Tax 1 and Tax 2, it was shown that amino acids 90 to 100 are involved in the cytoplasmic localization of Tax 2.
In a previous study we reported that some Tax 2A proteins exhibit poor transactivation of both the CREB and NFkB pathways, and this appeared to be related to decreased levels of Tax 2A expression [12]. The aims of the present study were firstly to examine the ability of different Tax 2A proteins to transactivate the viral LTR and an NFkB promoter in relation to expression levels, sequence variation and subcellular distribution, and secondly to determine if Tax 2A and Tax 2B have similar functional domains. We show that two Tax 2A proteins were non-functional relative to the prototype 2A Mo protein in either Jurkat or 293T cells. Loss of activity was not correlated with Tax 2A expression levels or altered subcellular distribution but appears to be due to the presence of amino acid changes. We identified previously uncharacterised mutations in the non-functional Tax 2A proteins that result in either defective LTR and NFkB activation or defective LTR but not NFkB activation. These mutations resulted in similar but not identical transactivation phenotypes in Tax 2B.
Transactivation phenotypes of Tax 2A Lor and Gar
In the present study we examined the transactivation phenotypes of two Tax 2A proteins, Lor and Gar, and compared them with the prototype 2A isolate, Mo. Lor was derived from an HTLV-2A infected cell line and Gar was derived from cultured PBMCs from an HTLV-2/HIV-1 co-infected patient (W. Hall unpublished). All Tax coding sequences were cloned into the same expression plasmid and were tagged with a HIS tag to allow the simultaneous detection of all Tax proteins. An HTLV-1 LTR-LUC reporter was used in this study to assess the activity of Tax 2 proteins, as previous studies have shown that there is no significant difference in the ability of Tax 2 proteins to activate the LTR of either HTLV-1 or HTLV-2 [19]. Functional assays were performed in Jurkat cells as these are lymphocytes and represent the natural targets of HTLV in vivo.
In initial studies we employed well characterised Tax mutants in our assays as had been reported in other studies. Specifically we tested the transactivation activities of the Tax 2A Mo mutants designated M22 (S130A/L131F), which was shown in previous studies to result in LTR but not NFkB activation, and M47 (I319R/L320S), which was shown to abolish activation of the LTR by Tax 2A Mo while not affecting NFkB activation [13,14]. These mutants displayed the expected transactivation phenotype (Table 1). Similar results were also obtained with the Tax 2B M22 and Tax 2B M47 mutants. Tax 2A Lor and Gar failed to transactivate either the viral LTR or NFkB promoters in Jurkat and 293T cells compared to the prototype Tax 2A protein Mo or Tax 2B ( Figure 1A and 1B, respectively). Wildtype Mo was repeatedly found to activate the LTR and NFkB promoters less efficiently than Tax 2B, for example 60% and 40% in Jurkat cells and 40% and 20% in 293T cells, respectively. Mo, Lor, Gar and Tax 2B were all expressed at similar levels in 293T cells ( Figure 1C).
Subcellular localisation of Tax 2 proteins
A previous study demonstrated that Tax function was related to its subcellular localisation, with the highest levels of LTR and NFkB activity being observed when Tax was predominantly located in either the nucleus or the cytoplasm, respectively [22]. We sought to determine if the intracellular distribution of Lor and Gar was altered compared to that of Mo and Tax 2B. Immunofluorescence studies showed that Gar and Lor were found predominantly in the cytoplasm but also appeared as intense specks in the nucleus of 293T cells (data not shown) and Cos 7 cells, and displayed a similar intracellular distribution to Mo and Tax 2B (Figure 2). These results clearly indicate that the subcellular distribution of Lor or Gar was not contributing to their loss of activity.
Sequence analysis of Lor and Gar Tax proteins
The sequences of Lor and Gar were determined and compared to that of Mo. Lor had six amino acid changes spanning the entire protein at positions G21D, L87I, P92L, T204A, W248R and L308V ( Figure 3B). Gar only contained one amino acid change at position Y144C. G21D and L308V are located in a domain previously found to be involved in LTR activation while L87I and P92L are close to a domain previously found to be involved in the cytoplasmic localization of Tax 2 proteins ( Figure 3A) [19,22]. W248R and Y144C are located in the central region of Mo which was shown in previous studies to be important in the activation of both CREB and NFkB pathways by Mo [19].
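The mutations above use standard point-mutation notation: wild-type residue, 1-based position, substituted residue (e.g. G21D, glycine 21 replaced by aspartate). As an illustration of how this notation maps onto a sequence string (a generic helper for readers, not code from the study):

```python
import re

def apply_mutation(seq, mutation):
    """Apply a point mutation written as e.g. 'G21D' to a protein sequence.

    The notation encodes the wild-type residue, its 1-based position, and
    the substituted residue; the wild-type residue is checked against the
    sequence before substituting.
    """
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", mutation)
    if not m:
        raise ValueError(f"unrecognised mutation notation: {mutation}")
    wt, pos, new = m.group(1), int(m.group(2)), m.group(3)
    idx = pos - 1
    if seq[idx] != wt:
        raise ValueError(f"expected {wt} at position {pos}, found {seq[idx]}")
    return seq[:idx] + new + seq[idx + 1:]
```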
Ability of Tax 2A mutants to transactivate the HTLV-1 LTR and NFkB promoters
Initially, site directed mutagenesis was used to sequentially replace each mutation present in Lor with the corresponding wildtype Mo residues, starting from the amino terminus (Table 2; L1 to L5). Activation of both the LTR and NFkB promoters was only restored in Lor L5, when the mutation at position W248R was replaced by the corresponding wildtype Mo residue, indicating that this position is critical for Tax 2A activity. Lor L6, which contains all the mutations found in Lor except for W248R, failed to activate the LTR while displaying wildtype levels of NFkB activity. All Lor mutants were expressed at similar levels (Figure 4). Insertion of individual mutations found in Lor into Mo showed that most mutations, particularly G21D, L87I and P92L, substantially reduced the ability of Mo to transactivate the LTR without affecting NFkB activity (Table 3). Analysis of the subcellular location of mutant proteins in Cos 7 cells using immunofluorescence did not reveal any discernable alterations in their distribution relative to wildtype Mo (data not shown). The mutation L308V did not appear to affect the ability of Mo to transactivate either promoter. One mutation, at position T204A, appeared to enhance the ability of Mo to activate both the LTR and NFkB promoters to levels above those obtained with Tax 2B. This mutant was expressed at a similar level to wildtype Mo (Figure 4). As expected, the mutation at position W248R abolished the ability of Mo to activate either the LTR or NFkB promoters. However this mutant appeared to be expressed at a lower level than wildtype Mo or other Mo mutants (Figure 4). Insertion of the only mutation found in Gar, Y144C, into Mo abolished its ability to activate either the LTR or NFkB promoters. To determine if the residue at position Y144, and not only the mutation Y144C, is important for Mo activity, an arginine instead of a cysteine was introduced at this position.
Mo Y144R displayed the same phenotype as Y144C indicating that this position is important for Mo activity irrespective of which residue is present. Insertion of Y144C into Tax 1 only reduced its activity while W248R abolished both LTR and NFkB activation by Tax 1. While Tax 1 W248R was expressed at a similar level to Tax 1 WT, Tax 1 Y144C was very poorly expressed ( Figure 4).
Transactivation phenotypes of Tax 2B mutants
Given the high degree of homology between Tax 2A and Tax 2B, we sought to compare functional domains in both proteins by introducing the mutations found in Gar and Lor into Tax 2B (Table 4). In a similar manner to its effect on Mo and Tax 1, W248R abolished the ability of 2B to activate either the LTR or NFkB promoters and, similar to its effects on Tax 1, Y144C appeared to only reduce the activity of Tax 2B. However the introduction of an arginine instead of a cysteine at this position (Y144R) into Tax 2B abolished its activity, indicating that this position is important for function but may depend on the amino acid present. Mutations at positions G21D, L87I, P92L and L308V appeared to have similar effects on Tax 2B activity as they had on the activity of Mo, in as much as they substantially reduced LTR activation while not affecting the activation of NFkB. As was previously noted, wildtype Mo was found to activate the CREB and NFkB pathways less efficiently than Tax 2B (Table 3). This difference was abolished by the introduction of the mutation T204A into Mo. An alanine occurs naturally at this position in Tax 2B, the mutation of which to a threonine (A204T) results in similar transactivation activities to Mo (Table 4). This indicates that this residue is responsible for the differences found in the activities of both proteins. All Tax 2B mutants, including 2B A204T (data not shown), were expressed at levels similar to wildtype Tax 2B except for W248R, which appeared to be poorly expressed in a manner similar to Mo W248R.
Discussion
Even though Tax 1 and Tax 2 share approximately 70% homology, previous studies comparing the activities of Tax 1 and Tax 2 proteins have shown that functional differences exist between the two proteins and suggest that this could account, at least in part, for differences in the pathogenic properties of HTLV-1 and HTLV-2 [23]. Specifically, Tax 2A was reported to be unable to induce micronuclei formation or to activate the ICAM-1 promoter in T cells compared to Tax 1 [24,25]. Furthermore, while all Tax proteins inhibit p53 activity, Tax 2A was found to do so less efficiently than either Tax 1 or Tax 2B [26]. In transformation studies, Tax 2A was found to transform primary human T cells with the same efficiency as Tax 1, and while Tax 2A and Tax 2B could transform Rat-1 cells they did so less efficiently than Tax 1 [27]. Other studies showed that, in contrast to Tax 2, Tax 1 suppressed hematopoiesis in transduced CD34+ progenitor cells and suggested that this may be attributed to its ability to upregulate the cyclin-dependent kinase inhibitor p21 cip/waf1 promoter more efficiently than Tax 2 [28,29]. In addition, Jurkat cells that constitutively express Tax 1 were shown to inhibit the kinetics of cellular replication to a higher degree compared to Tax 2 [30]. In the present study we investigated the ability of two Tax 2A proteins, Lor and Gar, to transactivate the LTR and NFkB promoters in relation to expression levels, sequence variation and subcellular location compared to Tax 2A Mo and Tax 2B. Lor and Gar failed to activate either promoter compared to Mo or 2B even though the expression levels of all Tax 2 proteins were similar. Compared to Mo, we identified six amino acid changes in Lor spanning the entire protein and one mutation in Gar located in the centre of the protein.
Lor was derived from an HTLV-2A infected BJAB cell line that was positive for p24 production by FACS analysis (data not shown), indicating that the mutations present were not affecting the function of Rex. It was not possible to determine if the amino acid changes in Lor arose during culture or if they were present in the original virus. A previous study found that, compared to Mo, the prevalence of amino acid changes in some functional Tax 2A proteins was low (1-2%) [31], which is similar to that found in the non-functional Lor protein. The Tax cDNAs in that study were derived from non-cultured PBMCs obtained from infected individuals, thus eliminating the possibility that the mutations arose as a result of cell culture. Examination of those Tax 2A sequences revealed that they included only one of the mutations described in the present study, at position T204A, which appears to be present in most Tax 2A sequences.
In the present study most of the individual mutations appeared only to affect the ability of Mo to activate the LTR and had little effect on NFkB activation. The amino-terminal mutations are located in previously described functional domains in Tax 1 and Tax 2 proteins, including a nuclear localization signal, a zinc finger domain and, more recently, a domain in Tax 2 between amino acids 90-100 shown to be involved in the cytoplasmic location of Tax 2 proteins [13,14,16,22]. However, analysis of the subcellular location of mutant proteins using immunofluorescence did not reveal any discernable alterations in their distribution compared to wildtype Mo. We found that all Tax 2 proteins were predominantly located in the cytoplasm and also, to a lesser extent, in the nucleus. These results agree with a recent study which also found that, in contrast to Tax 1, Tax 2 proteins are predominantly found in the cytoplasm [22]. Two mutations in the central region of Tax 2A, at positions 144 and 248, appeared to abolish both LTR and NFkB activation, indicating that these mutations may disrupt an essential functional or structural domain involved in the activation of both pathways by Mo. The mutation at position W248R resulted in defective LTR and NFkB activation both in the presence of the other mutations in Lor and when it was introduced singly into Mo. The replacement of this mutation with the corresponding wildtype residue in the Lor mutant L6 restored a wildtype NFkB phenotype but resulted in defective LTR activation. The overall phenotype of L6 was probably due to the combined effects of the other Lor mutations present in the L6 protein, which individually were found to substantially reduce LTR activation by Mo without affecting activation of the NFkB pathway.
A mutation in close proximity to 248, at position 258, was described in previous studies to abolish Tax 2A activity, while Tax 1 containing this mutation failed to transactivate NFkB but retained the capacity to transactivate the HTLV-1 LTR [13,19]. In the present study, insertion of the mutation at position 248 into both Tax 1 and Tax 2B also abolished their activity, indicating that this mutation may disrupt a shared functional or structural domain required for activation of both pathways by all three Tax proteins. In contrast to its effects on Mo, the mutation at position Y144C only reduced the ability of Tax 2B and Tax 1 to activate the LTR and NFkB promoters, indicating that this domain is not as critical for activity in Tax 1 and Tax 2B as it is in Mo. However, the insertion of the amino acid arginine instead of the hydrophobic amino acid cysteine at this position abolished Tax 2B activity. It is not clear why the expression of some Tax mutants, such as Mo W248R and Tax 2B W248R, was substantially reduced compared to the corresponding wildtype proteins. This is in contrast to the expression levels of Lor and the Lor mutant proteins L1-L4, which were not affected by the presence of W248R. Wildtype Mo was repeatedly found to activate both the LTR and NFkB promoters less efficiently than Tax 2B. However, this difference was abolished by introducing a single mutation at position T204A into Mo, which resulted in similar or slightly higher levels of activity to those obtained with wildtype Tax 2B, indicating that, depending on the sequence of both proteins, Mo and Tax 2B can display equivalent levels of activity. These results differ from a previous study carried out in our laboratory which found that, compared to Tax 1 and Tax 2B, some Tax 2A proteins including Mo were unable to activate the CREB pathway in Jurkat or 293T cells [12].
We speculate that these differences may be related to the poor expression of Tax 2A proteins reported in that study and possibly to differences in experimental conditions.
Conclusion
In conclusion, the present study shows that, compared to Mo, certain Tax 2A proteins are non-functional and that loss of activity is clearly associated with the accumulation of amino acid changes and not with levels of expression or alterations in sub-cellular localisation. Failure of Tax 2A mutants to activate either the CREB or NFkB pathways, or both, was previously reported to be related to an inability to transform T cells [32]. This, together with our findings, suggests that the prevalence of mutations in Tax 2A proteins which inactivate both pathways may influence the pathogenic properties of certain HTLV-2A viruses.

Plasmid constructs
To allow the simultaneous detection of all Tax proteins using a single antibody, Tax coding sequences were amplified by PCR using reverse primers that contained an additional sequence for six histidine (HIS) residues before the stop codons. For cloning purposes all primers contained 5' and 3' EcoRI restriction enzyme sites. Tax 2A Lor was amplified by PCR from genomic DNA extracted from an HTLV-2A infected BJAB cell line. Gar was amplified from genomic DNA extracted from cultured PBMCs from an HTLV-2/HIV-1 infected individual and Mo was amplified from a plasmid construct supplied by P.L. Green. Tax 1 and Tax 2B coding sequences were amplified from the corresponding pFLAG constructs as described previously [12]. Purified PCR products were cloned into the mammalian expression plasmid pCAGGS using EcoRI. The nucleotide sequence of all constructs was determined using the BigDye Terminator sequencing kit (Applied Biosystems). The HTLV-1 LTR luciferase plasmid was described previously [12] and NFkB activation was determined using pNF-kB-Luc (Stratagene).
Transient transfections and luciferase assays
Plasmid DNA was introduced into cells using Fugene transfection reagent (Roche Diagnostics) according to the manufacturer's instructions. For functional assays, 1 × 10^5 Jurkat cells were seeded in 60 mm dishes and co-transfected with 1 µg of either the HTLV-1 LTR or NFkB firefly luciferase reporter together with 250 ng of the indicated Tax expression plasmids and 50 ng of the Renilla luciferase reporter pRL-TK. Reporter activities were measured using the Dual Luciferase reporter assay system (Promega) 24 hrs after transfection as described previously. Briefly, cells were lysed in 1× passive lysis buffer and firefly and Renilla luciferase activities were measured using a Turner 20/20 Luminometer. Reporter activities were normalized using Renilla luciferase values. To determine and compare Tax expression levels in cells transfected with wildtype or mutant plasmids, 293T cells were seeded on 60 mm dishes and co-transfected the next day with 250 ng of the indicated plasmids. Cells were lysed after 24 hrs using 1× passive lysis buffer. Lysates were analysed by western blotting and Tax proteins were detected using an anti-HIS antibody (Invitrogen). Blots were also probed with anti-Tubulin (Calbiochem) as a loading control.
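The normalization step described above — firefly reporter counts divided by the matched Renilla counts, then expressed relative to a control — amounts to a simple ratio calculation. The sketch below illustrates it with hypothetical luminometer readings (the values and the empty-vector control are assumptions for illustration, not data from this study).

```python
# Illustrative dual-luciferase normalization (hypothetical raw values).
# Firefly counts report promoter activity; the matched Renilla counts
# correct for transfection efficiency, as in the assay described above.

def normalized_activity(firefly, renilla):
    """Firefly reading divided by its matched Renilla reading."""
    return firefly / renilla

def fold_activation(sample, control):
    """Normalized sample activity relative to a control co-transfection."""
    return normalized_activity(*sample) / normalized_activity(*control)

# Hypothetical luminometer readings: (firefly, renilla)
empty_vector = (1200.0, 800.0)    # reporter + empty expression plasmid
wildtype_tax = (54000.0, 750.0)   # reporter + wildtype Tax plasmid

print(round(fold_activation(wildtype_tax, empty_vector), 1))  # 48.0
```

Normalizing each well to its own Renilla value makes wells with different transfection efficiencies directly comparable before computing fold activation.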
Site directed mutagenesis
Point mutations in Tax 1, Tax 2A and Tax 2B constructs were generated using the QuickChange Site Directed Mutagenesis kit (Stratagene) according to the manufacturer's instructions. The presence of mutations was confirmed by sequencing using the BigDye Terminator sequencing kit (Applied Biosystems).

Figure: Location of mutations found in Tax 2A proteins.
Indirect immunofluorescence
Cos 7 cells were seeded on two-well chamber slides 24 hrs before transfection with 150 ng of the indicated Tax expression plasmids. Twenty-four hours after transfection, cells were washed with PBS, fixed with 4% paraformaldehyde for 20 min at room temperature and permeabilized in 0.2% Tween 20/PBS. Non-specific binding was blocked using 5% rabbit serum or swine serum for 1 h at room temperature, and cells were then incubated with the anti-HIS antibody (Invitrogen; 1:400) for 2 h at room temperature. After washing in PBS, cells were incubated with rabbit anti-mouse FITC for 1 h at room temperature. Following a washing step, nuclei were stained using DAPI (Sigma; 1 µg/ml) and slides were mounted in Vectashield.
Figure 4. Expression levels of Tax 2 wildtype and mutant proteins.
293T cells were transfected with either wildtype or mutant Tax plasmids and cell lysates were subjected to electrophoresis on 10% SDS polyacrylamide gels. Western blots were performed using anti-HIS to detect Tax expression and anti-tubulin to detect Tubulin which was used as a loading control. Each panel shows the expression levels of both wildtype (WT) and corresponding mutant proteins for the indicated Tax proteins.
Challenges to conducting epidemiology research in chronic conflict areas: examples from PURE-Palestine
Little has been written on the challenges of conducting research in regions or countries with chronic conflict and strife. In this paper we share our experiences in conducting a population-based study of chronic diseases in the occupied Palestinian territory and describe the challenges faced, some of which were unique to a conflict zone, while others were common to low- and middle-income countries. After a short description of the situation in the occupied Palestinian territory at the time of data collection, and a brief overview of the design of the study, the challenges encountered in working within a fragmented health care system are discussed. These challenges include difficulties in planning for data collection in a fragmented healthcare system, standardizing data collection when resources are limited, working in communities with access restricted by the military, and considerations related to the study setting. Ways of overcoming these challenges are discussed. Conducting epidemiological research can be very difficult in some parts of our turbulent world, but data collected from such regions may contrast with those solely from politically and economically more stable regions. Therefore, special efforts to collect epidemiologic data from regions engulfed by strife, while challenging, are essential.
Introduction
Conducting population-based studies in low- and middle-income countries can be challenging due to poor infrastructure and limited resources. Such efforts can be even more challenging if the communities are in regions of war and strife. This paper describes issues encountered in designing and conducting the Prospective Urban Rural Epidemiology (PURE) study in the occupied Palestinian territory (oPt). In addition to the limited resources faced by other low- and middle-income countries, research in the oPt is compounded by additional challenges related to its political situation and the military occupation. This paper provides a brief background on the current situation in the oPt followed by a description of PURE, in order to provide context to understand the challenges faced. Specific methods or results are not presented here; instead the aim is to describe the challenges encountered and how they were addressed during the data collection period of the baseline phase of PURE in the oPt. Our experiences could be useful to other researchers conducting epidemiological research in challenging and constrained settings.
oPt in context
The limited data from the oPt suggests an epidemiologic transition, where the leading causes of death have changed from infectious to chronic diseases [1]. The leading causes of death are heart disease, constituting 26 % of deaths, cerebrovascular disease (12 %), and cancers constituting 11 % of all deaths [2]. Epidemiology data on cardiovascular disease and cancer are scarce, and estimates are based on routine data gathered by the Ministry of Health and from national surveys conducted by the Palestinian Central Bureau of Statistics. Reliable data on risk factors, treatments, and outcomes of cardiovascular diseases are limited. The available information is based on surveys conducted using self-reported data [1]. Higher quality data are limited and based on small studies which are not representative [3,4].
The unique political situation in the oPt warrants special attention and requires that a number of factors that are usually not part of epidemiologic investigations of chronic diseases be studied in trying to understand the causes, prevention and treatment of common diseases. The long-term chronic conflict has increased exposure to violence, adding stressors that could heighten the impact of stress on cardiovascular diseases and also on health behaviors (e.g., smoking, diet, physical activity). This conflict has also impoverished individuals and communities, leaving limited resources for health care [5][6][7]. Restrictions on travel between the West Bank and the Gaza Strip, and between communities within the West Bank, constrain lifestyle behaviors, resulting in regional differences in lifestyle that affect cardiovascular diseases.
The oPt has one of the largest refugee populations in the world which influences living conditions, socioeconomic status, and delivery of health care. Palestinians became refugees after the establishment of the state of Israel in 1948, and about 4.5 million refugees and their descendants are registered by the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA). Almost a third of Palestinian refugees still live in camps inside and outside the oPt, although these camps are now urban settlements, not tents [5]. Palestinian refugees have been living in these camps for over 60 years, and their entire life experience is influenced by the special circumstances that they have experienced.
In addition, certain characteristics of the healthcare system influence screening, prevention and management of disease. There are currently four different health care providers in the oPt: the Palestinian Ministry of Health (MoH), UNRWA, nongovernmental organizations (NGOs), and the private sector. Secondary and tertiary care is provided mainly through the MoH. Primary care is more fragmented: cities and most villages receive care from the MoH, a few villages receive care from NGOs, and refugees receive care from UNRWA. The private sector provides primary as well as secondary care, yet it is not well regulated by a supervisory body [1]. It is thus important to assess the impact on disease outcomes across the different health care providers, as availability and quality of care may vary.
The unique circumstances of this population and its context adds to the importance of collecting local data with a large enough population to inform policies for chronic disease prevention and programs for their management. So far the effects of the long term chronic conflict have been studied mainly as they pertain to mental health and overall wellbeing [6][7][8]. Collecting longitudinal data on outcomes of common diseases can improve the understanding of the effects of the long chronic conflict on chronic diseases. Development and implementation of policies at the local level are necessary for designing prevention programs in order to control common diseases. Such programs require high quality data drawn from a large sample representing the entire population.
PURE overview
PURE is a prospective cohort study designed to collect data on social, environmental, behavioral, biological, and genetic factors that contribute to the development of cardiovascular diseases in high-, middle-and low-income countries [9]. This study provides a simple design to be used in countries with limited resources for research, keeping in mind the importance of ensuring the collection of high quality data. Standardized forms are used to collect data at the community level, household level, and individual level with the aim of understanding how risk factors at these different levels may be associated with cardiovascular disease. Once the baseline data collection of the cohort is completed in each country, regular follow up visits are planned at three year intervals to follow up study participants for clinical events.
For PURE Palestine, plans for data collection initially included 48 communities in the West Bank and Gaza Strip. Due to the political unrest in the Gaza Strip at that time (the 6-day Israeli war on Gaza, November 2012), the research team was unable to enter the Gaza Strip and decided at the time to focus data collection on the West Bank only. Data were collected from ten urban, nine rural, six refugee camp, and 15 seam zone communities in the West Bank (Fig. 1). Seam zone areas are mostly rural and can be defined as Palestinian communities located between the separation wall erected by Israel inside West Bank Palestinian land and the Green Line, that is, the official and internationally recognized border between Israel and the West Bank [10].
Data were collected from households representing each selected community that have at least one family member between the ages of 35 and 70 years. Trained fieldworkers used standardized forms to collect information on household socioeconomic status, details on chronic diseases, medication intake, and chronic disease risk factors including smoking, physical activity, nutrition, and family history. Each community was then visited by a medical team (a trained nurse and lab technician) who collected anthropometric measures, resting blood pressure, spirometry measures, grip strength, and blood and urine samples.
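The household-level eligibility rule described above — a household enters the sample only if at least one member is aged 35 to 70 — can be expressed as a simple filter. The record layout and field names below are hypothetical, for illustration only.

```python
# Sketch of the PURE household eligibility rule: keep a household only
# if at least one member is aged 35-70 (bounds inclusive).
# The household records and the "member_ages" field are hypothetical.

def eligible(household):
    return any(35 <= age <= 70 for age in household["member_ages"])

households = [
    {"id": "H1", "member_ages": [12, 29, 41]},  # eligible (member aged 41)
    {"id": "H2", "member_ages": [18, 23]},      # not eligible
    {"id": "H3", "member_ages": [70, 8]},       # eligible (70 is inclusive)
]

sample = [h["id"] for h in households if eligible(h)]
print(sample)  # ['H1', 'H3']
```

Making the age bounds inclusive matters at the edges: a household whose only qualifying member is exactly 35 or exactly 70 still enters the sample.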
Challenges faced during PURE data collection
Research in low- and middle-income countries faces many challenges pertaining to limited resources and lack of research and medical infrastructure [11]. Conflict areas raise further challenges in terms of security and movement restrictions, in addition to the allocation of resources to acute humanitarian response research as opposed to research with more sustainable goals. While collecting data for PURE in the oPt we faced challenges common to such settings, in addition to a number of obstacles that were specific to the long-term chronic conflict in that region of the world. These challenges are presented along with how they were overcome by the research team.

Working in a fragmented health care system
Problem
The multiple health care sectors in the oPt (MoH, UNRWA, NGOs and private) pose specific challenges for research. Though the PURE sample was population based, a clinic setting with trained personnel was needed in each community to collect physical measures, blood, and urine samples from study participants. Primary health care is provided by different sectors depending on the location. For example, primary care clinics located in rural communities are managed by the MoH or by NGOs, whereas primary care clinics located in refugee camps are managed by UNRWA. Involving all stakeholders would likely compromise standardization of data collection. Further, coordinating with officials as well as staff from all stakeholders would not have been practical.
Solution
To overcome these issues we decided to work with one stakeholder only. UNRWA had shown interest in research, specifically for cardiovascular disease prevention [12]. It was expected that participants from non-refugee communities would not be willing to attend clinics in refugee camps (where UNRWA clinics are located) due to the distance they would have to travel and because refugee camp clinics are known to be overcrowded. Furthermore, it would have been unethical to take away resources from those in need within the refugee camp community for study purposes. To overcome this problem, mobile clinics were set up, and trained nurses and lab technicians travelled to each non-refugee community included in the study. Mobile clinics were set in place after contacting community leaders and municipalities. The support received from these leaders increased the response rate, as participants were more trusting knowing that this activity had been organized from within their community.
Standardizing data collection
Problem
Standardized data collection is important for cross-country as well as within-country comparisons. Due to access restrictions and unexpected closures in the oPt it was not possible to centralize training for the fieldworkers and medical teams.
Solution
Training sessions were held to explain selection of households and household members' strategy and to ensure that the forms were completed accurately. Nurses and lab technicians were also trained to complete the physical measures in a standardized manner. These sessions were held at three different locations, for the teams in the North, Center, and South of the West Bank. Once data collection started the research team visited each study site at least once to ensure adherence to study protocol. Team work and collaboration between fieldworkers and nurses was crucial. The fieldworkers were usually either from the same community or spent a longer time in the community and became familiar with community members. The nurses were less familiar with the community and the participants. Fieldworkers facilitated the nurses' work by finding a location for the mobile clinic and also by contacting participants for their appointments and following up with them when they missed their appointment.
Access restrictions
Problem
Israeli checkpoints and road blocks, the separation wall, and military presence in the West Bank restricted movement and limited access of patients to health care facilities [13]. Therefore, movement restrictions in the West Bank were a foreseen challenge to this study. Data were collected from 39 communities in the West Bank (Fig. 1). Twenty-four of these communities were located within the separation wall with no major restrictions to access. However, at the time of data collection, the main entrances to three of these communities (Hizma, Biddu, and Beit Duqqu) were blocked. The only way into these communities was through detours that are two to five times longer than the direct route [10]. In addition, residents of Hebron city, in the south of the West Bank, especially those living in the old city, were required to take detours to get to the study clinic due to movement restrictions within the city. Gaining entrance to the remaining 15 communities was a challenge as they were all selected from "seam zone" areas. Most of these areas have been designated as closed military zones, which requires those aged 16 and above to apply for 'permanent resident' permits to continue living in their own homes. Entrance and exit of nonresidents requires special permits or coordination with the Crossing Point Administration (CPA) of the Israeli Ministry of Defense [10].
Solution
Since trained fieldworkers were not allowed into these communities, the study team contacted community leaders who identified community members who were able to collect the data, and had a valid permit to enter and leave these communities. Two training sessions on standardized data collection were held for each fieldworker in villages neighboring their communities.
The UNRWA medical team was still required to visit each of these communities to collect physical measures, blood, and urine samples. Unlike fieldworkers, these teams could not be replaced by community members as the latter do not have the proper clinical training. No problems were anticipated for the medical team since UNRWA is a United Nations (UN) agency and its personnel have access to all parts of the West Bank. UNRWA's operations team initially received approval to enter the seam zones from the Israeli District Civilian Liaison officer. Yet, on the first trip, UNRWA's car was denied entry into the community and the team was informed that even UN personnel require permits to enter seam zone areas.
Based on previous experience with requesting permits from the Israeli military, and the delays and rejections received by fieldworkers, the study team decided to find an alternative way for the medical teams to enter these areas. The only way these teams could access these communities was to reach them from the Israeli side of the separation wall, as there are no movement restrictions from that side once already in Israel. Only UNRWA employees living in Jerusalem (center of the West Bank) and holding certain IDs were allowed to enter Israel; this limited the number of teams that could complete the data collection in these communities. Organizing this further delayed fieldwork: the team was not expected to arrive at each community before 10:00 am due to the longer distance they had to travel, as they all lived in the center of the West Bank, whereas a large number of the seam zone communities are located in the north and south.
In addition to challenges posed by movement restrictions, further difficulties were faced in finding locations within the communities for the mobile clinics' operations. Since seam zone areas are considered military zones, new construction as well as repairs of any buildings or infrastructure are restricted. Households are therefore very crowded and there is no space for public facilities, including space for village councils, clinics, or even schools. In non-seam zone communities, the mobile clinic for the PURE study was located in one of the rooms in the municipality or village council; in some cases it was also possible to use a clinic located in the community. Due to limited public space in seam zone communities, working conditions in mobile clinics were sub-optimal. A different strategy was adopted depending on the circumstances in each community. The mobile clinic was set up in the community clinic if available. These "clinics" were poorly equipped and did not include any lab facilities; the team had to be fully equipped with material as basic as alcohol. The team also had to carry major equipment such as centrifuges to spin the blood samples. When a clinic was not present in the community, participants were asked to offer a room in their house to work from. Participants were generally cooperative and always provided space. The conditions of the space provided varied; some rooms did not have any electricity and the nurse had to keep the doors open to let light in.
Considerations related to study setting
Problem
Research in any setting requires knowledge of the local context. This has been previously cited as an important consideration to facilitate research and data collection [11], especially when data collection requires interaction with the general population. The gender of the fieldworker was important in the oPt. For example, it was not acceptable for a male interviewer to approach females in the communities included in the study.
Other cultural considerations included respecting customs during the month of Ramadan as well as during the olive picking season. People change their lifestyle, eating habits and social habits at these times of the year, and it was important to monitor how these changes affected data collection. During Ramadan people eat just before sunrise so that they can postpone their next drink and meal until after sunset. This means that by the time they are ready to visit the mobile clinic for the PURE study they may not have fasted according to the study protocol (12 h). Many families in rural and urban communities depend on olive picking for a large portion of their household income. They either have their own trees to harvest, or they are hired by others with land to harvest their trees. Olives have to be harvested soon after they are ripe to prevent damage. Usually everyone in the household teams up to complete the harvest on time, and people who are employed take time off work during this season. Everyone in the household leaves very early in the morning for olive picking and returns late in the day. This delayed recruitment, as households selected into the study were empty during the day.
Solution
When possible, fieldworkers worked in teams of one male and one female for each community. Female fieldworkers collected data from females and mostly visited households during the day, because most females were homemakers. When encountering a household with a working female, the fieldworker was instructed to visit the household again in the afternoon. Male fieldworkers, who only interviewed males, were all instructed to make their visits in the afternoon to ensure a more representative sample of working males.
There was a noticeable decline in the response rate during the month of Ramadan, and recruitment was paused in two communities until the month was over. This decline, which would otherwise have compromised the response rate, would not have been picked up had the research team not been closely monitoring the data collection process and receiving daily updates from the field. Ramadan also posed a challenge in communities where the mobile clinics had already started data collection, as it was difficult for participants to complete 12 h of fasting. Working hours in these mobile clinics were changed to start later in the day. Similarly, during the olive picking season, recruitment and mobile clinics were interrupted in all rural communities until the harvest season was over.
Conclusions
Some of the challenges faced during this study are similar to challenges raised by researchers in other low-and middle-income countries, such as cultural considerations and working in remote areas with limited resources [14]. Other challenges, such as access restrictions and working in a fragmented health care system, are specific to areas with chronic conflict. Our experience indicates that understanding the local context is very important in overcoming these challenges. We had anticipated most of these challenges and thus planned to overcome them.
A total of 1600 participants were recruited into the PURE study from the West Bank. The sample ensured representation of urban and rural communities, and accounted for individuals living in Palestinian refugee camps and seam zone areas, two settings unique to Palestinians. In order to ensure a full representation of the entire population of Palestine, it is important to recruit participants living in the Gaza Strip, and this is expected to lead to new challenges, given the siege on Gaza and the periodic attacks. Understanding the challenges and coming up with innovative ways to overcome these challenges is a step forward in increasing research from low-and middle-income countries.
This paper sheds light on a few unique challenges experienced during the data collection of a large epidemiology study in the oPt. We hope that this experience provides an impetus for other researchers and research projects to be conducted in conflict settings. Lessons learned could be useful for research among refugees from the current conflict in Syria and the rest of the Middle East.
The Effects of Dictionary Vocabulary Learning Versus Contextual Vocabulary Acquisition on the Vocabulary Development of Pakistani EFL Learners
The vocabulary of a language refers to the range of words used in it. A number of strategies are used in EFL classrooms to teach vocabulary, of which the most common are Dictionary Vocabulary Learning (DVL) and Contextual Vocabulary Acquisition (CVA). The present study investigates the difference between the vocabulary development of EFL learners taught by the two strategies. The study is experimental and the population is the BS students of Punjab (Pakistan). The sample was forty EFL students, divided into group 1 and group 2, where group 1 was taught by using DVL while group 2 was taught by CVA. Pre- and post-tests were used to see the effects of DVL on group 1 and of CVA on group 2. Results indicated that the vocabulary development of the students taught by using CVA was higher than that of the students taught by DVL.
Introduction
The knowledge about the words of a language, including their meanings, is referred to as the vocabulary of that language (Diamond & Gutlohn, 2006). Language learners consider vocabulary enhancement the most difficult part of the language learning process (Celik & Topt, 2010). Understanding a reading comprehension text for academic purposes in a foreign language requires knowledge of at least 10,000 words, whereas learners need familiarity with a minimum of 2,000 words for spoken communication (Schmitt, 2008). Thus foreign language learners find it very difficult to become familiar with a vocabulary of such size.
Learning Vocabulary in EFL
According to Thornbury (2002), foreign language learners mostly focus on learning the grammar of the language and ignore the importance of learning its words, neglecting vocabulary enrichment. He further explains that grammar alone helps them say just a few sentences, while continually adding new words to their language enhances their expression of thoughts as well.
The central part of any foreign language learning is the learning of vocabulary, as concepts, ideologies and thoughts cannot be transferred to people, either in written or spoken form, without a sound knowledge of the words of the language (Fauziati, 2005). Harmer (2001) elaborated on this by saying that vocabulary must be considered the most essential element of a language. The idea of learning the vocabulary of a language is supported by various linguists in different forms. Krashen (1989) supported the idea of learning vocabulary by means of incidental and contextual practices rather than intentional vocabulary learning practices. This concept was further supported by Thornbury (2002), who mentioned that incidental learning was found to be more effective among EFL learners. Nation (2004) introduced a theory-and-practice relationship of vocabulary development among EFL learners, categorizing vocabulary learning into steps such as noticing, retrieving and generating.
Vocabulary development and Pakistani EFL context
Most countries in Asia put less stress on the enhancement of vocabulary among learners of foreign languages, with more focus given to language skills (Fan, 2003). In Pakistan, vocabulary learning is likewise ignored, and more focus is given to the grammar of the language than to practical approaches to using the language in various contexts (Fatima & Pathan, 2016). Vocabulary learning is rarely given importance in the EFL context of Pakistan. Only when a difficult or new word appears in an academic text does the teacher tell the learners to look up its meaning and write it in the book or notes, without any further discussion of its usage (Jamil & Khan, 2014). The strategies and techniques for learning new English words in the Pakistani context depend heavily on teachers, as students do not put in the effort to learn new words on their own (Mansoor, 2010). Hence an insufficient vocabulary creates hindrances in their written and spoken discourse.
Statement of the Problem
The present study investigated the effects of Dictionary Vocabulary Learning (DVL) and Contextual Vocabulary Acquisition (CVA) on the vocabulary development of EFL learners.
Research Questions
1. What is the level of English language vocabulary of the EFL learners of Pakistan at the bachelor's level?
2. What is the difference between the English language vocabulary development of Pakistani EFL learners taught by using DVL and CVA at the bachelor's level?
Significance of the Study
The study is significant in the following ways: the methods of teaching vocabulary through DVL and CVA used in the present study can be useful for EFL teachers; EFL learners can adopt the vocabulary learning strategies presented here; and the study can be replicated in other foreign language contexts and at different levels.
Hypothesis
The hypothesis tested for the study was as follows: H0: The difference between the vocabulary development of EFL learners taught by using DVL and CVA is not significant.
Delimitation of the study
The study was delimited to BS-level students of a single private university in Lahore, in the province of Punjab.
Strategies of Vocabulary Learning
The concept of vocabulary learning strategies emerged from theories of language learning strategies, which are defined as independent learning patterns or techniques for learning a foreign language (Chamot and O'Malley, 1990). These strategies help learners become less dependent on their language teachers and more focused on self-learning. Brown and Payne (1994) highlighted several important steps useful in the development of vocabulary: a) expressing the new words; b) relating words to their visual or verbal images; c) putting effort into learning the new words; d) entering the new words into memory; and e) using the words in various contexts. Nation (2004) stated that language learning strategies, including strategies of vocabulary enhancement, make students more autonomous: they can take charge of their own learning, which in turn makes them responsible language learners. If students gain knowledge of vocabulary development strategies, they can apply them in their foreign language classrooms, select the words they want to learn, and use those words in their desired contexts; when learners are given this much freedom, learning is rapid (Ranalli, 2003). Nation (2004) also stated that familiarity with these strategies proves very helpful for the vocabulary enhancement of EFL students, who feel more motivated and enthusiastic about learning. Cameron (2001) stated that language learners cannot apply learning strategies by themselves in language classrooms and must first be trained by their teachers in how to use them aptly. As Schmitt (2008) stated, learners always need particular and continuous guidance to learn the use of vocabulary development strategies.
Types of Vocabulary Learning Strategies
Dictionary Vocabulary Learning Strategies
The words of a language carry several meanings according to the context in which they are used. Language learners may be familiar with one meaning of a word in a sentence, but that meaning will not necessarily fit the context of the sentence (Huang & Eslami, 2013). Furthermore, learners are not always accurate when they guess the meaning of a word in a sentence; they may be right or wrong (Alavi, 2012). Hayati and Fattahzadh (2006) suggested that it is always good to guess the meanings of words at first, but to be certain of the meanings learners must go back to dictionaries to avoid mistakes. The use of dictionaries makes learners more autonomous and independent in language classrooms, as they can find the meanings of difficult words without help from language teachers (Gu, 2003). Strategies for developing vocabulary among foreign language learners through dictionary use were divided into a few steps by Nadiya et al. (2019).
1. The vocabulary of a language is best enhanced by reading widely, including newspapers, magazines, novels and other literary texts. Language learners can look up the meanings of the words they encounter while reading in dictionaries.
2. Learners must keep a dictionary and a thesaurus with them and must be familiar with their proper use. They can use whichever suits their interest or context. They should look up the pronunciation and various meanings of words, as well as synonyms and antonyms.
3. They should keep a journal to list the new words they encounter, so that they can refer back to them, compose sentences with them, or learn them over time.
4. Learning one word each day is a common practice. It is a great strategy for enhancing vocabulary gradually, and it goes a long way.
Contextual Vocabulary Acquisition (CVA)
"Contextual vocabulary acquisition (CVA) is the acquisition of the meaning of a word in a text by reasoning from textual clues and prior knowledge, including language knowledge and hypotheses developed from prior encounters with the word, but without external sources of help such as dictionaries or people" (Connell, 2008, p. 89). It is a very helpful technique for language learners, especially when no outside source is available to obtain the accurate meanings of words (Rapaport, 2005). Gaskins (2010) stated that learners take most words as sight text, and contextual help is needed to grasp and comprehend the meanings of unfamiliar words. Laufer (2001) labelled intentional and incidental vocabulary learning as explicit and implicit learning: in explicit learning, the learner learns both incidentally and intentionally, whereas implicit learning is entirely incidental. Moreover, incidental learning is held to serve the chief goals of communication (Schmidt, 2001).
Types of CVA
Incidental CVA
Language learners are, most of the time, familiar with words they were taught at some point in their lives, which can be called a "learning by-product" of listening or reading. Contextual vocabulary acquisition cannot be done once and for all; it is an ongoing process (Nagy & Scott, 2000). CVA is often incidental and results from a person's unconscious mind or from assumptions a person makes about the meanings of words (Christ et al., 2011).
Deliberate CVA
In some contexts, CVA involves very conscious effort by learners and more engaging, active language learning contexts. Nation (1993) suggested some strategic steps for CVA, which were later modified and used by Coxhead (2013). They are as follows: Step 1: First, look closely at the word itself and at the words associated with it to determine the part of speech it belongs to. This depends on the reader's knowledge of the grammar of the language; the grammatical knowledge is drawn from the reader's mind, not from any external source.
Step 2: Focus on the grammatical context of the word when it is an element of a particular phrase or clause.
Step 3: Develop a deeper understanding of the broader impact of the word, beyond the level of the clause or the sentence as a whole.
Step 4: Make a guess about the meaning of the word and then check whether it is correct.
Methodology Population
EFL students at the BS level in Punjab (Pakistan) were considered the population of the present study.
Sample
The sample comprised 40 students of a private-sector university who had English as a mandatory subject at the BS level.
Research Methodology
It was an experimental study with two groups, group 1 and group 2. Group 1 was taught using DVL, whereas group 2 was taught using CVA. The experiment was carried out for 2 months.
Research Design
The study first assessed the participants' current level of English vocabulary and then analyzed the differences between the groups taught using DVL and CVA. The DVL strategies of Nadiya et al. (2019) and the CVA strategies of Coxhead (2013) were used as the framework of the study.
Instruments of the study
Both groups, i.e., group 1 and group 2, took a pre-test before the experiment and a post-test at its end. The two tests followed the same pattern but varied in difficulty: the post-test was more complex and difficult than the pre-test.
The procedure of data collection
The study was conducted to identify the level of vocabulary and to determine which approach, DVL or CVA, was more effective. A quantitative research method was used to identify the differences in vocabulary development between DVL and CVA. Lesson plans for each day were designed with the objective of enhancing participants' vocabulary, and each activity and exercise followed the pattern of the questions and exercises used in the pre- and post-tests. The treatment lasted one month, with three forty-minute classes per week. The activities used for vocabulary development through DVL and CVA were vocabulary/picture tasks, matching tests, synonym-antonym exercises, written composition and reading comprehension. All worksheets used in both groups were assessed and marked each day to track the participants' vocabulary development.
Data analysis
The data were analyzed using means and standard deviations, while the difference between the vocabulary development of group 1 and group 2 was analyzed with an independent-samples t-test.
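As a sketch, the gain-score comparison described above can be reproduced with a pooled-variance independent-samples t-test. The implementation below is plain Python, and the gain scores shown are invented for illustration; they are not the study's data.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    # sample standard deviation (n - 1 denominator)
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = sample_sd(a) ** 2, sample_sd(b) ** 2
    pooled_var = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se, na + nb - 2

# Hypothetical gain scores (NOT the study's raw data):
cva_gains = [8, 9, 10, 7, 9]
dvl_gains = [2, 3, 2, 4, 3]
t, df = independent_t(cva_gains, dvl_gains)
```

In practice a statistics package would also return the p-value; the sketch only shows the statistic and degrees of freedom that the paper reports.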
Results and Findings
The results of the study showed a noticeable difference between the vocabulary development of learners taught using DVL and CVA. The group taught with CVA strategies performed better on the post-test than the group taught with DVL strategies.

Table 1. Mean and standard deviation of gain scores for group 1 (DVL) and group 2 (CVA)

Group            N     M       SD
Group 1 (DVL)    20    2.633   1.033
Group 2 (CVA)    20    8.766   1.94

The table shows that the gain scores of experimental group 2 (N = 20, M = 8.766, SD = 1.94) were higher than those of group 1 (N = 20, M = 2.633, SD = 1.033). The t value was significant, with t(58) = 15.271 and p = 0.021 ≤ 0.05, showing that the difference in the gain scores of group 1 and group 2 was significant. The findings thus rejected the hypothesis of no significant difference between the groups taught with DVL and CVA: the group taught with CVA gained higher scores than the group taught with DVL at the end of the experiment.
Discussion
Studies on vocabulary development mostly highlight the use of dictionaries as the preferred technique, but some have also examined the relationship between dictionaries and incidental vocabulary learning techniques. Welker (2015) found that students who used dictionaries for vocabulary retention performed slightly better than students who used only contextual clues, and suggested that the use of clues in various contexts and handy use of language dictionaries should go together for better vocabulary development. Chang (2005) found that students who consult dictionaries gain a better, more comprehensive understanding of vocabulary than those who merely infer meanings from context; the present study reaches the opposite conclusion. Laufer and Waldman (2011) conducted an experimental study with DVL and CVA methods, and the group using no dictionaries scored relatively higher than the group using dictionaries. The present study is in line with these findings, as it also showed better performance in the non-dictionary group. Shi and Zhang (2008) argued that a mixed-method approach to vocabulary development can be more beneficial for language learners than using DVL and CVA as separate techniques. The present experiment, using both DVL and CVA strategies, likewise showed that contextual acquisition contributed more to vocabulary enhancement among EFL learners, while the use of dictionaries as an external check after exhausting contextual clues was also considered important.
Conclusion
The study concluded that the CVA treatment has stronger effects on the vocabulary development of EFL learners than the DVL treatment. The inefficiency of dictionary-based, intentional vocabulary learning stemmed from the learners' lack of practice; because of this, learners were unable to retain much of the vocabulary taught in class. The students of the DVL group were slow in retention and remembering, and they did not pay much attention to the related text. They showed greater dependence on external sources, whether the teacher or dictionaries, and these external sources hindered their vocabulary development.
The students who used contextual guessing strategies displayed satisfying results, with greater vocabulary enhancement. Their test results showed that they were well aware of the basic techniques of learning vocabulary by first guessing meanings and then relating them to the context. The contextual clues helped them grasp the basic essence of the text, after which they were able to comprehend the meanings aptly.
The contextual guessing activities added a skill to their learning, i.e., how to guess meanings themselves instead of relying on external support, creating a habit of learning words and their usage through this strategy. A noticeable change in the CVA participants was observed at the end of the research. The CVA strategy enhances vocabulary more than DVL; in particular, when learners get contextual help, they remember more words. Ultimately, the results showed that by learning contextually, retention is stronger and the student's vocabulary is also enhanced. The CVA treatment also revealed the learners' aptitude for and interest in acquiring language, as they participated, negotiated and discussed meaning with peers and instructors each time they encountered an unknown word.
Associations Between Genetic Data and Quantitative Assessment of Normal Facial Asymmetry
Human facial asymmetry is due to a complex interaction of genetic and environmental factors. To identify genetic influences on facial asymmetry, we developed a method for automated scoring that summarizes local morphology features and their spatial distribution. A genome-wide association study using asymmetry scores from two local symmetry features was conducted and significant genetic associations were identified for one asymmetry feature, including genes thought to play a role in craniofacial disorders and development: NFATC1, SOX5, NBAS, and TCF7L1. These results provide evidence that normal variation in facial asymmetry may be impacted by common genetic variants and further motivate the development of automated summaries of complex phenotypes.
INTRODUCTION
The ability to make connections between genetic and phenotypic variation hinges on phenotypic descriptions that are sufficiently detailed to capture the traits of interest. Biomedical imaging creates very high dimensional datasets that can be analyzed and used to extract phenotype descriptions. Traditional phenotyping from images consists of 2D and 3D measurements of landmarks manually placed on the image. Landmark data are typically sparse and likely insufficient to capture the complexity necessary for an association with genetic data. A recent study testing the relationship between facial asymmetry, estimated from nine mid-facial landmarks, and genetic variation at 102 single nucleotide polymorphism (SNP) loci recently associated with facial shape variation was unable to identify any SNP related to asymmetry (Windhager et al., 2014). Methods for automatically phenotyping images and incorporating complex shape information will be key to understanding the genetic basis of morphology. New approaches such as the BRIM method, developed by Claes et al., have shown the promise of summarizing morphological differences in novel ways to identify genes affecting normal morphology (Claes et al., 2014). The aim of this study is to use automated phenotyping to produce a score of facial asymmetry that incorporates local morphological measurements and their spatial distribution to investigate the genetic basis of facial asymmetry.
Previous analyses of symmetry in 3D facial images have used manual landmarks (Devlin et al., 2007; Stauber et al., 2008), automated measurements (Mercan et al., 2018), calculation of the plane of symmetry (Linden et al., 2017), and dense surface registration of a 3D image with a mirrored version (Yu et al., 2009; Demant et al., 2010; Darvann et al., 2011; Djordjevic et al., 2012). Surface registration-based methods show particular promise due to their independence from the plane of symmetry and their ability to provide dense shape information across the surface of the face.
Recent applications of surface registration-based methods have been validated against traditional landmark methods and have quantified asymmetry in individuals using the average transform magnitude or root mean squared error from predefined regions (Claes et al., 2011; Kornreich et al., 2016; Öwall et al., 2016; Verhoeven et al., 2016) and principal modes of variation (Lanche et al., 2007).
In previous work, our group developed voxel-based deformable morphology analysis methods capable of quantifying facial development in embryos and postnatal animals from 3D imaging modalities with high precision. Using compact feature representations of image differences facilitates comparisons between individuals and across groups (Rolfe et al., 2011, 2013, 2014). In this work we introduce a surface registration-based method to quantify bilateral symmetry in individuals and a metric that summarizes how an individual's facial asymmetry and its spatial distribution compare to asymmetry in a healthy control population.
In this study, we perform GWA analysis on two facial asymmetry scores using a sample of 3186 healthy subjects. Highly significant genetic associations were identified for one of our scores, including genes known to play a role in craniofacial disorders (NFATC1, SOX5, NBAS) or likely to play a role in craniofacial development (TCF7L1).
Data
The datasets used in this work were previously collected as part of the FaceBase Consortium's 3D Facial Norms Dataset, described in detail by Weinberg et al. (2016). Use of the data for this study was covered under informed consent, and IRB approval was obtained for their use in this work. The dataset consisted of 3D photographic facial surface scans and genetic data for 3186 healthy subjects of European Caucasian ancestry between 3 and 40 years of age. Error screening and quality control measures were followed to reduce variability due to factors such as facial expression and poor image quality. Subjects were screened for many confounding environmental factors, including: (1) a personal history of facial trauma; (2) a personal history of facial reconstructive or plastic surgery; (3) a personal history of orthognathic/jaw surgery or jaw advancement; (4) a personal history of any facial prosthetics or implants; (5) a personal history of any palsy, stroke, or neurologic condition affecting the face; (6) a personal or family history of any facial anomaly or birth defect; and/or (7) a personal or family history of any syndrome or congenital condition known to affect the head and/or face (Weinberg et al., 2016). To demonstrate that the age range in this dataset did not distort the results, we also ran a GWAS excluding pre-pubertal individuals (under 14). The genes identified as significant on the whole dataset still met our threshold for significance on the restricted dataset. These results are reported in Figure S1 and Table S1.
All image data used in this project were acquired using 3dMD imaging systems (3dMD, Atlanta, GA). These commercial stereo-photography systems incorporate multiple camera viewpoints to provide a 3D mesh of the human face, at no risk to the subject, with the high level of anatomical integrity required for medical research. Several recent studies have assessed the amount of noise or variability that may be present in 3D meshes acquired using the 3dMD system, compared to alternative methods such as direct anthropometry and digital photogrammetry (Dindaroglu et al., 2016), or high-accuracy industrial "line-laser" scanning (Zhao et al., 2017). The findings from these studies suggest that the level of error likely to be present in a 3dMD dataset is similar to, or an improvement over, that of more traditional methods. The facial surface scans were stored as 3D meshes that were not aligned and could contain extraneous objects such as hair and clothing. Prior to analysis, images were preprocessed to remove noise, cropped to extract the facial region, and aligned using custom software developed by our research group (Wu et al., 2014).
A standard set of 24 facial landmarks was collected for each 3D facial mesh. In this study, a subset of 18 landmarks was selected to minimize the number of subjects excluded due to missing landmark points. A diagram of the landmarks used for analysis is shown in Figure S2. Details on the procedures used to identify the landmarks on 3D facial surfaces can be found in the "Technical Notes" section of the 3DFN website (https://www.facebase.org/facial_norms/notes).
The genotype data consists of 964,193 SNPs on the Illumina (San Diego, CA) OmniExpress+Exome v1.2 array plus 4,322 custom SNPs chosen in regions of interest based on previous studies of the genetics of facial variation.
Facial Asymmetry Score
Most attempts to summarize image characteristics rely on global features that describe the image as a whole, or on local features calculated point-wise across the image. Previous work evaluating asymmetry in facial images has tended toward a local, point-wise approach (Claes et al., 2011; Kornreich et al., 2016; Öwall et al., 2016). While these features have been shown to be effective, we propose a method to produce a richer phenotype description by scoring an individual's relationship to a model of normal asymmetry using both global and local differences. In this work, the global assessment of facial asymmetry is restricted to the region below the eyes (defined by the right and left endocanthion landmarks). This restriction limits noise caused by eyelashes, eyebrows, and the hairline, and is consistent with landmark-based analysis for this data set, as landmarks were not collected in the forehead region (Weinberg et al., 2016).
In our score assignment system, two local asymmetry metrics were defined to produce independent scores of asymmetry. The local metrics were assessed at each point on the surface of each image. For each asymmetry metric, a statistical model of asymmetry was calculated, and each image was scored by its distance from the average model using our novel similarity measure, which combines global distribution information with local point-wise correspondences into a summary of local and global differences. A block diagram of the asymmetry score assignment system is shown in Figure 1.
Local Asymmetry Metrics
The 18 manually-placed landmarks shown in Figure S2 were used to align each subject mesh in a common orientation. After alignment, a base mesh was chosen and corresponding points in each source mesh were found for each point in the base mesh, using the dense point correspondence method developed by Hutton et al. (2003). The locations of corresponding points for each point in the base mesh were averaged over the group to generate the average mesh. Each subject mesh was mirrored across the mid-line and the original and mirrored image were densely mapped to the average mesh. For each point on the average image, an asymmetry flow vector was defined by the difference in position between the corresponding points on the mirror and original images, representing the transformation due to asymmetry. This is illustrated in Figure 2.
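A minimal sketch of the mirroring and flow-vector step, assuming meshes are already aligned so the mid-sagittal plane sits at x = 0 and that a dense correspondence (computed elsewhere, e.g., by the Hutton et al. method) is available. All function and variable names here are illustrative, not from the paper's software.

```python
def mirror_x(points):
    """Reflect a mesh across the mid-sagittal plane, assumed here to be x = 0
    after landmark-based alignment."""
    return [(-x, y, z) for (x, y, z) in points]

def asymmetry_flow(original, mirrored, correspondence):
    """Asymmetry flow vector at each point: the positional difference between
    a point on the mirrored mesh and its corresponding point on the original.
    `correspondence[i]` is the index in `original` matching mirrored point i."""
    flows = []
    for i, (mx, my, mz) in enumerate(mirrored):
        ox, oy, oz = original[correspondence[i]]
        flows.append((mx - ox, my - oy, mz - oz))
    return flows
```

For a perfectly symmetric mesh the flow vectors are all zero; any residual flow captures the local transformation due to asymmetry.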
We defined two properties of local morphology calculated at each point on an individual facial mesh to capture independent aspects of facial asymmetry.
1. Angle of surface orientation: the angle between the normal vectors at corresponding points on the original image and the mirror image. This value quantifies the asymmetry in surface orientation at each point on the image.
2. Angle of deformation: the angle between the asymmetry flow vector and the surface normal on the original image. This value quantifies the direction of the transformation between an image and its mirrored copy at each corresponding point.
These local asymmetry features are illustrated in Figure 3. These angle-based features capture one aspect of asymmetry and are independent of the magnitude of asymmetry.
FIGURE 2 | Example of a corresponding point mapped from an average image (C) to a subject mesh (A) and its mirrored copy (B). The asymmetry flow vector is defined between corresponding points on the subject mesh and its mirrored copy.

The magnitude of the deformation can also be used as a local feature of asymmetry in our method. It is defined at each point on an individual facial mesh as the length of the 3D vector between that point and the corresponding point on the mirror image. Results from this approach are included in Figure S3 and Table S5.
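The per-point features above can be computed with ordinary vector arithmetic; this is an illustrative sketch (function names are ours, not from the paper's software):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def angle_between(u, v):
    """Angle in radians between two 3D vectors."""
    c = dot(u, v) / (norm(u) * norm(v))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp for numerical safety

def local_asymmetry_features(normal_orig, normal_mirror, flow):
    """Per-point features: angle of surface orientation (between normals on
    original and mirror), angle of deformation (between flow vector and
    original normal), and magnitude of deformation (flow vector length)."""
    return (angle_between(normal_orig, normal_mirror),
            angle_between(flow, normal_orig),
            norm(flow))
```

Note that the two angle features are independent of the deformation magnitude, as stated above.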
Average Model of Normal Asymmetry
Some asymmetry is expected in normal human facial features and the type and amount expected varies with location on the face. For example, asymmetry in the corners of the lips and eyes is more common than asymmetry in the nasal tip. To take into account these spatial differences, each asymmetry score was based on the distance between an individual and an average model of normal asymmetry rather than the absolute asymmetry of the face.
The asymmetry heat maps were used to create an average model of normal asymmetry for each feature. For every point on the average mesh, the average and standard deviation of each feature distribution over all corresponding points in the dataset were calculated. The average and standard deviation heat maps for the angle of surface orientation feature are shown in Figure 4.
Distance From Average Model of Normal Asymmetry
To assess the similarity between two feature heat maps, the following questions must be addressed:
1. What feature values are present in the image?
2. Where are regions of similar feature values approximately located?
To simultaneously address these two questions, we developed a similarity metric that combines information about the global feature distribution and point-wise differences. Histograms of image features provide a robust description of global image data that has proven to be powerful in detecting similarity. However, the use of histogram representations of features presents two primary drawbacks: the loss of spatial distribution information and the loss of information due to quantization. To address this, histograms can be augmented by the inclusion of additional spatial information and other local properties (Birchfield and Rangarajan, 2005;Lyons, 2009;Prabhu and Kumar, 2014;Zeng et al., 2015). In previous work, our group developed a method to simultaneously assess similarities in feature values and their regional distribution based on spatial histograms (Rolfe et al., 2014).
Intuitively, the spatial histogram, or spatiogram, is an image histogram in which the distribution of values is spatially weighted by the similarity of the spatial positions of the values in each bin. Typically, this is done by modeling the spatial location of the contents of each histogram bin with a single Gaussian distribution or a mixture of Gaussians. In this application, the known point correspondences between images in the data set, calculated in section 2.2.1, are leveraged to provide a more precise score of spatial matching between histogram bins. The spatial information is incorporated as the set of coherent feature regions in a histogram bin. For an image I, the histogram of I is defined as

h(I) = {n_b}, b = 1, ..., B,

where n_b is the number of points with values assigned to the b-th bin and B is the total number of bins. Our spatially augmented histogram is defined as

h(I) = {(n_b, R_b)}, b = 1, ..., B,

where n_b is the number of points with values assigned to bin b, and R_b is the set of m coherent regions r_b1, ..., r_bm, where r_bi is a vector of point indexes <x_1, ..., x_j>. Coherence of regions is determined by computing connected components. A connected region r_bi is a set of mesh points such that for any points x, x′ ∈ r_bi there is a path in r_bi from x to x′. A threshold for coherence can be set so that a connected region must contain more than τ mesh points. For this study, regions with fewer than τ = 20 mesh points (corresponding to less than 0.1% of the image) were classified as incoherent. An example of a feature heat map and coherent regions extracted from the histogram is shown in Figure 5. In Figure 5A, feature values from a feature heat map are grouped into histogram bins. Figure 5B shows the original feature heat map and the extraction of coherent image regions assigned to Bin 4 of the histogram in Figure 5A.
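The coherent-region extraction could be implemented roughly as follows, assuming each mesh point has a histogram bin assignment and a neighbour list. The function name and data layout are illustrative, with the paper's τ = 20 kept as the default threshold:

```python
from collections import deque

def coherent_regions(bin_assignment, adjacency, tau=20):
    """Split each histogram bin into coherent (connected) regions; regions
    with fewer than `tau` points are discarded as incoherent.
    `bin_assignment[i]` is the bin of mesh point i; `adjacency[i]` lists
    the mesh neighbours of point i."""
    regions = {}   # bin -> list of regions (each region is a list of indexes)
    visited = set()
    for seed in range(len(bin_assignment)):
        if seed in visited:
            continue
        b = bin_assignment[seed]
        # breadth-first search restricted to same-bin neighbours
        region, queue = [], deque([seed])
        visited.add(seed)
        while queue:
            p = queue.popleft()
            region.append(p)
            for q in adjacency[p]:
                if q not in visited and bin_assignment[q] == b:
                    visited.add(q)
                    queue.append(q)
        if len(region) >= tau:
            regions.setdefault(b, []).append(region)
    return regions
```

On a chain of six points with bins [0, 0, 0, 1, 1, 0] and τ = 2, the isolated final point in bin 0 is dropped as incoherent while the two larger runs survive as regions.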
The distance metric between two augmented histograms is typically based on the Bhattacharyya distance between histograms, weighted by the spatial similarity of the contents of bin b, as in Birchfield and Rangarajan (2005). The difference between spatial histograms h and h′ is expressed as

D(h, h′) = Σ_{b=1..B} m_b √(n_b n′_b),

where the spatial weighting term m_b expresses the similarity of the m spatial regions in bin b. Previously, the Mahalanobis distance, or number of standard deviations between the means of the Gaussian distributions in each bin, has been used to weight the spatial similarity. In this work, we utilized the spatial weighting term to incorporate the point-wise similarity between corresponding points from two feature heat maps. This modification addressed both the need for spatial information and the loss of information due to histogram quantization. We defined the spatial weighting term as the mean feature error between histogram regions, normalized by the standard deviation at each point, calculated from the average model of asymmetry:

w_bi = (1/|r_bi|) Σ_{x_j ∈ r_bi} |A(x_j) − A′(x_j)| / σ̄_j,

where w_bi is the weight of the ith coherent region in bin b, A(x_j) and A′(x_j) are the feature values from the two feature maps at the corresponding point j, and σ̄_j is the standard deviation at point j. This spatial weighting term represents the average error between feature maps, measured in standard deviations, for each coherent region. To achieve a symmetric distance measure, the total distance between h and h′ was defined as

ρ(h, h′) = ½ [D(h, h′) + D(h′, h)].

This distance ρ(h, h′) was applied to assess the similarity of each individual feature heat map to the average feature heat map. This provided a hybrid local-plus-global summary of the abnormality of asymmetry of an individual and was assigned as our score of asymmetry.
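A sketch of how such a spatially weighted comparison could be coded. Because the printed equations did not survive extraction, the exact way the region weights combine with the Bhattacharyya term (and the symmetrization) is our assumption; the per-region weight follows the verbal definition above (mean feature error in units of the model standard deviation). All names are hypothetical.

```python
import math

def region_weight(region, feat_a, feat_b, sigma):
    """Mean per-point feature error for one coherent region, in units of
    the model standard deviation at each point (the w_bi term)."""
    return sum(abs(feat_a[j] - feat_b[j]) / sigma[j] for j in region) / len(region)

def spatiogram_distance(hist_a, hist_b, feat_a, feat_b, sigma):
    """Directed, spatially weighted Bhattacharyya-style distance.
    `hist_a`/`hist_b` map bin -> (count, list of coherent regions)."""
    total = 0.0
    for b, (n_b, regions) in hist_a.items():
        n2_b = hist_b.get(b, (0, []))[0]
        if not regions or n2_b == 0:
            continue
        # m_b: mean region weight for this bin (an assumption of this sketch)
        m_b = sum(region_weight(r, feat_a, feat_b, sigma) for r in regions) / len(regions)
        total += m_b * math.sqrt(n_b * n2_b)
    return total

def symmetric_distance(hist_a, hist_b, feat_a, feat_b, sigma):
    """Average of the two directed distances, giving a symmetric measure."""
    return 0.5 * (spatiogram_distance(hist_a, hist_b, feat_a, feat_b, sigma)
                  + spatiogram_distance(hist_b, hist_a, feat_b, feat_a, sigma))
```

Scoring an individual then amounts to calling `symmetric_distance` between that subject's feature histogram and the histogram of the average model.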
Average feature heat maps for subjects with the lowest (lower 10 percent of the data set) and highest (upper 10 percent of the data set) asymmetry scores for the angle of surface orientation feature are shown in Figure 6. In the average heat map from the high asymmetry score group in Figure 6B, regions with high values contributed the most to the score in individuals with high levels of asymmetry.
Genetic Association Analyses
Whole-genome association with each phenotype score was done using PLINK (Purcell et al., 2007). SNPs with the minor allele present in fewer than 5 subjects were removed, resulting in 747,780 remaining SNPs. The first four principal components of the genetic data were used as covariates to adjust for the effects of ancestry. A linear model was used to test genetic association between our phenotype scores and each SNP, controlling for the effects of age and gender. The Benjamini-Hochberg procedure was used to adjust the original p-values globally over both phenotype scores in order to control the false discovery rate (FDR) (Benjamini and Hochberg, 1995). Genome-wide Complex Trait Analysis (GCTA) was used to estimate the proportion of variance in each phenotype score explained by all GWAS SNPs, i.e., heritability (Yang et al., 2011). Each phenotype score was tested for associations with age and sex. The Pearson correlation coefficient was used to test for an association with age. An association with sex was tested using the Kendall rank correlation coefficient (tau). The Kendall test does not rely on the assumption of normally distributed data and so is more appropriate for dichotomous data such as sex. The correlations found between the asymmetry scores, age, and sex were weak, though the correlations were highly significant in terms of their p-values, as reported in Tables S2, S3. We speculate that this effect is likely due to the large sample size (i.e., statistical power), which made it possible to detect the significant associations.

FIGURE 6 | Group average heat maps from subjects with low asymmetry (angle of surface orientation score in lower 10 percent of data set) (A) and subjects with high asymmetry (angle of surface orientation score in upper 10 percent of data set) (B). Regions with low asymmetry are blue and regions with high asymmetry are red.
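The Benjamini-Hochberg adjustment applied globally over both phenotype scores is the standard step-up procedure; it can be sketched in a few lines (not tied to any particular PLINK output format):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR control).
    Returns q-values in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        val = min(prev, pvals[i] * m / rank)
        q[i] = val
        prev = val
    return q
```

SNPs whose adjusted value falls below the chosen FDR level are then reported as discoveries.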
Angle of Surface Orientation Score
The top 10 SNPs significantly associated with the angle of surface orientation scores (p-value < 5 × 10 −8 ) are listed in Table 1, and the Manhattan plot is shown in Figure 7. Of these SNPs with highly significant associations, three are located on genes with known links to craniofacial abnormality and asymmetry (NFATC1, SOX5, and NBAS) and one (SNX6) is on a gene with a potential link. NFATC1 encodes a transcription factor that plays a role in mandibular development and the Wnt signaling pathway, which is instrumental to facial morphogenesis (Winslow et al., 2006; Brugmann et al., 2007; Doraczynska-Kowalik et al., 2017). Mutations in NFATC1 are linked to Cherubism, a disorder characterized by abnormal bone tissue in the lower part of the face and a characteristic facial phenotype (Kadlub et al., 2016). A recent GWA study of morphological measurements has also suggested a possible link between this gene and measurements of the mouth (Lee et al., 2017). SOX5 encodes a transcription factor involved in the regulation of embryonic development that is thought to play a role in chondrogenesis. SOX5 is linked to Lamb-Shaffer Syndrome, which can cause an abnormal craniofacial phenotype including facial asymmetry, depressed and/or broad nasal bridge, and bulbous nasal tip (Lamb et al., 2012). Mutations in NBAS are associated with Pelger-Huet Anomaly, which has a phenotype including facial asymmetry, long face, and straight nose (Segarra et al., 2015). It is also linked to Feingold Syndrome 1, which can result in craniofacial dysmorphology including asymmetry, triangular shaped face, and flat nasal tip (Chen et al., 2012). SNX6, a member of the sorting nexin family, has not been definitively linked to craniofacial disorders; however, multiple studies have suggested it as a candidate gene for holoprosencephaly, the most common developmental field defect in patterning of the human prosencephalon and associated craniofacial structures (Kamnasaran et al., 2005; Segawa et al., 2007).
Also of interest is TCF7L1, which encodes a transcription factor that mediates the Wnt signaling pathway and has been found to have high expression in the developing murine palate (Potter and Potter, 2015). (In Table 1, genes with known or potential association with craniofacial abnormality and asymmetry are boldfaced.)
The angle of surface orientation phenotype scores were assessed for heritability using GCTA and were found to have a proportion of variance consistent with a substantial heritability. Detailed results are reported in Table S4.
Angle of Deformation Score
The angle of deformation phenotype scores showed less significance than the angle of surface orientation scores. The top 10 SNPs associated with the angle of deformation scores are reported in Table 2, and the Manhattan plot is shown in Figure 8. While many of the p-values for the SNPs associated with this phenotype score are not considered significant, it is possible that the multiple testing correction might have been overly conservative where significant linkage disequilibrium was present. As there are a number of genes with known or potential links to facial development or morphology, we reported the genes associated with these SNPs of interest, though the associations are weak.
AMBRA1 encodes a protein that regulates different steps of the autophagic process and is an important regulator of embryonic development. Its mutation or inactivation in mice was shown to result in embryonic malformations (Fimia et al., 2007). Rare deletions in NRXN3 have been linked to autism spectrum disorder (Vaags et al., 2012). While there is as yet no consensus on facial phenotypes associated with autism spectrum conditions (ASC), there is evidence to suggest that there are morphologically distinct subgroups within ASC that correspond with different cognitive and behavioral symptomatology (Boutrus et al., 2017). Two SNPs of interest are located on the gene FANCC, which encodes a DNA repair protein with a role in the maintenance of normal chromosome stability. FANCC is implicated in Gorlin syndrome, which has a phenotype including broad nasal root, cleft lip, and cleft palate (Reichert et al., 2015). FANCC is also linked to Fanconi anemia, which has a phenotype including craniosynostosis, microcephaly, and small eyes (de Winter et al., 2000). FTO is a protein coding gene associated with growth retardation, developmental delay, and facial dysmorphism (Boissel et al., 2009; Daoud et al., 2015). The associated phenotype includes skull asymmetry, coarse facial features, abnormal positioning of the maxilla or mandible, prominent alveolar ridge, and cleft palate. The retinoic acid receptor-responsive gene RARRES1 contains two SNPs of interest. This gene is thought likely to play a role in embryonic morphogenesis (Oldridge et al., 2013).
The angle of deformation phenotype scores were assessed for heritability using GCTA and were found to have a proportion of variance suggesting minimal heritability and a p-value suggesting low significance. This is a possible explanation for the low levels of significance observed. These results are detailed in Figure S4.
Comparison to Asymmetry Scores Based on the Deformation Vector Magnitude
Angle-based measurements capture one aspect of asymmetry, which may be relevant to specific biological processes. Deformation magnitude, defined as the magnitude of the distance between each point on a facial image and its corresponding point on a mirrored image, is another common choice for mesh-based shape analysis. For comparison, we implemented our asymmetry score using the deformation magnitude as the local asymmetry feature. This local property was then used to calculate an overall score of asymmetry following the procedure outlined in the Methods section 2.2. The GWAS results from our magnitude-based asymmetry score are reported in Figure S3 and Table S5.
Since the average value of the deformation magnitude over an image surface is a metric frequently used in other studies, we also implemented an established measure from the literature to compare to our deformation magnitude asymmetry scores (Verhoeven et al., 2016). In that work, local asymmetry is defined as the magnitude of the distance between each point on a facial image and its corresponding point on a mirrored image. The measure of total facial asymmetry was calculated using the average of these distances over the face. This method was selected because it is similar to those used by several other groups and the results were validated on a data set with known ground truth. The GWAS results from this comparable deformable morphology approach are detailed in Figure S4.
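The magnitude-based score can be sketched directly from its definition: the deformation magnitude at each point is the distance to the corresponding mirrored point, and the whole-face score is the average of these. Correspondences and the mirroring/registration step are assumed to be computed upstream; this is an illustrative sketch, not the authors' implementation.

```python
import math

def point_asymmetry(p, q):
    """Deformation magnitude at one point: Euclidean distance between a
    surface point p and its corresponding point q on the mirrored image."""
    return math.dist(p, q)

def total_facial_asymmetry(points, mirrored):
    """Whole-face score: the average deformation magnitude over all
    corresponding point pairs (mirroring and alignment assumed done
    upstream)."""
    assert len(points) == len(mirrored)
    return sum(point_asymmetry(p, q) for p, q in zip(points, mirrored)) / len(points)
```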
Both magnitude-based methods we tested had lower significance and did not identify genes known to result in facial asymmetry. One gene of interest identified by both methods, MYO10, has been linked to craniofacial development in zebrafish. The genes identified by these two magnitude-only methods overlapped, but our magnitude-based asymmetry score showed higher levels of significance.
Comparison to Asymmetry Scores Based on Landmark Measurements
(Table 2 note: the genes with known or potential association with facial development or morphology are boldfaced.)

The motivation for developing the mesh-based methods in this work was to provide more complex phenotypes for genetic association than standard landmark-based approaches. Subtle differences in asymmetry that may be scientifically interesting are
unlikely to be captured by landmark data, which is usually very sparse. While landmark-based methods may identify associations with genes of interest, they may identify different pathways than mesh-based analysis as they do not use data between the landmark points. To compare our method to GWAS using a traditional, landmark-based approach of measuring asymmetry, a score of facial asymmetry was defined using the Procrustes distance between an image and its mirrored copy (Bookstein, 1997). Each image was rigidly aligned with its mirrored copy. A subset of the 12 bilaterally paired landmarks was selected from the original 18 landmarks shown in Figure S2 and the Euclidean distance between right/left landmark pairs was measured. The facial asymmetry score was calculated using the average of these distances. Using this method, no SNPs were found to meet the threshold of genome-wide significance of p = 5 × 10 −8 , as detailed in Figure S5. These results provide additional motivation for the use of mesh-based analysis, in addition to the improvements in precision and reproducibility.
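A landmark-based score in the spirit of this Procrustes approach can be sketched in 2-D. The study uses 3-D landmarks; the closed-form 2-D rigid alignment below stands in for the general Kabsch/Procrustes solution, and the bilateral pairing is supplied by the caller, so this is an illustrative analogue rather than the exact pipeline.

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rigid (rotation + translation, no scaling) alignment
    of 2-D point set src onto dst with known correspondences; closed-form
    2-D analogue of the Kabsch solution."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (tx, ty) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = tx - cdx, ty - cdy
        num += ax * by - ay * bx   # cross terms -> sin component
        den += ax * bx + ay * by   # dot terms   -> cos component
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * (x - csx) - s * (y - csy) + cdx,
             s * (x - csx) + c * (y - csy) + cdy) for x, y in src]

def landmark_asymmetry_2d(landmarks, pairing):
    """Mirror the landmarks about the y axis, relabel left/right pairs
    (pairing[i] is the bilateral counterpart of landmark i; midline
    landmarks map to themselves), rigidly align the mirrored copy back
    onto the original, and return the mean residual distance."""
    mirrored = [(-landmarks[pairing[i]][0], landmarks[pairing[i]][1])
                for i in range(len(landmarks))]
    aligned = rigid_align_2d(mirrored, landmarks)
    return sum(math.dist(p, q) for p, q in zip(aligned, landmarks)) / len(landmarks)
```

A perfectly symmetric landmark configuration scores zero; any residual after the optimal rigid alignment is the asymmetry.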
DISCUSSION
Asymmetry is the topic of a large number of studies investigating how genetic and environmental factors influence normal development. It is likely to be influenced by complex and interrelated factors, which can be difficult to control for in human studies, presenting significant challenges for analysis. Asymmetry, especially fluctuating asymmetry, has been hypothesized to be closely linked to developmental instability, and many studies have interpreted it as a marker of environmental stress during development (Klingenberg and McIntyre, 1998; DeLeon, 2007; Ozener, 2010). However, several recent studies have called these findings into question and have suggested a stronger role of heredity (Quinto-Sánchez et al., 2015). Further studies using genotype and phenotype data are needed to better understand how the developmental processes leading to asymmetry are impacted by environmental factors. While subjects in our study were screened for a number of possible environmental influences on facial asymmetry, as detailed in section 2.1, many other potential confounding factors remain, such as twinning status and smoking behavior, that are unknown or could not feasibly be controlled for in this study. Despite these limitations, we have applied a data-driven approach to evaluate methods for quantifying aspects of asymmetry that may be related to biological processes resulting in facial asymmetry. Consistent with other recent findings on the genetic basis of normal facial variation, several of the genes associated with variation in normal asymmetry are involved in syndromes with craniofacial phenotypes. This supports the hypothesis that common variants near the genes related to Mendelian syndromes are implicated in normal phenotypic variation (Shaffer et al., 2016).
While we are cautious about interpreting the results from the angle of deformation asymmetry score, due to the weak associations, several of the genes of interest identified are associated with embryonic morphology and development and craniofacial abnormality. The genes identified by the angle of deformation score do not overlap with the genes identified by the angle of surface orientation score. This indicates the possibility that the two aspects of asymmetry quantified may be useful for identifying different biological pathways impacting facial asymmetry.
The heat maps of local asymmetry features provide information about the regions of the face that contribute most to the asymmetry scores. Figure 6B shows the average feature heat map for subjects with the highest asymmetry scores (top 10 percent of the data set). This heat map shows higher levels of asymmetry than the average feature heat map in Figure 4 and also suggests the relative importance of the nasal tip, nasal bridge, upper lip, and chin regions in subjects with high levels of normal asymmetry.
Questions still remain about the ability of complex phenotypes to be accurately associated with genetic data, as the genotype-phenotype map for facial morphology is likely to be incredibly complex (Hallgrimsson et al., 2014). A single gene can result in local or global shape differences and be intertwined with environmental factors. Despite these challenges, we have demonstrated that our hybrid local-to-global score of abnormal asymmetry was able to find associations with genes known to play a role in craniofacial morphology and asymmetry. While we do not have an assurance that our automated phenotyping method is the optimal strategy to summarize phenotypes for genetic association, the significance of the results motivates its further development. One limitation of this study was our lack of a comparable dataset with which to replicate our findings. If one becomes available in the future, applying these methods to identify an overlapping set of genes would significantly strengthen the findings in this work.
In future work, new local morphological metrics can be investigated using this framework. This method can also be implemented to compare subjects to an average model of a group of interest, rather than a control population, to assess similarity to a known phenotype. Taking a data-driven approach to optimizing phenotypic descriptors, guided by the significance of the genetic associations uncovered, will contribute to both our understanding of the genetic basis of human facial variation and the creation of new metrics for biologically relevant phenotype data.
ETHICS STATEMENT
The study was carried out in accordance with the recommendations of University of Washington IRB #42874 with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The University of Washington IRB approved the protocol.
DATA AVAILABILITY STATEMENT
The datasets analyzed for this study were obtained from FaceBase (www.facebase.org), and were generated by projects U01DE020078 and U01DE020054. The FaceBase Data Management Hub (U01DE020057) and the FaceBase Consortium are funded by the National Institute of Dental and Craniofacial Research. The phenotype scores developed for this work will be made available on request.
AUTHOR CONTRIBUTIONS
SR carried out the main efforts on the research, including developing the theory behind the angle of surface orientation score and the angle of deformation score, as well as carrying out all the experiments and writing the paper. S-IL acted as a consultant on the analysis of the GWAS results and helped to write the Results and Discussion sections of the paper. LS served as the primary adviser to SR in this work.
FUNDING
Research reported in this publication was supported by the National Institute of Dental and Craniofacial Research (NIDCR) of the National Institutes of Health under award numbers: 5F32DE025519 and U01-DE020050. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
"year": 2018,
"sha1": "7fe96d31c8145c24ca7baaa6819b49d3adb5d35c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2018.00659/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fe96d31c8145c24ca7baaa6819b49d3adb5d35c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Single-molecule assays reveal that RNA localization signals regulate dynein-dynactin copy number on individual transcript cargoes
Subcellular localization of mRNAs by cytoskeletal motors plays critical roles in the spatial control of protein function 1 . However, optical limitations of studying mRNA transport in vivo mean that there is little mechanistic insight into how transcripts are packaged and linked to motors, and how the movement of mRNA:motor complexes on the cytoskeleton is orchestrated. Here, we have reconstituted transport of mRNPs containing specific RNAs in vitro. We show directly that mRNAs that are either apically localized or non-localized in Drosophila embryos associate with the dynein motor and move bidirectionally on individual microtubules, with localizing mRNPs exhibiting a strong minus-end-directed bias. Single-molecule fluorescence measurements reveal that RNA localization signals increase the average number of dynein and dynactin components recruited to individual mRNPs. We find that, surprisingly, individual RNA molecules are present in motile mRNPs in vitro and present evidence that this is also the case in vivo. Thus, RNA oligomerization is not obligatory for transport. Our findings lead to a model in which RNA localization signals produce highly polarized distributions of transcript populations through modest changes in motor copy number on single mRNA molecules.
Following injection into embryos, fluorescent, in vitro-synthesized transcripts assemble into messenger ribonucleoprotein complexes (mRNPs) that move bidirectionally 8,9 . Net apical accumulation of localizing RNAs is due to longer uninterrupted movements, on average, in the apical direction than in the basal direction 8,9 . Surprisingly, RNAs that have a uniform distribution endogenously also move bidirectionally on injection, but with little net bias 8 . This observation contributed to the speculative model that Egl, BicD and RNA signals are not obligatory for linking mRNAs to motor complexes, but drive apical localization by increasing the frequency of dynein-driven movements of a generic bidirectional transport complex 8 . However, it was unclear whether reversals of mRNPs in the apical-basal axis represent movements on single microtubules or switching between mixed polarity filaments, and what mechanism is used by RNA localization signals, Egl and BicD to impart a net minus-end-directed bias to transport.
To explore the basis of differential mRNA sorting we set out to reconstitute transport in vitro of isolated RNPs carrying either a well-characterized, apically localizing RNA, fs(1)K10 (K10), or a mutant in which the 44-nucleotide localization signal 6,10 had been destroyed in the context of an otherwise wild-type K10 transcript (K10 mut ). This mutation prevents enrichment of the injected RNA apically 11 (Supplementary Fig. S1) by substantially diminishing the net minus-end bias to bidirectional transport (Supplementary Table S1).
In vitro-synthesized K10 or K10 mut RNAs (body-labelled with multiple Cy3-UTPs) were incubated with Drosophila embryonic extracts in the presence of biotinylated microtubules and streptavidin-conjugated magnetic beads (Fig. 1a). Motor proteins and their associated complexes were then captured from extracts on the basis of their affinity for the exogenous microtubules, followed by brief washing and release with ATP. The released fraction included known constituents of RNA-motor complexes (Fig. 1b), but still represented a complex mixture of many proteins (data not shown).
This fraction was added to an imaging chamber and viewed with total internal reflection (TIRF) microscopy.

(Figure 1 legend, displaced: … and kymographs (lower panels) generated from time-lapse series of motile K10 (c,d) and K10 mut mRNPs (e). c shows unidirectional motion; d and e show bidirectional motion (these data are derived from Supplementary Movies S2 and S3, respectively). Images are pseudocoloured for clarity (green, Cy3 signal; red, fluorescein-labelled microtubule). Arrows, position of motile mRNP; t, time; d, distance. It should be noted that only ∼10% of microtubule-associated puncta of both mRNA species moved during the 3 min of imaging. Essential regulatory components were presumably lost from many RNA-motor complexes during their capture from embryos, or their motor activity compromised.)

Typically five to ten puncta of Cy3-labelled K10 or K10 mut RNA exhibited persistent movements
along fluorescein-labelled microtubules pre-adsorbed on the coverslip (Fig. 1c-e and Supplementary Movies S1-S3), with speeds of up to 1.5 µm s −1 (Supplementary Fig. S2a). These presumably represented active RNA-motor complexes assembled in the extract. Approximately half of the motile K10 and K10 mut mRNPs underwent at least one reversal in the direction of movement along individual microtubules before the Cy3 signal was lost (Fig. 1d,e and Supplementary Movies S1-S3). Mean square displacement analysis indicated that active transport contributed to the movement of even the most oscillatory mRNPs ( Supplementary Fig. S2b). Consistent with a physiological role for motors in transporting non-localizing mRNAs, K10 mut RNA associated with dynein, and was transported, when RNA-motor complexes were assembled and washed in 150 mM salt ( Supplementary Fig. S2c,d), not just the 50 mM concentration used in all other motility assays. Collectively, these data demonstrate that both localizing and non-localizing mRNPs are capable of bidirectional transport on individual microtubules. We next quantified the motile properties of localizing and nonlocalizing mRNPs by carrying out in vitro motility assays on polaritymarked microtubules (Supplementary Movie S4). K10 mRNPs exhibited a strong net minus-end bias in their transport, whereas K10 mut mRNPs did not (Fig. 2a). Net transport of K10 mRNPs was associated with substantially longer runs in the minus-end direction than in the plus-end direction (Fig. 2b). Run lengths of K10 mut mRNPs also had a minus-end bias, but the magnitude of the difference was much lower than for K10 mRNPs (Fig. 2c). Qualitatively, these findings are reminiscent of the ability of the K10 localization signal to significantly augment apical and basal run lengths in vivo, with much stronger effects on apical travel distances (Supplementary Table S1).
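The mean square displacement analysis used above to distinguish active transport from diffusion can be sketched for a 1-D position trace: MSD that grows roughly quadratically with time lag indicates directed motion, while purely diffusive motion grows only linearly. This is a generic, stdlib-only illustration, not the authors' analysis code.

```python
def mean_square_displacement(positions, max_lag=None):
    """Time-averaged mean square displacement of a 1-D position trace
    (e.g. distance along the microtubule per frame). Returns a list of
    MSD values for lags 1..max_lag."""
    n = len(positions)
    max_lag = max_lag or n - 1
    msd = []
    for lag in range(1, max_lag + 1):
        diffs = [(positions[i + lag] - positions[i]) ** 2
                 for i in range(n - lag)]
        msd.append(sum(diffs) / len(diffs))
    return msd
```

For a trace drifting at constant speed, the MSD values grow as the square of the lag, the signature of an active-transport contribution.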
The association time of Cy3 signals with microtubules was similar for motile K10 and K10 mut mRNPs (Fig. 2d). Thus, measured differences in net motion of the two RNAs were not due to decreased dissociation of localizing RNPs from microtubules. Rather, the localization signal must enhance the ability of an mRNP to move persistently in the minus-end direction before reversing. Consistent with this notion, the frequency of reversals was significantly lower for K10 than for K10 mut mRNPs (Fig. 2e).

(Figure 2 legend, displaced: Differential motility of localizing and non-localizing mRNPs along polarity-marked microtubules in vitro. (a-c) Quantification of net RNA motion (a; negative values are net minus-end-directed movement) and lengths of minus-end- and plus-end-directed runs (b,c) of motile K10 and K10 mut mRNPs. A run was defined as a persistent movement in either the minus-end or plus-end direction with a total length ≥320 nm (≥2 pixels) that was ended with either a reversal in the direction of movement of the mRNP or disappearance of the Cy3 signal (presumably due to a dissociation event or photobleaching).)
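The run definition used for Figure 2 (a persistent unidirectional movement with total length ≥320 nm, ended by a reversal or loss of the Cy3 signal) can be sketched as a segmentation of a 1-D position trace. The treatment of pauses and of signal loss at the end of the trace are simplifying assumptions of this sketch.

```python
MIN_RUN_NM = 320.0  # >= 2 pixels, per the run definition above

def segment_runs(positions, min_run=MIN_RUN_NM):
    """Split a 1-D position trace (nm; minus-end direction negative)
    into persistent unidirectional runs, ending a run at each reversal,
    and keep only runs whose total length is >= min_run. Returns signed
    run lengths (negative = minus-end-directed)."""
    runs = []
    start = 0
    direction = 0
    for i in range(1, len(positions)):
        step = positions[i] - positions[i - 1]
        d = (step > 0) - (step < 0)
        if d == 0:
            continue            # pause: does not end the run
        if direction == 0:
            direction = d
        elif d != direction:    # reversal ends the current run
            length = abs(positions[i - 1] - positions[start])
            if length >= min_run:
                runs.append(direction * length)
            start, direction = i - 1, d
    length = abs(positions[-1] - positions[start])
    if direction and length >= min_run:
        runs.append(direction * length)
    return runs
```

Net motion per mRNP is then simply the sum of its signed run lengths, and run-length distributions in each direction follow directly.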
We also monitored the in vitro behaviour of the h RNA, which localizes apically in the embryo 12 , and the Kr transcript, which has a uniform apical-basal distribution 8 (Fig. 2f-h and Supplementary Fig. S2b). Reminiscent of the behaviour of each injected mRNA in embryos 8 (Supplementary Table S1), both mRNP species underwent bidirectional transport in vitro, with those containing h RNA exhibiting significantly greater net minus-end-directed motion than those containing Kr (Fig. 2f-h).
Our findings provide direct evidence for the previous model that localizing and non-localizing mRNAs undergo differential transport on individual microtubules, with localizing RNAs accumulating apically in embryos because RNA signals increase the probability of minus-end-directed motion of a bidirectional transport complex 8 . It is plausible that the extent of association of non-localizing mRNPs with motors is exaggerated in our assay by the presence of a microtubule-affinity purification step. Nonetheless, a physiological interaction between uniformly distributed RNAs and motors is supported by our previous observation that the spreading of endogenous Kr RNA in the blastoderm cytoplasm is dependent on dynein 8 . Transport of non-localizing mRNPs presumably facilitates intermolecular interactions in the crowded cytoplasm 13 .
We next investigated the mechanism by which localization signals increase the frequency of minus-end-directed motion.

(Figure 3 legend, displaced: … and microtubules. Data were generated from manual analysis of the raw traces, although very similar results were obtained with the Stepfinder algorithm 33 and with manual analysis of data that were denoised with a moving average. The number of mRNPs analysed is shown in parentheses. We assume that localization signals recruited additional copies of the intact dynein-dynactin complex to a sizeable subset of mRNPs; a substantial proportion of the unlabelled, endogenous Dlic or Dmn in the extracts (Supplementary Fig. S3d) means that discrete peaks in the distribution of GFP::Dlic or GFP::Dmn decay steps would not be observed with the number of complexes that could be practically analysed. (e) Mean number of GFP decay steps (±s.e.m.) from d and e (calculated from fitting a Gaussian distribution, including the predicted contributions of mRNPs that contain only unlabelled copies of the protein). ***, P < 0.001 (t-test), compared with the number of GFP decay steps of K10 mRNPs.)

It is possible
that dynein needs to dissociate from mRNPs for them to undergo plus-end-directed motion, with localization signals driving net minus-end movement by reducing the rate with which this happens. However, this scenario is highly unlikely because GFP::dynein light intermediate chain (Dlic) was detected with Cy3-labelled mRNPs during movements in both directions along microtubules in vitro ( Supplementary Fig. S2e). We previously speculated that localization signals, through Egl and BicD, mediate net minus-end-directed transport by recruiting additional dynein motors to those present on non-localizing mRNPs (ref. 8). Direct evidence for such a model was lacking, however, as it is not possible to visualize motor components on mRNPs in vivo. More recent work on lipid droplet motion in Drosophila embryos-which also involves BicD (ref. 14)-has provided evidence that regulation of the absolute motor copy number is not significant for the control of bidirectional transport properties 15 . Thus, it was possible that the same number of dyneins is present on localizing and non-localizing mRNPs, with differential sorting due to regulation of the activity of the motors.
To investigate this issue we employed stepwise photobleaching to assess the relative copy number of motor components associated with K10 or K10 mut RNPs in vitro ( Fig. 3 and Supplementary Fig. S3a,b).
This method allows the number of fluorescent molecules in individual complexes to be estimated by counting the number of discrete step changes in fluorescence intensity during photobleaching [16][17][18][19][20] . The ability of our photobleaching assay to estimate the copy number of fluorescent molecules was supported by predominantly two-step photobleaching of microtubule-associated puncta of a GFP-tagged tail-less kinesin-1 motor ( Supplementary Fig. S3c), which is predicted to form a dimer 16,21 .
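Counting discrete step changes in a photobleaching trace can be illustrated with a deliberately naive detector: a 3-point median filter to suppress shot noise, followed by counting downward jumps larger than a threshold. The actual analyses used manual counting and a Stepfinder-style algorithm, so this is only a sketch of the principle.

```python
def _median3(trace, i):
    """Median of a 3-point window centred at i (shorter at the edges)."""
    window = sorted(trace[max(0, i - 1): i + 2])
    return window[len(window) // 2]

def count_bleach_steps(trace, min_drop):
    """Naive photobleaching step counter: median-filter the intensity
    trace, then count downward jumps larger than min_drop between
    successive filtered samples. The number of counted steps estimates
    the number of fluorophores in the complex."""
    med = [_median3(trace, i) for i in range(len(trace))]
    return sum(1 for i in range(1, len(med)) if med[i - 1] - med[i] >= min_drop)
```

On an idealized trace from a GFP-tagged dimer (two plateaus above background), the counter returns two steps, matching the expected two-step photobleaching of the dimeric kinesin control.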
Motor complexes were captured from extracts of transgenic embryos expressing GFP-tagged versions of either the dynactin subunit dynamitin (Dmn) or Dlic in the presence of Cy3-labelled K10 or K10 mut RNAs. Long-term photobleaching, carried out on RNA-motor complexes immobilized on microtubules in the no-nucleotide state, revealed variation in the number of GFP bleaching steps per complex ( Fig. 3a-d). This was expected because the multiple molecules of Dlic and Dmn present in each dynein-dynactin complex will represent mixtures of GFP-tagged and endogenous Dlic and Dmn ( Supplementary Fig. S3d).
The mean number of decay steps of GFP::Dlic and GFP::Dmn was, respectively, ∼70% and ∼55% greater on K10 mRNPs than on K10 mut mRNPs ( Fig. 3c-e; P < 0.001). These data provide direct evidence that localization signals increase the average copy number of dynein-dynactin complexes per mRNP. Consistent with this notion, when K10 and K10 mut RNAs were immobilized through an aptamer to an affinity matrix and incubated with embryonic extracts, we observed greater recruitment of Dlic to the population of localizing RNA ( Supplementary Fig. S3e). We next combined the photobleaching data with quantification of the ratios of GFP-labelled to unlabelled Dlic and Dmn in the material injected into the imaging chamber ( Supplementary Fig. S3d), and published data on the stoichiometry of Dlic and Dmn per motor complex ( Supplementary Fig. S3d legend). This analysis indicated that there is a low number of dynein-dynactin complexes on captured non-localizing mRNPs (most likely an average of one or two copies per mRNP), with localization signals increasing the proportion of mRNPs in the population that have recruited additional copies of this complex. Long-term experiments, involving significant technological advances, will be needed to determine with absolute precision the copy number of dynein and dynactin within motile RNA-motor complexes.
There are large increases in the duration of minus-end-directed runs when two versus one, or three versus two, dynein motors are active on a bead 22 . By increasing the likelihood of engaging in minus-end-versus plus-end-directed motion, recruitment of additional dynein-dynactin complexes is therefore likely to play an important role in generating the overall directional bias exhibited by the population of localizing mRNPs. The in vivo significance of dynein copy number for controlling mRNP motility is supported by our earlier observations that increasing the concentration of dynein light chain-which links cargoes to the motor complex-augments minus-end motion of mRNPs in a perduring fashion and leads to weak apical enrichment of endogenous Kr RNA (ref. 8). Nonetheless, we cannot rule out the possibility that an RNA localization signal, through Egl and BicD, also enhances minus-end-directed transport by increasing the activity of the additional dynein-dynactin it recruits, or even by promoting its coordination with other motor complexes on the mRNP. This last scenario could explain the increases in plus-end run lengths observed for localizing versus non-localizing RNA populations (Fig. 2b,c,g,h and Supplementary Table S1). Work on other mRNA-targeting systems has provided evidence that individual localizing mRNPs can contain multiple RNA molecules 23-26 , with intermolecular RNA-RNA interactions obligatory for asymmetric localization in at least one case 27 . We therefore investigated in vitro whether localizing mRNPs recruit more dynein-dynactin copies than non-localizing mRNPs because they contain more RNA molecules. We produced K10 or K10 mut RNAs body-labelled with Cy3 and determined the mean number of dyes per transcript. Following assembly of mRNPs on these RNA preparations, stepwise photobleaching was used to count the number of Cy3 decay steps within individual microtubule-associated motor complexes ( Fig. 4a and Supplementary Fig. S4). 
Surprisingly, the mean numbers of Cy3 decay steps observed for K10 or K10 mut mRNPs fit best with a single RNA molecule being present in most complexes (Fig. 4a). Almost exclusive one-step photobleaching of K10 mRNPs containing RNA labelled at the 3′ end with a single Cy3 also supports this notion (Fig. 4b), as does our finding that when Alexa-488-labelled preparations of a localizing RNA were mixed with Cy3-labelled preparations of the same RNA we never observed motile mRNPs containing both fluorophores (Fig. 4c-e). These experiments demonstrate that localization signals increase the average copy number of dynein-dynactin complexes recruited to individual RNA molecules and that RNA multimerization is not obligatory for motility or overall directional transport.
We next used a non-enzymatic in situ hybridization technique capable of detecting single RNA molecules in Drosophila embryos 28 to investigate whether an individual copy of an RNA species is present in localizing mRNPs in vivo. We focused on the h mRNA, which, unlike K10, is abundantly expressed in the blastoderm. Embryos were hybridized simultaneously with two antisense probes against the same sequence in h, which were labelled with different haptens and detected with antibodies conjugated to different fluorophores (Fig. 5a-e). Co-localization of cytoplasmic signals from these competitive probes was detected only rarely both apically (Fig. 5c-e and o) and in more basal regions ( Supplementary Fig. S5a,b). Overlap of signals from these probes could, however, be detected at the sites of nascent transcription in the nuclei (Fig. 5n), where multiple copies of the same transcript accumulate 29 .
To determine whether the relatively low degree of co-localization of the competitive probe signals in the cytoplasm was significant we simulated a random distribution of dots by rotating the image of the signal derived from one of the probes in the apical cytoplasm by 90° and overlaying it on the original orientation of the image derived from the other probe 28 . Very similar proportions of dual-coloured puncta were observed in the rotated and original configurations (Fig. 5o), indicating that the co-localization was attributable to chance overlap of the signals. The incidence of immediately adjacent, but not co-localized, signals from the competitive probes was also the same in the rotated and original configurations ( Supplementary Fig. S5c), providing evidence against the existence of very large mRNPs containing multiple copies of h that can be clearly resolved.
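The rotation control described above can be sketched computationally. This is a minimal illustration with synthetic single-pixel puncta masks (the 64 × 64 field and the puncta counts are invented for the example), not the authors' analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_puncta_mask(shape, n_puncta, rng):
    """Scatter up to n_puncta single-pixel puncta on an empty field."""
    mask = np.zeros(shape, dtype=bool)
    ys = rng.integers(0, shape[0], n_puncta)
    xs = rng.integers(0, shape[1], n_puncta)
    mask[ys, xs] = True
    return mask

def colocalization_fraction(chan_a, chan_b):
    """Fraction of channel-A puncta overlapping a channel-B punctum."""
    n_a = chan_a.sum()
    return (chan_a & chan_b).sum() / n_a if n_a else 0.0

# Two independently scattered channels stand in for the competitive probes.
a = random_puncta_mask((64, 64), 120, rng)
b = random_puncta_mask((64, 64), 120, rng)

observed = colocalization_fraction(a, b)
# Rotating one channel by 90 degrees simulates a random distribution while
# preserving puncta density; a similar fraction implies chance overlap.
chance = colocalization_fraction(a, np.rot90(b))
print(f"observed: {observed:.3f}, rotated control: {chance:.3f}")
```

If the observed fraction greatly exceeded the rotated control, that would argue for genuine co-packaging rather than chance overlap.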
In contrast, we frequently detected co-localization of two probes to non-overlapping regions of h in the apical cytoplasm (Fig. 5a,g-i,o), as well as more basally (Supplementary Fig. S5a,b). This method can therefore unambiguously detect two target sites within the same cytoplasmic mRNP, should they exist. Failure to detect simultaneous hybridization of two probes targeted to the same sequence therefore indicates that individual copies of h RNA are present in most localizing mRNPs in vivo.
We also did not detect significant cytoplasmic co-localization of h with even-skipped (eve) (Fig. 5k-m,o), another apically localized pair-rule mRNA that is expressed abundantly in blastoderm embryos in a pattern that overlaps partially with that of h (ref. 30 and Fig. 5j). Although we cannot rule out the possibility that h RNA molecules are packaged into mRNPs with other localizing RNA species, this result provides evidence that apically localizing mRNPs in vivo contain a single RNA molecule. Assembling many transcript molecules into a single mRNP might intuitively seem a more efficient strategy for translocating an mRNA population. However, such a mechanism would present an additional challenge in so far as localizing and non-localizing RNA species would need to be packaged independently. Indeed, the long-standing notion that neuronal RNA transport complexes contain large numbers of transcripts has been challenged recently 31 .
We have reconstituted transport of specific mRNA species along individual microtubules in vitro and employed single-molecule-resolution measurements to shed light on the composition of transport complexes. An in vivo study of oskar mRNA transport in Drosophila oocytes has demonstrated that asymmetric RNA localization can be achieved by a random walk of a single motor species along a weakly polarized microtubule cytoskeleton 32 . Our findings provide direct evidence for an additional mechanism for RNA targeting in which localization signals control sorting by regulating the net directionality of bidirectional motor complexes on individual microtubules. We propose that this is associated with modest differences in the number of motors assembled on individual mRNA molecules. Our findings raise fascinating questions about how dynein-dynactin and the unidentified plus-end motor(s) 9 are bound to localizing and non-localizing mRNA molecules and how their activities are orchestrated in time and space.

Methods

Fluorescent RNA synthesis and injection. Capped RNA was synthesized as described previously, using T7- or T3-mediated transcription 8 . For in vitro and in vivo RNA motility assays, a 1:3 ratio of fluorescent UTP (Alexa-488-UTP (Invitrogen) or Cy3-UTP (PerkinElmer))/unlabelled UTP was used in the reaction, typically resulting in an average of ∼4 fluorophores per 1,000 nucleotides (nt) of RNA. For photobleaching experiments, either a ratio of 1:39 Cy3-UTP to unlabelled UTP was used, resulting in an average of ∼0.4 fluorophores per 1,000 nt, or, following transcription, non-fluorescent RNAs were labelled at the 3′ end with a single dye using pCU-Cy3 (gift from E. Miska, Gurdon Institute, UK), as described previously 37 .

Wild-type K10 RNA and K10 mut RNA were as described previously 11 , and corresponded to the entire 1,432-base-pair (bp) 3′ UTR and an 860-bp portion of the 3′ genomic sequences. K10 mut was referred to as K10 scrambled in ref. 11, and has the constituent bases of the apical localization signal randomized. h and Kr RNAs corresponded to the full-length complementary DNAs (∼1,900- and ∼2,300-bp, respectively). Injection of embryos with fluorescent RNA followed by time-lapse imaging and automatic particle tracking was as described previously 8 .
Microtubule polymerization and adsorption to glass. Tubulin monomers, of porcine or bovine origin (Cytoskeleton), were polymerized according to the manufacturer's instructions. Biotinylated microtubules for capturing motors were polymerized using a 1:10 ratio of biotinylated-tubulin to tubulin for 5 min. Fluorescein-labelled microtubules were polymerized using a 1:25 ratio of fluorescein-tubulin to tubulin for 15 min. Polarity-marked microtubules were produced by first allowing polymerization of a 1:5 ratio of fluorescein-tubulin to tubulin for 5 min. Unlabelled tubulin was then added to the mixture to dilute the fluorescent tubulin to a 1:50 ratio and the mixture incubated for a further 15 min, during which time predominant elongation in the plus-end direction resulted in a relatively bright minus-end segment. Minus-end labelling with this method was confirmed with microtubule gliding assays using purified mammalian dynein-dynactin complexes (provided by A. Carter, MRC-LMB, UK). Microtubules labelled with both Cy5 and biotin were polymerized with a 1:10:25 ratio of Cy5-tubulin/biotinylated-tubulin/unlabelled tubulin for 15 min. Following polymerization, microtubules were diluted in PEM polymerization buffer (Cytoskeleton) containing taxol to a final concentration of 20 µM. Unincorporated tubulins were removed by ultracentrifugation in a 60% glycerol cushion buffer.
Microtubules were adsorbed to the coverslip in a flow cell in one of two ways, with indistinguishable results in RNA motility assays: they were either allowed to associate nonspecifically with the glass or they were associated through a biotin-streptavidin-biotin link 38 (using microtubules labelled with both fluorescein and biotin). Washes with PEM buffer containing 20 µM taxol were used to remove unbound microtubules after 5 min, followed by blocking of nonspecific binding sites on the glass surface with 0.5 mg ml −1 bovine serum albumin (BSA).
In vitro RNA motility assay. Extracts were produced by homogenizing 0-6 h embryos in DXB buffer (30 mM HEPES at pH 7.3, 50 mM KCl, 2.5 mM MgCl 2 , 250 mM sucrose, 5 mM dithiothreitol, 10 µM MgATP and 2 × Complete (EDTA-free) protease inhibitors (Roche)) as described previously 8 using 4 ml of buffer per gram of embryos. Typically the supernatant derived from 50 mg of embryos was mixed with 100 ng of in vitro-transcribed fluorescently labelled mRNA and 50 µg biotinylated microtubules. This mixture was agitated for 5 min at room temperature to allow recruitment of motor complexes to the fluorescent RNA. Cargo-motor complexes bound to biotinylated microtubules were captured by incubation with 350 µg of streptavidin-coated magnetic beads (Bang Laboratories) for a further 5 min at room temperature. Magnetic beads were washed 3-4 times in DXB buffer, followed by elution of cargo-motor complexes from the microtubules by incubation for 2 min in assay buffer (30 mM HEPES/NaOH, 5 mM MgSO 4 , 1 mM EGTA, 1 mM dithiothreitol and 0.5 mg ml −1 BSA) containing 4 mM MgATP.
The released fraction was introduced together with an antibleach system (0.5 µg ml −1 glucose oxidase, 470 units ml −1 of catalase, 10 mM dithiothreitol and 15 mg ml −1 glucose) into a flow chamber with fluorescein-labelled microtubules pre-adsorbed to the coverslip. Microtubules and Cy3-labelled RNA molecules were visualized at room temperature with a TIRF microscope (Olympus) equipped with a ×100 objective (PlanApo, 1.45 NA TIRFM). Images of microtubules and RNAs were acquired sequentially with a 300 ms exposure time for each channel, using an iXon EM+DU-897 camera (Andor). The movement of mRNPs on polarity-marked microtubules was analysed using kymographs generated in ImageJ.

For Fig. 4c-e, 100 ng each of Alexa-488-labelled K10 and Cy3-labelled K10 or 100 ng each of Alexa-488-labelled h and Cy3-labelled h RNA were added to the embryo extract before capturing motor complexes on microtubules and releasing them into a flow cell containing Cy5-labelled microtubules. Images of microtubules were captured at the beginning and end of a series of alternating images of the Alexa-488 and Cy3 signals.
RNA affinity purifications.
Uncapped RNAs were transcribed from the pTRAPv5 vector (Cytostore), resulting in the incorporation of two 5′ copies of the S1 streptavidin-binding aptamer. RNAs were tethered to streptavidin magnetic beads (Invitrogen) and affinity purifications from embryo extracts, including elution of RNA-protein complexes with biotin, were carried out as described previously 4,8 . Aptamer-linked K10 and K10 mut RNAs were ∼1,200 nt long and contained most of the 3′ UTR; our preliminary studies indicated that substantially longer RNAs are not efficiently coupled to beads.
Stepwise photobleaching. Cy5-microtubules were bound to the coverslip in the flow chamber using a biotin-streptavidin-biotin link as described above. Motor complexes, captured from extracts from transgenic embryos in the presence of Cy3-RNAs, were released from the biotinylated microtubules with 2 mM MgATP. MgADP (2 mM) was added to the released fraction just before its addition to the flow cell to promote binding of the motor complexes to microtubules.
After a 5 min incubation of cargo-motor complexes with the microtubules, unbound complexes were washed off with assay buffer containing a tenfold lower concentration of the antibleach system than used in the motility assay and no nucleotide. The absence of nucleotide was designed to inhibit dissociation of motors from microtubules; quantification of the number of GFP molecules in mRNPs undergoing motion was precluded by fluctuations in GFP fluorescence intensity and frequent dissociation of RNA-motor complexes from the microtubules. Fluorescein and Cy3 fluorophores were illuminated sequentially for 300 ms and images were captured as described above. Cy5-labelled microtubules were imaged at the beginning and end of filming. Solis software (Andor) was used to plot the change over time in fluorescence intensities of Cy3 and GFP signals co-localized on microtubules.
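The step-counting logic behind these photobleaching experiments can be illustrated on a toy trace. The sketch below is a crude thresholded-window detector on synthetic data, standing in for a proper step-fitting algorithm such as the Stepfinder used by the authors; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def count_bleach_steps(trace, step_size, window=10, min_drop_frac=0.5):
    """Count downward intensity steps: at each frame, compare the mean of
    the preceding and following windows and flag drops exceeding a fraction
    of the expected single-fluorophore step size."""
    n = trace.size
    drops = np.array([trace[i - window:i].mean() - trace[i:i + window].mean()
                      for i in range(window, n - window)])
    above = drops > min_drop_frac * step_size
    # Each contiguous above-threshold run corresponds to one bleaching event.
    return int(above[0]) + int(np.count_nonzero(above[1:] & ~above[:-1]))

# Synthetic GFP trace: three fluorophores bleaching one by one, plus noise.
step = 100.0
levels = np.concatenate([np.full(50, 3 * step), np.full(50, 2 * step),
                         np.full(50, 1 * step), np.full(50, 0.0)])
trace = levels + rng.normal(0.0, 5.0, levels.size)

n_steps = count_bleach_steps(trace, step)
print("detected steps:", n_steps)  # → 3
```

A real step-fitting algorithm additionally estimates step positions and sizes rather than assuming a known step size in advance.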
Primary antibodies. Primary antibodies were: rat anti-Dmn (provided by R.

In situ hybridization. DIG- or biotin-labelled antisense probes were prepared by SP6-mediated in vitro transcription from PCR-generated templates using 10× RNA labelling mixes (Roche). hA and hB corresponded to nucleotides 1,291-1,721 and 1,761-2,193 of the h mRNA (Genbank: NM_001014577), respectively.
For each embryo, two 15 µm × 15 µm images within expression domains were used to give a value for percentage of overlap of the biotin-derived signal with the DIG-derived signal per embryo using manual analysis in ImageJ (NIH). To control for any systematic errors in the alignment of the signals from red and green channels, in situ hybridization experiments always included a set of embryos hybridized with a single h 3′ UTR antisense probe labelled with both DIG and biotin. According to the manufacturer's instructions, DIG-UTP and biotin-UTP are typically incorporated once per 20-25 nt of probe using the procedure employed. Each hapten-bound primary antibody is expected to be bound by multiple polyclonal secondary antibodies and, according to the manufacturers, each Alexa-488- or Alexa-555-conjugated secondary antibody molecule contains, respectively, ∼4 or 6 dye molecules. Although the efficiency of antibody binding is likely to be substantially lower than maximal, there is still scope for many fluorophores to be present on a single RNA molecule. Consistent with the ability of the microscope to detect a small number of primary antibodies bound to a puncta containing a single h RNA molecule, we were able to detect discrete puncta on the surface of blastoderm nuclei using a Nup214 antibody 45 (gift from C. Samakovlis, Stockholm University, Sweden) and Alexa-conjugated secondaries. These puncta presumably derive from individual nuclear pore complexes, which are each expected to contain eight copies of the Nup214 protein (ref. 20 and citations therein).

Amrute-Nayak and Bullock, Figure S2. Red line shows output of the step-fitting algorithm Stepfinder; predicted bleaching steps are shown by red numbers. Traces i and ii are from Cy3-K10-associated complexes; trace iii is from a Cy3-K10 mut -associated complex.
(b, b') Comparison of fluorescent lifetime of GFP signals exhibited by GFP::Dlic (b) or GFP::Dmn (b') with intermittent or continuous illumination with a 488-nm laser (number of mRNPs analyzed are shown in parentheses). GFP puncta were associated with microtubules and Cy3-K10 RNA. For intermittent illumination of the GFP signal, chambers were exposed alternately to the 488-nm (300 ms) and 561-nm lasers (300 ms). Left- and right-hand panels show distributions and mean, respectively. Fluorescent lifetime was significantly influenced by the degree of illumination (Mann-Whitney test), indicating that it was a function of photobleaching events and did not just represent dissociation of motor components from the complex (the rate of which would not be affected by illumination frequency). Error values are s.e.m. in this and other panels. (c) Distribution of decay steps exhibited by puncta of tailless Khc (Kinesin-1 heavy chain)::GFP associated with microtubules (n = number of puncta analyzed). Transgenic embryo extracts expressing this fusion protein, which is defective in cargo binding due to the absence of the tail, were used for capture of motor complexes. The majority of puncta exhibited two photobleaching events. Previous work indicated that tailless Khc forms a dimer on microtubules 9,10 . A minor proportion of one-step photobleaching events is consistent with analyses of other fluorescent fusion proteins that are predicted to be obligate dimers 11,12 . Such events are likely due to some GFP bleaching before data capture or a proportion of GFP molecules that are not fluorescently active. Instances of three or four tailless Khc::GFP bleaching steps presumably represent adoption of a higher oligomeric state by a proportion of these molecules or two tailless Khc dimers whose proximity on a microtubule means they cannot be resolved. 
(d) Estimation by near-infrared western blotting of the proportion of total Dlic or Dmn that is GFP-tagged in the fractions injected into flow cells (i.e. material released from biotinylated microtubules incubated with GFP::Dlic or GFP::Dmn extracts (upper and lower panel, respectively)). Fluorescent intensity of the bands was measured with Andor Solis software. Means were calculated from three independent experiments for each fusion protein; the value for each experiment was determined by averaging the signals from two to three lanes loaded with the same sample. Images show three independent loadings of the same experiment. To approximate the mean number of total Dlic and Dmn copies on localizing and non-localizing mRNPs we divided the mean number of GFP photobleaching steps in figure 3e by the estimated proportion of total Dlic or Dmn that is labelled with GFP in the material injected into flow cells (note that GFP-tagged versions of these proteins had a similar ability to their wild-type counterparts to be incorporated into mRNPs as judged by their association with RNAs immobilized on an affinity matrix). For Dmn, the mean copy numbers estimated by this method for K10 and K10 mut mRNPs were 8.6 ± 2.3 and 5.6 ± 1.3, respectively. For Dlic, the respective values were 7.2 ± 2.2 and 4.2 ± 0.5 for K10 and K10 mut mRNPs. Based on the analysis of other multi-subunit complexes by photobleaching [13][14][15] , the values are likely to be a slight underestimate of the average number of Dlic and Dmn per mRNP. This is due to the cumulative probability of individual GFPs in the complex not being visible in photobleaching experiments due to pre-experiment bleaching or them not being excitable. There are reportedly four Dmn molecules per dynein-dynactin complex 7,11,16,17 . 
Our estimates of Dmn copy number on mRNPs are therefore consistent with non-localizing mRNPs containing, on average, one or two dynein-dynactin complexes, with the presence of the localization signal increasing the proportion of mRNPs that recruit additional copies. The copy number of Dlic within dynein-dynactin has not been determined precisely, although it is estimated to be between two and four 18 . Our estimates of Dlic copy number on K10 and K10 mut mRNPs are therefore also consistent with non-localizing mRNPs containing one or two dynein-dynactin complexes, with an ~70% increase in average copy number per mRNP elicited by the localization signal.

(e) Western blot for Dlic, showing that more dynein is recruited from wild-type embryo extracts to K10 RNA than K10 mut RNA populations. RNAs were fused to a streptavidin-binding aptamer 8 and coupled to streptavidin-coated magnetic beads. Assay was performed in 50 mM salt. Extract lane (Ext.) represents 2% of the amount added to the pulldowns.

Amrute-Nayak and Bullock, Figure S4. Cy3 body-labelled K10 RNA associated with Dlic::GFP. The degree of body labelling of RNAs with Cy3 was much lower than in the motility assay in order to simplify quantification of dye number per mRNP; Cy3 mol/RNA mol = mean number of Cy3 dyes per RNA molecule, determined with a spectrophotometer. Note that the lower the Cy3 mol/RNA mol value, the greater the proportion of RNA molecules that will be unlabelled and not visible in our TIRF experiments (hence the differences between the mean number of decay steps expected for a single RNA molecule per mRNP (Fig. 4a) and the number of Cy3 dyes per RNA molecule in the body-labelled preparation).

Amrute-Nayak and Bullock, Figure S5. Percentage of probe-derived signals that co-localized with a DIG-derived probe signal (calculated from 6 embryos per experimental condition). 
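The relationship between dye loading and expected decay steps in the Figure S4 experiment can be made concrete. Assuming (our assumption, not stated in the legend) that the number of Cy3 dyes per transcript is Poisson-distributed, the mean step count among visible mRNPs (those with at least one dye) is λ/(1 − e^(−λ)), which differs measurably between one- and two-RNA mRNPs even at low labelling:

```python
import math

def expected_visible_steps(dyes_per_rna, n_rna):
    """Mean dye count among mRNPs with >= 1 dye, assuming Poisson labelling
    with dyes_per_rna dyes per transcript and n_rna transcripts per mRNP."""
    lam = dyes_per_rna * n_rna
    return lam / (1.0 - math.exp(-lam))

# Illustrative low-labelling value of 0.4 dyes per RNA molecule:
one_rna = expected_visible_steps(0.4, 1)   # expected steps if 1 RNA per mRNP
two_rna = expected_visible_steps(0.4, 2)   # expected steps if 2 RNAs per mRNP
print(f"1 RNA: {one_rna:.2f} steps; 2 RNAs: {two_rna:.2f} steps")
```

This also shows why the expected mean decay steps exceed the raw dyes-per-RNA value: unlabelled transcripts are invisible and drop out of the average.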
***, p < 0.001 (t-test), compared to co-localization of hA and hB signals in the same apical-basal region of the cytoplasm; error bars in this and other panels are s.e.m. The degree of co-localization of hA and hB was not significantly different in the apical and sub-apical regions. We observed one event of co-localization of a hB-biotin puncta with a hB-DIG puncta in a sub-apical region, from a total of 96 hB-biotin puncta in 6 embryos. This may reflect chance overlap of signals, the probability of which is reduced compared to the apical cytoplasm due to decreased density of puncta in the cytoplasm (note that the rotation control is not applicable for the sub-apical images due to the presence of the nuclei in the sections). Alternatively, a small subset of h transport complexes may contain more than one h RNA molecule. Note that the efficiency of hybridization in this series of experiments was greater than in the series of experiments shown in figure 5, as revealed by the increased percentage of overlap of hA and hB signals in the apical cytoplasm. (c) Quantification of the mean percentage of hB-biotin puncta in the apical cytoplasm that abut (but do not co-localize with) a hB-DIG puncta in the original images and in the control when the hB-DIG signal is rotated to simulate a random distribution (n = 6 embryos). Similar frequencies of adjacent puncta in the original and rotated images argue against the existence of very large mRNPs containing multiple copies of h that can be clearly resolved. Data are derived from the experiment in a and b.

Amrute-Nayak and Bullock, Figure S6.
On Cofibrations of Permutative categories
In this note we introduce a notion of free cofibrations of permutative categories. We show that each cofibration of permutative categories is a retract of a free cofibration.
Introduction
A permutative category is a symmetric monoidal category whose associativity and unit natural isomorphisms are identities. Permutative categories have generated significant interest in topology. An infinite loop space machine was constructed on permutative categories in [May78]. A K-theory (multi-)functor from a multicategory of permutative categories into a symmetric monoidal category of symmetric spectra, which preserves the multiplicative structure, was constructed in [EM06a]. In [EM06b], the K-theory of [EM06a] was enhanced to a lax symmetric monoidal functor. It was shown in [Man10] that permutative categories model connective spectra.
Every symmetric monoidal category is equivalent (by a symmetric monoidal functor) to a permutative category. The category of symmetric monoidal categories SMCAT does NOT have a model category structure, however its subcategory of permutative categories and strict symmetric monoidal functors Perm can be endowed with a model category structure. The category Perm is isomorphic to the category of algebras over the (categorical) Barratt-Eccles operad. Using this fact, a model category structure follows from [BM07] and [Lac07, Thm. 4.5]. This model category structure is called the natural model category structure of permutative categories.
The main objective of this note is to identify a class of cofibrations in the natural model category Perm, called free cofibrations such that every cofibration in Perm is a retract of a free cofibration. A useful property of free cofibrations is that cobase changes along a free cofibration preserve acyclic fibrations in the natural model category Perm. This property allows us to prove that the natural model category Perm is left proper.
Cofibrations in Perm and left properness
In this note we define a class of maps called free cofibrations in the natural model category of permutative categories Perm. We show that a strict symmetric monoidal functor is a cofibration in Perm if and only if it is a retract of a free cofibration. Using this characterization of cofibrations we will show that the natural model category Perm is left proper. A characterization of cofibrations in Perm was formulated, purely in terms of object functions (which are monoid homomorphisms) of the underlying strict symmetric monoidal functor, in [Sha20]. In order to define free cofibrations, we will start by reviewing some basic notions of permutative categories:

Date: February 25, 2021.
Definition 2.1. A symmetric monoidal category is called a permutative category or a strict symmetric monoidal category if it is strictly associative and strictly unital.
Remark 1. A permutative category is an internal category in the category of monoids.
We recall that the forgetful functor U : Perm → Cat has a left adjoint F : Cat → Perm.
Definition 2.2. A monoid M is called a free monoid if there exists a (dotted) lifting monoid homomorphism whenever we have the following (outer) commutative diagram of monoid homomorphisms: where p is a surjective monoid homomorphism and * is a zero object in the category of monoids.
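The lifting diagram referenced in Definition 2.2 did not survive extraction; a plausible reconstruction (sketched in tikz-cd, with the zero monoid ∗ in the top-left corner and p a surjection, as the definition describes) is:

```latex
% Conjectural sketch of the lifting square in Definition 2.2;
% the labels f, L and the corners N, N' are our guesses.
\begin{tikzcd}
\ast \arrow[r] \arrow[d] & N \arrow[d, two heads, "p"] \\
M \arrow[r, "f"'] \arrow[ru, dashed, "L"] & N'
\end{tikzcd}
```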
Definition 2.3. A free cofibration of permutative categories is a (strict symmetric monoidal) functor i : A → C whose object function is the inclusion Ob(i) : Ob(A) → Ob(A) ∨ M , where M is a free monoid and the coproduct is taken in the category of monoids.
The next proposition presents the desired characterization of cofibrations:

Proposition 2.4. A strict symmetric monoidal functor F : C → D is a cofibration in Perm if and only if it is a retract of a free cofibration by a map that fixes C.
Proof. Let us first assume that F is a retract of a free cofibration i : E → M . We observe that the object function of a free cofibration has the left lifting property with respect to all surjective monoid homomorphisms, therefore each free cofibration is a cofibration in Perm. A retract of a cofibration is again a cofibration. Thus, F is a cofibration in Perm.
Conversely, let us assume that F is a cofibration in Perm. We have the following (outer) commutative diagram in the category of monoids, where F m (Ob(D)) is the free monoid generated by the set Ob(D), i is the inclusion into the coproduct and p = Ob(F ) ∨ ε; the summand ε : F m (Ob(D)) → Ob(D) is the counit of the reflection: Since the right vertical homomorphism of monoids is surjective and F is a cofibration by assumption, there exists a (dotted) lifting homomorphism L which makes the whole diagram commutative. Thus Ob(F ) is a retract of the inclusion i in the category of monoids. We will construct a strict symmetric monoidal functor I : C → E whose object function is the inclusion i and show that F is a retract of I. We begin by constructing the category E: the tensor product is uniquely determined by the monoid structures on Ob(E) and Mor(E).
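The (outer) commutative diagram in this step is missing from the extracted text; given the description of i and p, it is plausibly the following square, with the dotted lift L exhibiting Ob(F) as a retract of i:

```latex
% Conjectural reconstruction of the lifting square; notation as in the proof.
\begin{tikzcd}[column sep=large]
\mathrm{Ob}(C) \arrow[r, "i"] \arrow[d, "\mathrm{Ob}(F)"']
  & \mathrm{Ob}(C) \vee F_m(\mathrm{Ob}(D))
    \arrow[d, two heads, "p \,=\, \mathrm{Ob}(F) \vee \varepsilon"] \\
\mathrm{Ob}(D) \arrow[r, equal] \arrow[ru, dashed, "L"] & \mathrm{Ob}(D)
\end{tikzcd}
```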
The commutative diagrams (1), (2) and the definition of the symmetry natural transformation (3) together imply that there is a strict symmetric monoidal functor P : E → D whose object homomorphism is p and morphism homomorphism is p 1 . Further, P is surjective on objects and also fully faithful. This implies that P is an acyclic fibration in the natural model category Perm. Now we construct the free cofibration I : C → E mentioned above. The object homomorphism of I is the inclusion i : Ob(C) → Ob(C) ∨ F (Ob(D)). The morphism homomorphism of I is defined as follows: Mor(I) := Mor(F ). In other words, I(f ) = F (f ) for each morphism f ∈ Mor(C). Now we have the following (outer) commutative diagram in Perm: Since F is a cofibration and P is an acyclic fibration in the natural model category Perm, there exists a (dotted) lifting arrow L which makes the entire diagram commutative. This implies that F is a retract of the free cofibration I in the natural model category Perm.

In this section we show that the natural model category of permutative categories Perm is left proper. We recall that a model category is left proper if the cobase change of a weak-equivalence along a cofibration is again a weak-equivalence. We will first show that the cobase change of a weak-equivalence along a free cofibration is a weak-equivalence. Using this intermediate result, we will prove the left properness of Perm.
Let G : A → B be an acyclic fibration in Perm and let i A : A → C be a free cofibration; then the object monoid of C can be written as a coproduct Ob(A) ∨ V , where V is a free monoid. We observe that the following commutative square is coCartesian: We will construct the following pushout square in Perm:

Remark 2. The above characterization of acyclic fibrations implies that S : B → A is a left-adjoint-right-inverse of G : A → B. This means that ε S : SG ∼ = id A is a counit of an adjoint equivalence whose unit η : GS = id B is the identity natural transformation. This further implies that Gε S · ηG = id G . In other words, for each a ∈ A, we have the following equality: Since the unit natural transformation η is the identity, Gε S = G.
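The two squares referenced above are missing from the extracted text. A plausible reconstruction, assuming the first is the object-level pushout (computed as a coproduct of monoids) and the second is the cobase-change square in Perm whose construction occupies the rest of the section:

```latex
% Conjectural reconstructions; V is the free monoid of the free cofibration.
\begin{tikzcd}
\mathrm{Ob}(A) \arrow[r] \arrow[d, "\mathrm{Ob}(G)"'] & \mathrm{Ob}(A) \vee V \arrow[d] \\
\mathrm{Ob}(B) \arrow[r] & \mathrm{Ob}(B) \vee V
\end{tikzcd}
\qquad
\begin{tikzcd}
A \arrow[r, "i_A"] \arrow[d, "G"'] & C \arrow[d] \\
B \arrow[r] & B \sqcup_A C
\end{tikzcd}
```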
Remark 3. Let b 1 , b 2 be a pair of objects in B. Since ε S is a monoidal natural transformation, we have the following commutative diagram: Thus we have shown that λ S = ε S S.
This further implies that
The unital symmetric monoidal functor S gives us a unital symmetric monoidal functor S ∨ F (C; V ) : B ∨ F (C; V ) → A ∨ F (C; V ), where F (C; V ) is the full permutative subcategory of C whose object set is the (free) monoid V . We observe that S ∨ F (C; V ) is a section of the strict symmetric monoidal functor G ∨ F (C; V ). Hence the functor G ∨ F (C; V ) is an acyclic fibration in the natural model category Perm by [Sha20, Cor. 3.5(3)].
We observe that the free cofibration i A factors as follows:

Remark 4. The following commutative square is coCartesian: We observe that the object monoid of C is the same as the object monoid of A ∨ F (C; V ), namely the coproduct Ob(A) ∨ V . This implies that for each c ∈ Ob(C) there is the following isomorphism in C: Now it follows from [Sha20, Prop. 2.7] that there exists a (uniquely defined) functor S C : C → C and a natural isomorphism δ C : id C ∼ = S C . The functor S C is defined on objects as follows: The following lemma now tells us that S C is a unital symmetric monoidal functor and δ C is a monoidal natural isomorphism:

Lemma 3.1. Given a unital oplax symmetric monoidal functor (F, λ F ) between two symmetric monoidal categories C and D, a functor G : C → D, and a unital natural isomorphism α : F ∼ = G, there is a unique natural isomorphism λ G which enhances G to a unital oplax symmetric monoidal functor (G, λ G ) such that α is a monoidal natural isomorphism. If (F, λ F ) is unital symmetric monoidal then so is (G, λ G ).
Proof. We consider the following diagram: and define λ G as follows: This composite natural isomorphism is the unique natural isomorphism which makes α a unital monoidal natural isomorphism. Now we have to check that λ G is a unital monoidal natural isomorphism with respect to the above definition. Clearly, λ G is unital because both α and λ F are unital natural isomorphisms. We first check the symmetry condition [Sha20, Defn. 2.4 OL. 2]. This condition is satisfied because the following composite diagram commutes. The condition [Sha20, Defn. 2.4 OL. 3] follows from the following equalities.
If F = (F, λ F ) is a symmetric monoidal functor then so is G = (G, λ G ) because (6) is a natural isomorphism.
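The diagram defining λ G in the proof above is missing from the extracted text; assuming λ F (c 1 , c 2 ) : F (c 1 ⊗ c 2 ) → F (c 1 ) ⊗ F (c 2 ) is the oplax comparison map, the composite is plausibly:

```latex
% Conjectural form of the composite defining \lambda_G.
\lambda_G(c_1, c_2) \;:=\;
  (\alpha_{c_1} \otimes \alpha_{c_2}) \circ \lambda_F(c_1, c_2)
  \circ \alpha^{-1}_{c_1 \otimes c_2}
  \;:\; G(c_1 \otimes c_2) \longrightarrow G(c_1) \otimes G(c_2).
```

With this definition, naturality of α makes α a monoidal natural isomorphism by construction, matching the uniqueness claim in the proof.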
The section S ∨ F (C; V ) provides us with a unital symmetric monoidal functor i A,V • (S ∨ F (C; V )) : B ∨ F (C; V ) → C which we denote by S F . The unital symmetric monoidal functor S F has the following Gabriel factorization: is a permutative category structure. Also, by the same lemma, Γ is a strict symmetric monoidal functor.
Remark 5. The following diagram of unital symmetric monoidal functors is commutative: The above commutative diagram implies that for each object z ∈ G(S F ), λ S F (z) = λ SC (z).
We claim that there exists a strict symmetric monoidal functor P : C → G(S F ) such that the following diagram, in Perm, is coCartesian:

The object function of the functor P is the monoid homomorphism given below. For any pair of objects c 1 , c 2 ∈ Ob(C), we observe the following equality: G(S F )(P (c 1 ), P (c 2 )) = C(S C (c 1 ), S C (c 2 )). Now we define the morphism function of P as follows, where f is a morphism in C: The functoriality of P follows from that of S C .
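In symbols, the morphism function of P can be sketched as follows (the explicit formula is our reconstruction from the identification G(S F )(P (c 1 ), P (c 2 )) = C(S C (c 1 ), S C (c 2 )) and the natural isomorphism δ C : id C ∼= S C , not taken verbatim from the source):

```latex
\[
  P(f) \;:=\; \delta_C(c_2) \circ f \circ \delta_C(c_1)^{-1}
  \;\in\; C\bigl(S_C(c_1), S_C(c_2)\bigr)
  \;=\; G(S_F)\bigl(P(c_1), P(c_2)\bigr),
  \qquad f \in C(c_1, c_2).
\]
```

By naturality of δ C this composite equals S C (f), so functoriality of P is inherited from that of S C .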
The object function of P is a monoid homomorphism, therefore P (c 1 ⊗ C c 2 ) = P (c 1 ) P (c 2 ), for each pair of objects c 1 , c 2 ∈ Ob(C). The following commutative diagram shows that P (f 1 ⊗ C f 2 ) = P (f 1 ) P (f 2 ), for each pair of maps (f 1 , f 2 ) ∈ C(c 1 , c 2 ) × C(c 3 , c 4 ):

Thus, we have defined a strict symmetric monoidal functor P which is fully faithful. Further, each object of G(S F ) is isomorphic to one in the image of P . Thus, P is an equivalence of categories.
Proof. In order to show that (7) is coCartesian, it is sufficient, in light of factorization (4) and remark 4, to show that the following commutative square is coCartesian:

We will show that whenever we have the following (outer) commutative diagram, there exists a unique dotted arrow L which makes the whole diagram commutative in Perm:

Since Ob(Γ S F ) is the identity, the object homomorphism Ob(L) has to be the same as Ob(T ) in order to make the diagram commutative, so we define Ob(L) = Ob(T ). The morphism function of L is defined as follows: L z1,z2 := R S F (z1),S F (z2) : G(S F )(z 1 , z 2 ) = C(S F (z 1 ), S F (z 2 )) → X(L(z 1 ), L(z 2 )),
for each pair of objects z 1 , z 2 ∈ Ob(G(S F )). This defines a functor L which makes the diagram above commutative (in Cat). In order to verify that L is a strict symmetric monoidal functor, it is sufficient to show that the equalities in (8) hold for each pair of maps f 1 : z 1 → z 2 , f 2 : z 3 → z 4 in G(S F ). We recall that the map f 1 f 2 is defined by the following commutative diagram:

Now it is sufficient to show that Rλ S F = id in order to establish the equalities in (8). Since the map in question is an acyclic fibration, it follows from remark 3 that Rλ S F = id. The uniqueness of the object function of L is obvious. The uniqueness of the morphism function of L can be easily checked.
The main objective of this section is to show that the natural model category Perm is left proper. The next lemma serves as a first step in proving the main result; the lemma follows from the above discussion:

Lemma 3.3. In the natural model category Perm, a pushout of a weak-equivalence along a free cofibration is a weak-equivalence.
Proof. In light of the facts that each weak equivalence in a model category can be factored as an acyclic cofibration followed by an acyclic fibration, and that acyclic cofibrations are closed under cobase change, it is sufficient to see that the cobase change of an acyclic fibration is a weak-equivalence. This follows from the discussion above.

Proof. We will show that a pushout P (F ; q) of a weak equivalence F : A → D in Perm along a cofibration q : A → B in Perm is a weak-equivalence. We consider the following commutative diagram:

Since q is a cofibration, by proposition 2.4 there exists a free cofibration r : A → C such that q is a retract of r by a map that fixes A. The top left commutative square in the above diagram is coCartesian. The map P (F ; l) is a pushout of F along the free cofibration r and is therefore a weak-equivalence by lemma 3.4. Now the result follows from the observation that the diagonal composite P → P s → P , in the above diagram, is the identity map, together with the commutativity of the above diagram.
Appendix A. Gabriel Factorization of symmetric monoidal functors
In this appendix we construct a Gabriel factorization of a unital symmetric monoidal functor between permutative categories. Our construction factors a unital symmetric monoidal functor into an essentially surjective strict symmetric monoidal functor followed by a fully-faithful unital symmetric monoidal functor.
Lemma A.1. Each unital symmetric monoidal functor F : C → D between permutative categories can be factored as follows: where Γ F is a strict symmetric monoidal functor which is the identity on objects and ∆ F is fully-faithful.
Proof. We begin by defining the permutative category G(F ). The object monoid of G(F ) is the same as Ob(C). For a pair of objects c 1 , c 2 ∈ Ob(C), we define the hom-set as follows:

The Gabriel factorization of the underlying functor of F gives us the following factorization in Cat:

We will show that the functor Γ F is strict symmetric monoidal and ∆ F is unital symmetric monoidal. We next define a symmetric monoidal structure on G(F ), which we denote by (G(F ), , γ). For any pair of objects c 1 , c 2 ∈ Ob(G(F )), we define c 1 c 2 := c 1 ⊗ C c 2 . For a pair of maps f 1 : c 1 → c 3 and f 2 : c 2 → c 4 , we define f 1 f 2 to be the following arrow:

It is easy to establish that − − is a bifunctor. Let f 3 : c 3 → c 5 and f 4 : c 4 → c 6 be another pair of arrows in G(F ). Now we consider the following commutative diagram:

The above diagram tells us that (f 3 f 4 ) • (f 1 f 2 ) = (f 3 • f 1 ) (f 4 • f 2 ), because of the composite map in the bottom row of the above diagram. The tensor product − − on G(F ) is strictly associative: on objects because the object set of G(F ) is a monoid, and on morphisms because the tensor product of morphisms in G(F ) is inherited from that in D, which is strictly associative. The symmetry natural transformation γ is defined on objects as follows: γ c1,c2 := F (γ C c1,c2 ). Let f 1 : c 1 → c 3 and f 2 : c 2 → c 4 be a pair of maps in G(F ). The following commutative diagram shows that γ is a natural transformation (each component is an isomorphism, so γ is in fact a natural isomorphism):

The following equalities verify the symmetry condition: γ c1,c2 • γ c2,c1 = F (γ C c1,c2 ) • F (γ C c2,c1 ) = F (γ C c1,c2 • γ C c2,c1 ) = id.
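For reference, the construction can be sketched in symbols as follows (a sketch; the hom-set formula agrees with the identification G(S F )(z 1 , z 2 ) = C(S F (z 1 ), S F (z 2 )) used in Section 3):

```latex
\[
  \mathrm{Ob}(G(F)) := \mathrm{Ob}(C), \qquad
  G(F)(c_1, c_2) := D\bigl(F(c_1), F(c_2)\bigr),
\]
\[
  \Gamma_F(c) := c, \quad \Gamma_F(f) := F(f), \qquad
  \Delta_F(c) := F(c), \quad \Delta_F(g) := g.
\]
```

Thus Γ F is the identity on objects, ∆ F is fully faithful by construction, and F = ∆ F • Γ F .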
This defines a permutative category structure on the category G(F ). Using the definition of the symmetric monoidal structure on G(F ), one can easily check that Γ F is a strict symmetric monoidal functor.
Including parents in inclusive practice: Supporting students with disabilities in higher education
Background: While a number of research studies have endeavoured to understand students with disabilities' experience in higher education and have recommended ways to effectively support student success, the role of parental support has been neglected. Many studies have been hampered by a limited understanding of students with disabilities and have, in particular, underestimated students' 'access to economic, social and cultural forms of capital' that caring parents provide.

Objectives: This article seeks to explore students with disabilities' experiences of parental support in the South African higher education context. The research question guiding this article is: What forms of economic, social and cultural capital do parents and extended families provide to students with disabilities to enable them to succeed in higher education?

Method: In-depth semi-structured individual and focus group interviews were conducted with 17 students with disabilities at two universities of technology. The interview transcripts were thematically analysed with a view to understanding Pierre Bourdieu's forms of capital that parents provided.

Results: The study found that while parents are not always able to provide material support, they offered rich and varied forms of social and cultural capital that enabled students with disabilities' academic success.

Conclusion: Given that parental support plays an important role in the success of students with disabilities, and that this role changes as these students become more independent, this study recommends that universities pay more attention to involving parents in the education of these students. It is hoped that putting in place appropriate forms of parental involvement can create a conducive environment for universities to provide inclusive education holistically.
Introduction
The transition from a basic schooling system to a tertiary institution often comes with mixed emotions for both students and parents. For students, it may mean independence as they emerge from the familiar home and school environment into the wider world of higher education and experience the freedom of making their own decisions. According to Lane (2017): all students, both those with disability and those without, experience transitioning to higher education as stressful - new environments, new ways of learning and meeting new people is a rite of passage for millions of young people every year. (p. 18)

However, it has been widely reported that many students are underprepared academically for higher education studies, and this is associated with the high attrition and failure rate in South African universities. The DHET (2019) report reveals that some students take about a decade to complete a qualification, as '68.8% (graduate) after 6 years of study and 78.8% (graduate) after 10 years of study' (p. 30). According to Bannink, Idro and Van Geert (2016), some students become overwhelmed by academic demands and a sense of personal autonomy when they are away from their parents and a familiar environment. Pressure and confusion manifest themselves among first-year students in particular when they need to acquaint themselves with the new environment, where they have to negotiate their social and academic spaces in an attempt to become productive members of the tertiary environment (Bonanni 2015). In this setting, the role of parents is relegated to a secondary position as students are expected to take charge of their studies and non-academic activities, and this sidelines parents (Bonanni 2015; Coccarelli 2010).
The institution enters into a contractual agreement with the student, regardless of whether the student understands it or not. Some students (first-years in particular) may not understand several important issues, such as the manner in which the higher education system functions, for example how to deregister formally or what a fee increase entails. These processes deviate from their school experiences, where parents were typically consulted. Moreover, universities' operations and fees are sanctioned by the university council and not negotiated with parents (Edelman 2013).
Many students with disabilities come from special schools where environments are conducive for their particular conditions and needs (Bonanni 2015; Kelepouris 2014; Mcgregor et al. 2016). This is not the case at universities, as the extract from the Daily Maverick (Van der Merve 2017) below reveals: [In special schools] You have your teachers, just a few in the class, receive individual attention, and then you get to university and there are hundreds of you and nobody cares about you. (p. 1)

Leaving the supportive learning environments of special schools behind is fraught with challenges for both students with disabilities and their parents. These challenges, with a specific focus on the role of parental support, are the focus of this article. It was pertinent and timeous to explore students with disabilities' experiences of parental support in higher education, as their experiences could inform policy and practice within universities. Thus, the concern of the article is to explore the forms of economic, social and cultural capital that families provide which enable students with disabilities to succeed in their endeavours in a higher education setting.
A brief overview of the literature on parental support for students with disabilities
The role of parents in higher education has attracted interest globally, and this has become evident in a growing body of literature (see Bethke 2011; Chadwick 2015; Edelman 2013; Garret 2015; Wong 2008). Currently, there are contradictory understandings of the importance of parental support to students. For example, Bethke (2011) and Chadwick (2015) found that parents' involvement in the basic education of their children can be inappropriate if it is sustained in a higher education setting, although there are cases in which such support can have positive outcomes on student performance. While there is a large body of literature on the role of parents in the lives of children with disabilities in basic education in South Africa, there seems to be a dearth of literature on the significant influence of parental support on students with disabilities' academic performance at university level (Esau 2018).
The literature suggests that there are appropriate and inappropriate forms of parental involvement (Touchette 2013). Wartman and Savage (2008) describe parental support as: Parents showing interest in the lives of their students in college, gaining more information about college, knowing when and how to appropriately provide encouragement and guidance to their student, connecting with the institution, and potentially retaining that institutional connection beyond the college years. (p. 5)

This description suggests that if parents are willing to work with universities, they could assist both their child and the university. It is natural for parents to care about the well-being of their children emotionally, financially, socially, academically and even spiritually (Chen & Ho 2012). Religion and spirituality are considered another form of identity, which links to cultural capital (Blanks & Smith 2009), as care is provided regardless of the disability status of the child. Moreover, parents continue this support even when the child is enrolled at a tertiary institution (Edelman 2013; Garret 2015). In some instances, parents become 'career counsellors' who decide on the 'perfect career' and the appropriate institution for their children. Many children resent this kind of support, but it is difficult to negate parental decisions as parents may have experience of tertiary institutions (Bethke 2011). Parental support is thus associated with a dependency effect which has merits and demerits, like any other relationship (Edelman 2013; Garret 2015).
University structures tend to be intimidating and complex for new students; therefore, parental support in the selection of appropriate academic courses and registration processes is helpful (Edelman 2013). It is common practice in South African universities to invite parents to an orientation briefing at the beginning of the year, but after this encounter there is little or no communication with parents. Moreover, the potential for a parent-university partnership has not yet been explored, probably because universities may wish to avoid parental interference at all costs (Kiyama et al. 2015). One view is that the influence of 'helicopter parents' needs to be eradicated in the interest of the students. In this regard, Vinson (2013) contends: Helicopter parents hover from the prospective admissions stage to graduation and the job market beyond - contacting presidents of universities, deans, and professors, disputing their child's grade; requesting an extension for their child; complaining their child does not receive as much praise as the parent would like; completing assignments for their child; requesting notification of grades their child received; and even attending job fairs and interviews with their child… (p. 423)

'Helicopter parents' tend to influence every stage of their children's progress socially, pedagogically and legally, which is perceived as unprofessional, unfavourable and disruptive (Edelman 2013; Garret 2015; Haines 2017; Segrin et al. 2012). Cullaty (2011) suggests that the role of parents should remain peripheral, where they should be supportive without meddling or intervening in their children's university lives, as students need to develop into responsible adults who can make their own decisions. This is because extreme parental support may have adverse effects on the development of students, thereby prolonging their transition to adulthood (Garret 2015).
Touchette (2013) and Kelepouris (2014) argue that, given the dynamics of global economics, the current generation of parents is more concerned with the future of their children compared with parents of the 20th century. Therefore, in response to the demand by parents for stronger involvement and support, some universities have launched programmes such as family weekends, parent orientations, family events on move-in day, parent newsletters, parent handbooks, parent associations and fundraising in an attempt to enhance parental involvement (Haines 2017). Well-designed programmes assist in building partnerships between universities and parents for the benefit of the students and the university, while also demarcating the boundaries of parental support.
Parental support
Parents of students with disabilities tend to be more involved with their children's university life than most other parents as the challenges of the transition from high school to higher education are more demanding for these students (Lane 2017;Swart & Greyling 2011). Entry into higher education includes finding access to information (i.e. applying, finding an institution that best accommodates a specific disability and registering), finding suitable accommodation and choosing appropriate courses (Tugli et al. 2013). Adapting to a university's demands depends on a number of factors such as character, social skills, nature of a disability, attitude, background and motivation (Strydom & Mentz 2010).
Various authors describe the barriers that students with disabilities encounter at university (Kendall 2016; Matshedisho 2010; Mutanga 2017). Lane (2017) broadly categorises these barriers as physical, attitudinal, social, cultural and political. Central to the challenges these students face are attitudinal barriers (Swart & Greyling 2011). For example, there seems to be a general lack of willingness on the part of some lecturers to provide the necessary support required by students with disabilities. Such an attitudinal position has an adverse impact on the academic performance of these students and, in some instances, even leads to failure or a high dropout rate (Riddell, Wilson & Tinklin 2002). It is thus important that students, institutions of higher learning, parents and service providers co-operate and honour their responsibility of providing appropriate support to students with disabilities (Eckes & Ochoa 2005; Lang 2013).
Several recent studies have identified a number of specific areas in which parental support in the form of economic capital is of particular importance. For example, a recent study on the financial implications of disability identified three main areas in which students need financial support: (1) care and support for survival and safety, (2) accessibility of services and (3) participation in community activities (Hanass-Hancock et al. 2017). The latter study found that costs varied depending on the required care and support for the students as well as mandatory assistive devices such students need. Students with disabilities in South Africa are eligible as recipients of funds supplied by the National Student Financial Aid Scheme (NSFAS). However, accessing such funds is fraught with challenges (Bawa 2013;Lourens 2015;Ndlovu & Walton 2016). Parents and the families of students with disabilities may thus have to carry the financial burden to close the gaps when the funding scheme is lacking.
A strong cultural form of capital is prevalent among Africans that is associated with the spirit of Ubuntu (Taderera & Hall 2017;Walton 2018). Ubuntu is when people are not only concerned with their own well-being but help to address the needs of others too. Extended families are common in African culture, thus the absence of biological parents or their inability to adequately fund a child's needs does not mean a student with disability will lack support, as family or siblings will often step in to ensure that the student is provided for (Williams 2011). As a supplementary supportive system, grandparents often become the caregivers when parents are busy, absent or deceased, although this support is not without challenges. Among the challenges that grandparents are likely to face are limited financial means, illiteracy and poor health (Bulanda & Jendrek 2016;Sampson 2015).
To date, the available literature reveals that there is a paucity of studies on parents' and families' support for students with disabilities in universities. The studies that could be traced tended to emphasise the role of the mother and generally found that 'support provided by the biological fathers was minimal' (Taderera & Hall 2017:8). Studies also found that students whose parents had received a university education had an advantage over 'first-generation' students in terms of support (Lorenzo & Cramm 2012; Williams 2011). The former group of students thus seems more likely to follow in their parents' footsteps, as they understand the challenges that might be encountered in higher education settings. Moreover, these parents will have a better knowledge of social services and nongovernmental organisations (NGOs) that their children could access for additional support, and their children know that their families have aspirations for them (Gatlin & Wilson 2016). Most students view academic success as a way of 'paying back' the investment made by parents. According to Chen and Ho (2012:317), it is a reciprocal relationship where 'parents show their love by offering possible financial, material, and psychological support for learning, while the children return love by striving for academic excellence'. Fuller et al. (2004) argue that both emotional and social forms of support are important for the academic success of students with disabilities. Many such students have a strong family culture that relies on prayer to support their academic endeavours (Kaye & Raghavan 2002). The literature suggests that while disability and student counselling units provide useful resources, emotional or spiritual support is more valued when it is obtained from those with whom students have a personal connection and who understand their backgrounds and personalities (Martinez 2015).
Students with disabilities, like most other students, create new images of themselves at university as they transform their self-images of vulnerability and dependency into images of capability, independence and maturity. They soon view themselves as adults and soon-to-be professionals as they prepare themselves for the world of work (Darling 2013). Students tend to progress through various transitional stages towards emerging adulthood (Garret 2015), and they thus want to be viewed as capable, responsible and independent persons who can make their own decisions, regardless of their disability status. As their independence increases, they no longer wish to be as dependent on parental support as before, and many even reject some forms of support (Kiyama et al. 2015). What parents should understand is that as their children reach new levels of independence and self-confidence, they should step back and engage in 'less control and more communication' (Fernández-Alonso et al. 2017:456).
Research methodology

Conceptual framework: Pierre Bourdieu's forms of capital
The conceptual framework for this study draws from Bourdieu's (1986) forms of capital as applied to higher education practices (Crozier et al. 2008;Yosso 2005) as well as to health and disability (Mithen et al. 2015;Pinxten & Lievens 2014). Bourdieu proposes three forms of capital: social, cultural and economic capital, and all three were deemed pertinent to university students with disabilities and the roles of their parents. Portes (1998:7) argues that 'economic capital is in people's bank accounts, cultural capital is inside their heads, and social capital [is] in the structure of their relationships'. The term 'capital' is typically understood as the financial resources that are available for purchasing goods and services; however, for Bourdieu there are additional symbolic forms of capital. For example, among the social groups that he studied, many valued strong neighbourhood ties, family bonds and social status as forms of capital or 'wealth'. Bourdieu terms these elements 'social capital', and argues that some societies value social capital above economic capital. Cultural capital is another form of symbolic capital and describes the knowledge resources that an individual or group has accumulated. Cultural capital may also extend to religious beliefs and spirituality, which are seen as a symbol of hope across all communities (Blanks & Smith 2009). Such symbolic and abstract connections are likely to engender strong relationships, for instance between parents, children or between siblings.
People attend university to acquire particular forms of cultural capital, such as professional knowledge and skills that can, in turn, be exchanged for economic capital. Bourdieu (1986:24) argues that the concept of capital is not necessarily limited to monetary value, but that 'the forms of capital can be converted into other forms', as in using cultural capital to acquire economic capital, using economic capital to buy books and thus gain cultural capital, or using social capital to progress in a workplace (and thus enhance economic capital). It is often more difficult for students with disabilities to acquire or 'convert' the social and cultural capital associated with higher education settings, and this can exacerbate socio-economic disadvantage as they may find it difficult to acquire gainful employment (Mithen et al. 2015). It is thus all the more important for students with disabilities to draw on what Yosso (2005) calls the 'community [of] cultural knowledge, skills, abilities and contacts possessed by socially marginalised groups that often go unrecognised and acknowledged' (2005:69). Therefore, by using these symbolic forms of capital, students with disabilities will gain maximum benefits from their higher education studies.
Unfortunately, a lack of application of Bourdieu's forms of capital in education has had the inadvertent consequence of making academic staff and administrators believe that disadvantaged students lack the forms of capital required for academic success, and this has, in some instances, encouraged 'deficit thinking' (Yosso 2005:69). Deficit thinking is the belief that students who do not succeed in their studies have personal deficiencies, that they are not intellectually capable of advancing or that they lack the motivation to learn. However, the application of Bourdieu's theory can emphasise the resources that people have, not the resources they lack (Pinxten & Lievens 2014). Against this background, this article utilises Bourdieu's theory as an appropriate theoretical lens for exploring the issue of parental and extended family support for students with disabilities in the South African higher education context.
Researchers who draw on Bourdieu's forms of capital make use of many different research approaches and methods, such as surveys, questionnaires, observations and interviews. Bourdieu himself used predominantly 'ethnomethodology' (Bourdieu 1986), which is an approach that included participant observation methods and extended in-depth interviews with research participants. These methods have enabled researchers to understand the life-worlds of the groups and individuals they have studied. Central to Bourdieu's own research studies was a theorised understanding of the social groups and practices that he studied. Thus, his research was not 'grounded', but rather theoretically motivated and informed by forms of capital. For the purposes of the present study, individual and focus group interviews were conducted with a view to understanding the support that student participants received from their parents. Drawing on Bourdieu's theory, the interview transcripts were thematically analysed and clustered according to the 'forms of capital' that emerged from the data.
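As a purely illustrative aside (not part of the study's method), the clustering of transcript excerpts under the forms of capital can be imagined as a crude first-pass keyword tagger. The keyword lists and the excerpt below are invented for illustration; real thematic analysis is interpretive work carried out by the researcher, not automated matching.

```python
# Illustrative sketch only: a hypothetical first-pass keyword tagger for
# grouping interview excerpts under Bourdieu's forms of capital. The keyword
# lists are invented; a researcher would refine codes iteratively.

CAPITAL_KEYWORDS = {
    "economic": {"money", "grant", "bursary", "fees", "wheelchair", "afford"},
    "social": {"family", "friends", "siblings", "community", "relationship"},
    "cultural": {"prayer", "values", "spirituality", "knowledge", "beliefs"},
}

def tag_excerpt(excerpt: str) -> list[str]:
    """Return the (sorted) capital forms whose keywords appear in an excerpt."""
    words = {w.strip(".,'\"").lower() for w in excerpt.split()}
    return sorted(form for form, kws in CAPITAL_KEYWORDS.items() if words & kws)

# A hypothetical excerpt, loosely echoing a participant's account:
print(tag_excerpt("My mom and I had to put money together because I get a disability grant."))
# → ['economic']
```

Such a tagger could at best suggest candidate excerpts for each theme; deciding what an excerpt actually evidences remains a qualitative judgement.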
Sampling
Seventeen final-year students with disabilities participated in this study: 11 students participated in individual, semi-structured interviews and six participated in a focus group discussion. The final-year students were purposively sampled (Creswell 2013) with the assistance of the student counselling unit and disability unit at the two higher education institutions in KwaZulu-Natal province. The type of disability and programme of study were not the foci of the study, as it concentrated on disability regardless of type and intensity. The students were initially invited using emails and WhatsApp messages. The group was quite diverse in terms of gender and the nature of their disabilities.
Semi-structured interviews and focus group as data collection methods
Semi-structured interviews are commonly used in qualitative research for their strength in allowing the researcher to gain an in-depth understanding of a phenomenon (Blandford 2013), which in this case was parental support for students with disabilities. To generate thick information and enhance the credibility of the study, a focus group discussion was also used to collect data. Six students with various forms of disabilities were invited to participate in the focus group. All ethical considerations for research of this nature were rigorously adhered to (Creswell 2009, 2013). Both data collection methods were aimed at exploring the students' experiences of parental support, and the analysis of the data was underpinned by Bourdieu's forms of capital.
Ethical consideration
It is essential to adhere to ethical considerations when conducting research with representatives of a vulnerable group such as people with disabilities (Yin 2011). The researcher thus adhered to the process for ethical approval as required by the selected universities of technology, and both granted permission for the study to proceed. The selected participants' rights to confidentiality and to withdraw from the study at any point were explained to them, the voluntary nature of their participation was emphasised and signed consent forms were procured. To adhere to the confidentiality requirement, pseudonyms are used, while real names can only be accessed by the researcher.
Findings and discussion: Parental support and 'forms of capital'
The findings revealed that students had access to rich and diverse forms of capital as their parents and extended families were generally supportive of and committed to them. The findings are grouped in categories of (1) economic capital, (2) social capital and (3) cultural capital. There was considerable overlap across these groups, but the data findings are separated for analysis purposes.
Economic capital
The first, and most obvious, form of capital that parents offered their children was economic capital in the form of financial support for their daily needs as well as for various other expenses such as a wheelchair or a motorcar. Most parents supplemented government disability grants and student bursaries. Economic capital thus includes all kinds of material resources that the students required.
From daily needs to major expenses
The data reveal how some students were financially dependent on their parents for their daily needs. Student 1 explained:

While non-disabled students generally find part-time employment in industries that typically employ students, such as restaurants and shops, students with disabilities find it difficult to obtain part-time employment, either because of transport challenges or because of the physical nature of part-time work. A study by Majola and Dhunpath (2016) highlights the difficulty that people with disabilities face when they seek gainful employment. Most students were thus dependent on their parents for their everyday expenses as well as for more expensive items.
Supplementing state-sponsored financial support
Most of the participants had access to economic capital through bursaries and study loans such as NSFAS, as well as disability grants. However, these funds were insufficient to cover the cost of living and needed to be supplemented by parents and families. Student 3 averred: 'My mom and I had to put money together because I get a disability grant from the government. The university didn't assist me, they knew about the situation from my first year. I've never been assisted with devices for my disability.' (Student 3, DUT, female) A student who participated in the focus group also found that the allocated budget was not sufficient: Some students needed to use their bursary funds to buy medication or supportive devices, such as a wheelchair. The participants found that funding from NSFAS was helpful, but there were many delays in the system that held up payment of the funds to the students, and this caused financial hardships. Bawa (2013) recorded a similar finding. In such cases parents had to make considerable sacrifices to assist their children. Most participants felt that automated wheelchairs would make their lives easier because they needed to move from their respective residences to other buildings just like any other student. Automated wheelchairs were therefore considered a basic need. Unfortunately, many students were not able to afford a wheelchair as they are very expensive: very few people can afford a device that costs about R30 000.00, and for these students this dream was unattainable. Parents who lacked economic capital because of low-paying jobs or unemployment could not assist their children in this regard.
The financial contributions made by their parents were highly appreciated by the students, and they understood that without this economic capital they would have experienced even more difficulties in the pursuit of their studies. Several studies have also highlighted the impact of the socio-economic status of parents on their children's careers (Ali et al. 2013; Esau 2018). One of the participants explained that his parents looked forward with great interest to his graduation ceremony; he understood that his academic success was his way of repaying his parents' investment in his studies. Student 4 thus defined his graduation as follows: 'the day when the investment matures.' (Student 4, MUT, male)
Cultural capital
Bourdieu (1986) proposes three kinds of cultural capital: (1) the institutionalised state (which refers to educational attainment), (2) the objectified state (which concerns the possession of cultural goods) and (3) the embodied or incorporated state (which refers to people's values, skills, knowledge and tastes). It appeared that the participants benefited from the cultural capital that their parents had instilled in them, particularly in terms of spirituality and their sense of independence.
Spiritual support from parents
Spiritual support emerged as a very important aspect of support that the students had embraced. They revealed that the spiritual support that their parents had instilled in them played an important role in sustaining their lives and therefore their studies. Student 3 explained: 'I come from a prayerful family … parents always pray that their children become better people.' (Student 3, DUT, female) One participant in the focus group agreed with the importance of spiritual support: 'When I finished high school in 2012. I was supposed to start university in 2013 and 2014 but unfortunately I felt very ill and could not start. So somewhere, somehow I lost hope and thought that may be education is not for me. But my mom prays a lot and encourages us to do so and she was like I shouldn't give up because I'm still young and I can still do it.' (Focus group, DUT, female) The data revealed that these students had strong faith in God and believed that through their parents' prayers, life would be better. They felt connected to their parents all the time. Prayer in this instance strengthened faith and hope so that the students felt secure and comforted, even in the face of adversity. Rule and Mncwango (2010, in Schoeman 2017) also found that around 63% of South Africans prayed several times a day. However, Blanks and Smith (2009) and Hartely (2004) found that religion and spirituality were not actively encouraged in higher education because of the wide diversity of religions that exist. Nonetheless, prayer was a motivating factor that propelled these students to work hard and succeed not only academically, but as courageous young people who had faced and were still facing many challenges. This finding resonates strongly with Bourdieu's forms of cultural capital.
While religion and spirituality are not directly encouraged by universities, as observed by Blanks and Smith (2009) and Hartely (2004), students' religious societies are allowed at most universities and students have a right to practise their religion of choice.
Parental aspiration as motivation for students to achieve
There are many ways of encouraging children to do well. Some encouragement need not be conveyed verbally, but may be conveyed through the standards that the family sets, which can guide and motivate children to do well in life. Student 9 described her family background as follows: 'I think the standards they have set are too high both are educated, they are graduates. My mom has a degree in Social Science or Social Work I think. My dad has a Master's degree in philosophy and had a red gown. They both graduated from the University of KwaZulu Natal.' (Student 9, MUT, female) Participants from the focus group shared similar sentiments: 'I grew up in a family that I can say everyone is highly educated, being that mom and aunt are teachers…' (Student 5, DUT, male) Another participant also mentioned: 'I come from a home where people are studying even my mom is, my cousins and I also have a sister who was at Durban University of Technology in 2014.' (Student 4, DUT, male) Parental aspiration and educational level play a crucial role in the academic performance of children (Chen & Ho 2012). Furthermore, Chen and Ho (2012:317) highlight the reciprocal relationship between parents and their children. Most parents have a vested interest in their children's education and wish for them to succeed, especially when they will be the first in the family to achieve a university qualification. In this study, the majority of the participants came from households where the parents were well educated and worked as professionals. This status encouraged the students because their parents and other family members were their role models. In most cases parents understood the machinations of university life. According to Bourdieu (1986:244), 'the scholastic yield from educational action depends on the cultural capital previously invested by the family.'
This kind of relationship is reciprocal, as Chen and Ho (2012:317) note that 'parents show their love by offering possible financial, material, and psychological support for learning, while the children return love by striving for academic excellence.'
Students' sense of independence
Some of the students seemed to have developed a very strong sense of independence and confidence in their own being. They agreed that all forms of support their parents wanted to give were welcome; however, their territory needed to be respected. Student 10 stated: Student 8 also cherished independence: 'You know I did not involve anyone in the whole process of application and registration. At the beginning of the year I came here alone since I had a provisional offer. I was up and down trying to get information like anybody else until I was accepted. I went back home to take my stuff and I could not expect my granny to come with me from all the way from home to the university, as much as she wanted to. I assured her that I would be ok. I was just phoning her about everything because I knew she was worried.' (Student 8, MUT, female) It emerged from the data that some students did not want their parents to accompany them to university (Bethke 2011), as it might create the impression that they were struggling and were different from other students. These independent students wanted to eradicate the stereotypical thinking that people with disabilities are unfit to do things on their own. A previous study also found that the self-confidence and self-image of students with disabilities improved as they became more independent (Darling 2013).
At this level, students want to build a new image of themselves, changing their image from one of vulnerability to that of capable, independent individuals as they prepare for the world of work. It is also important to acknowledge that these students are at a transitional stage, that of emerging adulthood (Garret 2015). They thus insisted that their own mode of understanding disability should change from the charity model to the social model. They did not want their parents to hover over their spaces as 'helicopter parents' who want to take over the lives of their children (Kiyama et al. 2015).
Communicating progress to parents
Although students felt that they needed space to manage their lives, they also had a sense of responsibility as they updated their parents on their progress. The data showed that they were willing to share their academic progress reports with their parents. The communication channels described above seemed important in strengthening the support the students required.
One participant even mentioned that he would be happy if the university had direct communication with the parents. The students were transparent and wanted to be trusted and supported, but from a distance. This not only ensured important social connections but also gave them the freedom to manage their lives.
Social capital
Parents are generally key members of the social network and play a prominent academic role in the lives of all students (Ferrara 2015). While they might not always be able to assist their children financially or academically, they can offer forms of social support. Although some parents of participants did not have extensive business or professional networks, they were nevertheless able to provide considerable material care and moral support to their children.
Commitment and sacrifice: The wealth of mothers
Mothers played a particular role in ensuring the well-being of their children. The following extract describes the support and care Student 3 received from her mother: Not only did Student 3's mother support her child by literally ensuring that she was able to get to her classes, but she managed to accumulate funds to purchase an automated wheelchair that helped her child to become more independent. Such extensive and compassionate maternal support was not uncommon among the interviewees. Student 6 shared the following: 'My mom had to take a month leave in order to support me after the accident to see to it that I was adjusting well to my new status of disability.' (Student 6, DUT, male) Student 8 explained how her mother assisted her with childcare: 'My mom has done a lot for me, she is even looking after my two year old son whilst I am at varsity. She takes him to a day care without which I would not be here.' (Student 8, MUT, female) The care and support offered by mothers is a rich source of social capital; it was valued by the students, and without it they would not have been able to succeed in their studies. The participants did not refer much to the role their fathers played; in some cases the father was deceased or not taking responsibility, which is consistent with the finding by Taderera and Hall (2017).
The extended family: A support network
Many students had access to a wider social network comprising family members and friends, the latter including residence roommates and peers. Some participants' parents were deceased or unable to support them because of poor health. In such instances other family members supported them, as Student 6 explained: The spirit of Ubuntu was clear in cases where orphans were able to pursue their studies with the assistance of grandmothers (Sampson 2015) and extended families. This spirit is based on a culture of taking care of others, and not only of blood relatives. The significance of the interplay of Bourdieu's 'forms of capital' becomes evident when helping an orphan: it has social, economic and cultural capital impacts.
Emotional and practical support
Most of the students admitted that it would have been difficult to cope without the emotional and practical support of their parents (or supportive others). Knowing that his parents were consistently supportive and always available was important for Student 5.
'My parents are supportive and always call to check how my exams went and the results. I remember one day I was panicking because my duly performed (permission to write exams) was very low for a certain subject. I had explained that to my mom because we talk about everything. On the day of examination she called in the morning she could feel that I was crying. I was much stressed she calmed me down and encouraged me, saying that I have worked so hard thus far and this time around I will make it again. You know what, I passed that module with 60 per cent!' (Student 5 DUT, male) Some focus group participants confirmed that while the emotional support of parents was important, they sometimes needed to be 'selective' in what they shared: 'I would say emotional support is very important, especially when it comes to your academics. In varsity we go through a lot, you meet a lot of different people, from different backgrounds and you might want to tell your parent about all the stuff you are going through. But they might not understand so you then become selective in what you share with them. And it may become difficult to cope when you do not have any emotional support from parents.' (Focus group, MUT, female) While students appreciated the support they received from their parents and acknowledged their contribution to their academic success, it also became clear that they preferred not to completely share all of their challenges with their parents. The students wanted to protect their parents from some of the distress that they were experiencing, but they also did not want their parents to feel that they were not coping with university life. It underscored the reciprocal nature of social capital.
The students also did not refer to support offered by disability or student counselling units. This finding is supported by the findings of Martinez (2015), who found that students with disabilities benefited more from personal and familial contacts than from institutional support.
Conclusion
Drawing on Bourdieu's forms of capital as a theoretical lens, this article has reported on a study that explored the forms of economic, social and cultural capital that parents and extended families provide to students with disabilities to enable them to succeed in two higher education settings. The study found that while parents struggled in the economic capital sphere, as it was costly to provide expensive items such as automated wheelchairs and other assistive technologies, they were often able to assist with more basic requirements and to supplement state provisions. The study also found that parents and extended families were able to provide rich and varied forms of cultural and social capital. For example, while economic capital was necessary for these students with disabilities to cope with the challenges they faced, it was generally the cultural and social capital that their mothers provided that formed the basis of their support. This article also suggests that universities of technology in South Africa should explore the potential of parental support for students with disabilities.
Attenuation of TRPV1 and TRPV4 Expression and Function in Mouse Inflammatory Pain Models Using Electroacupuncture
Although pain is a major human affliction, our understanding of pain mechanisms is limited. TRPV1 (transient receptor potential vanilloid subtype 1) and TRPV4 are two crucial receptors involved in inflammatory pain, but their roles in EA- (electroacupuncture-) mediated analgesia are unknown. We injected mice with carrageenan (carra) or complete Freund's adjuvant (CFA) to model inflammatory pain and investigated the analgesic effect of EA using animal behavior tests, immunostaining, Western blotting, and whole-cell recording. The inflammatory pain model mice developed both mechanical and thermal hyperalgesia. Notably, EA at the ST36 acupoint reversed these phenomena, indicating its curative effect in inflammatory pain. The protein levels of TRPV1 and TRPV4 in DRG (dorsal root ganglion) neurons were both increased at day 4 after the initiation of inflammatory pain and were attenuated by EA, as demonstrated by immunostaining and Western blot analysis. We examined DRG electrophysiological properties to confirm that EA ameliorated peripheral nerve hyperexcitation. Our results indicated that the AP (action potential) threshold, rise time, and fall time, as well as the percentage of TRPV1- and TRPV4-responsive neurons and the amplitude of their currents, were altered by EA, indicating that EA has an antinociceptive role in inflammatory pain. Our results demonstrate a novel role for EA in regulating TRPV1 and TRPV4 protein expression and nerve excitation in mouse inflammatory pain models.
Introduction
Pain, which affects more than 20% of the population worldwide, is a complicated therapeutic challenge with mechanisms that are not fully understood. Pain can be evoked by tissue damage, noxious environmental stimuli, hypoxia, acidosis, and inflammation [1,2]. Tissue damage causes the injured regions to release inflammatory mediators such as bradykinin, prostaglandins, protons, and neurotransmitters, which activate nerve terminals to initiate pain signal transduction [3].
TRPV1 is usually considered to be involved in the perception of inflammatory and thermal pain, especially pain from heat above 43 °C [4,9]. TRPV1 is highly expressed in dorsal root ganglion (DRG) neurons, especially in C-fiber neurons, and activation of TRPV1 leads to sodium and calcium influx, causing cell depolarization [10,11]. Depletion of TRPV1 results in decreased sensitivity to noxious heat and delayed responses in radiant heat and hot-plate tests [12]. Luo et al. showed the change in TRPV1 expression after CFA-induced inflammatory pain: TRPV1 protein was increased from day 1 to day 21 and reduced at day 28. Subcutaneous or intrathecal injection of the TRPV1 antagonist capsazepine (CPZ) could reliably reduce CFA-induced thermal hyperalgesia [13,14].
TRPV4 is highly associated with osmotic pressure and mechanical sensitivity and has been expressed in heterologous systems [15,16]. Mice lacking TRPV4 show diminished regulation of serum osmolarity and are less sensitive to noxious stimuli [17,18]. TRPV4 also participates in many different types of pain mediation, such as mechanical hyperalgesia and complications of vincristine chemotherapy, diabetes, alcoholism, and acquired immune deficiency syndrome therapy [19,20]. Moreover, TRPV4 mutant mice showed normal behavior in thermal tests after CFA injection, and TRPV4 also participates in carrageenan- and inflammatory-mediator-induced thermal and mechanical hyperalgesia [21][22][23].
Acupuncture is an ancient Chinese method that has been used to treat pain for more than 3000 years. However, the detailed mechanism of acupuncture's effects remains an important unresolved issue [24]. Several studies have shown that injection of the local anesthetic procaine inhibits the analgesic effect of acupuncture [25][26][27][28][29].
Recently, several studies revealed that TRPV1 and TRPV4 are both involved in mechanical and thermal hyperalgesia [13,14,[21][22][23], but few reports have examined the relationship between acupuncture and TRPV1 or TRPV4. We investigated whether TRPV1 and TRPV4 are key mediators of the effects of acupuncture therapy on inflammatory pain, as indicated by our previous research [30]. Our results demonstrate that electroacupuncture (EA) is effective in inducing analgesia in inflammation-induced hyperalgesia by downregulating TRPV1 and TRPV4 expression.
Animals and EA Pretreatment.
Adult ICR (BioLASCO Taiwan Co., Ltd.) female mice aged 8 to 12 weeks were used in the experiment. The use of these animals was approved by the Institute of Animal Care and Use Committee of China Medical University, Taiwan, following the Guide for the Use of Laboratory Animals (National Academy Press). EA treatment was applied with stainless steel acupuncture needles (1.5 inch, 30 G, Yu-kuang, Taiwan) inserted into the muscle layer at a depth of 2-3 mm at the ST36 acupoint. EA was performed after the injection of carrageenan or CFA and repeated every day at the same time (12:00-14:00) for four days in total. Electrical square pulses (1 ms duration, 2 Hz frequency) generated by the stimulator were delivered for 15 min. The stimulation amplitude was 2 mA. The same treatment was given at a nonacupoint (the gluteal muscle) to serve as the sham control group [30]. Both hot- and cold-induced pain was measured using a hot/cold plate (Panlab, Harvard Apparatus) [32]. Five minutes of animal behavior were recorded using a digital camera and were analyzed offline using a personal computer [32,33].
Immunohistochemistry and Image Analysis.
Animals were anesthetized with an overdose of chloral hydrate and intracardially perfused with saline followed by 4% paraformaldehyde. L3-L5 DRG neurons were immediately dissected and postfixed with 4% paraformaldehyde. Postfixed tissues were then placed in 30% sucrose overnight for cryoprotection. The DRGs were then embedded in OCT and rapidly frozen at −20 °C. Frozen sections were cut at a thickness of 15 μm on a cryostat. Samples were next incubated with blocking solution containing 3% BSA, 0.1% Triton X-100, and 0.02% sodium azide in PBS for 120 min at room temperature. After blocking, DRGs were incubated overnight at 4 °C with primary antibodies against TRPV1 (1:1000, Alomone) and TRPV4 (1:1000, Alomone) prepared in blocking solution. The secondary antibodies were goat anti-rabbit (Molecular Probes, Carlsbad, CA, USA). Slides were visualized by the use of fluorescence-conjugated secondary antibodies and mounted with cover slips. Images of TRPV1- and TRPV4-positive neurons were analyzed by cell size using NIH ImageJ software (Bethesda, MD, USA), and the ratio of TRPV1- and TRPV4-positive staining was calculated for each size class.
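The cell-size classification used in this analysis can be sketched in a few lines of Python. The size thresholds (small < 800 μm², small-medium 800-1200 μm²) follow the Results section; the function names and the example measurements are hypothetical, standing in for areas exported from ImageJ.

```python
# Hypothetical sketch of the cell-size analysis described above: bin DRG
# soma areas (e.g., from ImageJ measurements) into the size classes used
# in the paper and compute the fraction of TRPV1-positive neurons per class.

def size_class(area_um2):
    """Assign a soma area (in square micrometers) to a size class."""
    if area_um2 < 800:
        return "small"
    if area_um2 <= 1200:
        return "small-medium"
    return "large"

def positive_ratio_by_size(cells):
    """cells: iterable of (area_um2, is_positive) pairs.
    Returns {size_class: fraction of positive neurons in that class}."""
    totals, positives = {}, {}
    for area, positive in cells:
        cls = size_class(area)
        totals[cls] = totals.get(cls, 0) + 1
        positives[cls] = positives.get(cls, 0) + int(positive)
    return {cls: positives[cls] / totals[cls] for cls in totals}

# Made-up example measurements: (area in square micrometers, TRPV1-positive?)
cells = [(500, True), (700, True), (900, True), (1100, False), (1500, False)]
ratios = positive_ratio_by_size(cells)
```

Comparing these per-class ratios between control, inflamed, and EA-treated groups mirrors the cell-area-versus-frequency histograms reported in the Results.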
DRG Primary Cultures and Whole-Cell Patch-Clamp Recording.
CD1 mice aged 8-12 weeks were killed by use of CO2 to minimize their suffering. Lumbar (L3-L5) DRG neurons were dissected from the ipsilateral side and placed in a tube containing DMEM, then transferred to DMEM with type I collagenase (0.125%, 120 min) for digestion in an incubator at 37 °C. Neurons were then plated on poly-L-lysine-coated cover slides. All recordings were completed within 24 hours after plating. Glass pipettes (Warner Products 64-0792) were pulled (2-5 MΩ) with a vertical puller (NARISHIGE PC-10). Whole-cell recordings were made with an Axopatch MultiClamp 700B (Axon Instruments). Stimuli were controlled and digital records captured using Signal 3.0 software and a CED1401 converter (Cambridge Electronic Design). Cells with a membrane potential more positive than −40 mV were not accepted. The bridge was balanced in current-clamp recordings, and series resistance was compensated 70% in voltage-clamp recordings with Axopatch 700B compensation circuitry. Recorded cells were superfused with artificial cerebrospinal fluid (ACSF) containing (in mM) 130 NaCl, 5 KCl, 1 MgCl2, 2 CaCl2, 10 glucose, and 20 HEPES, adjusted to pH 7.4 with NaOH. ACSF solutions were applied by gravity. The recording electrodes were filled with (in mM) 100 KCl, 2 Na2-ATP, 0.3 Na3-GTP, 10 EGTA, 5 MgCl2, and 40 HEPES, adjusted to pH 7.4 with KOH. Osmolarity was approximately 300-310 mOsm. Capsaicin was prepared from a 100-μM stock solution (in 100% ethanol) to a final concentration of 1 μM in ACSF. 4α-phorbol 12,13-didecanoate (4αPDD) was prepared from a 300-μM stock solution (in 100% ethanol) to a final concentration of 3 μM in ACSF. All drugs were purchased from Sigma Chemical (St. Louis, MO, USA).
Statistical Analysis.
All statistical data are presented as the mean ± standard error. Statistical significance among the control, inflammation, and EA groups was tested using ANOVA followed by a post hoc Tukey's test (P < 0.05 was considered statistically significant).
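As a concrete illustration, the sketch below runs a one-way ANOVA followed by Tukey's HSD post hoc test. The latency values are made-up numbers loosely shaped like the hot-plate licking latencies reported later, not the study's data, and the sketch assumes SciPy ≥ 1.8 for `scipy.stats.tukey_hsd`.

```python
# One-way ANOVA followed by Tukey's HSD post hoc test, as described above.
# The latency values (seconds) are illustrative, not the study's measurements.
from scipy.stats import f_oneway, tukey_hsd

control = [16.4, 15.2, 17.0, 16.8, 15.5, 16.1, 17.3, 16.7]
inflamed = [7.0, 6.5, 7.8, 6.2, 7.4, 7.1, 6.8, 7.2]
ea_treated = [15.4, 14.8, 16.0, 15.1, 15.9, 15.2, 14.6, 15.8]

# Omnibus test: is there any difference among the three groups?
f_stat, p_value = f_oneway(control, inflamed, ea_treated)

# Post hoc pairwise comparisons with familywise error control;
# posthoc.pvalue[i][j] is the corrected p-value for groups i and j.
posthoc = tukey_hsd(control, inflamed, ea_treated)
```

The omnibus ANOVA guards against inflating the false-positive rate across the three group comparisons; Tukey's HSD then identifies which specific pairs differ.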
EA Attenuated Carrageenan-Elicited Inflammatory Pain by the Hot/Cold-Plate Pain Test.
Intraplantar pretreatment with carrageenan significantly (P < 0.01) induced thermal hyperalgesia according to the licking latency on a hot plate at 50 °C at day 4 after injection (Figure 2(a): control, black, 16.38 ± 1.14 s; carrageenan, red, 7.0 ± 1.13 s; n = 8 for each group; P < 0.01). As shown in Figure 2(a), 2 Hz EA eliminated the pain induced by carrageenan (blue, 15.38 ± 0.7 s), but sham EA did not (green, 6.0 ± 1.13 s; n = 8 for each group; P < 0.01). Furthermore, injection of carrageenan decreased the jumping latency on the hot plate at 50 °C from 149.75 ± 11.38 s to 71.75 ± 2.74 s (Figure 2(b): black and red, respectively; n = 8 for each group; P < 0.01). The jumping latency was restored by 2 Hz EA but not by sham EA (Figure 2(b): blue and green, 131.88 ± 5.5 s and 75.88 ± 2.89 s, respectively; n = 8 for each group; P < 0.01), suggesting acupoint specificity. Next, to verify the effect of carrageenan and EA on thermal hyperalgesia with a cold plate at 4 °C, we analyzed rearing and licking numbers in the four groups. We consistently found that intraplantar injection of carrageenan significantly increased the rearing number from 1.63 ± 0.22 to 2.38 ± 0.28 (Figure 2(c): black and red, respectively; n = 8 per group; P < 0.01). Interestingly, 2 Hz EA also had a potential effect on cold hyperalgesia induced by carrageenan injection compared with sham EA-treated mice (Figures 2(c) and 2(d)).
Figure 2: Acupuncture effects on nociceptive responses to noxious cold/hot plates after carrageenan induction. (a and b) Four groups of mice were exposed to a hot plate at 50 °C, and licking and jumping latencies were analyzed. (c and d) The four groups were exposed to a cold plate at 4 °C, and the rearing and licking numbers were analyzed. ** P < 0.01 compared with the control group. ## P < 0.01 for carra + sham compared with carra + EA groups (n = 8 per group). Con: control; carra: carrageenan-induced; EA: electroacupuncture at ST36; sham: EA at nonacupoint.
EA at the ST36 Acupoint Altered Electrophysiological Properties in Inflamed DRG Neurons.
We examined the membrane properties of acutely isolated DRG neurons through whole-cell patch clamp recordings. Compared with the control group, DRG neuronal excitability was increased in mice 4 days after CFA-induced inflammation. The resting membrane potential and capacitance were similar among the control, CFA-induced, and EA-treated groups, indicating similar properties of the neurons. The AP threshold and rheobase were decreased in the CFA-inflamed group, indicating increased excitability, and these changes were attenuated in the EA group (Figure 4(a)). In addition, both the AP rise and fall times were significantly shorter for neurons in the inflammation group compared with those in the control group, and this result was reversed in the EA-treated group (Figure 4(b)). Moreover, no significant differences were found among the three groups in AP amplitude and afterhyperpolarization (AHP) duration. To investigate the electrophysiological properties of TRPV1 and TRPV4, we applied the TRPV1- or TRPV4-specific agonist capsaicin or 4αPDD to primary cultured DRG neurons to induce inward currents. Notably, the percentage of TRPV1-positive neurons and the amplitude of the inward current induced by the TRPV1 agonist capsaicin were potentiated by CFA-elicited inflammation and further ameliorated by EA treatment (Figure 4(c)). Similar results were also observed with the TRPV4 agonist 4αPDD. The statistically analyzed data are presented in Table 1.
TRPV1 and TRPV4 Expression in DRG Neurons from Carrageenan-Induced Hyperalgesia Was Decreased by EA.
To correlate the development of inflammatory pain and the curative effects of EA with changes in TRPV1 and TRPV4 in DRG neurons, we first used immunohistochemistry to verify TRPV1 and TRPV4 expression. The expression of TRPV1 was observed in DRG neurons (Figure 5(a)). Following intraplantar injection of carrageenan, the TRPV1 staining intensity significantly increased in DRG neurons (Figure 5(b)). This increased expression of TRPV1 reverted to that of the normal control group with 2 Hz EA (Figure 5(c)).
Figure 3: Acupuncture effects on nociceptive responses to noxious cold/hot plates after CFA (complete Freund's adjuvant) induction. (a and b) Four groups were exposed to a hot plate at 50 °C, and the licking and jumping latencies were analyzed. (c and d) The four groups were exposed to a cold plate at 4 °C, and the rearing and licking numbers were analyzed. ** P < 0.01 compared with the control group. ## P < 0.01 for CFA + sham compared with CFA + EA groups (n = 8 per group). Con: control; EA: electroacupuncture at ST36; sham: EA at nonacupoint.
TRPV4 was also present in DRG neurons ( Figure 5(d)) and its expression increased after carrageenan injection ( Figure 5(e)). The overexpression of TRPV4 was attenuated by EA stimulation (Figure 5(f)).
EA Abated the CFA-Mediated Inflammatory Pain Response by Altering TRPV1 and TRPV4 Levels.
We next determined the alterations in TRPV1 and TRPV4 levels in CFA-elicited inflammatory hyperalgesia. TRPV1 levels were normal in DRG neurons (Figure 6(a)) and increased following CFA injection (Figure 6(b)). This phenomenon was reversed by 2 Hz EA stimulation at the ST36 acupoint (Figure 6(c)). TRPV4 was also observed in DRG neurons (Figure 6(d)). TRPV4 levels increased following CFA injection (Figure 6(e)) and were dramatically attenuated by 2 Hz EA stimulation (Figure 6(f)). Cell area versus frequency histograms (Figure 7) showed that TRPV1 proteins were mainly present in small neurons (cell area < 800 μm²). At day 4 after carrageenan injection, TRPV1-reactive neurons were increased among small-medium (800-1200 μm²) neurons compared with the control group (P < 0.05). This potentiation of the TRPV1 protein level was attenuated by EA stimulation. Similar results were obtained in the CFA-treated group. TRPV4-positive neurons were mainly found in medium to large neurons, and their ratio of the whole population did not change in either the carrageenan- or CFA-induced inflammatory pain model. However, Western blotting showed that TRPV4 was increased in both the carrageenan- and CFA-elicited inflammatory pain models, suggesting that TRPV4 was increased in all types of DRG neurons.
EA at ST36 Ameliorated Overexpression of TRPV1 and TRPV4 in DRG Neurons by Western Blotting.
We used Western blotting to further analyze the levels of TRPV1 and TRPV4 proteins in DRG neurons. TRPV1 protein was expressed normally in the control group. After carrageenan-induced hyperalgesia, TRPV1 protein expression was greatly increased (Figure 8(a); 141.37 ± 7.59% compared with the control group; n = 6, P < 0.05). This increase was effectively downregulated by 2 Hz EA stimulation at the ST36 acupoint.

*P < 0.05 compared with the control group; **P < 0.01 compared with the control group; #P < 0.05 between the inflammation and EA groups; ##P < 0.01 between the inflammation and EA groups. Con: control; CFA: complete Freund's adjuvant; EA: electroacupuncture; AP: action potential; AHP: afterhyperpolarization.
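Densitometry values such as "141.37 ± 7.59% compared with the control group" are conventionally obtained by normalizing each band to its loading control and then expressing it relative to the control group. A minimal sketch of that arithmetic (the function name and the input values are illustrative, not the paper's raw data):

```python
def percent_of_control(band, loading, control_band, control_loading):
    """Normalize a band intensity to its loading control, then express it
    as a percentage of the loading-normalized control group value.
    All inputs are arbitrary densitometry units."""
    return (band / loading) / (control_band / control_loading) * 100.0
```

For example, a band 1.4x brighter than control at equal loading yields 140% of control.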
Discussion
EA at ST36 can effectively decrease inflammation-induced pain, but the detailed mechanism remains unknown [34,35]. Both TRPV1 and TRPV4 are highly correlated with mechanical and thermal pain. We hypothesized that EA at ST36 could attenuate inflammation-induced pain through the mediation of TRPV1 and TRPV4 channels. TRPV1 and TRPV4 are both cation channels that are activated at temperatures over 43 °C or 25 °C, respectively, and both are essential for thermal and mechanical hyperalgesia [22,36]. Both are highly expressed in DRG neurons after inflammation induction that results in thermal and mechanical hyperalgesia [22,36]. Many reports have found that TRPV1 and TRPV4 antisense oligonucleotides or antagonists can effectively ameliorate thermal and mechanical hyperalgesia. Moreover, depletion of TRPV1 or TRPV4 in mice results in higher withdrawal latencies in the von Frey or Hargreaves tests. These data indicate that both TRPV1 and TRPV4 are essential for mediating thermal and mechanical sensations [22,23,36]. Our results show that thermal and mechanical sensitivities, as measured by hot-plate-induced licking and jumping latency, are altered following inflammatory pain and that these phenomena are attenuated by 2 Hz EA stimulation. We suggest that these behavioral changes are mediated through TRPV1 and TRPV4 downregulation by 2 Hz EA. Our data also show that a cold plate at 4 °C increases rearing and licking numbers in mice and that this result is abated by 2 Hz EA. These results suggest that EA may also regulate other channels such as TRPA1 or TRPM8 [23]. It has long been recognized that TRPV1 is involved in pain sensations and can increase synaptic transmission in the hippocampus, hypothalamus, and spinal cord, with increased miniature excitatory postsynaptic current (mEPSC) frequency after capsaicin application [37,38]. Activation of TRPV1 induces long-term depression in the nucleus accumbens and hippocampal dentate gyrus [37,38].
TRPV1 is suggested to mediate both thermal and mechanical hyperalgesia in inflammatory hyperalgesia. Deletion of TRPV1 decreases CFA-elicited mechanical and thermal hyperalgesia in knee joint and muscle inflammation models. TRPV1 antagonists or antisense oligonucleotides have similar effects in decreasing inflammatory pain symptoms [10,36]. TRPV1 is reported to be activated by mediators and secondary messengers in inflammatory conditions and with tissue injury and ischemia. Under these conditions, peripheral acidosis with low pH is also thought to activate TRPV1 and contribute to pain sensation [23]. Our results suggest that both carrageenan and CFA injection can enhance TRPV1 protein levels in peripheral DRGs. Furthermore, EA manipulation can reliably ameliorate this inflammation-induced upregulation of TRPV1. This phenomenon has also been observed in a tumor pain model, and the powerful therapeutic effect of TRPV1 blockage is being explored [34].
Peripheral synaptic transmission from DRG neurons to the spinal cord dorsal horn (SCDH) is crucial for pain signaling [3]. At these synapses, glutamate is released from presynaptic nerve terminals by several types of stimuli and binds to postsynaptic receptors. These signals are further transferred as electrical signals to the brain for pain sensation and pain responses [39]. In the inflammatory pain process, the probability of augmented glutamate release leads to central nervous system sensitization. Our results show that carrageenan- and CFA-induced inflammatory pain reliably induces mechanical and thermal pain accompanied by a TRPV4 increase, as shown by immunostaining and Western blotting. This phenomenon can be reversed through low-frequency (2 Hz) EA stimulation. This is a novel mechanism underlying acupuncture therapy. Recently, activation of TRPV4 by application of its agonist, 4αPDD, was shown to significantly potentiate the frequency of mEPSCs, implying that presynaptic transmission is responsible for TRPV4 action [38]. Cao and colleagues have demonstrated that TRPV4-elicited membrane currents and synaptic transmission occur primarily through protein kinase C activation [38]. Accordingly, increased TRPV4 may result in enhanced excitability of pain signaling and further induce central sensitization. Mechanical hyperalgesia is decreased in animals by intrathecal administration of TRPV4 antisense oligonucleotides or TRPV4 gene depletion [19,40]. Ding et al. also have reported that TRPV4 is crucial for the thermal pain process induced by chronic compression of DRG neurons in rats through mechanisms that activate TRPV4-NO-cGMP-PKG pathways [41].
Recent studies have shown that ATP is released at ST36 after acupuncture, and ATP is metabolized to adenosine by specific enzymes [42]. Activation of the A1R by adenosine decreases TRPV1 activation by depleting PIP2 (phosphatidylinositol 4,5-bisphosphate), because PIP2 is important for TRPV1 channel activation [43]. Our results show that TRPV1 expression and physiological function are affected by acupuncture, and we suggest that this phenomenon is influenced by A1R activation. Chen et al. have also reported that activation of PAR2 (protease-activated receptor 2) engages PKA (protein kinase A) and PKC (protein kinase C), causing mechanical and thermal (both heat and cold) hypersensitivity. Furthermore, this hypersensitivity is effectively inhibited by TRPV1 and TRPV4 antagonists [44]. A1R is a GPCR (G-protein-coupled receptor), and activation of A1R decreases adenylyl cyclase activity through activation of pertussis toxin-sensitive Gi proteins that then inhibit PKA activity [45][46][47]. We suggest that the mechanism underlying acupuncture-mediated analgesia may be A1R activation, which then inhibits PKA activation, resulting in downregulation of TRPV1 and TRPV4.
Studies of the mechanism of pain signaling may lead to the development of additional drugs and therapies. Hurt and colleagues have reported that PAP (prostatic acid phosphatase) is an ectonucleotidase that can hydrolyze extracellular AMP to adenosine in the nociceptive system; injection of PAP at the Weizhong acupoint has antinociceptive effects in mouse inflammatory pain models [48]. Therese and colleagues have found that TRPV1 is more highly expressed in BL40 acupoint skin than in nonacupoint control skin and that TRPV1 expression can be influenced by EA stimulation [49]. This indicates that TRPV1 is associated with the BL40 acupoint.

Figure 8: TRPV1 and TRPV4 protein levels. DRG (dorsal root ganglion) lysates underwent immunoreactions with specific TRPV1 (a and b) and TRPV4 (c and d) antibodies. TRPV1 and TRPV4 increased substantially after carra (carrageenan) or CFA (complete Freund's adjuvant) injection as compared with the saline-injected group (Con). TRPV1 and TRPV4 protein levels were attenuated by electroacupuncture (EA) at the ST36 acupoint as compared with the carra- and CFA-induced groups.

Our data show that acupuncture can mediate TRPV1 and TRPV4 expression in DRG neurons. Furthermore, we may be able to develop more effective therapies by combining a TRPV1 antagonist or agonist with acupoints to prolong the effects of acupuncture therapy.
Conclusion
The current study suggests that TRPV1 and TRPV4 are augmented in mouse DRG neurons in both the carrageenan- and CFA-induced inflammatory pain models, and that this upregulation is attenuated by EA at the ST36 acupoint but not by sham EA at a nonacupoint. To our knowledge, this is the first report of a functional role for acupuncture in pain signaling acting through downregulation of TRPV1 and TRPV4 channels. These results provide insight into the mechanisms of acupuncture-mediated analgesia and may be further applied in clinical medicine.
Plant–necrotroph co-transcriptome networks illuminate a metabolic battlefield
A central goal of studying host-pathogen interaction is to understand how host and pathogen manipulate each other to promote their own fitness in a pathosystem. Co-transcriptomic approaches can simultaneously analyze dual transcriptomes during infection and provide a systematic map of the cross-kingdom communication between two species. Here we used the Arabidopsis-B. cinerea pathosystem to test how the plant host and fungal pathogen interact at the transcriptomic level. We assessed the impact of genetic diversity in pathogen and host by infecting Arabidopsis wild-type and two mutants with compromised jasmonate or salicylic acid immunity with a collection of 96 B. cinerea isolates. We identified ten B. cinerea gene co-expression networks (GCNs) that encode known or novel virulence mechanisms. Construction of a dual interaction network by combining four host- and ten pathogen-GCNs revealed potential connections between the fungal and plant GCNs. These co-transcriptome data shed light on the potential mechanisms underlying host-pathogen interaction.
Introduction
How a host and pathogen manipulate each other within a pathosystem to facilitate their own fitness remains a long-standing question. The difference between the pathogen's ability to infect and the host's ability to resist generates the resulting disease symptomology. This interaction forces host-pathogen dynamics to shape the genomes of the two species via adaptive responses to each other (Dangl and Jones, 2001;Bergelson et al., 2001;Benton, 2009;Kanzaki et al., 2012;Karasov et al., 2014). Plants have evolved a sophisticated set of constitutive and inducible immune responses to cope with constant selective pressures from antagonistic microbes (Jones and Dangl, 2006). Reciprocally, plant pathogens have evolved a variety of invasion and virulence strategies to disarm or circumvent plant defenses (Glazebrook, 2005;Toruño et al., 2016). This has resulted in complex relationships between plant hosts and fungal pathogens in the struggle for survival and fitness.
The plant innate immune system includes several functional layers with overlapping functions to detect and defend against phytopathogens. This multi-layer immune system can be categorized as a signal monitor system to detect invasion, local and systemic signal transduction components to elicit and coordinate responses, and defensive response proteins and metabolites focused on combatting the invading pathogen (Tsuda and Katagiri, 2010;Corwin and Kliebenstein, 2017). These functional layers, as well as the components within them, are highly interconnected and tightly regulated by the host plant to respond appropriately to various phytopathogens (Couto and Zipfel, 2016;Tang et al., 2017). For instance, Arabidopsis utilizes a complex signaling network to regulate the production of indole-derived secondary metabolites, such as camalexin and indole glucosinolates, that contribute to resistance against pathogens (Kliebenstein et al., 2005;Clay et al., 2009;Bednarek et al., 2009;Frerigmann et al., 2016;Xu et al., 2016;Mine et al., 2018). This layered immune system provides pathogens with numerous targets in the plant immune system that the pathogen can utilize, evade or attack. Most biotrophic pathogens, evolved from commensal microbes, attempt to dismantle the plant immune system by injecting effector proteins into host cells or the inter-cellular space (Dangl and Jones, 2001;Büttner and He, 2009;Stergiopoulos and de Wit, 2009). For example, the biotrophic bacterial pathogen Pseudomonas syringae can utilize the jasmonic acid (JA) signaling pathway through the production of a JA-mimic, coronatine, to enhance its fitness (Mittal and Davis, 1995;Brooks et al., 2005;Cui et al., 2018). 
Alternatively, necrotrophic pathogens, which often evolved from environmental saprophytic microbes, can utilize toxic secondary metabolites, small secreted proteins, and small RNAs to aggressively attack host defenses while also defending against host-derived toxins (Choquer et al., 2007;Arbelet et al., 2010;Mengiste, 2012;Weiberg et al., 2013;Kubicek et al., 2014;Macheleidt et al., 2016). In addition, pathogens can directly resist downstream defenses, as B. cinerea does with its ATP-binding cassette (ABC) transporter BcatrB, which provides resistance by exporting camalexin from the pathogen cell (Stefanato et al., 2009). This high level of interactivity between the immune system and pathogen virulence mechanisms generates the final level of disease severity. However, a functional description of this combative cross-kingdom communication between a plant host and a necrotrophic pathogen remains elusive.
Co-transcriptomic approaches, whereby the host and pathogen transcriptomes are simultaneously analyzed, provide the ability to systematically map the cross-kingdom communication between plant hosts and their pathogens, both at the individual gene and gene co-expression network (GCN) levels (Stuart et al., 2003;Musungu et al., 2016;Zhang et al., 2017;Lanver et al., 2018;McClure et al., 2018). Recent advances have enabled the measurement of the pathogen's in planta transcriptome. For example, in planta measurements of the pathogen's transcriptome within the biotrophic Arabidopsis-Pseudomonas syringae pathosystem have enabled the investigation of early effects on Arabidopsis host immunity and the consequent effects on bacterial growth (Nobori et al., 2018). This enabled the identification of a bacterial iron acquisition pathway that is suppressed by multiple plant immune pathways (Nobori et al., 2018). This shows the potential for new hypotheses to be generated by a co-transcriptome approach (Swierzy et al., 2017;Westermann et al., 2017;Lee et al., 2018).

eLife digest: Infections are complex interactions between two organisms. When a disease-causing microbe and a potential host engage, molecules continuously flow in both directions. This creates an inter-connected loop of messages and counter-messages, attacks, counter-attacks and resistance. This communication determines the final winner and the outcome of the disease. Yet it is technically difficult to measure it from both organisms at the same time, mostly because it is often impossible to tell whether a given molecule came from the microbe or the host. As such, little is known about how most infections play out at the molecular level. Now, rather than looking directly at the communication molecules, Zhang et al. have measured the active genes in samples of a plant infected with a fungus. While a molecule released by the plant may be indistinguishable from one from the fungus, the genes needed to make those molecules will be different in each species. The experiments involved two species where databases of gene sequences already exist: Arabidopsis thaliana, a plant often used in laboratory studies, and a fungus known as Botrytis cinerea, which infects many plants. Zhang et al. showed that the interactions between the two organisms are diverse and, rather than single genes, they largely involve sets of genes that are all switched on together as so-called gene co-expression networks (or GCNs for short). Ten of these networks encoded mechanisms that allow the fungus to attack plant hosts. Further analysis identified potential connections between networks of genes in the plant and fungus. These connections may reveal some of the targets of the fungus's toxins or counter mechanisms that plants can use to attempt to defend themselves. These findings show that it is possible to listen to the molecular communication between two organisms during an infection. In the future, a similar approach may make it possible to ask if a host plant communicates with all of its possible disease-causing microbes with a few distinct pathways, or if instead, hosts have the flexibility to uniquely communicate with each microbe in a different way.
The Arabidopsis-B. cinerea pathosystem is well suited for exploring plant-pathogen interaction to understand host defenses and necrotrophic virulence in ecological and agricultural settings. B. cinerea is a necrotrophic generalist pathogen that attacks a broad range of diverse plant hosts, including dicots, gymnosperms, and even bryophytes (Williamson et al., 2007). This necrotrophic pathogen is endemic throughout the world and can cause severe pre- and post-harvest losses in many crops. A high level of standing natural genetic variation within the B. cinerea population is hypothesized to facilitate the extreme host range of B. cinerea. This genetic variation affects nearly all known B. cinerea virulence strategies, including penetration and establishment, evading detection, and combatting/coping with plant immune responses (Atwell et al., 2015;Walker et al., 2015;Corwin et al., 2016b). For example, a key virulence mechanism is the secretion of phytotoxic secondary metabolites, including the sesquiterpene botrydial (BOT) and the polyketide botcinic acid (BOA), which trigger plant chlorosis and host cell collapse (Deighton et al., 2001;Colmenares et al., 2002;Wang et al., 2009;Rossi et al., 2011;Ascari et al., 2013;Porquier et al., 2016). These metabolites are linked to virulence, but some pathogenic field isolates fail to produce either compound, pointing to additional pathogenic strategies. The combination of a high level of genetic diversity and extensive recombination means that a population of B. cinerea is a mixed collection of virulence strategies that can be interrogated via the co-transcriptome.
In the present study, the Arabidopsis-B. cinerea pathosystem was used to test how the transcriptomes of the two species interact during infection and to assess how natural genetic variation in the pathogen impacts disease development. Isolates were inoculated on Arabidopsis Col-0 wild-type (WT) in conjunction with the immune-deficient hormone mutants coi1-1 (jasmonate defense signaling) and npr1-1 (salicylic acid defense signaling). A collection of 96 isolates of B. cinerea was used for infection, which harbors a wide scope of natural genetic variation within the species (Atwell et al., 2015;Corwin et al., 2016a;Zhang et al., 2016;Zhang et al., 2017;Soltis et al., 2018;Fordyce et al., 2018). From individual infected leaves, both Arabidopsis and B. cinerea transcripts at 16 hr post-infection (HPI) were simultaneously measured. The Arabidopsis transcripts were analyzed previously to identify four host-derived GCNs that are sensitive to natural genetic variation in B. cinerea (Zhang et al., 2017). In the present analysis, ten fungal pathogen-derived GCNs were identified, which encode either known or novel virulence mechanisms within the species. Some of these B. cinerea GCNs, responsible for BOT production, exocytosis regulation and copper transport, are highly linked with the host's defense phytohormone pathways. By combining the plant host- and pathogen-GCNs into a single network, a dual-transcriptomic network was constructed to identify potential interactions between the components of the plant host innate immune system and fungal pathogen virulence. These connections highlight potential targets for fungal pathogen phytotoxins and the prevailing counter-responses from the plant host. Collectively, this co-transcriptomic analysis sheds light on the potential mechanisms underlying how the host and pathogen combat each other during infection and illustrates the continued need for advances in the in planta analysis of dual-species interaction.
Genetic variation in pathogen and hosts influences the B. cinerea transcriptome
To investigate how genetic variation within a pathogen differentially interacts with plant host immunity at the transcriptomic level, we profiled the in planta transcriptomes of 96 B. cinerea isolates infecting three host genotypes: the Arabidopsis accession Col-0 WT and two immune-signaling mutants, coi1-1 and npr1-1, which are compromised in JA- or salicylic acid (SA)-driven immunity, respectively. This previously described collection of 96 isolates represents a broad geographical distribution and contains considerable natural genetic variation that affects a diversity of virulence strategies within B. cinerea (Denby et al., 2004;Rowe and Kliebenstein, 2007;Atwell et al., 2015;Corwin et al., 2016b;Zhang et al., 2016). Four independent biological replicates across two separate experiments per isolate/genotype pair were harvested at 16HPI for transcriptome analysis. A total of 1152 independent RNA samples were generated for library preparation and sequenced on the Illumina HiSeq platform (NCBI accession number SRP149815). These libraries were previously used to study Arabidopsis transcriptional responses to natural genetic variation in B. cinerea (Zhang et al., 2017). Mapping the dual-transcriptome reads against the B. cinerea reference genome (B05.10), we identified 9284 predicted gene models with a minimum of either 30 gene counts in one isolate or 300 gene counts across the 96 isolates. The total of identified genes corresponds to ~79% of the 11,701 predicted protein-coding genes in the B05.10 reference genome (Van Kan et al., 2017). The two thresholds allowed the identification of pathogen transcripts that are expressed only in specific isolates.
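The dual-threshold gene filter described above (at least 30 counts in one isolate, or at least 300 counts summed across all isolates) can be sketched as a small predicate. This is an illustrative reconstruction, not the authors' pipeline code; the function name is invented.

```python
def passes_filter(counts_per_isolate, per_isolate_min=30, total_min=300):
    """Keep a gene model if it reaches per_isolate_min counts in at least
    one isolate OR total_min counts summed across all isolates -- the
    30-in-one / 300-across-96 thresholds described in the text. This lets
    a transcript expressed strongly in a single isolate survive filtering."""
    return max(counts_per_isolate) >= per_isolate_min or sum(counts_per_isolate) >= total_min
```

A gene expressed at 35 counts in one isolate and nowhere else passes, as does a gene expressed weakly but consistently across all 96 isolates.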
Measuring the abundance of individual pathogen transcripts in relation to the host transcripts can be used as a molecular method to estimate fungal biomass (Blanco-Ulate et al., 2014). Given this, we hypothesized that the fraction of total reads that map to B. cinerea might be a biologically relevant indicator of pathogen virulence (Figure 1-source data 1). Comparing B. cinerea transcript abundance at 16HPI to lesion development at 72HPI revealed a significant partial correlation in the WT Col-0 (R 2 = 0.1101, p-value=0.0016, Figure 1). In contrast to WT, the early transcriptomic activities of most B. cinerea isolates were more vigorous in the two Arabidopsis mutants, resulting in a significant curvilinear relationship between total fraction of B. cinerea reads and final lesion area (p-value=3.914e-07, p-value=0.0001, respectively, Figure 1). Interestingly, the total reads fraction was better correlated with final lesion area in coi1-1 (R 2 = 0.2562) than either WT (R 2 = 0.1101) or npr1-1 (R 2 = 0.161). This suggests that early transcriptomic activity from the pathogen can be a partial indicator of pathogen virulence, but also depends on the respective resistance from the plant host.
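The reported R² values come from regressing final lesion area on the early B. cinerea read fraction. A minimal pure-Python illustration of the linear case follows (the mutant fits in the paper also included a curvilinear term; the numbers in the test are invented, not the paper's data):

```python
def linear_r2(x, y):
    """Coefficient of determination (R^2) for a simple linear regression
    of y on x. Here x would be the fraction of reads mapping to B. cinerea
    at 16HPI and y the lesion area at 72HPI."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)
```

For a perfectly linear relationship this returns 1.0; the paper's observed values (e.g. R² = 0.1101 in Col-0) indicate a weak but significant partial correlation.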
Plant defense phytohormone networks, like SA and JA, help shape the immune responses of a plant host while also shaping virulence gene expression within bacterial pathogens such as Pseudomonas syringae (Nobori et al., 2018). To test how variation in host SA/JA-signaling influences the fungal pathogen transcriptome, we applied a generalized linear model linked with a negative-binomial function (nbGLM) to each B. cinerea transcript across the experiment. This analysis allowed us to estimate the relative broad-sense heritability (H²) of genetic variation from the pathogen, plant host, or their interaction contributing to each transcript (Figure 2-source data 1-3). Of the 9284 detectable B. cinerea transcripts, 8603 and 5244 transcripts were significantly influenced by genetic variation in pathogen and host, respectively (74% and 45% of predicted B. cinerea gene models, respectively) (Figure 2A, Figure 2-source data 3 and 4). While this result shows that the plant phytohormone pathways influence B. cinerea gene expression, the variation in host defense responses (average H² Host = 0.010) has far less influence on B. cinerea gene expression than the pathogen's own natural genetic variation (average H² Isolate = 0.152). The host defense hormones also affected B. cinerea gene expression in a genotype-by-genotype dependent manner for 4541 genes (39% of B. cinerea predicted gene models; average H² Isolate x Host = 0.116) (Figure 2B-2I). Illustrating this potential for host x pathogen interactions on pathogen gene expression are the two genes encoding the well-studied polygalacturonase 1 (Bcpg1) and oxaloacetate acetyl hydrolase. These two virulence-associated genes showed dramatic expression variation across the 96 isolates in different host backgrounds (Figure 3, Figure 3-figure supplement 1, and Figure 2-source data 1).
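The heritability partition can be illustrated with a much-simplified stand-in. The paper fit a negative-binomial GLM per transcript; the underlying idea of attributing expression variance to isolate, host, and their interaction can be sketched as a sum-of-squares decomposition on a balanced design (all names and numbers below are illustrative, not from the study):

```python
from collections import defaultdict

def variance_components(data):
    """data: list of (isolate, host, expression) tuples on a balanced design.
    Returns the share of total sum of squares attributable to isolate, host,
    and isolate-x-host -- a crude linear stand-in for the broad-sense
    heritability (H^2) partition, which the paper computed with an nbGLM."""
    vals = [v for _, _, v in data]
    grand = sum(vals) / len(vals)
    by_iso, by_host, by_cell = defaultdict(list), defaultdict(list), defaultdict(list)
    for iso, host, v in data:
        by_iso[iso].append(v)
        by_host[host].append(v)
        by_cell[(iso, host)].append(v)
    ss_total = sum((v - grand) ** 2 for v in vals)
    ss = lambda groups: sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_iso, ss_host, ss_cell = ss(by_iso.values()), ss(by_host.values()), ss(by_cell.values())
    ss_int = ss_cell - ss_iso - ss_host  # interaction = cell means minus main effects
    return {"isolate": ss_iso / ss_total, "host": ss_host / ss_total,
            "interaction": ss_int / ss_total}
```

A transcript driven purely by which isolate is infecting yields an isolate share of 1.0 and zero for host and interaction, matching the qualitative pattern reported (isolate effects dominate host effects).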
Extending this to the 500 genes showing the strongest host x pathogen effect revealed a wide range of patterns that differ between the coi1-1 and npr1-1 host backgrounds, with diverse pathogen strain-specific patterns (Figure 4). One potential complication of this analysis is that sequence variation between the reference B05.10 genome and the diverse strains could create artificially low expression estimates. However, very few genes showed consistently low expression within a strain; instead, when a gene showed no expression in one host genotype, it was expressed in a different host genotype (Figure 3 and Supplementary file 1). This conditionality argues against a sequencing error, as the sequence has not altered. The genes that did show a loss of expression across all host genotypes within a strain (i.e. BOT and BOA genes) were frequently linked to whole-gene deletions that abolished their expression (Soltis et al., 2019). Thus, while there are likely some sequence-variation-associated expression errors, they are not a dominant signature in the data. Within the Arabidopsis/B. cinerea pathosystem, the pathogen's transcriptional responses are therefore influenced by a blend of the pathogen's natural variation and its interaction with the host, with less evidence for the host's defense responses unilaterally affecting B. cinerea. Future work will hopefully assess how this extends to other host-pathogen systems.
In planta virulence gene co-expression networks (GCNs) in B. cinerea
To develop a systemic view of fungal pathogen in planta gene expression, we used a co-expression approach to identify B. cinerea networks associated with growth and virulence in planta. Using solely the B. cinerea transcriptome at 16HPI from Arabidopsis Col-0 WT infected leaves, we calculated Spearman's rank correlations of gene counts across all B. cinerea isolates and filtered gene pairs with correlations greater than 0.8. We then used the filtered gene pairs as input to construct GCNs. We identified ten distinct GCNs containing more than five B. cinerea genes (Figure 5). These networks are associated with a diverse array of virulence functions, including the regulation of exocytosis, copper transport, the production of peptidases and isoprenoid precursors (IPP), and polyketide secretion.
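The GCN construction step described above (Spearman's rho across isolates, a 0.8 cutoff, then grouping linked genes into networks) can be sketched in pure Python. This is an illustrative reconstruction, not the authors' pipeline; it ignores tie handling in the ranks and assumes no gene has constant counts.

```python
def ranks(xs):
    """Rank-transform a vector (no tie handling; fine for a sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def pearson(x, y):
    """Pearson correlation; applied to ranks this gives Spearman's rho."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def gcns(expr, cutoff=0.8):
    """expr: {gene: [counts across isolates]}. Link gene pairs whose Spearman
    rho exceeds cutoff, then return the connected components -- i.e. the
    gene co-expression networks."""
    genes = list(expr)
    rk = {g: ranks(expr[g]) for g in genes}
    adj = {g: set() for g in genes}
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if pearson(rk[g], rk[h]) > cutoff:
                adj[g].add(h)
                adj[h].add(g)
    seen, comps = set(), []
    for g in genes:            # depth-first walk to collect components
        if g in seen:
            continue
        stack, comp = [g], set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u] - seen)
        comps.append(comp)
    return comps
```

Two genes rising together across isolates join one network; an anticorrelated gene is left in its own component, since only positive correlations above the cutoff form edges.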
In contrast to the whole-genome distributed GCNs, three of the smaller GCNs were predominantly comprised of genes tandemly clustered within a single chromosome, with no or few genes on other chromosomes (Figure 5-BOA, -Cyclic Peptide, -BOT; Figure 5-figure supplement 1C, E and G). A functional analysis showed that all of the genes within these networks encode known or putative biosynthetic enzymes for specialized metabolic pathways. For example, seven genes responsible for BOT biosynthesis cluster on chromosome 12 and form a small GCN with a Zn(II)2Cys6 transcription factor that is specific to the pathway (Figure 6A; Siewers et al., 2005; Pinedo et al., 2008; Urlacher and Girhard, 2012; Moraga et al., 2016).

Figure 3: Expression profiles of an endopolygalacturonase gene, Bcpg1, from diverse B. cinerea isolates across Arabidopsis genotypes. The rank plot shows the relationship of Bcpg1 expression from 32 diverse B. cinerea isolates (right) across three Arabidopsis genotypes (x axis): wild-type Col-0 (purple dot), the jasmonate-insensitive mutant coi1-1 (green triangle), and the salicylic acid mutant npr1-1 (orange diamond). The model-corrected means (log2) for the transcript of Bcpg1 (Bcin14g00850.1), encoding an endopolygalacturonase, are plotted. Transcript expression levels from the same isolate across the three Arabidopsis genotypes are connected with a colored line; the names of the 32 isolates are shown with lines colored by their induced Bcpg1 expression levels. Black lines: expression of Bcpg1 higher in coi1-1 and npr1-1 than in Col-0. Red lines: higher in coi1-1 but lower in npr1-1. Blue lines: highest in Col-0. Dark green lines: higher in npr1-1 but lower in coi1-1.
Similarly, all 13 genes involved in BOA biosynthesis cluster on chromosome 1 and form a highly connected GCN (Dalmais et al., 2011;Porquier et al., 2019). In addition to previously characterized secondary metabolic pathways, we identified an uncharacterized set of ten genes that cluster on chromosome 1 (Figure 5-Cyclic Peptide, Figure 6F, Figure 5-figure supplement 1E and Figure 5-source data 1). These genes share considerable homology with enzymes related to cyclic peptide biosynthesis and may represent a novel secondary metabolic pathway in B. cinerea (Figure 5-source data 1). The expression of these pathways in planta was extremely variable among the isolates and included some apparent natural knockouts in the expression of the entire biosynthetic pathway (Figure 6G and Figure 2-source data 1). Isolate 94.4 was the sole genotype lacking the entire BOT pathway, while 19 isolates and 24 isolates did not transcribe the BOA and the putative cyclic peptide pathways, respectively (Figure 6E-6G and Figure 2-source data 1). We decomposed the expression of these pathways into expression vectors, referred to as eigengenes, using a principal component analysis and used a linear mixed model to test for a relationship between early expression of the secondary metabolic pathways and later lesion area. This showed a significant relationship between the expression of the BOT and BOA pathways and lesion area measured at 72HPI (Supplementary file 2). In contrast, the putative cyclic peptide pathway was only associated with lesion development in a BOT-dependent manner, suggesting a synergism with BOT (Supplementary file 2). Thus, in planta analysis of the fungal transcriptome can identify known and novel potential virulence mechanisms and associate them with the resulting virulence.
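An eigengene is the first principal component of a pathway's gene-by-sample expression matrix, summarizing the pathway's overall expression in one value per sample. The sketch below recovers it by power iteration on the centered matrix; it illustrates the concept rather than reproducing the paper's PCA code, and the deterministic starting vector is an assumption (a randomized start, or a guard against a zero-norm product, is safer in general).

```python
def eigengene(matrix):
    """matrix: rows = genes, columns = samples. Centers each gene, then runs
    power iteration on the n x n sample matrix X^T X to find its leading
    eigenvector -- the eigengene scores across samples."""
    genes, n = len(matrix), len(matrix[0])
    X = []
    for row in matrix:                 # center each gene's expression
        m = sum(row) / n
        X.append([v - m for v in row])
    v = [j + 1.0 for j in range(n)]    # deterministic start (assumed non-orthogonal)
    for _ in range(200):
        Xv = [sum(X[g][j] * v[j] for j in range(n)) for g in range(genes)]
        w = [sum(X[g][j] * Xv[g] for g in range(genes)) for j in range(n)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

With two genes sharing a monotone expression trend across four samples, the eigengene is a unit vector with the same evenly spaced trend, which could then be regressed against lesion area as in the mixed-model test described above.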
Covariation of fungal virulence networks under differing plant immune responses
The B. cinerea GCNs measured within Arabidopsis WT provide a reference to investigate how phytohormone signaling in host innate immunity may shape the pathogen's transcriptional responses during infection. Comparing the B. cinerea GCN membership and structure across the three Arabidopsis genotypes (WT, coi1-1, and npr1-1) showed that the core membership within networks was largely maintained, but the specific linkages within and between GCNs were often variable. In contrast, some GCNs have a highly robust structure across the three host genotypes, including the three GCNs associated with BOT, BOA and cyclic peptide production, and the GCNs associated with exocytosis regulation, copper transport, and peptidase activity (Supplementary file 1, Figure 7).

Cross-kingdom co-transcriptomic networks revealed direct gene-for-gene interaction
To test the interaction between individual genes from the two organisms, we generated Arabidopsis-B. cinerea GCNs using co-transcriptome data for each host genotype. We calculated Spearman's rank correlation coefficients among 23,898 Arabidopsis transcripts and 9284 B. cinerea transcripts. This approach identified three cross-kingdom GCNs (CKGCNs). These CKGCNs also contain genes associated with extensive host defense responses, that is, genes encoding membrane-localized leucine-rich repeat receptor kinases (LRR-RKs), stress signal sensing and transduction, tryptophan-derived phytoalexin production, regulation of cell death, cell wall integrity, nutrient transporters, etc. (Figure 8-source data 1). The topological structure and gene content of the CKGCNs shifted across the three Arabidopsis genotypes (Figure 8). These changes illustrate how the host genotype can influence the intercommunication in the host-pathogen interaction.
A dual interaction network reveals fungal virulence components targeting host immunity
To begin assessing how the two species influence each other's gene expression during infection, we constructed a co-transcriptome network using both the host- and pathogen-derived GCNs (Figure 9 and Figure 9-figure supplement 1). We converted the ten B. cinerea GCNs and the four Arabidopsis GCNs into eigengene vectors that capture the variation in the general expression of all genes within a GCN as a single value (Zhang et al., 2017). The Arabidopsis GCNs were defined in response to this same transcriptome but using solely the host transcripts. Of these four Arabidopsis GCNs, one is largely composed of genes in Defense/camalexin signaling, two are linked to different aspects of photosynthesis, and the fourth is largely composed of host genes in cell division. We calculated Spearman's rank coefficients between each pair of GCN eigengenes, without regard for species. In this dual transcriptome network, the Arabidopsis/B. cinerea GCN eigengenes are displayed as nodes and positive/negative correlations between the GCNs as edges (Figure 9 and Figure 9-figure supplement 1). Of the host-derived GCNs, the Arabidopsis Defense/camalexin and Photosystem I (PSI) GCNs have a higher degree of centrality than do the Cell Division or Plastid GCNs across all three host genotypes, suggesting that they have the most interactions with B. cinerea GCNs. In contrast, the centrality of the fungal GCNs was more dependent on the host genotype. In WT Col-0, the highest degrees were associated with the exocytosis regulation, BOT, and IPP GCNs, whereas these GCNs were more peripheral or even absent from the co-transcriptome network in the npr1-1 or coi1-1 host genotypes. Interestingly, in the WT Col-0 host, the fungal GCNs (Copper transport, Exocytosis regulation, BOT and IPP biosynthesis) that were positively correlated with the host Defense/camalexin GCN showed negative correlations with the PSI eigengene. However, the host genotype can change these GCN relationships.
In the npr1-1 host, the host Defense/camalexin and PSI GCNs shift to a positive correlation. This may reflect how the B. cinerea BOT GCN has a positive correlation with the Defense/camalexin GCN in the Col-0 host but a negative correlation in the npr1-1 host genotype. This suggests that there are dynamics in the host-pathogen co-transcriptome that can be interrogated to potentially identify causal relationships.
Gene co-expression networks identified from B. cinerea transcriptomic responses to Arabidopsis wild-type Col-0 immunity. Ten gene co-expression networks (GCNs) with more than five nodes were identified from 96 B. cinerea isolates infecting Arabidopsis wild-type Col-0. The similarity matrix was computed using Spearman's rank correlation coefficient. Nodes with different colors represent B. cinerea genes condensed in GCNs with different biological functions. Edges represent the Spearman's rank correlation coefficients between gene pairs. Trans- and cis-GCNs denote GCNs regulated by trans- and cis-regulatory elements, respectively. GCNs were named after their biological functions, which were determined from the hub and bottleneck genes within each network. The GCNs are: vesicle/virulence (red), translation/growth (green), exocytosis regulation (pink), cyclic peptide (yellow), peptidase (gray), isopentenyl pyrophosphate (IPP, turquoise), polyketide (violet), botcinic acid (BOA, blue), copper transport (slate blue), and botrydial (BOT, purple).
To test if these connections were dependent upon host immunity, we used the eigengene values derived from the fungal GCNs to conduct mixed linear modelling of how they were linked to variation in the host genotype and/or host GCNs (Supplementary files 3 and 4). Some B. cinerea GCNs (Vesicle/virulence, TSL/growth, etc.) were more affected by variation in the host genotype, while others (BOT, Copper transport, etc.) showed less host dependency in their expression.
Collectively, pathogen virulence and host immunity GCNs showed complex connections within the dual interaction network identified from the co-transcriptome data, suggesting functional relationships between host defense and pathogen virulence mechanisms for future experimentation.
Germination's influence on the co-transcriptome
One potential complicating factor that may influence the co-transcriptome is variation in spore germination between B. cinerea strains. The lack of universal genomic patterns for host x pathogen interactions in the co-transcriptome argues that germination is not causing global effects on the co-transcriptome (Figures 3 and 4). To begin examining how variation in B. cinerea spore germination may influence the co-transcriptome and our identified links to virulence, we investigated the germination of 19 isolates. This showed that there was some variation in germination, with all but a few isolates germinating within a 6-7 hr time frame at room temperature (Figure 9-figure supplement 2). To extend this to an in planta analysis, we utilized an existing microarray study on B. cinerea germination to develop an eigengene that estimates the relative germination between the strains using the in planta transcriptomic data (Leroch et al., 2013). We then used this in planta estimate of germination to test if our previously identified co-transcriptome-to-virulence links were altered by controlling for germination. Using linear models, we ran the same test whereby the major B. cinerea GCNs were tested for a link to virulence, although this time we included the germination eigengene as a covariate. This analysis showed that the in planta estimate of germination was significantly associated with virulence. Critically, even with germination taken into account, all of the B. cinerea networks remained significantly associated with lesion area. Some GCN links, such as those of BOT and BOA, were largely unaffected by the germination estimates (Supplementary file 6), showing that some aspects of virulence are independent of spore germination. In contrast, other GCNs, like the vesicle-linked GCN, had their link to virulence weakened but not abolished by including the germination covariate.
Thus, while spore germination plays a role in our measurement of the plant-pathogen interaction, it is only one of multiple factors influencing the co-transcriptome and is not imparting a dominant global influence on the observed patterns.
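The covariate test above can be sketched as follows; this is an illustrative synthetic example (the study used linear models in R with an in planta germination eigengene), and the effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: lesion area depends on a virulence GCN eigengene and,
# partly, on germination speed, which itself correlates with the GCN.
n = 96
gcn = rng.normal(size=n)                      # GCN eigengene
germ = 0.6 * gcn + rng.normal(size=n)         # germination eigengene
lesion = 1.5 * gcn + 1.0 * germ + rng.normal(size=n)

def ols_slopes(X, y):
    """OLS fit; returns the coefficients after the intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_alone = ols_slopes(gcn[:, None], lesion)[0]
b_covar = ols_slopes(np.column_stack([gcn, germ]), lesion)[0]
print(f"GCN effect alone: {b_alone:.2f}; with germination covariate: {b_covar:.2f}")
```

The GCN coefficient shrinks once germination is included, mirroring a link that is weakened but not abolished by the covariate.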
Discussion
In recent decades, improvements in the understanding of the molecular basis of plant-pathogen dynamics have facilitated breeding strategies for disease resistance in a variety of crop species. However, breeding for disease resistance remains difficult for crops susceptible to pathogens that harbor diverse polygenic virulence strategies targeting multiple layers and components of the plant innate immune system. In this study, a co-transcriptomic approach was used to investigate the transcriptome profiles of both the fungal pathogen B. cinerea and the plant host Arabidopsis at an early infection stage. The results showed that the transcriptional virulence strategy employed by B. cinerea depends both on the fungal genotype and on the functional state of the host plant's immune system. A set of B. cinerea transcripts was identified whose early expression is associated with later lesion development. Furthermore, ten pathogen GCNs were found to mediate virulence in B. cinerea, including a potential specialized metabolic pathway for cyclic peptides (Figure 5-source data 1).
There are some potential limitations on the utility of cross-species GCNs. Predominantly, they are a correlational approach where links are made between host and pathogen transcriptional changes. While this leads to the development of new hypotheses, it will equally require future validation efforts to assess whether these are direct or indirect relationships. Additionally, the cross-species GCN approach as implemented in this work does not distinguish between host/pathogen cells that are directly interacting versus those that are mounting long-distance responses. An important future avenue will be to integrate cell-specific RNA sequencing approaches to better delineate the responses of host/pathogen cells that are directly interacting versus the long-distance responses. This would greatly increase the power to elucidate direct versus indirect effects in this system.
Further, the expression of these pathways displayed a large range of phenotypic variation across the isolates (Figure 6G and Figure 2-source data 1). However, the topology and membership of the GCNs for the three pathways are largely insensitive to variation in host immunity. This robustness suggests that these GCNs are somehow insulated from the host's immune response, possibly to protect toxin production from a host counter-attack. The co-transcriptome approach thus showed the ability to identify known and novel secondary metabolic pathways that mediate the plant host-fungal pathogen interaction.
Secondary metabolites may mediate plant and fungus transcriptomic interactions during infection
Importantly, the dual interaction networks provide hypotheses about how the pathogen GCNs responsible for fungal secondary metabolite production link to specific plant host GCNs (Figure 9 and Figure 9-figure supplement 1). Specifically, the co-transcriptome approach revealed that B. cinerea GCNs responsible for secondary metabolite production are associated with both plant immune responses and primary plant metabolism (Figure 9, Figure 9-figure supplement 1, Supplementary files 3 and 4). For example, in the WT Col-0 host genotype the BOT GCN shows a strong positive correlation with the Arabidopsis Defense/camalexin GCN, suggesting that BOT production may directly induce the host's defense system. Concurrently, the BOT GCN is negatively linked to the plant's PSI GCN, suggesting that BOT may repress the plant's photosynthetic potential. Critically, this relationship changes in the npr1-1 host genotype, with the BOT GCN now having a negative correlation to the Arabidopsis Defense/camalexin GCN. Further work is needed to test whether these host/pathogen GCN interactions are causal and how the SA pathway in the host may influence these interactions. Collectively, these results strongly implicate secondary metabolite biosynthesis in mediating the transcriptomic interaction between the plant host and the fungal pathogen.
…B. cinerea isolates in responding to Arabidopsis wild-type Col-0 immunity. The model-corrected means (log2) of transcripts were used for plotting. (D) Scatter plot illustrating the positive correlations between lesion area and accumulation of the BcBOT2 transcript across the 96 isolates in response to varied Arabidopsis immunities. Model-corrected lesion area means were estimated for three Arabidopsis genotypes at 72 hr post-infection with 96 B. cinerea isolates. The three Arabidopsis genotypes are labeled next to the confidence ellipse curves: wild-type Col-0 (purple dot), jasmonate-insensitive mutant coi1-1 (green triangle), and salicylic acid mutant npr1-1 (orange diamond). The 90% confidence ellipse intervals are plotted for each Arabidopsis genotype for reference. Linear regression lines: Col-0: y = 3.2532x + 4.4323, p=1.008e-10, adjusted R2 = 0.3537; coi1-1: y = 7.4802x + 10.3289, p=7.895e-15, adjusted R2 = 0.4700; npr1-1: y = 3.7086x + 7.3487, p=2.425e-11, adjusted R2 = 0.3726. (E) and (F) Bar plots compare expression variation of BcBOA6 in the botcinic acid (BOA) pathway and Bcin01g11460 in the cyclic peptide pathway across 96 B. cinerea isolates in response to Arabidopsis wild-type Col-0 immunity. (G) Venn diagram illustrating the number of B. cinerea isolates with the ability to induce BOT, BOA, and cyclic peptide. DOI: https://doi.org/10.7554/eLife.44279.017
Fungal virulence components correlated with plant immune response
In addition to secondary metabolite biosynthesis, the co-transcriptome identified a number of key virulence mechanisms that could be mapped to the two-species interaction. One key GCN is enriched for genes involved in exocytosis-associated regulation (Figure 5-Exocytosis regulation and Figure 5-source data 1). The exocytosis complex is responsible for delivery of secondary metabolites and proteins to the extracellular space and plasma membrane in fungi (Colombo et al., 2014; Rodrigues et al., 2015). Additionally, we found many B. cinerea genes associated with secretory vesicles within the membrane/vesicle virulence GCN that likely serve a similar function during infection (Figure 5-Vesicle/virulence and Figure 5-source data 1). These GCNs also provide support for the role of exocytosis-based spatial segregation of different materials during fungal hyphal growth in planta (Samuel et al., 2015). The dual interaction network suggests that the exocytosis regulation and membrane/vesicle virulence GCNs are differentially linked to the Arabidopsis Defense/camalexin GCN, indicating varied connections between fungal secretory pathways and plant immune responses (Figure 9 and Supplementary files 3 and 4). Another conserved GCN in the B. cinerea species is associated with copper uptake and transport (Figure 5-Copper transport, Figure 7-figure supplements 1, 2 and 3, and Figure 5-source data 1). Although copper is essential for B. cinerea penetration and redox status regulation within plant tissues, further work is required to decipher the precise molecular mechanisms involved in the acquisition and detoxification of copper. Thus, the co-transcriptome approach can identify both known and unknown mechanisms and links within the host-pathogen interaction.
Fungal virulence transcriptomic responses are partly shaped by host immunity
It is largely unknown how plant host immunity contributes to the transcriptomic behavior of the fungus during infection. Even less is known about the role of genetic variation in the pathogen in responding to, or coping with, the inputs coming from the host immune system. In the current study, we found that the host immune system's effect on pathogen transcripts and GCNs was largely via an interaction with the pathogen genotype (Figure 2 and Figure 7).
…under three Arabidopsis genotypes are compared. The three Arabidopsis genotypes are wild-type Col-0, the jasmonate-insensitive mutant coi1-1, and the salicylic acid mutant npr1-1. Nodes marked with red and green colors represent B. cinerea genes condensed in GCNs with different biological functions. The same node condensed in GCNs across the three Arabidopsis genotypes is marked with the same color. Nodes specifically condensed in GCNs under the two mutant backgrounds coi1-1 and npr1-1 are marked with orange color. Edges represent the Spearman's rank correlation coefficients between gene pairs. DOI: https://doi.org/10.7554/eLife.44279.018 The following figure supplements are available for figure 7:
Critically, the gene membership of these GCNs is largely stable across the collection of pathogen isolates, even while their expression level across the B. cinerea isolates is highly polymorphic (Figure 5-source data 1 and Figure 7-figure supplement 4). This suggests that natural variation in the host immunity and in the pathogen shapes how the co-transcriptome responds to the host's immune system. Further, the natural variation in the pathogen may be focused around these functional GCNs.
Plant disease development can be predicted by early transcriptome data
Plant disease development is an emergent phenomenon that results from a wide set of spatiotemporal biological processes encoded by two interacting species under a specific environment. In the current study, we used late-stage lesion area as a quantitative indicator of B. cinerea virulence. We have previously shown that the early Arabidopsis transcriptomic response can be linked to later lesion development (Zhang et al., 2017). Here, our findings suggest that the late-stage disease development of a B. cinerea infection is determined during the first few hours of infection by the interaction of plant immune and fungal virulence responses. It was possible to link early transcript accumulation to late disease development using solely the B. cinerea transcriptome (Figure 1 and Figure 3-source data 1). This could be done using either individual pathogen genes, GCNs, or more simply the total fraction of transcripts derived from the pathogen. As the transcriptomic data were from plant leaf tissue at only 16HPI, when there is not a significant amount of pathogen biomass, this is more likely an indicator of transcriptional activity in the pathogen during infection. These measures could potentially be developed into biomarkers for predicting fungal disease progression.
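As one hypothetical sketch of such a biomarker, the pathogen's share of mapped reads at 16HPI could be regressed against later lesion area; the numbers below are synthetic, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical biomarker: fraction of RNA-Seq reads mapping to the
# pathogen at 16HPI as a crude predictor of 72HPI lesion area.
n = 96
fungal_fraction = rng.uniform(0.001, 0.05, size=n)  # pathogen share of reads
lesion = 80 * fungal_fraction + rng.normal(scale=0.5, size=n)

# Simple linear fit, as a stand-in for the full statistical models.
slope, intercept = np.polyfit(fungal_fraction, lesion, 1)
r = np.corrcoef(fungal_fraction, lesion)[0, 1]
print(f"r = {r:.2f}, slope = {slope:.1f}")
```

A single scalar predictor like this is attractive for diagnostics precisely because it requires no gene-level annotation of the pathogen.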
Conclusion
The co-transcriptome analysis of a B. cinerea population infecting Arabidopsis identified a number of B. cinerea GCNs that contain a variety of virulence-associated gene modules with different biological functions. The characterization of these GCNs simultaneously identified mechanisms known to enhance B. cinerea virulence and implicated several novel mechanisms not previously described in the Arabidopsis-B. cinerea pathosystem. In addition, the plant-fungus co-transcriptome network revealed potential interactions between fungal pathogen and plant host GCNs. Construction of GCNs within single species, CKGCNs, and dual networks sheds light on the biological mechanisms driving quantitative pathogen virulence in B. cinerea and their potential targets in the plant innate immune system.
Figure 8 continued
Within each connectivity plot, orange and green nodes show transcripts from B. cinerea and Arabidopsis, respectively. Nodes with red and violet colors represent the B. cinerea transcripts that were found to be members of the B. cinerea membrane/vesicle virulence network and BOT network, respectively. Node size shows the number of interactions with a specific gene. The connectivity between the nodes was derived using Spearman's rank correlation analysis. DOI: https://doi.org/10.7554/eLife.44279.023 The following source data and figure supplement are available for figure 8:
Materials and methods
Collection and maintenance of B. cinerea isolates
A collection of 96 B. cinerea isolates was selected for this study based on their phenotypic and genotypic diversity (Denby et al., 2004; Rowe and Kliebenstein, 2007; Corwin et al., 2016a; Zhang et al., 2016; Zhang et al., 2017). This B. cinerea collection was sampled from a large variety of different host origins and contained a set of international isolates obtained from labs across the world, including the well-studied B05.10 isolate. The majority are natural isolates collected in California that can infect a wide range of crops. Isolates are maintained as spores in 20% glycerol in -80°C freezer stocks and were grown on fresh potato dextrose agar (PDA) for 10 days prior to infection.
Figure 9. A dual interaction network reveals links between Arabidopsis immunity and B. cinerea virulence. A dual interaction network was constructed using gene co-expression networks (GCNs) from the Arabidopsis and B. cinerea co-transcriptome. The first eigenvectors were derived from individual GCNs and used as input to calculate Spearman's rank correlation coefficients between GCN pairs. Green dots and orange triangles represent Arabidopsis immune- and B. cinerea virulence-GCNs, respectively. Blue and red lines (edges) represent the positive and negative Spearman's rank correlation coefficients between GCN pairs, respectively. The thickness of each line signifies the correlation strength. DOI: https://doi.org/10.7554/eLife.44279.026 The following figure supplements are available for figure 9:
Plant materials and growth conditions
The Arabidopsis accession Columbia-0 (Col-0) was the wild-type background of all Arabidopsis mutants used in this study. The three Arabidopsis genotypes used in this study included the WT and two well-characterized immunodeficient mutants, coi1-1 and npr1-1, which abolish the major JA- or SA-defense perception pathways, respectively (Cao et al., 1997; Xie et al., 1998; Xu, 2002; Pieterse and Van Loon, 2004). All plants were grown as described previously (Zhang et al., 2017). Two independent randomized complete block-designed experiments were conducted, and a total of 90 plants per genotype were grown in 30 flats for each experiment. Approximately 5 to 6 fully developed leaves were harvested from the five-week-old plants and placed on 1% phytoagar in large plastic flats prior to B. cinerea infection.
Inoculation and sampling
We infected all 96 isolates onto each of the three Arabidopsis genotypes in a random design with 6-fold replication across the two independent experiments. A total of twelve infected leaves per isolate/genotype pair were generated. For inoculation, all B. cinerea isolates were cultured and inoculated onto the three Arabidopsis genotypes as described previously (Denby et al., 2004; Corwin et al., 2016b; Zhang et al., 2017). Briefly, frozen glycerol stocks of isolate spores were first used for inoculation onto a few slices of canned peaches in petri plates. Spores were collected from one-week-old sporulating peach slices. The spore solution was filtered and the spore pellet was re-suspended in sterilized 0.5x organic grape juice (Santa Cruz Organics, Pescadero, CA). Spore concentrations were determined using a hemacytometer and suspensions were diluted to 10spores/mL. Detached leaf assays were used for a high-throughput analysis of B. cinerea infection, which has been shown to be consistent with whole-plant assays (Govrin and Levine, 2000; Mengiste et al., 2003; Denby et al., 2004; Sharma et al., 2005; Windram et al., 2012). Leaves of five-week-old plants were inoculated with 4 µL of the spore solution. The infected leaf tissues were incubated on 1% phytoagar flats with a humidity dome at room temperature. The inoculation was conducted in a randomized complete block design across the six planting blocks. All inoculations were conducted within one hour of dawn and the light period of the leaves was maintained. Two blocks were harvested at 16HPI for RNA-Seq analysis. The remaining four blocks were incubated at room temperature until 72HPI, when they were digitally imaged for lesion size and harvested for chemical analysis as described previously (Zhang et al., 2017).
RNA-Seq library preparation, Sequencing, Mapping and Statistical Analysis
B. cinerea-infected leaf tissues from two of the six blocks were sampled at 16HPI for transcriptome analysis, resulting in a total of 1,052 mRNA libraries for Illumina HiSeq sequencing. RNA-Seq libraries were prepared according to a previous method (Kumar et al., 2012) with minor modifications (Zhang et al., 2017). Briefly, infected leaves were immediately frozen in liquid nitrogen and stored at -80°C until processing. RNA extraction was conducted by re-freezing samples in liquid nitrogen and homogenizing by rapid agitation in a bead beater, followed by direct mRNA isolation using the Dynabeads oligo-dT kit. First and second strand cDNA was produced from the mRNA using an Invitrogen Superscript III kit. The resulting cDNA was fragmented, end-repaired, A-tailed and barcoded as previously described. Adapter-ligated fragments were enriched by PCR and size-selected for a mean of 300 base pairs (bp) prior to sequencing. Barcoded libraries were pooled in batches of 96 and submitted for single-end, 50 bp sequencing on a single lane per pool using the Illumina HiSeq 2500 platform at the UC Davis Genome Center (DNA Technologies Core, Davis, CA). All statistical analyses were conducted within R (R Development Core Team, 2014).
Transcriptomic data analysis
Fastq files from individual HiSeq lanes were separated by adapter index into individual RNA-Seq library samples. The quality of individual libraries was estimated for overall read quality and overrepresented sequences using FastQC software (Version 0.11.3, www.bioinformatics.babraham.ac.uk/projects/). We conducted downstream bioinformatic analyses, such as read mapping, normalization and nbGLM model analysis, using a custom script from the Octopus R package (https://github.com/WeiZhang317/octopus; Zhang, 2018; copy archived at https://github.com/elifesciences-publications/octopus). The mapping of processed reads against the Arabidopsis and B. cinerea reference genomes was conducted with Bowtie 1 (v1.1.2, http://sourceforge.net/projects/bowtie-bio/files/bowtie/1.1.2/) using a minimum phred33 quality score (Langmead et al., 2009). The first 10 bp of each read were trimmed to remove low-quality bases using the fastx toolkit (http://hannonlab.cshl.edu/fastx_toolkit/commandline.html). Total reads for each library were first mapped against the Arabidopsis TAIR10.25 cDNA reference genome. The remaining unmapped reads were then aligned against the B. cinerea B05.10 isolate cDNA reference genome (Lamesch et al., 2010; Lamesch et al., 2012; Krishnakumar et al., 2015; Van Kan et al., 2017) and the gene counts for both species were pulled from the resulting SAM files (Li et al., 2009).
For pathogen gene expression analysis, we first filtered genes, keeping those with either more than 30 gene counts in one isolate or more than 300 gene counts across the 96 isolates. We normalized the B. cinerea gene count data set using the trimmed mean of M-values method (TMM) from the EdgeR package (V3.12) (Robinson and Smyth, 2008; Bullard et al., 2010; Robinson and Oshlack, 2010). We then ran the following generalized linear model (GLM) with a negative binomial link function from the MASS package for all transcripts using the following equation (Venables and Ripley, 2002): where the main categorical effects E, I, and H are denoted as experiment, isolate genotype, and plant host genotype, respectively. Nested effects of the growing flat (Gf) within the experimental replicates and of the agar flat (Af) nested within the growing flat are also accounted for within the model. Model-corrected means and standard errors for each transcript were determined for each isolate/plant genotype pair using the lsmeans package (Lenth, 2016). Raw P-values for F- and Chi-square tests were determined using Type II sums of squares with the car package (Fox and Weisberg, 2011). P-values were corrected for multiple testing using a false discovery rate correction (Yoav and Daniel, 2001). Broad-sense heritability (H2) of individual transcripts was estimated as the proportion of variance attributed to B. cinerea genotype, Arabidopsis genotype, or their interaction effects.
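The count-filtering rule can be sketched as follows; the later modeling used TMM normalization and a negative binomial GLM in R, so only the filtering step is illustrated here, on a synthetic count matrix.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sketch of the filter described above: keep a gene if it has more than
# 30 counts in at least one isolate OR more than 300 counts summed
# across all 96 isolates (synthetic counts, not the study's data).
counts = rng.poisson(2, size=(96, 500))           # mostly low-count genes
counts[:, :50] = rng.poisson(40, size=(96, 50))   # 50 well-expressed genes

keep = (counts.max(axis=0) > 30) | (counts.sum(axis=0) > 300)
filtered = counts[:, keep]
print(filtered.shape)  # only the 50 well-expressed genes pass
```

The OR between the per-isolate and total-count criteria retains genes that are expressed strongly in only a few isolates, which matters for a collection with natural expression knockouts.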
Gene Ontology analysis
GO analysis was conducted for several B. cinerea gene sets that were identified as having high heritability, correlating with lesion size, or condensing in the network analysis. We first converted the sequences of these B. cinerea genes into fasta files using the Biostrings and seqRFLP packages in R (Qiong and Jinlong, 2012; Pages et al., 2017). The functional annotation of genes was obtained by blasting the sequences against the NCBI database using Blast2GO to obtain putative GO annotations (Conesa et al., 2005; Götz et al., 2008). The GO terms from the official GO annotation in the B. cinerea database (http://fungi.ensembl.org/Botrytis_cinerea/Info/Index) were compared with those obtained by the Blast2GO analysis. The official gene annotations for host genes were retrieved from TAIR10.25 (https://apps.araport.org/thalemine/bag.do?subtab=upload).
B. cinerea Gene Co-expression Network Construction
To obtain a representative subset of B. cinerea genes co-expressed under in planta conditions, we generated gene co-expression networks (GCNs) among genes in the B. cinerea transcriptome. GCNs were generated using the model-corrected means of 9,284 B. cinerea transcripts from individual isolate infections across the three Arabidopsis genotypes. Only genes with average or median expression greater than zero across all samples were considered. This preselection process kept 6,372 genes, and those with negative expression values were adjusted to an expression of zero before network construction. Spearman's rank correlation coefficients for each gene pair were calculated using the cor function in R. Three gene-for-gene correlation similarity matrices were generated independently, one for each of the three Arabidopsis genotypes. Considering that the cutoff for gene-pair correlation usually biases the GCN structure and the candidate gene hits, we evaluated several cutoff thresholds (0.75, 0.8, 0.85, and 0.9) to filter the gene set. Comparing the structure and content of the GCNs built from each filtered gene set, we selected a correlation threshold of 0.8. A total of 600, 700 and 494 B. cinerea candidate genes passed this criterion under the Arabidopsis WT and the mutants coi1-1 and npr1-1, respectively. To obtain a representative subset of B. cinerea gene candidates across the three host genotypes, we selected the gene candidates present in all three of the above gene subsets. This process generated a gene set of 323 B. cinerea candidate genes that were common to each of the plant genotype backgrounds and had at least one correlation of 0.8 or greater. Using this gene set as a kernel, we extended the gene candidate sets under each Arabidopsis genotype. The expanded B. cinerea gene candidate set under each Arabidopsis genotype was then used as input for gene co-expression network construction.
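The core similarity-and-threshold step can be sketched as below; this is a simplified stand-in on synthetic data (the real pipeline used the model-corrected means and the kernel/extension steps described above).

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified GCN construction: Spearman similarity among pathogen
# transcripts across 96 infections, hard-thresholded at 0.8.
n_samples, n_genes = 96, 6
latent = rng.normal(size=n_samples)
expr = rng.normal(size=(n_samples, n_genes))
expr[:, :3] += 4 * latent[:, None]   # genes 0-2 form a co-expressed module

# Spearman = Pearson on the within-gene ranks (continuous data, no ties).
ranks = np.argsort(np.argsort(expr, axis=0), axis=0).astype(float)
rho = np.corrcoef(ranks, rowvar=False)

# Adjacency: gene pairs whose |rho| clears the cutoff (self-edges removed).
adjacency = (np.abs(rho) >= 0.8) & ~np.eye(n_genes, dtype=bool)
degree = adjacency.sum(axis=0)
print(degree)  # the three module genes each gain two edges
```

The degree vector already separates module members from background genes, which is the same information used later to call hub genes.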
GCNs were visualized using Cytoscape V3.2.1 (Java version 1.8.0_60) (Shannon et al., 2003). The nodes and edges within each network represent the B. cinerea genes and the Spearman's rank correlations between each gene pair. The importance of a given node within each network was determined by common network analysis indices, such as connectivity (degree) and betweenness. Nodes with higher connectivity and betweenness were considered hub and bottleneck genes, respectively, and the biological functions of each network were determined from the GO terms of the hub and bottleneck genes using Blast2GO.
Cross-kingdom Arabidopsis-B. cinerea Gene Co-expression Network construction
We used the model-corrected means of transcripts from the three Arabidopsis host genotypes and 96 B. cinerea isolates to construct the cross-kingdom Arabidopsis-B. cinerea GCNs. Model-corrected means of 23,959 Arabidopsis transcripts and 6,372 B. cinerea transcripts derived from two negative binomial-linked generalized linear models served as input data sets (Zhang et al., 2017). Spearman's rank correlation coefficients were calculated between genes from the Arabidopsis and B. cinerea data sets. Gene pairs with positive correlations greater than 0.74 under each Arabidopsis genotype were used to construct the cross-kingdom GCNs.
Dual interaction network construction
To construct a cross-kingdom, dual interaction network of plant-pathogen GCNs, we performed principal component analysis on the individual GCNs within each species to obtain eigengene vectors describing the expression of the entire gene network, as previously described (Zhang and Horvath, 2005; Langfelder and Horvath, 2008; Okada et al., 2016). From these eigengene vectors, we calculated the Spearman's rank correlation coefficient between the first eigengene vectors for each pair of networks. The resulting similarity matrices were used as input to construct the interaction network, and Cytoscape was used to visualize the resulting network.
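This eigengene-correlation step might look like the following toy sketch: two synthetic GCNs (one per species) sharing a latent infection signal; the network labels in the comments are purely for orientation and are not derived from the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)

def eigengene(expr):
    """First principal component scores of a (samples x genes) block."""
    centered = expr - expr.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

n = 96
signal = rng.normal(size=n)
host_gcn = signal[:, None] + 0.3 * rng.normal(size=(n, 8))     # 'Defense/camalexin'
fungal_gcn = signal[:, None] + 0.3 * rng.normal(size=(n, 12))  # 'BOT'

# One (signed) edge of the dual network: Spearman correlation between
# the two networks' first eigengenes.
e_host, e_fungus = eigengene(host_gcn), eigengene(fungal_gcn)
rx = np.argsort(np.argsort(e_host)).astype(float)
ry = np.argsort(np.argsort(e_fungus)).astype(float)
rho = np.corrcoef(rx, ry)[0, 1]
print(f"|rho| = {abs(rho):.2f}")
```

Because eigengene signs are arbitrary, it is the magnitude of rho together with a consistently oriented sign convention that defines the blue/red edges in the dual network figure.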
Statistical analysis of network components
All analyses were conducted in the R V3.2.1 statistical environment (R Core Team, 2014). To investigate how secondary metabolite induction in B. cinerea contributes to disease development, we conducted a multi-factor ANOVA testing the effects of the three B. cinerea secondary metabolic pathways and the host genotype. The three secondary metabolic pathways comprised the biosynthetic pathways of two well-known secondary metabolites, BOT and BOA, and the cyclic peptide biosynthetic pathway predicted in this study. We calculated z-scores for all transcripts involved in the BOT pathway, the BOA pathway, and the putative cyclic peptide pathway for each isolate/plant genotype pair. The multi-factor ANOVA model for lesion area was: where T, A, C, and G_h stand for BOT, BOA, cyclic peptide, and host genotype, respectively.
In addition, we used multi-factor ANOVA models to investigate interactions between GCNs within species and their impacts across host genotypes. The ANOVA models contain all GCNs within a species; the first eigengene vector derived from principal component analysis on each network was used in the models. The ANOVA model for individual B. cinerea GCNs was

BcNet_i ~ D + P + C + PSI + G_h,

where D, P, C, PSI, and G_h stand for the Arabidopsis Defense/Camalexin GCN, Arabidopsis Plastid GCN, Arabidopsis Cell/Division GCN, Arabidopsis PSI GCN, and host genotype, respectively, and BcNet_i represents one of the ten B. cinerea GCNs identified in this study. The ANOVA model for individual Arabidopsis GCNs was

AtNet_i ~ BcNet_1 + ... + BcNet_10 + G_h,

where the BcNet terms are the ten B. cinerea GCNs identified in this study (BcVesicle/Viru GCN, BcTSL/Growth GCN, BcBOA GCN, BcExocytoRegu GCN, BcCycPep GCN, BcCopperTran GCN, BcBOT GCN, BcPeptidase GCN, BcIPP GCN, BcPolyketide GCN), G_h stands for host genotype, and AtNet_i stands for one of the four Arabidopsis GCNs (AtDefense/Camalexin GCN, AtPlastid GCN, AtCell/Division GCN, AtPSI GCN). Interactions among the terms were not tested to avoid the potential for overfitting. All multi-factor ANOVA models were optimized by trimming to just the terms with a significant P-value (P < 0.05).
Germination assay
To assess the potential for natural variation in germination time in the isolate collection, 19 B. cinerea isolates were investigated by germination assay. The isolates were grown on PDA. Mature spores were collected in water, filtered, resuspended in 50% grape juice as previously described, and further diluted to 1000 spores/mL. To prevent germination before the beginning of the assay, spores were continuously kept on ice or in the fridge at 4˚C. During the germination assay, the spores were maintained at 21˚C in 1.5 mL tubes. Every hour, the tubes were mixed by manual inversion and sampled: 25 µL were transferred to microscope slides. The spores within the drops were allowed to settle briefly. Without using slide covers, the spores were observed within the drops at two locations, used as technical replicates. The spores were categorized and counted from pictures taken at each hourly microscope observation from 2 to 11 hr. Germination was defined as hyphae emerging out of the spore.
To assess the contribution of germination to the observed B. cinerea transcriptomic networks involved in lesion development, we generated germination estimates based on gene expression by extracting the first principal component of a publicly available time series microarray data set including 101 germination-associated genes (Leroch et al., 2013). Based on this principal component, we predicted the level of expression of germination-associated genes for the 96 isolates on the three Arabidopsis genotypes at 16HPI. These germination predictions for individual isolates were used in a linear ANOVA model to estimate the co-linearity of the germination eigengene vector to virulence. Using linear ANOVA models with and without this germination eigengene vector, we compared how germination influences the 10 B. cinerea transcriptomic networks involved in lesion area in the three host genotypes. The ANOVA models with and without the germination eigengene vector were

Lesion ~ Germination + BcNet_1 + ... + BcNet_10 + G_h and Lesion ~ BcNet_1 + ... + BcNet_10 + G_h,

where Germination represents the scores of the first principal component on expression of germination-associated genes from the B. cinerea transcriptomic data in this study, the BcNet terms are the ten B. cinerea GCNs identified in this study (BcVesicle/Viru GCN, BcTSL/Growth GCN, BcBOA GCN, BcExocytoRegu GCN, BcCycPep GCN, BcCopperTran GCN, BcBOT GCN, BcPeptidase GCN, BcIPP GCN, BcPolyketide GCN), and G_h stands for host genotype. Interactions among the terms were not tested to avoid the potential for overfitting.

Model-corrected means of transcripts from 96 B. cinerea isolates were z-scaled and used in ANOVA. Df is the degrees of freedom for a term within the model. SS is the Sum of Squares variation. MS is the Mean of Squared variation. F value is derived from the F statistic and P-value indicates the statistical significance for a given term within the model. Significance of differences is shown as p<0.001 '***', 0.01 '**' and 0.05 '*'. DOI: https://doi.org/10.7554/eLife.44279.030
Supplementary file 3. ANOVA tables of B. cinerea gene co-expression networks. Mixed linear models were fitted to individual B. cinerea (Bc) gene co-expression networks (GCNs) with host genotypes and Arabidopsis (At) GCNs as explanatory terms. Variation was estimated among host genotypes and the first eigenvectors from the four individual Arabidopsis GCNs. Df, SS, MS, F value and P-value are as above. Significance of differences is shown as p<0.001 '***', 0.01 '**' and 0.05 '*'.

Supplementary file 4. ANOVA tables of Arabidopsis gene co-expression networks. Linear mixed models were fitted to individual Arabidopsis (At) gene co-expression networks (GCNs) with host genotypes and the ten B. cinerea (Bc) GCNs as explanatory terms. Variation was estimated among host genotypes and the first eigenvectors from individual B. cinerea GCNs. Df, SS, MS, F value and P-value are as above. Significance of differences is shown as p<0.001 '***', 0.01 '**' and 0.05 '*'.

Supplementary file 6. Analysis of potential impact of germination variation. To test whether germination may influence the observed network-to-lesion connections, we estimated germination using the first principal component of genes linked to germination in Leroch et al. (2013). We then estimated the value of this principal component for the isolates grown on the three host genotypes and conducted a linear model to compare how this eigengene links to virulence. Using linear models with and without this germination eigengene vector, we compared the link of the 10 B. cinerea transcript networks to lesion size.
The following previously published datasets were used:
"year": 2019,
"sha1": "903d18c636c47856dcaccb4e388ed38b4ff794c3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.44279",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "903d18c636c47856dcaccb4e388ed38b4ff794c3",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Variational f-divergence Minimization
Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution. In light of recent successes in training Generative Adversarial Networks, alternative non-likelihood training criteria have been proposed. Whilst not necessarily statistically efficient, these alternatives may better match user requirements such as sharp image generation. A general variational method for training probabilistic latent variable models using maximum likelihood is well established; however, how to train latent variable models using other f-divergences is comparatively unknown. We discuss a variational approach that, when combined with the recently introduced Spread Divergence, can be applied to train a large class of latent variable models using any f-divergence.
Introduction
Probabilistic modelling generally deals with the task of fitting a model p_θ(x), parameterized by θ, to a given distribution p(x). To fit the model we often wish to minimize some measure of difference between p_θ(x) and p(x). A popular choice is the class of f-divergences (see for example (Sason & Verdú, 2015)) which, for two distributions p(x) and q(x), is defined by

D_f(p(x)||q(x)) = ∫ q(x) f(p(x)/q(x)) dx,  (1)

where f is a convex function with f(1) = 0.
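For concreteness, the definition above can be evaluated directly for discrete distributions. The following is a minimal sketch (the function names and example distributions are illustrative, not from the paper):

```python
import numpy as np

# f-divergence D_f(p||q) = sum_x q(x) f(p(x)/q(x)) for discrete
# distributions, where f is convex with f(1) = 0.
def f_divergence(p, q, f):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

kl_forward = lambda u: u * np.log(u)   # gives KL(p||q)
kl_reverse = lambda u: -np.log(u)      # gives KL(q||p)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# f(1) = 0 guarantees zero divergence for identical distributions.
assert abs(f_divergence(p, p, kl_forward)) < 1e-12
```

With f(u) = u log u the sum reduces to Σ_x p(x) log(p(x)/q(x)), i.e. the forward KL, and with f(u) = −log u it reduces to the reverse KL.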
Many of the standard divergences correspond to simple choices of the function f , see table 1. The divergence D f (p(x)||q(x)) is zero if and only if p(x) = q(x). However, for a constrained model p θ (x) fitted to a distribution p(x) by minimizing D f (p θ (x)||p(x)), the resulting optimal θ can be heavily dependent on the choice of the divergence function f (Minka, 2005).
Whilst there is significant recent interest in using fdivergences to train complex probabilistic models (Nowozin et al., 2016), the f -divergence is generally computationally intractable for such complex models. We consider an upper bound on the f -divergence and show how this bound can be readily applied to training generative models.
Maximum likelihood and forward KL
For data x_1, ..., x_N drawn independently and identically from some empirical distribution p̂(x), the forward KL between an approximating distribution p_θ(x) and p̂(x) is

KL(p̂(x)||p_θ(x)) = −Σ_{n=1}^{N} log p_θ(x_n) + const.  (2)

Minimizing KL(p̂(x)||p_θ(x)) w.r.t. θ is therefore equivalent to maximizing the likelihood of the data. Given the asymptotic guarantees of the efficiency of maximum likelihood (Wolfowitz, 1965), the forward KL is a standard divergence used in statistics and machine learning.
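This equivalence between forward KL minimization and maximum likelihood can be checked numerically, since the entropy of the empirical distribution does not depend on θ. A small sketch (data, grid, and tolerances are arbitrary illustrative choices):

```python
import numpy as np

# Minimizing KL(p_hat || p_theta) over theta is maximizing average
# log-likelihood: the KL differs from the NLL only by a theta-independent
# constant (the empirical entropy).
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)

def avg_nll(mu, sigma):
    # negative average log N(x | mu, sigma^2)
    return float(np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                         + (data - mu)**2 / (2 * sigma**2)))

mus = np.linspace(0, 4, 401)
best_mu = mus[np.argmin([avg_nll(m, 1.0) for m in mus])]
# The NLL minimizer matches the sample mean, the MLE for mu.
assert abs(best_mu - data.mean()) < 0.02
```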
For latent variable models p_θ(x) = ∫ p_θ(x|z)p(z) dz, the likelihood objective is usually intractable and it is standard to use the variational evidence lower bound (ELBO)

L(θ, φ) = E_{q_φ(z|x)}[ log p_θ(x|z) ] − KL(q_φ(z|x)||p(z)),  (3)

where the so-called variational distribution q_φ(z|x) is chosen such that the bound (and its gradient) is either computationally tractable or can be readily estimated by sampling (Kingma & Welling, 2013). The parameters φ of the variational distribution q_φ(z|x) and parameters θ of the model p_θ(x) are jointly optimized to increase the log-likelihood lower bound L(θ, φ). This lower bound on the likelihood corresponds to an upper bound on the forward divergence KL(p̂(x)||p_θ(x)).

Table 1. Some standard f-divergences. Here p(x) is given and q(x) is the model. From Nowozin et al. (2016).
Jensen-Shannon: D_f = (1/2) ∫ [ p(x) log(2p(x)/(p(x)+q(x))) + q(x) log(2q(x)/(p(x)+q(x))) ] dx,  f(u) = −(u+1) log((1+u)/2) + u log u
GAN: D_f = ∫ [ p(x) log(2p(x)/(p(x)+q(x))) + q(x) log(2q(x)/(p(x)+q(x))) ] dx,  f(u) = u log u − (u+1) log(1+u)
Forward versus reverse KL
It is interesting to compare models trained by the forward and reverse KL divergences. For example, when p_θ is Gaussian with parameters θ = {μ, σ²}, minimizing the forward KL gives

KL(p(x)||p_θ(x)) = −∫ p(x) log N(x | μ, σ²) dx + const.,

so that the optimal setting is for μ to be the mean of p(x) and σ² the variance. For an "under-powered" model p_θ(x) (a model which is not rich enough to have a small divergence) and multi-modal p(x), this can result in p_θ(x) placing significant mass on low probability regions of p(x). This is the so-called "mean matching" behavior of KL(p(x)||p_θ(x)) that has been suggested as a possible explanation for the poor fidelity of images generated by models p_θ(x) trained by forward KL minimization (Goodfellow, 2016). Conversely, when using the reverse KL objective, KL(p_θ(x)||p(x)), for a Gaussian p_θ(x) and multi-modal p(x) with well separated modes, optimally μ and σ² fit one of the local modes. This behavior is illustrated in figure 1 and is the so-called "mode matching" behavior of KL(p_θ(x)||p(x)). For this reason, the reverse KL objective has been suggested to be more useful than the forward KL divergence when high quality samples are preferable to coverage of the dataset.
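The mean-matching versus mode-matching behavior described above can be reproduced on a discretised 1D example. A sketch with an illustrative bimodal target (all constants are arbitrary):

```python
import numpy as np

# Fit N(mu, 1) to a bimodal target by forward vs reverse KL on a grid.
# Forward KL picks the overall mean ("mean matching"); reverse KL locks
# onto one mode ("mode matching").
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
normal = lambda m, s: np.exp(-(x - m)**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

p = 0.5 * normal(-3, 0.5) + 0.5 * normal(3, 0.5)   # bimodal target

def fkl(mu):   # KL(p || q_mu), forward KL
    q = normal(mu, 1.0)
    return np.sum(p * np.log(p / q)) * dx

def rkl(mu):   # KL(q_mu || p), reverse KL
    q = normal(mu, 1.0)
    return np.sum(q * np.log(q / p)) * dx

mus = np.linspace(-5, 5, 201)
mu_f = mus[np.argmin([fkl(m) for m in mus])]
mu_r = mus[np.argmin([rkl(m) for m in mus])]
assert abs(mu_f) < 0.1            # forward KL: mean of the mixture
assert abs(abs(mu_r) - 3) < 0.3   # reverse KL: one of the two modes
```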
This highlights the potentially significant difference in the resulting model that is fitted to the data, depending on the choice of divergence (Minka, 2005). In this sense, it is of interest to explore fitting generative models p θ (x) to a data distributionp(x) using f -divergences other than the forward KL divergence (maximum likelihood).
Optimizing f-divergences
Whilst the above upper bound (3) on the forward divergence KL(p̂(x)||p_θ(x)) is well known, an upper bound on other f-divergences (e.g. the reverse KL divergence) appears much less familiar (Sason & Verdú, 2015) and we are unaware of any upper bound on general f-divergences that has been used within the machine learning community.
Recently a lower bound on the f -divergence was introduced in (Nowozin et al., 2016;Nguyen et al., 2010) by the use of the Fenchel conjugate. The resulting training algorithm is a form of minimax in which the parameters φ that tighten the bound are adjusted so as to push up the bound towards the true divergence, whilst the model parameters θ are adjusted to lower the bound. Nowozin et al. (2016) were then able to relate the Generative Adversarial Network (GAN) (Goodfellow, 2016) training algorithm to the Fenchel conjugate lower bound on a corresponding f -divergence, see table 1. However, if the interest is purely on minimizing an f -divergence, it is arguably preferable to have an upper bound on the divergence since then standard optimization methods can be applied, resulting in a stable optimization procedure, see figure 2.
The f-divergence upper bound
Following the data processing inequality (see for example (van Erven & Harremoës, 2012)), we obtain an upper bound

D_f(p(x)||q(x)) ≤ D_f(p(x,z)||q(x,z)),

where p(x,z) is a distribution with marginal ∫ p(x,z) dz = p(x) and similarly ∫ q(x,z) dz = q(x). The bound corresponds to a generalization of the auxiliary variational method (Agakov & Barber, 2004) and can be readily verified using Jensen's inequality:

D_f(p(x,z)||q(x,z)) = ∫ q(x) ∫ q(z|x) f( p(x,z)/q(x,z) ) dz dx ≥ ∫ q(x) f( ∫ q(z|x) p(x,z)/q(x,z) dz ) dx = D_f(p(x)||q(x)).

Additional properties of the auxiliary f-divergence are given in section A of the supplementary material. We show that the bound is tight and reduces to D_f(p(x)||q(x)) when performing a full unconstrained minimization of the bound with respect to p(z|x) (keeping q(x,z) fixed).

Figure 2. Upper and lower bounds on the divergence D_f(p_θ(x)||p(x)). In our upper bound, both the model parameters θ and bound parameters φ are adjusted to push down the upper bound, thereby driving down the divergence. In the Fenchel-conjugate approach (Nowozin et al., 2016), the lower bound is made tighter by adjusting the bound parameters φ to push up the bound towards the true divergence, which is then minimized with respect to the model parameters θ.
For a latent variable model p_θ(x) = ∫ p_θ(x|z)p(z) dz, even if the f-divergence between the model and data distribution, D_f(p_θ(x)||p̂(x)), is not computationally tractable, we may form an upper bound

D_f(p_θ(x)||p̂(x)) ≤ D_f( p_θ(x|z)p(z) || q_φ(z|x)p̂(x) ).

Provided we choose the quantities in the upper bound appropriately, we can accurately estimate gradients of the bound in order to learn the model parameters θ.
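The inequality underpinning this bound is the data processing inequality for f-divergences, which can be verified numerically on small discrete joints. A sketch using random joints and the forward KL as the choice of f:

```python
import numpy as np

# Numerical check of D_f(p(x)||q(x)) <= D_f(p(x,z)||q(x,z)) for random
# discrete joint distributions (rows index x, columns index z).
rng = np.random.default_rng(1)

def f_div(p, q, f):
    return float(np.sum(q * f(p / q)))

f_kl = lambda u: u * np.log(u)   # forward KL

for _ in range(100):
    pxz = rng.random((4, 3)); pxz /= pxz.sum()
    qxz = rng.random((4, 3)); qxz /= qxz.sum()
    joint = f_div(pxz, qxz, f_kl)
    marginal = f_div(pxz.sum(axis=1), qxz.sum(axis=1), f_kl)
    # Marginalizing (a deterministic processing step) cannot increase D_f.
    assert marginal <= joint + 1e-12
```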
The auxiliary variational method (Agakov & Barber, 2004) and the ELBO (Kingma & Welling, 2013) are special cases of this objective (see supplementary material). (Similar to standard treatments in variational inference (see for example Kingma & Welling (2013); Rezende et al. (2014)), the variational distribution q_φ is only being used to tighten the resulting bound and is not a component of the generative model.) Whilst the bound holds for any f-divergence, there can be issues in directly applying it. For example, the corresponding reverse KL bound is

KL(p_θ(x)||p̂(x)) ≤ ∫ p_θ(x|z)p(z) log [ p_θ(x|z)p(z) / ( q_φ(z|x)p̂(x) ) ] dx dz.

For any given z, the empirical distribution p̂(x) has support only on the training data x_n, n = 1, ..., N, whereas the model p_θ(x|z) will typically have larger support. This means that ∫ p_θ(x|z) log p̂(x) dx is ill-defined and the KL divergence (and its gradient) cannot be determined. More generally, calculating the f-divergence D_f(p(x)||q(x)) can be problematic when the supports of p and q are different. To address this and extend the class of latent models and f-divergences for which the upper bound can be applied, we make use of the recently introduced Spread Divergence (Barber et al., 2018), giving a brief outline below.
Spread f-divergence
For q(x) and p(x) which have disjoint supports, we define new distributions q̃(y) and p̃(y) using

p̃(y) = ∫ p(y|x)p(x) dx,   q̃(y) = ∫ p(y|x)q(x) dx,

where p(y|x) is a "noise" process designed such that p̃(y) and q̃(y) have the same support. For example, if we use a Gaussian p(y|x) = N(y | x, σ²), then p̃(y) and q̃(y) both have support R.
We thus define the spread f-divergence

D̃_f(p(x)||q(x)) ≡ D_f(p̃(y)||q̃(y)).

This satisfies the requirements of a divergence, that is D̃_f(p(x)||q(x)) ≥ 0 and D̃_f(p(x)||q(x)) = 0 if and only if q(x) = p(x).
The auxiliary upper bound can be easily applied to the spread f-divergence. For example, applying it to the reverse KL objective gives

KL(p̃_θ(y)||p̂(y)) ≤ ∫ p_θ(y|z)p(z) log [ p_θ(y|z)p(z) / ( q_φ(z|y)p̂(y) ) ] dy dz.  (9)

In this case, the spreaded empirical distribution for Gaussian p(y|x) is a mixture of Gaussians

p̂(y) = (1/N) Σ_{n=1}^{N} N(y | x_n, σ²),

which has support R. Since the spreaded model also has support R, the spread reverse KL divergence K̃L(p_θ(x)||p̂(x)) is well defined.
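The spreaded empirical distribution can be evaluated in a numerically stable way via log-sum-exp. A minimal sketch (function name and data are illustrative) showing that log p̂(y) is finite everywhere, unlike for the raw empirical distribution of delta functions:

```python
import numpy as np

def log_spread_empirical(y, data, sigma):
    # log p_hat(y) = log (1/N) sum_n N(y | x_n, sigma^2),
    # computed stably by subtracting the max exponent (log-sum-exp).
    sq = -(y - data)**2 / (2 * sigma**2)
    m = sq.max()
    return m + np.log(np.mean(np.exp(sq - m))) - 0.5 * np.log(2 * np.pi * sigma**2)

data = np.array([-1.0, 0.0, 2.0])   # three training points (deltas before spreading)
for y in [-5.0, 0.3, 10.0]:
    # Full support: the log-density is finite even far from the data.
    assert np.isfinite(log_spread_empirical(y, data, sigma=0.5))
```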
Training implicit generative models
For certain applications (such as image generation) a criticism of latent models p_θ(x|z)p(z) is that the distribution p_θ(x|z) may add noise to otherwise "clean" images. For example, if p_θ(x|z) = N(x | μ_θ(z), σ²), then the generated x will add Gaussian noise of variance σ² to the mean image μ_θ(z), blurring the image. In practice, many authors fudge this by simply drawing from the model δ(x − μ_θ(z))p(z), without adding on the Gaussian noise. Strictly speaking, this is inconsistent since the model is trained under the assumption of additive Gaussian noise on the observation, but the use of the model differs, assuming no additive noise. Using the spread divergence, however, we may now directly and consistently define a training objective for implicit models with deterministic output p_θ(x, z) = p_θ(x|z)p(z) = δ(x − μ_θ(z))p(z). Using spread noise,

p̃_θ(y) = ∫ p(y|x) δ(x − μ_θ(z)) p(z) dx dz.

For example, for Gaussian spread noise in one dimension, we have

p̃_θ(y) = ∫ N(y | μ_θ(z), σ²) p(z) dz,

which can now be used in the reverse KL bound (9) to train an implicit generative model. We thus learn a generative model in y-space, p̃_θ(y), by minimizing the f-divergence to the noise-corrupted data distribution p̂(y). After training we then recover the generative model in x-space, p_θ(x), by taking the mean of our generation network p_θ(y|z) as the output.
The upper bound (9) can be estimated through sampling and minimized with respect to θ and φ by using the reparameterization trick (Kingma & Welling, 2013) and taking gradients. The only difference from standard training (for example stochastic gradient training in the standard VAE) is that an additional outer loop is required in which noisy training points y are generated by adding spread noise to points drawn from the empirical distribution p̂(x).
Note that for reverse KL training, we require the estimation of the gradient ∇_θ ∫ p_θ(y|z)p(z) log p̂(y) dy dz, for which we propose an efficient estimator in the next section.
For f -divergences other than the reverse KL, since the upper bound is expressed as an expectation over p θ (y|z)p(z), we can generate (y, z) samples from these distributions and then estimate the bound and take gradients.
Gradient approximation
For the reverse KL upper bound, we need to calculate the gradient ∇_θ ∫ p_θ(y|z)p(z) log p̂(y) dy dz, where p̂(y) = (1/N) Σ_n p(y|x_n), i.e. a sum of delta functions (the data distribution) corrupted with a noise process as described in section 2.1.
Clearly, summing over all points in the dataset to calculate p̂(y) is computationally burdensome. Naively using a minibatch inside the logarithm to estimate p̂(y) ≈ M^−1 Σ_{m∈M} p(y|x_m) results in a biased estimator, which we have found to be detrimental to the optimization procedure in our image generation experiments. We therefore propose a different gradient estimator.
We first rewrite the gradient as follows (up to the addition of a constant):

∇_θ log p̂(y) = Σ_n p(n|y) ∇_θ log p(y|x_n),  (11)

where p(n|y) = p(y|x_n) / Σ_{n'} p(y|x_{n'}) (see supplementary section B for details). We can now sample a minibatch of indices M = {n_1, n_2, ..., n_m} from p(n|y) and approximate equation (11) by

(1/m) Σ_{n∈M} ∇_θ log p(y|x_n).  (12)

This gradient estimator is unbiased, but we also propose two tricks in supplementary section B to further reduce the computational cost and variance during training. These tricks may bias the estimator, but we find that they work well in practice.
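A sketch of this estimator in 1D with Gaussian noise p(y|x). For simplicity the gradient is taken with respect to y itself, as a stand-in for the model-parameter gradient obtained via the reparameterization trick (names and constants are illustrative):

```python
import numpy as np

# Minibatch estimator: grad log p_hat(y) = sum_n p(n|y) grad log p(y|x_n),
# with p(n|y) = p(y|x_n) / sum_n' p(y|x_n'); sample indices from p(n|y)
# and average the per-point gradients.
rng = np.random.default_rng(2)
data = rng.normal(size=200)
sigma = 0.5
y = 0.7

logp = -(y - data)**2 / (2 * sigma**2)
w = np.exp(logp - logp.max()); w /= w.sum()   # responsibilities p(n|y)
grads = -(y - data) / sigma**2                # grad_y log p(y|x_n), Gaussian case

exact = float(np.sum(w * grads))              # full-data gradient of log p_hat(y)
idx = rng.choice(len(data), size=20000, p=w)  # minibatch indices ~ p(n|y)
estimate = float(grads[idx].mean())           # unbiased minibatch estimator
assert abs(estimate - exact) < 0.05
```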
Experiments
In the following experiments our interest is to demonstrate the applicability of the f-divergence upper bound. The goal of the experiments is not to achieve state-of-the-art image generation results but to show the effectiveness of training with different divergences. The main focus is therefore on training with the reverse KL divergence, since this provides a natural "opposite" to training with the forward KL divergence. Throughout, the data is continuous and we use a Gaussian noise process with width σ for p(y|x). We take p(z) to be a standard zero mean unit covariance Gaussian (thus with no trainable parameters). Similar to standard VAE training, we use deep networks to parameterize the Gaussian model p_θ(y|z) = N(y | μ_θ(z), σ²) and Gaussian variational distribution q_φ(z|y) = N(z | μ_φ(y), Σ_φ(y)) for diagonal Σ_φ(y). Experimentally, we found that running several optimizer steps on φ whilst keeping θ fixed is useful to ensure that the bound is tight when adjusting θ; we therefore use this strategy throughout training. The result that optimizing the auxiliary bound with respect to only q_φ(z|y) tightens the bound (towards the marginal divergence) is shown in supplementary section A.
Toy problem: forward KL, reverse KL and JS training
The toy dataset, as described by Roth et al. (2017), is a mixture of seven two-dimensional Gaussians arranged in a circle and embedded in three-dimensional space; see figures 3 and 6. We use 5 hidden layers of 400 units with relu activation functions for the mean and variance parameterization in q_φ(z|y) and the mean parameterization in p_θ(y|z), with a two-dimensional latent space z ∈ R².
To evaluate the bound at each iteration we use a minibatch of size 100 to estimate p̂(y). For each minibatch we draw 100 samples from p(z) and subsequently draw 10 samples from p_θ(y|z) for each drawn z to generate (y, z) samples. To facilitate training, we anneal the width σ of the spread divergence throughout the optimization process. This enables the model to feel the presence of other distant modes (high mass regions of p̂(y)), allowing the method to overcome a poor initialization of (θ, φ). In this experiment, σ is annealed from 1.0 to 0.1 using the formula σ = 1.0 × 0.1^(current steps / total steps).
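The annealing schedule can be written as a one-line helper. A sketch assuming the geometric decay σ = 1.0 × 0.1^(step / total steps) described above (the function name is illustrative):

```python
# Spread-noise annealing: sigma decays geometrically from `start`
# towards `start * end_factor` over `total_steps` training steps.
def spread_sigma(step, total_steps, start=1.0, end_factor=0.1):
    return start * end_factor ** (step / total_steps)

assert spread_sigma(0, 1000) == 1.0                       # initial width
assert abs(spread_sigma(1000, 1000) - 0.1) < 1e-12        # final width
assert abs(spread_sigma(500, 1000) - 0.1 ** 0.5) < 1e-12  # geometric midpoint
```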
We can see in figure 3 that the models trained by the JS and reverse KL divergences converge to cover each mode of the true generating distribution, and exhibit good separation of the seven modes in the latent space. Even though the reverse KL tends to collapse a model to a single mode, provided the model p_θ(y|z) is sufficiently powerful it can correctly capture all 7 modes. For reverse KL training, we interleave each θ update (learning rate 5 × 10^−5 with SGD) with 20 φ updates (learning rate 10^−4 with Adam); the batch size is 100 in both cases. We also use the gradient approximation method discussed in section 2.3 (see also supplementary section B).
MNIST: forward and reverse KL training
In figure 4, we show the samples from the two trained models using the same latent codes. As we can see, the samples generated by reverse KL model are sharper.
CelebA: forward and reverse KL training
We pre-process CelebA (Liu et al., 2015) dataset (with 50000 images) by first taking 140x140 center crops and then resizing to 64x64. Pixel values were then rescaled to lie in [0, 1]. The architectures of the convolutional encoder q φ (z|y) and deconvolutional decoder p θ (y|z) (with fixed noise) are given in the supplementary material section D.
The standard deviation of the spread divergence is 0.2 for the KL and 1.0 for the RKL. We first train the model using the KL divergence for 60 epochs as initialization and then train for 6 additional epochs with either pure forward KL or reverse KL. In order to ensure that the bound remained tight, we interleave each θ update (learning rate 10^−7 with RMSprop) with 20 φ updates (learning rate 10^−4 with Adam); the batch size is 100 in both cases.
In figure 5 we show samples from the two trained models using the same latent codes. As we can see, the impact of the reverse KL term in training is significant, resulting in less variability in pose, but sharper images. This is consistent with the "mode-seeking" behavior of the reverse KL objective.
Comparison to Fenchel conjugate lower bound
As discussed in section 1.3, a different approach to minimizing the f-divergence is used in (Nowozin et al., 2016), utilizing a variational lower bound on the f-divergence:

D_f(p(x)||q(x)) ≥ sup_{T} ( E_{x∼p(x)}[T(x)] − E_{x∼q(x)}[f*(T(x))] ).

Here f* is the Fenchel conjugate of f and T ranges over any class of functions that respects the domain of f*. After parameterizing T = g_f(V_φ) (where g_f : R → dom f* and V_φ is an unconstrained parametric function) and p_θ(x), the optimization scheme is then to alternately tighten (i.e. increase) the bound through changes to φ and then lower the bound through changes to θ, see figure 2. This is of interest because the GAN objective (Goodfellow et al., 2014) can be seen as a specific instance of this scheme. We acknowledge that the f-GAN principally grounds GANs in a wider class of techniques, and is not necessarily intended as a scheme for minimizing an f-divergence. However, it is natural to ask whether our auxiliary upper bound or the Fenchel-conjugate lower bound gives different results when used to minimize the f-divergence for a similar complexity of parameter space (θ, φ).
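For the forward KL (f(u) = u log u, with Fenchel conjugate f*(t) = exp(t − 1)), the lower bound and its tightness at the optimal critic T*(x) = 1 + log(p(x)/q(x)) can be checked on discrete distributions. A minimal sketch (names and distributions are illustrative):

```python
import numpy as np

# Fenchel-conjugate lower bound for the forward KL:
#   KL(p||q) >= E_p[T] - E_q[exp(T - 1)]  for any critic T,
# with equality at T*(x) = 1 + log(p(x)/q(x)).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
kl = float(np.sum(p * np.log(p / q)))

def lower_bound(T):
    return float(np.sum(p * T) - np.sum(q * np.exp(T - 1)))

T_opt = 1 + np.log(p / q)
assert abs(lower_bound(T_opt) - kl) < 1e-12   # tight at the optimal critic

# Any perturbed critic can only loosen (decrease) the bound.
rng = np.random.default_rng(3)
for _ in range(50):
    assert lower_bound(T_opt + 0.3 * rng.standard_normal(3)) <= kl + 1e-12
```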
To compare the two methods we fit a univariate Gaussian p θ (x) to data generated from a mixture of two Gaussians through the minimization of various f -divergences. See the supplementary material for details. For the f -GAN lower bound we use a network with two hidden layers of size 64 for V φ (x). For our upper bound we use a network with two hidden layers of size 50 to parameterize q φ (z|x) and set p θ (x, z) to be a bivariate Gaussian, so that it marginalizes to a univariate Gaussian as required. The upper and lower bound methods have a similar number of free parameters (q φ has fewer hidden units but more outputs than V φ ). The two methods result in broadly similar Gaussian fits, see table 2. In general, minimizing the upper bound results in a slightly superior fit compared to the f -GAN method (Nowozin et al., 2016) in terms of proximity to the true minimal f -divergence fit and proximity of the bound value to the true divergence. Additionally we find that minimizing our upper bound is computationally more stable than the optimization procedure required for f -GAN training (simultaneous tightening and lowering of the bound -see supplementary material E).
Related work
The Auxiliary Variational Method (Agakov & Barber, 2004) uses an auxiliary space to minimize the joint KL divergence in order to minimize the marginal KL divergence. We extend this method to the more general class of f -divergences.
Compared to Variational Auto-Encoders (Kingma & Welling, 2013), our method is a way to train an identical class of generative and variational models, but with a class of different optimization objectives based on f -divergences.
Since the VAE optimization scheme is a variational method of maximizing the likelihood, it is similar to our scheme with the choice of minimizing the forward KL divergence, which is also a variational form of maximum likelihood. Both methodologies use sampling to estimate a variational bound which can be differentiated through the use of the reparameterization trick.

Table 2. Learned Gaussian parameters to fit a mixture of two Gaussians using forward KL, reverse KL and Jensen-Shannon divergence. p*(x) is the optimal Gaussian fitted to minimize the exact divergence. p_UB(x) is the optimal Gaussian fitted to minimize our auxiliary upper bound on the divergence. p_LB(x) is the optimal Gaussian fitted to minimize the Fenchel-conjugate lower bound on the divergence.
In Rényi divergence variational inference (Li & Turner, 2016), a variational approximation of the log-likelihood is proposed based on the Rényi divergence. In contrast, our joint upper bound is an estimator of the f-divergence in the marginal data space; it relates to maximum likelihood learning only when we use the KL divergence.
In Auxiliary Deep Generative Models (Maaløe et al., 2016), a VAE is extended with an auxiliary space. This allows a richer variational distribution to be learned, with the correlation between latent variables being pushed to the auxiliary space to keep the calculation tractable. This, similarly to our method, utilizes the general auxiliary variational method (Agakov & Barber, 2004), but is focused on making VAEs more powerful rather than providing different optimization schemes.
In the f -GAN (Nowozin et al., 2016) methodology, an interesting connection is made between the GAN training objective and a lower bound on the f -divergence. The authors conclude that using different divergences leads to largely similar results, and that the divergence only has a large impact when the model is "under-powered". However, that conclusion is somewhat at odds with our own, in which we find that the (upper bound on) different divergences gives very different model fits. Indeed, others have reached a similar conclusion: the reverse KL divergence is optimized as a GAN objective in (Sønderby et al., 2017), demonstrating that it is effective in the task of image super-resolution. A variety of different generator objectives for GANs are used in , with some divergence objectives exhibiting the "mode-seeking" behavior we have observed.
In (Mohamed & Lakshminarayanan, 2016), the authors demonstrate an alternative approach to minimizing the f-divergence D_f(p(x)||q(x)) = ∫ q(x) f(p(x)/q(x)) dx by directly estimating the density ratio p(x)/q(x). This method makes a connection to GANs: the discriminator is trained to approximate the ratio and the generator loss is designed based upon different choices of f-divergence (see supplementary material section F for details). We thereby recognize three different tractable estimations of the f-divergence: 1. ratio estimation in the marginal space; 2. the Fenchel conjugate lower bound (f-GAN); and 3. the variational joint upper bound (introduced in this paper).
Ratio estimation by classification has also been extended to minimize the KL divergence in the joint space (Huszár, 2017). Similarly, Bi-directional GAN (Donahue et al., 2016) and ALI (Dumoulin et al., 2016) augment the GAN generator with an additional inference network. Although these models focus on training objectives similar to our own, the purpose of using the joint space is different to that of our approach. Our method uses the joint distribution to create an upper bound in order to estimate the f-divergence in the marginal space; the latent representation is obtained automatically. In contrast, all three methods mentioned above expand the original space to a joint space purely for learning the latent representation, and the divergence is estimated by either ratio estimation or GAN approaches. Additionally, they minimize the target divergence only in the limit of an optimal discriminator (or in the nonparametric limit, see (Goodfellow et al., 2014) and (Mescheder et al., 2017)), which may cause instability in the GAN training process (Arjovsky & Bottou, 2017).
Conclusion
We introduced an upper bound on f -divergences, based on an extension of the auxiliary variational method. The approach allows variational training of latent generative models in a much broader set of divergences than previously considered. We showed that the method requires only a modest change to the standard VAE training algorithm but can result in a qualitatively very different fitted model. For our low dimensional toy problems, both the forward KL and reverse KL can be effective in learning the model. However, for higher dimensional image generation, compared to standard forward KL training (VAE), training with the reverse KL tends to focus much more on ensuring that data is generated with high fidelity around a smaller number of modes. The central contribution of our work is to facilitate the application of more general f -divergences to training of probabilistic generative models with different divergences potentially giving rise to very different learned models.
A. Properties of the Auxiliary Variational Method
Here we give a property of the auxiliary bound for f-divergences with differentiable f; this covers most f of interest, and the argument extends to those f which are piecewise differentiable. Then, for the particular case of the reverse KL divergence, we give a simpler proof of this property as well as two additional properties (which do not hold for general f).
For differentiable f we claim that when we fully optimize the auxiliary f -divergence w.r.t p(z|x), this is the same as minimizing the f -divergence in the x space alone.
Let us first fix q(x, z) and find the optimal p(z|x) by taking the functional derivative of the auxiliary f-divergence D_f(p(x,z)||q(x,z)) with respect to p(z|x). At the minimum this derivative is equal to 0 (plus a constant Lagrange multiplier that comes from the constraint that p(z|x) is normalized). Since f' is not constant (if it were, the f-divergence would be constant), this implies that the argument of f' must be constant in z, and hence that optimally p(z|x) = q(z|x). Plugging this back into the f-divergence, it reduces to simply D_f(p(x)||q(x)). Hence, fully optimizing the auxiliary f-divergence over p(z|x) recovers the marginal f-divergence. Since the assumption is that D_f(p(x)||q(x)) is not computationally tractable, in practice we need to use a suboptimal p(z|x), restricting p(z|x) to a family p_θ(z|x) such that the joint f-divergence is computationally tractable.
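The conclusion above can be stated compactly (a reconstruction consistent with the surrounding text; the inequality for a restricted family follows from the monotonicity of f-divergences under marginalization):

```latex
\min_{p(z|x)} D_f\big(p(x,z)\,\|\,q(x,z)\big) = D_f\big(p(x)\,\|\,q(x)\big),
\quad \text{attained at } p(z|x) = q(z|x);
\qquad
D_f\big(p(x)\,\|\,q(x)\big) \;\le\; D_f\big(p_\theta(x,z)\,\|\,q(x,z)\big)
\ \ \text{for any restricted family } p_\theta(z|x).
```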
A.2. Relation to KL(q(x)||p(x))
For the particular case of the reverse KL divergence we also provide this more straightforward proof.
Again, the claim is that when we fully optimize the auxiliary KL divergence w.r.t p(z|x), this is the same as minimizing the KL in the x space alone.
Let us first fix q(x, z) and find the optimal p(z|x). The divergence decomposes by the chain rule as KL(q(x,z)||p(x,z)) = KL(q(x)||p(x)) + E_{q(x)}[KL(q(z|x)||p(z|x))]. Since we are taking a positive combination of KL divergences, optimally p(z|x) = q(z|x). Plugging this back in, the KL reduces to simply KL(q(x)||p(x)), and hence we have shown the claim.

A.3. Independence p(z|x) = p(z)

Also for the particular case of the reverse KL divergence we can derive a result from the assumption that the auxiliary variables are independent of the observations, i.e. the prior p(z|x) = p(z). In this case KL(q(x,z)||p(x,z)) = KL(q(z)||p(z)) + E_{q(z)}[KL(q(x|z)||p(x))]. Optimally, therefore, we set p(z) = q(z), which gives the resulting expression E_{q(z)}[KL(q(x|z)||p(x))]. Since we are still free to set q(z), we should optimally set q(z) to place all its mass on the z that minimizes KL(q(x|z)||p(x)). In other words, the assumption of independence p(z|x) = p(z) implies that the method is no better than computing each KL(q(x|z)||p(x)) and then choosing the single best model q(x|z).
A.4. Factorizing q(x, z) = q(x)q(z)
Again for the reverse KL divergence, under the independence assumption q(x, z) = q(x)q(z), it is straightforward to show that the joint divergence decomposes into independent terms. In the case that q(x) is, for example, a simple Gaussian distribution, this means that the independence assumption does not help enrich the complexity of the approximating distribution.
A.5. Relation to the ELBO
The reverse KL divergence in joint space, KL(q(x,z)||p(x,z)), is equivalent to using the ELBO to lower bound log p(x) in KL(q(x)||p(x)): writing ELBO(x) = E_{q(z|x)}[log p(x,z) − log q(z|x)] ≤ log p(x), we have KL(q(x,z)||p(x,z)) = E_{q(x)}[log q(x) − ELBO(x)] ≥ E_{q(x)}[log q(x) − log p(x)] = KL(q(x)||p(x)).
B. Reverse KL gradient approximation
For the reverse KL upper bound, we need to calculate ∇_θ ∫ p_θ(y|z)p(z) log p(y) dy dz, where p(y) = N^{-1} Σ_n p(y|x_n). Let us assume that p_θ(y|z) and p(y|x_n) are spherical Gaussians with the same variance σ² (which is the case in our experiments). We use the reparametrization y = µ_θ(z) + σε with p(ε) = N(0, I), and then notice that we can define p(n|ε, z), which is a softmax over the square distance (with a scaling) between y and x_n.
We can now get an unbiased estimator for this gradient (31) if we can generate samples from p(n|ε, z).
Computation reduction
The computationally expensive part of calculating p(n|ε, z) is the normalizer, which requires summing over all data points. Given that the x-space will typically be high dimensional, in practice we use a dimensionality reduction technique to speed up computing the square distance between y and x_n.
We use Principal Components Analysis (PCA) to project the x-space to a much lower dimensional space. PCA is an appropriate choice as it maximizes the variance preserved by the lower dimensional projections, whilst minimizing the square distance between the reconstructions and the original data. Note that the PCA projection matrix, U , is learned once on the input data {x n }.
So we approximate p(n|ε, z) by q(n|ε, z), computed using the PCA-projected squared distances. We can now get an (approximately) unbiased estimator for (31) by sampling ε, z and then n ∼ q(n|ε, z).
Here z^(s) ∼ p(z), ε^(s) ∼ p(ε), and for each s we sample n_s^(t) ∼ q(n|ε^(s), z^(s)). In the experiments on MNIST/CelebA, we sample T = 30 to form the approximation, and the PCA dimension is 50/100. Using this approximation, the computation cost of the normalizer in (34) scales with the number of data points, but we have found this not to be an issue in practice when using the PCA projection. For a very large dataset this could be problematic, though. In this case other minibatch methods could be used to approximate the normalizer, such as (Ruiz et al., 2018) and (Botev et al., 2017), which we leave to future work.
Variance reduction

To reduce the variance in the softmax sampling, we approximate (35) by a tempered softmax, where T is the temperature. As T → 0, the sample from the softmax distribution becomes the index of the nearest neighbour in square distance. In all experiments, we set T = 10.
Note that by using these two tricks the estimator is no longer unbiased, but we found that they work well in practice.
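A minimal sketch of the sampling scheme described above: a PCA projection learned once on the data, a tempered softmax over (projected) squared distances, and sampling of component indices. The function names and the toy data are illustrative assumptions, not the paper's code; a very small temperature is used here so that sampling concentrates on the nearest neighbour.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_projection(X, k):
    """Return a k-dimensional PCA projection matrix, learned once on the data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T  # shape (D, k)

def sample_indices(y, X, U, sigma, temperature, n_samples, rng):
    """Sample component indices n ~ q(n | eps, z): a tempered softmax over
    squared distances between y and the data points, computed in PCA space."""
    d2 = np.sum((X @ U - y @ U) ** 2, axis=1)       # projected squared distances
    logits = -d2 / (2.0 * sigma ** 2 * temperature)  # temperature-scaled
    logits -= logits.max()                           # numerical stability
    w = np.exp(logits)
    w /= w.sum()
    return rng.choice(len(X), size=n_samples, p=w)

# Toy usage: 200 well-separated data points in 10-D; a generated point y near x_0.
X = 10.0 * rng.normal(size=(200, 10))
y = (X[0] + 0.01 * rng.normal(size=10))[None, :]
U = pca_projection(X, k=5)
idx = sample_indices(y, X, U, sigma=1.0, temperature=0.01, n_samples=30, rng=rng)
```

With a near-zero temperature the sampled indices collapse onto the nearest neighbour of y, matching the limiting behaviour discussed above.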
C. Target distribution of the toy problem

We train on the toy dataset described by (Roth et al., 2017), which is a mixture of seven two-dimensional Gaussians arranged in a circle and embedded in three-dimensional space; see Figure 6. The standard deviation of each Gaussian is 0.05.

Figure 6. Target distribution of the toy problem, from (Roth et al., 2017).
D. Network Architecture
Both the encoder and decoder used fully convolutional architectures with 5x5 convolutional filters and vertical and horizontal strides of 2, except for the last deconvolution layer, where we used stride 1. Here Conv k stands for a convolution with k filters, DeConv k for a deconvolution with k filters, BN for batch normalization (Ioffe & Szegedy, 2015), ReLU for rectified linear units, and FC k for a fully connected layer mapping to R^k.
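As a sanity check on the stride arithmetic, a small helper (illustrative only; the 'same'-style padding is an assumption, since the paper does not state the padding) computes the spatial size after a stack of stride-2 convolutions, each of which halves the spatial size with rounding up:

```python
def conv_out_size(size, stride=2):
    """Spatial size after a 'same'-padded strided convolution: ceil(size / stride)."""
    return -(-size // stride)  # ceiling division

def encoder_sizes(input_size, n_stride2_layers):
    """Spatial sizes through a stack of stride-2 convolution layers."""
    sizes = [input_size]
    for _ in range(n_stride2_layers):
        sizes.append(conv_out_size(sizes[-1]))
    return sizes

# e.g. a 28x28 MNIST image through three stride-2 conv layers: 28 -> 14 -> 7 -> 4
sizes = encoder_sizes(28, 3)
```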
E. f -GAN comparison
The mixture of Gaussians we attempt to fit a univariate Gaussian to is plotted in Figure 7.
We plot the lower and upper bounds during training in Figure 8. We can see that the upper bound is generally faster to converge and less noisy. It is also a consistently decreasing objective, whereas the variational lower bound fluctuates higher and lower in value throughout the training process.
F. Class Probability Estimation
In (Mohamed & Lakshminarayanan, 2016), two ratio estimation techniques, class probability estimation and ratio matching, are discussed. We briefly show how to use the class probability estimation technique to estimate an f-divergence, and refer readers to the original paper (Mohamed & Lakshminarayanan, 2016) for the ratio matching technique. The density ratio can be computed by building a classifier to distinguish between training data and the data generated by the model. This ratio is p(x)/q_θ(x) = p(x|y=1)/p(x|y=0), where label y = 1 represents samples from p and y = 0 represents samples from q. By using Bayes' rule and assuming that we have the same number of samples from both p and q, we have p(x)/q_θ(x) = p(y=1|x)/p(y=0|x). We can then set the discriminator output to be D_φ(x) = p(y=1|x), so the ratio can be written as D_φ(x)/(1 − D_φ(x)). The generator loss corresponding to an f-divergence can then be designed from this estimated ratio.
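A minimal numerical illustration of the identity above, using the Bayes-optimal discriminator for two known 1-D Gaussians (the closed-form D is only an illustration; in practice D_φ would be a learned classifier):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_discriminator(x, p_pdf, q_pdf):
    """Optimal D(x) = p(y=1|x) for balanced classes: p(x) / (p(x) + q(x))."""
    p, q = p_pdf(x), q_pdf(x)
    return p / (p + q)

p_pdf = lambda x: normal_pdf(x, 0.0, 1.0)   # "data" distribution p
q_pdf = lambda x: normal_pdf(x, 1.0, 1.0)   # "model" distribution q

x = 0.3
D = bayes_discriminator(x, p_pdf, q_pdf)
ratio_from_D = D / (1.0 - D)      # classifier-based ratio estimate
ratio_true = p_pdf(x) / q_pdf(x)  # direct density ratio
```

For the optimal discriminator the two quantities agree exactly, which is the content of the Bayes-rule identity in the text.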
Effect of lung compliance and endotracheal tube leakage on measurement of tidal volume
Introduction The objective of this laboratory study was to measure the effect of decreased lung compliance and endotracheal tube (ETT) leakage on exhaled tidal volume measured at the airway and at the ventilator, using a test lung.
Methods The subjects were infant, adult and pediatric test lungs. In the test lung model, lung compliances were set to normal and to levels seen in acute respiratory distress syndrome. Set tidal volume was 6 ml/kg across a range of simulated weights and ETT sizes. Data were recorded from both the ventilator light-emitting diode display and the CO2SMO Plus monitor display by a single observer. Effective tidal volume was calculated from a standard equation.
Results In all test lung models, exhaled tidal volume measured at the airway decreased markedly with decreasing lung compliance, but measurement at the ventilator showed minimal change. In the absence of a simulated ETT leak, calculation of the effective tidal volume led to measurements very similar to exhaled tidal volume measured at the ETT. With a simulated ETT leak, the effective tidal volume markedly overestimated tidal volume measured at the airway.
Conclusion Previous investigators have emphasized the need to measure tidal volume at the ETT for all children. When ETT leakage is minimal, it seems from our simulated lung models that calculation of effective tidal volume would give similar readings to tidal volume measured at the airway, even in small patients. Future studies of tidal volume measurement accuracy in mechanically ventilated children should control for the degree of ETT leakage.
Introduction
Three investigators have reported that tidal volume (V T ) in children is inaccurate when measured at the ventilator, even when effective V T is used [1][2][3]. Cannon and colleagues [1] studied 98 infants and children and found a significant discrepancy between expiratory V T measured at the ventilator and that measured with a pneumotachometer.
Calculation of the effective V T did not alter this discrepancy. Castle and colleagues [2] studied 56 intubated children and found that exhaled V T displayed by the Servo 300 significantly overestimated V T measured at the airway by between 2% and 91%. After correcting for gas compression, effective V T overestimated true V T by as much as 29% in older children but underestimated the true V T by up to 64% in the smallest infants. Neve and colleagues [3] studied 27 infants and found that V T was overestimated by the ventilator in comparison with V T measured at the Y piece. None of these investigators controlled for endotracheal tube (ETT) leakage, which is more of a problem in children than in adults because of the use of uncuffed ETTs.
Accurate measurement of V T is increasingly important because the Acute Respiratory Distress Syndrome (ARDS) Network investigators have shown that the use of a low effective V T leads to decreased mortality in their patient population [4]. The effective V T goal in their ventilator protocol was 6 ml/ kg but could be reduced to as low as 4 ml/kg if the plateau pressure was above 30 cmH 2 O. At such low V T values, accurate measurement is imperative to prevent atelectasis and subsequent ineffective minute ventilation.
Clinically, there are three methods to estimate delivered V T : first, direct measurement at the expiratory limb of the ventilator; second, direct measurement at the ETT with a pneumotachometer; and third, indirect calculation of effective V T by using set V T minus calculated compressible volume lost in the ventilator circuit [5]. The principle of Boyle's law (the volume of gas decreases as the absolute pressure exerted by the gas increases, and vice versa) is used to calculate the compressible volume in ventilator circuits.
How effective V T compares with V T measured at the airway has not been rigorously tested. Using V T measured at the ETT as the gold standard, we used three test lung models in a controlled laboratory setting to evaluate the accuracy of ventilator measured V T and effective V T under conditions of poor lung compliance, with and without ETT leakage, across a range of simulated patient sizes. We proposed that the discrepancy between effective V T and V T measured at the ETT in children was due mainly to ETT leakage around uncuffed ETTs, and that in situations with minimal ETT leakage there would be minimal difference between the effective V T and V T measured at the airway.
Experimental conditions
A Servo 300 ventilator (Siemens-Elema, Solna, Sweden) in the SIMV volume control mode was used. A pressure differential pneumotachometer (CO 2 SMO Plus; Novametrix Medical Systems, Wallingford, CT) was used between the ventilator and ETT connection. The temperature of the humidifier was set at 37°C. A heated disposable respiratory circuit (Allegiance Healthcare Corporation, McGaw Park, IL) was used. We tested the compliance of the circuit to ensure that it was stable across a range of conditions. To do this, we first set the ventilator as follows: inspiratory time of 1.3 s, positive end-expiratory pressure (PEEP) of 0, respiratory set frequency of 6 breaths per minute, and a pause time of 15%. V T was increased in increments of 50 ml and the plateau pressure was recorded from the ventilator with the patient outlet occluded. No component other than the humidifier was added to the circuit [6]. A linear relationship was found, with no change in circuit compliance at high airway pressure.
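The circuit-compliance check described above amounts to a linear fit of delivered volume against plateau pressure with the outlet occluded, with compliance estimated as the slope. A small sketch with synthetic numbers (not the study's data):

```python
# Estimate circuit compliance (ml/cmH2O) as the slope of delivered volume
# versus plateau pressure, measured with the patient outlet occluded.
def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic example: volume increased in 50 ml steps, plateau pressure read each time.
volumes_ml = [50, 100, 150, 200, 250]
pressures_cmh2o = [25, 50, 75, 100, 125]
compliance = linear_slope(pressures_cmh2o, volumes_ml)  # ml per cmH2O
```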
In the pediatric and infant models, a valve distal to the ETT was used to adjust volume leaks of 0%, 10%, 20%, and 30%. As shown in Fig. 1, a separate pneumotachometer (NVM-1; Thermo Respiratory Group, Palm Springs, CA) was used for independent measurement of the percentage of ETT leakage.
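Percentage ETT leakage is conventionally computed from inspired and expired volumes; a minimal sketch (this is the standard formula, not one quoted from this paper):

```python
def ett_leak_percent(inspired_vt_ml, expired_vt_ml):
    """Leak (%) = (inspired - expired) / inspired * 100."""
    return (inspired_vt_ml - expired_vt_ml) / inspired_vt_ml * 100.0

# e.g. 100 ml delivered, 80 ml returned -> 20% leak
leak = ett_leak_percent(100.0, 80.0)
```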
The Servo 300 was used for all test conditions. To control for differences between the ventilators, we tested each set of experimental conditions on three different ventilators. The CO 2 SMO Plus respiratory mechanics monitor was used to measure the V T at the ETT. This monitor measures flow with a fixed-orifice differential pressure pneumotachometer located at the ETT. Respired gas flowing through the flow sensor produces a small pressure decrease across the two tubes connected to the sensor. This pressure decrease is transmitted through the tubing sensor to a differential pressure transducer inside the monitor and is correlated with flow according to a factory-stored calibration. The pressure transducer is automatically 'zeroed' to correct for changes in ambient temperature. Data are filtered and sampled at 100 Hz. The monitor continuously displays a range of ventilatory variables, including both V T and airway pressures. Three CO 2 SMO Plus sensors are available: neonatal, pediatric, and adult. The manufacturer recommends that the choice of sensor be based on various criteria: first, the diameter of the tracheal tube; second, the patient's age; third, the expected flow/volume range; and fourth, the acceptable levels of dead space and resistance. Table 1 lists the experimental conditions for all lung models. Before data collection, all ventilators, respiratory mechanics monitors, and tachometers used in this study were calibrated in accordance with the manufacturer's recommendation.
To ensure that different ventilators and monitors did not influence the results, all measurements were repeated three times, each time with a different Servo 300 ventilator and a different CO 2 SMO Plus monitor.
Adult lung model
A TTL™ adult test lung (Vent Aid; Michigan Instruments Inc., Grand Rapids, MI) was used. This device has two separate lungs, each with a functional residual capacity (FRC) of 900 ml. The lung compliance can be adjusted by moving a spring up and down, with a compliance ranging from 10 to 150 ml/cmH 2 O per lung. Each lung was tested before use to assess for leakage. Lung-thorax compliance levels were set at 10, 20, 40, 60, 100, and 150 ml/cmH 2 O.
Pediatric lung model
A TTL™ adult test single lung was used with the FRC adjusted to give 30 ml/kg by displacing the extra volume with water-filled bags. Lung-thorax compliance levels were set at 5, 10, 20, 40, and 60 ml/cmH 2 O.
Infant lung model
An infant lung simulator (D.B&M products, Redlands, CA) was used. The model has three different preset compliances of 1, 3, and 10 ml/cmH 2 O.
Data recording
Data were recorded from both the ventilator light-emitting diode display and the CO 2 SMO Plus monitor display by a single observer. Variables recorded were inspired V T , expired V T , peak inspiratory pressure (PIP), PEEP, and plateau pressure. Effective V T was calculated from the following equation [2]: set inspired V T -[circuit compliance × (PIP -PEEP)].
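The effective tidal volume calculation from the equation above, expressed as a small helper (variable names are illustrative):

```python
def effective_vt(set_vt_ml, circuit_compliance_ml_per_cmh2o, pip_cmh2o, peep_cmh2o):
    """Effective VT = set inspired VT - circuit compliance x (PIP - PEEP)."""
    return set_vt_ml - circuit_compliance_ml_per_cmh2o * (pip_cmh2o - peep_cmh2o)

# e.g. 100 ml set, circuit compliance 1.5 ml/cmH2O, PIP 25, PEEP 5:
# compressible volume = 1.5 * 20 = 30 ml, so effective VT = 70 ml
vt = effective_vt(100.0, 1.5, 25.0, 5.0)
```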
Analysis
The major outcome variable was the calculated difference between the effective V T and the exhaled V T measured either at the ventilator or at the ETT in each experiment. For each set of test conditions (Table 1) we used the mean of the three replicate measurements and also report the highest and lowest values. V T was adjusted for the simulated weights and expressed in ml/kg. We determined a priori that the difference between the V T values would be considered excessive if it exceeded 10% of the 6 ml/kg goal (0.6 ml/kg).
Test lung models
As shown in Fig. 2, for the adult, pediatric, and infant models with no ETT leak, the difference between V T measured at the ETT and at the ventilator increased with decreasing lung compliance. V T measured at the ventilator was always higher than that measured at the ETT. The ventilator measurement overestimated V T by more than 10% (0.6 ml/kg) as lung compliance dropped to moderately low values and the difference exceeded 20% (1.8 ml/kg) at the lowest lung compliances in each model. The standard deviation of the difference was 0-0.2 ml/kg for all sets of measurements.
In all models, in the absence of ETT leakage the difference between effective V T and V T measured at the ETT was less than 10% across the range of lung compliances with a standard deviation of 0-0.2 ml/kg for all sets of measurements. As shown in Fig. 3, however, the agreement between effective V T and V T measured at the ETT was poor when a 20% and 30% simulated ETT leak was added in the infant and pediatric test lung models. Under these conditions, the effective V T was at least 10% higher than that measured at the ETT for all simulated conditions, and the standard deviation was 0.1-0.4 ml/ kg for all sets of measurements. ETT, endotracheal tube; FiO 2 , fraction of inspired oxygen; PEEP, positive end-expiratory pressure.
Figure 1
Schematic diagram demonstrating the placement of the CO 2 SMO and NVM pneumotachometers in the infant and pediatric models.
Discussion
Using well-controlled experimental conditions, we showed that in the absence of ETT leakage, effective V T approximated the V T measured at the ETT in the test lung even when lung compliance was poor. As expected, exhaled V T measured at the ventilator became increasingly inaccurate with poor lung compliance. In the presence of ETT leakage, effective V T overestimated the V T measured at the ETT by at least 0.6 ml/kg. It is clear that in the presence of ETT leakage, effective V T is inaccurate and V T is most accurately estimated at the airway.
We used an in vitro model to manipulate experimental conditions while controlling for all other variables. Accurate measurement of V T is essential when a low-V T strategy is used to protect injured lungs as is recommended by the recent ARDS Network study [4]. In the adult lung model, we manipulated the compliance to simulate the lung compliance quartiles reported in the ARDSNet study [4]. Our findings have clinical implications. In agreement with other investigators [1][2][3], we found that unadjusted V T measured at the ventilator is highly inaccurate. We found this inaccuracy to increase markedly when lung compliance was abnormal. This means that dual-control automated ventilator modes (for example volume support or pressure-regulated volume control) that make adjustments based on V T measured at the ventilator might ineffectively ventilate patients with poor lung compliance. Automated ventilator modes should be used with care in critically ill children.
We support the current recommendations of previous investigators [1][2][3] that V T should be measured at the ETT in critically ill children receiving mechanical ventilator support. These investigators emphasized the need to measure V T at the ETT for all children; they did not control for the presence of uncuffed ETTs in their studies or evaluate the effect of leakage. Significant loss of V T occurs when both ETT leakage and poor lung compliance are present. Although the V T measured at the ETT may underestimate the actual V T being delivered in this situation, it is still the best estimation of the tidal volume delivered to the lung. Use of cuffed ETTs to minimize ETT leakage may lead to more accurate measurement of V T when lung compliance is poor [7]. When ETT leakage is 20% or greater, Main and colleagues [8] reported inconsistent tidal volume delivery and gross overestimation of respiratory compliance and resistance in children.
When ETT leakage is minimal, it seems from our simulated lung models that calculation of effective V T would give similar readings to V T measured at the airway, even in small patients. This could potentially negate the need for the addition of sensors at the airway and their associated increase in airway resistance for small ETTs [2]. Unfortunately, ETT leakage is dynamic and dependent on head position. Unless a simple, accurate and continuous means of measuring ETT leakage is available, it is safest to measure V T at the airway in all mechanically ventilated children.
Figure 2
Effect of decreasing lung compliance on the difference between effective tidal volume and tidal volume at the endotracheal tube (ETT) in the infant, pediatric, and adult test lungs with no leak around the ETT.
Figure 3
Effect of decreasing lung compliance on the difference between effective tidal volume and tidal volume at the endotracheal tube (ETT) in the infant and pediatric test lung models with 20% and 30% simulated ETT leakage.
Genome-wide analysis of NBS-LRR genes revealed contribution of disease resistance from Saccharum spontaneum to modern sugarcane cultivar
Introduction During plant evolution, nucleotide-binding site (NBS) and leucine-rich repeat (LRR) genes have made significant contributions to plant disease resistance. With many high-quality plant genomes sequenced, identification and comprehensive analyses of NBS-LRR genes at the whole-genome level are of great importance for understanding and utilizing them.
Methods In this study, we identified the NBS-LRR genes of 23 representative species at the whole-genome level, focusing on the NBS-LRR genes of four monocotyledonous grass species: Saccharum spontaneum, Saccharum officinarum, Sorghum bicolor and Miscanthus sinensis.
Results and discussion We found that whole-genome duplication, gene expansion, and allele loss could be factors affecting the number of NBS-LRR genes in a species, and whole-genome duplication is likely the main cause of the number of NBS-LRR genes in sugarcane. Meanwhile, we also found a progressive trend of positive selection on NBS-LRR genes. These studies further elucidated the evolutionary pattern of NBS-LRR genes in plants. Transcriptome data from multiple sugarcane diseases revealed that more differentially expressed NBS-LRR genes in modern sugarcane cultivars were derived from S. spontaneum than from S. officinarum, and the proportion was significantly higher than expected. This finding reveals that S. spontaneum makes a greater contribution to disease resistance in modern sugarcane cultivars. In addition, we observed allele-specific expression of seven NBS-LRR genes under leaf scald, and identified 125 NBS-LRR genes responding to multiple diseases. Finally, we built a plant NBS-LRR gene database to facilitate subsequent analysis and use of the NBS-LRR genes obtained here. In conclusion, this study complements research on plant NBS-LRR genes and discusses how NBS-LRR genes respond to sugarcane diseases, providing a guide and genetic resources for further research and utilization of NBS-LRR genes.
Introduction
Sugarcane (Saccharum spp.) is an economically important sugar crop, accounting for 76% of the world's total sugar production (Zan et al., 2020). Sugarcane cultivation areas are mainly located in tropical and subtropical regions, where high temperature, high humidity, and a continuously rainy climate make sugarcane susceptible to various diseases that cause huge economic losses. Through long-term interactions with pathogens, plants have evolved a well-established immune system to effectively resist pathogen invasion. The plant immune system includes PAMP-triggered immunity (PTI), induced by pathogen-associated molecular patterns (PAMPs), and effector-triggered immunity (ETI), triggered by pathogen effectors (Goff et al., 2002; Jones and Dangl, 2006). PTI is the first level of immune defense in plants and occurs at the cell surface, where plants recognize PAMPs through pattern recognition receptors (PRRs) on the cell membrane and subsequently trigger the immune response. ETI is the second level of defense, acting within plant cells. When some pathogen effectors break through the first level of the immune system, they are recognized directly or indirectly by plant R proteins, which activate downstream signaling pathways and trigger a hypersensitive response (HR) (Martin et al., 2003; Nimchuk et al., 2003) or produce resistance factors that inhibit the spread of pathogens. In recent years, researchers have made great progress in the study of plant resistance (R) genes in a variety of species, among which the nucleotide-binding site leucine-rich repeat (NBS-LRR) genes form the largest group of R genes (Caplan et al., 2008). The proteins encoded by NBS-LRR genes have distinct characteristics. Their central structure is an NB-ARC domain, which functions as a molecular switch and is responsible for the binding and hydrolysis of ATP and GTP (Tameling et al., 2006).
The C-terminus is a leucine-rich repeat sequence, which is highly variable and has the ability to recognize specific pathogens (Meyers et al., 1999). The N-terminus is a variable structure. According to the domain at the N-terminus, NBS-LRR genes can be divided into two subfamilies: TIR-NBS-LRR (TNL) and CC-NBS-LRR (CNL). The N-terminus of TNL genes is a TIR domain, while that of CNL genes is a CC (coiled-coil) domain (Meyers et al., 1999). In addition to their different domains, the two subfamilies also differ greatly in their downstream signaling pathways, indicating that there may be functional differences between them (Tarr and Alexander, 2009). Recently, NBS-LRR genes with an RPW8 domain (resistance to powdery mildew 8) have been considered a separate class, the RNL genes, which play an important role in signaling of the disease response (Xiao et al., 2001; Collier et al., 2011; Shao et al., 2016).
Previously, NBS-LRR genes have been identified in various species, including Arabidopsis thaliana (Guo et al., 2011) and Oryza sativa (Zhou et al., 2004). The results show that the number of NBS-LRR genes in plants is usually in the hundreds, reflecting the important role NBS-LRR genes play in these species. However, the number and characteristics of NBS-LRR genes differ among species. What factors affect the number of NBS-LRR genes in a species? In this study, we identified NBS-LRR genes at the whole-genome level and performed a comparative analysis in 23 representative species. We found that the number of NBS-LRR genes in a species is related neither to its genome size nor to its total number of genes.
In addition, we found that whole-genome duplication (WGD) and gene expansion affect the number of NBS-LRR genes in a species. As research has advanced, the approach of analyzing NBS-LRR genes across multiple species has gradually been adopted (Li et al., 2010). The grass species Sorghum bicolor (S. bicolor), Miscanthus sinensis (M. sinensis) and sugarcane all belong to the monocotyledons, and all have high-quality published genomes, providing a basis for systematic analysis of NBS-LRR genes. In this study, we investigated the sequence characteristics, function, and evolution of conserved NBS-LRR genes in the above grass species to determine their commonalities and specificities. Meanwhile, we explored the expression patterns of NBS-LRR genes in response to diseases and revealed the contribution of S. spontaneum to disease resistance in modern sugarcane cultivars using multiple sets of transcriptomic data. These analyses and discoveries complement the study of NBS-LRR genes in grass species, and provide guidance and genetic resources for further in-depth studies of disease resistance mechanisms and breeding in sugarcane.
Identification of NBS-LRR genes in 23 plant species
Based on the results of Jansen et al. (2007) and Bremer et al. (2009), a total of 23 flowering plants, comprising 19 species with representative phylogenetic status in taxonomy according to the interrelationships of the APG IV orders (Byng et al., 2016) and 4 sugarcane accessions, were selected for the study. Among them, 13 were dicotyledons and 10 were monocotyledons. Their protein sequences and genomic information were obtained from Phytozome Plants (https://phytozome-next.jgi.doe.gov/), the EnsemblPlants database (http://plants.ensembl.org/species.html), the Sugarcane Genome database (http://sugarcane.zhangjisenlab.cn/sgd/html/download.html), the Sugarcane Genome Hub (https://sugarcane-genome.cirad.fr/) and the Figshare storage database (https://figshare.com/). Subsequently, the protein sequences of the 23 species were annotated using InterProScan 5.48-83.0. Based on the annotation results, NBS-LRR genes were identified as those containing both NB-ARC and LRR domains. Chloroplast genomic data of the above species were acquired from NCBI, and the species evolutionary tree was constructed using PhyloSuite. All 63 protein-coding genes (PCGs) shared among the 23 species were aligned in batches with MAFFT (v7.313), integrated into PhyloSuite, using normal-alignment mode. Maximum-likelihood phylogenies were inferred using IQ-TREE (Nguyen et al., 2015) under the edge-unlinked partition model with 50,000 ultrafast bootstraps (Minh et al., 2013), using GTR+F+I+G4, the best-fit model according to the BIC criterion, as well as the Shimodaira-Hasegawa-like approximate likelihood ratio test (Guindon et al., 2010). The species genome versions, download links, assessment information and chloroplast genome accession numbers are included in Supplementary Table 1.
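The domain-based identification step can be sketched as a filter over domain annotations: keep proteins annotated with both an NB-ARC and an LRR domain. The (protein, domain description) pairs and keyword matching below are illustrative assumptions about the annotation output, not the authors' exact script:

```python
def nbs_lrr_candidates(annotation_rows):
    """Given (protein_id, domain_description) pairs from a domain annotation
    (e.g. parsed InterProScan output), return proteins carrying both an
    NB-ARC and an LRR domain -- the NBS-LRR criterion used in the text."""
    domains = {}
    for protein_id, description in annotation_rows:
        d = domains.setdefault(protein_id, set())
        desc = description.upper()
        if "NB-ARC" in desc:
            d.add("NB-ARC")
        if "LEUCINE-RICH REPEAT" in desc or "LRR" in desc:
            d.add("LRR")
    return sorted(p for p, d in domains.items() if {"NB-ARC", "LRR"} <= d)

# Illustrative annotation rows:
rows = [
    ("geneA", "NB-ARC domain"),
    ("geneA", "Leucine-rich repeat"),
    ("geneB", "NB-ARC domain"),
    ("geneC", "Leucine-rich repeat"),
]
candidates = nbs_lrr_candidates(rows)  # only geneA carries both domains
```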
Characterization and analysis of conserved NBS-LRR genes
The MCScanX tool installed in TBtools (Wang et al., 2012) was used for rapid identification of intraspecies collinear NBS-LRR genes with an E-value of 10⁻⁵ in the four closely related monocotyledonous species, S. bicolor, M. sinensis, S. spontaneum and S. officinarum. The allelic loss of NBS-LRR genes in S. spontaneum and S. officinarum was calculated based on their genome annotations (http://sugarcane.zhangjisenlab.cn/sgd/html/download.html). OrthoFinder-2.5.4 was used to identify homologous genes among the four species, which were defined as conserved NBS-LRR genes; the comparison software was BLAST (E-value = 10⁻³) (Emms and Kelly, 2015).
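The allele-loss calculation can be sketched as follows, assuming allele-level IDs of the form Sspon.07G0017980-1A (base gene ID plus an allele suffix), a naming convention suggested by the gene IDs in this paper; the exact annotation format is an assumption here.

```python
from collections import defaultdict

def count_alleles(gene_ids):
    """Group allele-level IDs by their base gene ID and count how many
    alleles each gene retains in the annotation."""
    alleles = defaultdict(set)
    for gid in gene_ids:
        base, _, allele = gid.rpartition("-")
        alleles[base].add(allele)
    return {base: len(tags) for base, tags in alleles.items()}

# Toy annotation: one gene retaining 3 alleles, one retaining a single allele.
ids = ["Sspon.07G0017980-1A", "Sspon.07G0017980-2B",
       "Sspon.07G0017980-3C", "Sspon.05G0015970-2C"]
counts = count_alleles(ids)
# In a tetraploid, a gene with fewer than 4 retained alleles has lost at
# least one allele.
lost = [g for g, n in counts.items() if n < 4]
```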
Gene composition of conserved NBS-LRR genes
To characterize the NBS-LRR genes, the GC content and CDS length of conserved NBS-LRR genes were evaluated using SeqKit (Shen et al., 2016). Moreover, the characteristics of conserved NBS-LRR genes, including intron size, exon number, and exon size, were estimated from genome annotations using a Python script. Bivariate correlation analyses and analyses of variance (ANOVA) were performed with IBM SPSS Statistics 25.0.
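A minimal sketch of the two sequence-level statistics named above (GC content and CDS length); SeqKit computes these directly, and the example sequences here are invented.

```python
def gc_content(seq):
    """Fraction of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def cds_stats(cds_dict):
    """Per-gene GC content (%) and CDS length, mirroring a SeqKit-style
    summary (gene names are illustrative)."""
    return {name: {"gc": round(100 * gc_content(s), 1), "length": len(s)}
            for name, s in cds_dict.items()}

stats = cds_stats({"geneA": "ATGGCGTGCTAA", "geneB": "ATGAAATTTTGA"})
```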
Analysis of motifs, cis-acting and calculation of Ka/Ks ratio
Prediction of conserved motifs of the conserved NBS-LRR genes was performed using the online software MEME (https://meme-suite.org/meme/tools/meme). The top 20 motifs obtained were subjected to functional prediction analysis with the Motif Comparison tool Tomtom (https://meme-suite.org/meme/tools/tomtom), and then plotted with the ggplot2 R package. The 2,000 bp sequences upstream of the CDS of each conserved NBS-LRR gene were extracted as promoter sequences using the Gtf/Gff3 Sequence Extractor in TBtools, and submitted to PlantCARE (https://bioinformatics.psb.ugent.be/webtools/plantcare/html/) for functional analysis of cis-acting elements (Lescot, 2002).
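The 2,000 bp promoter extraction can be sketched as below; the coordinate convention (1-based, inclusive) and the minus-strand handling are assumptions, since TBtools performs this step internally.

```python
def promoter(genome, chrom, start, end, strand, size=2000):
    """Return the `size` bp immediately upstream of a gene (1-based,
    inclusive coordinates assumed), reverse-complemented on the minus
    strand so the promoter reads 5'->3' relative to the gene."""
    seq = genome[chrom]
    if strand == "+":
        return seq[max(0, start - 1 - size):start - 1]
    comp = str.maketrans("ACGT", "TGCA")
    return seq[end:end + size].translate(comp)[::-1]

# Toy 12 bp "genome" for illustration.
toy = {"chr1": "AAACCCGGGTTT"}
up_plus = promoter(toy, "chr1", 7, 9, "+", size=4)   # bases 3-6
up_minus = promoter(toy, "chr1", 4, 6, "-", size=4)  # revcomp of bases 7-10
```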
Ka/Ks denotes the ratio of non-synonymous to synonymous substitution rates. The Ka/Ks of conserved NBS-LRR genes was calculated from CDS sequences using the Simple Ka/Ks Calculator in TBtools for comparisons between S. spontaneum and each of M. sinensis, S. bicolor and S. officinarum.
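For intuition, the following sketch computes a crude Ka/Ks proxy: Nei-Gojobori-style proportions of nonsynonymous versus synonymous differences between two aligned CDSs, without multiple-hit correction. The paper's values come from TBtools' Simple Ka/Ks Calculator, which uses a proper substitution model; this is only an illustration.

```python
# Standard genetic code, indexed in TCAG codon order.
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = dict(zip((a + b + c for a in BASES for b in BASES for c in BASES), AMINO))

def syn_sites(codon):
    """Expected number of synonymous sites in a codon: for each position,
    the fraction of the three possible point mutations that leave the
    amino acid unchanged."""
    s = 0.0
    for pos in range(3):
        for alt in BASES:
            if alt != codon[pos]:
                mut = codon[:pos] + alt + codon[pos + 1:]
                if CODON[mut] == CODON[codon]:
                    s += 1 / 3
    return s

def pn_ps(seq1, seq2):
    """Crude Ka/Ks proxy (pN/pS) for two aligned, in-frame CDSs.
    Classifies each differing codon pair as synonymous or nonsynonymous;
    returns inf when no synonymous differences are observed."""
    S = N = Sd = Nd = 0.0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        s = (syn_sites(c1) + syn_sites(c2)) / 2
        S += s
        N += 3 - s
        if c1 != c2:
            if CODON[c1] == CODON[c2]:
                Sd += 1
            else:
                Nd += 1
    return (Nd / N) / (Sd / S) if Sd else float("inf")
```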
Transcriptomic analysis
Transcriptomic raw data were downloaded from the ENA database (https://www.ebi.ac.uk/ena/browser/home) and used to analyze the expression of NBS-LRR genes in sugarcane. A total of 45 RNA-seq datasets were collected for four sugarcane diseases: sugarcane smut, ratoon stunting, leaf scald, and mosaic virus (Supplementary Table 2). Transcriptomic data were uploaded to a local high-performance computing server for analysis. First, the sequencing reads were quality-controlled using fastp (Chen et al., 2018). Clean reads were then aligned to transcript sequences of S. spontaneum and S. officinarum using Bowtie2 (Langmead and Salzberg, 2012). Uniquely mapped reads were extracted and quantified using Salmon (Patro et al., 2017). Transcripts per kilobase of exon model per million mapped reads (TPM) was used to estimate gene expression levels. In addition, differential expression analysis between resistant and susceptible plants was performed on read counts with the edgeR R package (Robinson et al., 2010). Genes with FDR < 0.05 and |log2(fold change)| ≥ 1 as estimated by edgeR were assigned as differentially expressed. Replicate-free differential expression analysis between different time points after inoculation was also performed with edgeR (Chen et al., 2008; Anders and Huber, 2010).
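Two small pieces of the pipeline above can be illustrated in Python: the TPM normalization and the significance filter. Gene names and counts below are invented.

```python
def tpm(counts, lengths):
    """Transcripts Per Million from raw read counts and transcript lengths
    (the normalization Salmon reports). Values sum to 1e6 by construction."""
    rpk = {g: counts[g] / (lengths[g] / 1000) for g in counts}
    scale = sum(rpk.values()) / 1e6
    return {g: v / scale for g, v in rpk.items()}

def is_differential(fdr, log2fc, fdr_cut=0.05, lfc_cut=1.0):
    """The significance filter used in the text:
    FDR < 0.05 and |log2(fold change)| >= 1."""
    return fdr < fdr_cut and abs(log2fc) >= lfc_cut

# Toy example: two transcripts with invented counts and lengths (bp).
expr = tpm({"g1": 100, "g2": 300}, {"g1": 1000, "g2": 1500})
```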
Build NBS-LRR gene database
To help researchers understand and explore NBS-LRR genes, we built an NBS-LRR gene database website (http://110.41.19.157:5000/) on the Linux platform using the Flask framework and the SQLite module. A total of 5 HTML pages were built: Home, Species data, Transcriptomic data, InterProScan, and Blast. Users can access the database web pages through their browser for the corresponding functions.
Identification of NBS-LRR genes in representative plant species
The two conserved structural domains, NB-ARC and LRR, were used as the basis for identifying NBS-LRR genes at the genome-wide level in the 23 representative plant species (Table 1). We found that the total number of genes in a species was positively correlated with genome size (P < 0.0001), but the number of NBS-LRR genes was not significantly correlated with either genome size or the total number of genes, showing that the number of NBS-LRR genes is species-specific and may be the result of species adaptation to their respective ecological environments. This is in line with the findings of Wang et al. (2022b).
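The two correlation claims above can be illustrated with a small Pearson-correlation sketch; the numbers below are invented for demonstration and are not the Table 1 values.

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient, no external dependencies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Invented example values (genome size in Mb, gene counts).
genome_size = [150, 400, 730, 2100]
total_genes = [27000, 34000, 35000, 45000]
nbs_lrr = [160, 50, 300, 120]

r_total = pearson(genome_size, total_genes)  # strong positive correlation
r_nbs = pearson(genome_size, nbs_lrr)        # essentially uncorrelated
```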
To explore the changes in NBS-LRR genes during species evolution, we used the chloroplast genomes to construct a phylogenetic tree and counted the proportion of NBS-LRR genes in each species (Figure 1A). The number of NBS-LRR genes did not correlate with the evolutionary position of plant species. Analysis of NBS-LRR genes and their subclasses showed that the number of NBS-LRR genes was positively correlated with the number of CNL genes (P < 0.0001), but not significantly with TNL genes, consistent with the view of Liu et al. (2021) (Figure 1B). All dicotyledonous species except Sesamum indicum (S. indicum) contained both TNL and CNL genes, whereas no TNL gene was found in monocotyledonous plants, consistent with the results of previous studies (Meyers et al., 1999; Zhou et al., 2004; Gao et al., 2011; Liu et al., 2021). In fact, TIR and CC domains are not always mutually exclusive; for example, an NBS-LRR gene (Tp57577_TGAC_v2_mRNA32981) of red clover (Trifolium pratense) contained both domains according to the results of this study.
Characterization and phylogenetic analysis of NBS-LRR genes in grass species
After removal of alleles of the same gene, 299, 340, 244 and 478 NBS-LRR genes were identified in S. spontaneum, S. officinarum, S. bicolor and M. sinensis, including 132 (44%), 157 (46.2%), 135 (55.3%) and 232 (48.5%) CNL genes, respectively. Gene expansion or loss may underlie the differences in the number of NBS-LRR genes among species. We first explored the collinearity relationships of NBS-LRR genes within species using MCScanX to identify duplicated NBS-LRR genes (Supplementary Figure 1). After removing alleles, 18 gene pairs formed by 36 NBS-LRR genes were identified in S. spontaneum, of which 18 genes (50%) were located on Chr2 (8 genes) and Chr5 (10 genes). A total of 47 gene pairs formed by 75 genes were identified in S. officinarum, with 41 genes (54.7%) on Chr5. Both M. sinensis and S. bicolor are diploids, but there was a large difference in the number of duplicated NBS-LRR gene pairs between them. In M. sinensis, 51 gene pairs formed by 93 NBS-LRR genes were found, with the largest number of genes on Chr9 (12 genes). Only two NBS-LRR gene pairs were found in S. bicolor, distributed on Chr3 and Chr5, respectively. M. sinensis went through recent whole genome duplication (WGD) and chromosome rearrangement events, and its 18 basic chromosomes show good syntenic relationships with each other (Cheng et al., 2018), which may explain the high number of duplicated genes identified in M. sinensis. In addition, we estimated the retention of alleles of NBS-LRR genes within S. spontaneum and S. officinarum, and the results showed that a large number of alleles have been lost in sugarcane. In the tetraploid S. spontaneum genome, only 18 NBS-LRR genes (6%) retained 4 alleles, while 281 NBS-LRR genes (94%) lost 1 to 3 alleles, of which 46.6% were CNL genes and 53.4% were truncated CNL genes lacking the CC structure. In the octoploid S. officinarum genome, only 3 NBS-LRR genes (1.3%) retained 8 alleles, and 337 NBS-LRR genes (98.7%) lost 1 to 7 alleles, with CNL genes accounting for 46.3% and truncated CNL genes for 53.7%. CNL genes lost fewer alleles than truncated CNL genes (P < 0.001), indicating that CNL genes were likely more conserved during polyploid sugarcane genome evolution.
To better investigate the functional and evolutionary relationships of NBS-LRR genes in sugarcane, OrthoFinder was used to identify homologous NBS-LRR genes, termed conserved NBS-LRR genes, among the four monocotyledonous grass species S. spontaneum, S. officinarum, S. bicolor and M. sinensis (Table 2). In total, 166, 121, 125 and 181 conserved NBS-LRR genes were identified in S. spontaneum, S. officinarum, S. bicolor and M. sinensis, accounting for 35.5%, 16.7%, 51.2%, and 37.9% of the NBS-LRR genes in the four species, and there were 75 (45.2%), 62 (51.2%), 65 (52%) and 99 (54.7%) CNL genes among the conserved NBS-LRR genes, respectively. The proportion of CNL genes among all NBS-LRR genes was higher in S. bicolor than in the other three species, while the proportion of CNL genes among conserved NBS-LRR genes was higher than the proportion of CNL genes at the genome-wide level. The proportion of CNL genes among conserved genes was higher in S. bicolor and M. sinensis than in sugarcane.
Gene composition and evolutionary analyses of conserved NBS-LRR genes
To investigate the structure of NBS-LRR genes, we compared the GC content, CDS length, introns and exons of the conserved NBS-LRR genes in the four monocotyledonous plants.
The average GC content of the conserved NBS-LRR genes was approximately 45%, with no significant differences among the four species (Supplementary Figure 2B). The mean GC content of CNL genes was lower than that of truncated CNL genes in all four species (P < 0.05) (Supplementary Figure 2C). The CDS length of NBS-LRR genes differed among the four species, with S. bicolor having the largest mean CDS length and S. officinarum the smallest, at 3,352.6 bp and 3,035.6 bp, respectively (P < 0.001). Except in S. bicolor, the mean CDS length of CNL genes was lower than that of truncated CNL genes in the other three monocotyledons (P < 0.001) (Supplementary Figure 3C).
There were also significant differences in the intron size of NBS-LRR genes among the four species (P < 0.01): the intron length of S. officinarum was the largest and that of S. bicolor the smallest (Supplementary Figure 4A). Analysis of exons showed that M. sinensis had the most exons and S. officinarum the fewest (P < 0.01) (Supplementary Figure 4B). The exon length of NBS-LRR genes was longest in sorghum and shortest in S. spontaneum (P < 0.01) (Supplementary Figure 4C). The intron length of CNL genes was significantly smaller than that of truncated CNL genes (P < 0.01) (Supplementary Figure 5A). The mean exon length of CNL genes was greater than that of truncated CNL genes except in S. spontaneum (P < 0.05) (Supplementary Figure 5C).
Analysis of cis-acting elements and protein motifs is useful for exploring the function of NBS-LRR genes. The functions of the predicted cis-acting elements were primarily related to light response, phytohormone response, stress response, and plant growth and metabolism. The most widely distributed top 10 cis-acting elements were mainly involved in the regulation of transcription, response to light, induction of anaerobic motility, and metabolism of gibberellin and methyl jasmonate (Figure 2A); methyl jasmonate plays an important role in disease resistance in plants (Pichersky and Gershenzon, 2002). In addition, we found that NBS-LRR genes in S. bicolor contained the fewest cis-acting elements among the four grass species (P < 0.05). The top 10 most widely distributed protein motifs in the NBS-LRR genes of the four grass species were analyzed, and their functions were primarily related to catalyzing the reversible interconversion of 3-phosphoglycerate and dihydroxyacetone phosphate, catalyzing substrate phosphorylation, transcriptional regulation, and synthesis pathways of various biological substances (Figure 2B). Based on protein motifs, M. sinensis clustered close to the Saccharum species (S. spontaneum and S. officinarum), while M. sinensis and S. bicolor fell in the same cluster based on promoter elements, illustrating differences in the evolution of conserved NBS-LRR genes between species in terms of regulatory elements and functional motifs.
The ratio of nonsynonymous (Ka) to synonymous (Ks) nucleotide substitution rates for conserved NBS-LRR genes among grass species showed that the distribution of Ka/Ks values differed between species (Figure 3A). A total of 88 shared gene pairs were used to calculate Ka/Ks values for the comparisons between S. spontaneum and M. sinensis, S. bicolor and S. officinarum, respectively. The proportion of gene pairs with high Ka/Ks values (Ka/Ks > 0.7) increased progressively in the comparisons of S. spontaneum with S. bicolor, M. sinensis and S. officinarum, respectively, and the Ka/Ks value of one pair of homologous genes between S. spontaneum and S. officinarum was above 1.1 (Sspon.07G0017980-1A and Soff.08G0002710-1A), indicating that these NBS-LRR genes were subject to positive selection during species evolution. In addition, the Ka/Ks ratios of Sspon.07G0017980-1A with Sobic.003G317300 in S. bicolor and Misin05G292700 in M. sinensis were both greater than 0.7. The Ka/Ks values of CNL and truncated CNL genes confirmed the above trends (Figure 3B), but no significant differences were observed between the two subfamilies.
Transcriptomic analysis of NBS-LRR genes in sugarcane
NBS-LRR genes, as one of the most important classes of disease resistance genes in plants, play an important role in resistance to pathogens (DeYoung and Innes, 2006). We analyzed transcriptomic data related to sugarcane smut, ratoon stunting, leaf scald, and mosaic virus diseases. After filtering out low-quality reads, the clean reads had a Q20 over 90% and a GC content of 53.65%~54% (Supplementary Table 3), showing the high quality of the sequencing data. Considering alleles of the same genes, 131, 126 and 269 differentially expressed NBS-LRR genes were identified between resistant and susceptible plants after challenge by the pathogens of sugarcane smut, ratoon stunting, and leaf scald, respectively, and 18 differentially expressed NBS-LRR genes were identified between infected and healthy sugarcane for mosaic virus disease. After removing alleles based on genome annotations, there were 125, 121, 226, and 18 differentially expressed genes in the four diseases, respectively. Interestingly, the expression patterns among alleles of the same NBS-LRR gene were not always the same. For example, in leaf scald, the expression of Sspon.05G0015970-2C was significantly up-regulated in susceptible plants, while its allele Sspon.05G0015970-3D was significantly up-regulated in resistant plants (Figure 4A). A similar situation was observed for an NBS-LRR gene from S. officinarum: in leaf scald, Soff.05G0011330-4E was significantly up-regulated in resistant plants, while its allele Soff.05G0011330-3D was significantly up-regulated in susceptible plants (Figure 4A).
In this study, we found that, apart from mosaic virus disease, 6, 5 and 38 of the differentially expressed genes for sugarcane smut, ratoon stunting and leaf scald, respectively, had multiple alleles, of which only 7 genes for leaf scald showed allele-specific expression. After removal of alleles, among the differentially expressed NBS-LRR genes for sugarcane smut, 62 (49.6%) and 63 (50.4%) genes were from S. spontaneum and S. officinarum, respectively, of which 28 genes from S. spontaneum and 31 genes from S. officinarum were up-regulated. Among the differentially expressed NBS-LRR genes for ratoon stunting disease, 70 (56.9%) and 51 (43.1%) genes were from S. spontaneum and S. officinarum, respectively, of which 36 genes from S. spontaneum and 22 genes from S. officinarum were up-regulated. Among the differentially expressed NBS-LRR genes for leaf scald, 127 (55.3%) and 99 (44.7%) genes were from S. spontaneum and S. officinarum, respectively, of which 54 genes from S. spontaneum and 53 from S. officinarum were up-regulated. Among the differentially expressed NBS-LRR genes for mosaic virus disease, nine genes came from each species, of which six from S. spontaneum and four from S. officinarum were up-regulated. In modern sugarcane cultivars, about 10-15% of the genome is derived from S. spontaneum and 80% from S. officinarum (Zhang et al., 2018). The proportion of differentially expressed genes from S. spontaneum in sugarcane cultivars was much higher than the theoretical value (P < 0.001, based on S. spontaneum contributing 20% of the genome of sugarcane cultivars), indicating that S. spontaneum makes a greater contribution to disease resistance in modern sugarcane cultivars.
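The comparison against the expected ~20% genome share can be checked with an exact one-sided binomial test, sketched below using the sugarcane smut numbers quoted above (62 of 125 differentially expressed genes from S. spontaneum); whether the paper used this exact test is an assumption.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact one-sided tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 62 of 125 DE NBS-LRR genes from S. spontaneum vs an expected 20% share:
# under the null, the expected count is only 25, so the tail is tiny.
p_value = binom_sf(62, 125, 0.20)
```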
We also investigated differentially expressed genes for each disease at different time points after inoculation (Supplementary Figure 6). In sugarcane smut, three NBS-LRR genes from S. officinarum were differentially expressed at 24 hours after inoculation. In contrast, NBS-LRR genes from S. spontaneum were up-regulated at 48 hours after inoculation in resistant plants, and at 120 hours in susceptible plants. The NBS-LRR genes thus seem to follow different patterns after pathogen challenge, with the genes from S. officinarum responding first to sugarcane smut. For ratoon stunting disease, the expression patterns of NBS-LRR genes from S. spontaneum and S. officinarum followed the same trend. For leaf scald, the number of up-regulated NBS-LRR genes from S. spontaneum and S. officinarum in resistant plants continued to increase after inoculation, while in susceptible plants the number of up-regulated NBS-LRR genes first decreased and then increased. Comparing healthy and susceptible sugarcane challenged by mosaic virus, we found that more NBS-LRR genes from S. spontaneum were up-regulated than down-regulated, while the opposite pattern was observed for S. officinarum.
Integration of the four transcriptomic datasets showed that 125 NBS-LRR genes were differentially expressed in at least two diseases. We screened a total of 12 genes that were significantly differentially expressed (FDR < 0.05 and |log2(fold change)| ≥ 2) in common among sugarcane smut, ratoon stunting, and leaf scald, two of which, Sspon.02G0025530-2B and Soff.05G0001720-5H, were differentially expressed in all four diseases (Figure 4B). Some genes were thus able to respond to multiple diseases. For example, for ratoon stunting and leaf scald, the expression of Sspon.06G0016970-2B in resistant plants was significantly up-regulated after inoculation; for sugarcane smut, ratoon stunting and leaf scald, the expression of Sspon.02G0025530-2B in susceptible plants was significantly up-regulated after inoculation. Notably, the gene Sspon.06G0016970-2B encodes a homolog of the rice disease resistance protein RGA5, which is tightly linked to RGA4 in an inverted tandem arrangement at the Pi-CO39/Pia resistance locus; ectopic activation of RGA4/RGA5 has been reported to confer resistance to bacterial wilt and bacterial leaf streak (Hutin et al., 2016).
Construction of NBS-LRR gene database
To give researchers quick access to information on plant NBS-LRR genes, we constructed the NBS-LRR gene database (http://110.41.19.157:5000/) using Argon Design (Supplementary Figure 7A). The database consists of two modules, data and tools. The data module comprises species information, including genome size, ploidy, total gene number, and number of identified NBS-LRR genes (Supplementary Figure 7B), and transcriptomic data, including the results obtained in this study (Supplementary Figure 7C). The tools module contains BLAST and InterProScan, which allow users to search our database and perform protein annotation with their sequences of interest (Supplementary Figures 7D, E). Compared with similar databases, such as PRGdb (http://www.prgdb.org/prgdb/) and DeepLRR (http://lifenglab.hzau.edu.cn/DeepLRR/index.html), our database provides comprehensive information on NBS-LRR genes in sugarcane for the first time, notably adding expression data for NBS-LRR genes under several sugarcane disease stresses, which facilitates in-depth studies in this area.
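A minimal sketch of how the species-data module could be backed by SQLite, as the text describes; the table schema and field names are illustrative, not the production schema (only the two NBS-LRR counts, 299 and 244, are taken from the text).

```python
import sqlite3

# In-memory database standing in for the site's SQLite file; a Flask view
# would run the same kind of query to render the "Species data" page.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE species (
    name TEXT PRIMARY KEY,
    ploidy TEXT,
    total_genes INTEGER,
    nbs_lrr_genes INTEGER)""")
con.executemany("INSERT INTO species VALUES (?, ?, ?, ?)", [
    ("Saccharum spontaneum", "tetraploid", None, 299),
    ("Sorghum bicolor", "diploid", None, 244),
])
row = con.execute(
    "SELECT nbs_lrr_genes FROM species WHERE name = ?",
    ("Saccharum spontaneum",)).fetchone()
```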
NBS-LRR genes are complex and variable in species evolution
Since Johal and Briggs isolated the first plant R gene, Hm1, from maize (Zea mays L.) in 1992 (Johal and Briggs, 1992), researchers have identified many R genes in a variety of plants, of which more than 70% are classified in the NBS-LRR class (McHale et al., 2006). The ancient origin and large number of subfamilies of plant NBS-LRR genes have certainly made it more difficult to explore their evolutionary patterns among species. In our study, NBS-LRR genes were identified in 23 representative plants, and their comparison showed that the number of NBS-LRR genes was independent of species divergence and genome size, and that the percentage of NBS-LRR genes among all genes differed between species, even within the same clade and genus. WGD may be one of the major factors affecting the number of NBS-LRR genes among species. Both WGD and large-scale duplication of chromosomal segments lead to genome duplication within a species (Zhang, 2003), resulting in expansion of alleles of the same genes. Based on the InterProScan annotation results, 468 and 741 NBS-LRR genes were annotated in S. spontaneum and S. officinarum, respectively, of which 169 and 401, respectively, were additional alleles of the same genes. This large gene expansion is likely related to the fact that sugarcane has experienced at least two WGD events in its evolutionary history (Zhang et al., 2018). Gene duplication also affects the number of NBS-LRR genes in a species. After removal of alleles of the same gene, a total of 36, 75, 93 and 4 NBS-LRR genes, forming 18, 47, 51 and 2 duplicated gene pairs derived from segmental duplication, were identified in S. spontaneum, S. officinarum, M. sinensis and S. bicolor, respectively. We evaluated the effects of these two factors on the number of NBS-LRR genes in S. spontaneum and S. officinarum. At least 36.1% and 45.9% of NBS-LRR genes in S. spontaneum and S. officinarum, respectively, were generated through WGD, while 18 (3.8%) and 47 (6.3%) NBS-LRR genes, respectively, were generated by gene expansion. On this basis, we speculate that WGD has had a greater effect than gene expansion on the number of NBS-LRR genes in sugarcane (P < 0.05). In contrast to gene duplication, gene loss is another factor changing gene numbers. For instance, A. thaliana, which has undergone multiple WGD events in its evolutionary history (Blanc et al., 2003; Bowers et al., 2003), still has a genome size of ~150 Mb, and the latest high-quality A. thaliana genome assembled by Wang et al. (2022a) was only about 133 Mb, implying that the vast majority of duplicated genes are not retained after polyploidization events. In fact, gene loss is an inevitable trend of genome reconstruction after polyploidization (Lynch and Conery, 2000; Liang and Schnable, 2018). In S. spontaneum and S. officinarum, more than 94% of NBS-LRR genes lost at least one allele. The balance of energy costs may be another reason species maintain a relatively small number of NBS-LRR genes (Tian et al., 2003). Plants do not expand NBS-LRR genes without limit, and their number remained below 1.5% of the total number of genes in the species studied (Figure 1A). It has been reported that plants lose some of their R genes to avoid fitness costs (Brown, 2002).
The differentiation and evolutionary patterns of the different subfamilies of NBS-LRR genes are also long-standing puzzles. TNL and CNL genes are the two major subfamilies of NBS-LRR genes. The origin of NBS-LRR genes has been shown to predate the divergence of chlorophytes and streptophytes (Shao et al., 2019), and Shao et al. (2016) found that the differentiation of the TNL subfamily may be earlier than that of CNL genes. Among the 23 species we studied, TNL and CNL genes were found in the earliest-diverging angiosperms, Amborella trichopoda, Nymphaea colorata and Nymphaea versipellis. As species evolved, TNL genes persisted only in dicotyledons, not in monocotyledons. It remains unclear why TNL genes were lost in monocotyledons. Moreover, according to our study, not all dicotyledonous plants contain TNL genes; for example, S. indicum is a dicotyledon but lacks them. In addition, the number of CNL genes was positively correlated with the number of NBS-LRR genes, whereas the number of TNL genes was not. The proportion of CNL genes in S. bicolor and M. sinensis was higher than that in sugarcane. In the three monocots other than sorghum, the proportion of CNL genes among conserved NBS-LRR genes was higher than at the whole-genome level. Moreover, the statistical analyses of allele loss in S. spontaneum and S. officinarum showed that CNL genes were less prone to allele loss than truncated CNL genes. These results support the view that TNL and CNL genes likely followed different evolutionary patterns.
NBS-LRR genes in modern sugarcane cultivar
In the breeding of modern sugarcane cultivars, S. officinarum was crossed with S. spontaneum, and the offspring were backcrossed with S. officinarum for several generations to obtain cultivars with high sugar content and high stress resistance. In this study, we analyzed multiple sets of transcriptomic data from sugarcane challenged by sugarcane diseases. Surprisingly, more of the differentially expressed genes in modern sugarcane cultivars came from S. spontaneum than from S. officinarum, and the proportions of differentially expressed NBS-LRR genes from S. spontaneum were significantly higher than expected. This result indicates that NBS-LRR genes from S. spontaneum were selected for in sugarcane breeding programs, and confirms their significant contribution to disease resistance in modern sugarcane cultivars, even though S. spontaneum contributes less than 20% of the genome of sugarcane cultivars.
Plant R genes usually target specific pathogen genes in defense against pathogens, but some R genes can mediate defense against multiple diseases. Such genes are called multi-disease resistance genes (Fukuoka et al., 2014). For example, the Lr34 and Lr67 genes have been shown to confer resistance to a variety of wheat pathogens, including Puccinia triticina, Puccinia striiformis and Blumeria graminis (Krattinger et al., 2009; Moore et al., 2015). In this study, some NBS-LRR genes, such as Sspon.02G0025530-2B and Soff.05G0001720-5H, were also found to respond to various sugarcane diseases. The mechanisms by which NBS-LRR genes respond to multiple diseases are heterogeneous and complex. Studies have shown that the Lr34-encoded protein is located on the cell membrane, where it may affect membrane structure by regulating phospholipid metabolism and mediate the abscisic acid (ABA) signaling pathway to achieve resistance to multiple diseases (Deppe et al., 2018; Krattinger et al., 2019). Lr67 encodes a hexose transporter that may be related to glucose metabolism in mediating disease resistance (Milne et al., 2019). NBS-LRR genes trigger a strong ETI immune response by directly or indirectly recognizing pathogen effectors, resulting in a hypersensitive response characterized by programmed cell death that resists pathogen invasion (Jones and Dangl, 2006). We speculate that NBS-LRR genes responding to multiple sugarcane diseases may do so by recognizing core effectors shared by pathogens. However, the molecular mechanisms by which these genes work need further investigation.
Allele-specific expression (ASE) of NBS-LRR genes exists in sugarcane under disease stress. ASE is reported to be a critical mode of gene regulation, and cis-acting genetic variation is one of the main drivers of expression differences between alleles (Gaur et al., 2013; Hill et al., 2021). ASE is ubiquitous in a variety of organisms (Knight, 2004), and studies have shown that it plays an important role in Zea mays (Springer and Stupar, 2007), A. thaliana (Todesco et al., 2010), and Oryza sativa (He et al., 2010). In addition, studies of human diseases have shown that ASE of genes encoding pathogenic enzymes can affect an individual's susceptibility to disease (Emison et al., 2010; Finch et al., 2011; Berlivet et al., 2012; EMBRACE et al., 2012). Among the differentially expressed genes in response to leaf scald, we identified seven NBS-LRR genes with allele-specific expression profiles, accounting for 3% of the differentially expressed genes in leaf scald; these NBS-LRR genes came from both S. spontaneum and S. officinarum, suggesting that ASE is also an important regulatory mechanism in sugarcane disease resistance. However, due to the complexity of ASE regulation and the limitations of detection technology, research on ASE is still in its infancy and needs further exploration.
Conclusion
By genome-wide identification of NBS-LRR genes in 23 representative species and comparisons among four grass species, we found that the number of NBS-LRR genes did not correlate with genome size or total gene number, and that whole genome duplication may be the main factor affecting the number of NBS-LRR genes in sugarcane. In addition, our comparisons supported previous researchers' view that TNL and CNL genes have different evolutionary patterns. Transcriptomic analysis of sugarcane challenged by different diseases showed that more differentially expressed NBS-LRR genes were derived from S. spontaneum than from S. officinarum, and that the proportion of differentially expressed genes from S. spontaneum was significantly higher than the expected ratio in modern sugarcane cultivars, revealing its contribution to disease resistance. Moreover, allele-specific expression of NBS-LRR genes was observed in response to pathogen infection in sugarcane. In conclusion, these comprehensive analyses of plant NBS-LRR genes provide a deeper exploration of their evolutionary patterns and contribute important gene resources for improving disease resistance in sugarcane.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Author contributions
ZJ and XP conceived and designed the project. ZJ and MY obtained and analyzed the data and wrote the manuscript. SC participated in the data analysis and discussion. HZ was involved in building the plant NBS-LRR gene database. XP revised the manuscript. All authors contributed to the article and approved the submitted version.
"year": 2023,
"sha1": "ca7fcd79dc0f7a7a427f227a4870d17cdf62cb2f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "21656f506a5c1f717644139cae7fa7cfac3b634c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
The role of body mass index-for-age in the assessment of acute malnutrition and obesity in Moroccan hospitalized children
Hassan Barouaca1,*,† , Dalal Ben Loubir1, Bachir El Bouhali2, Nabil Tachfouti3, Adil El Midaoui4,† 1Higher Institute of Nursing Profession, Techniques of Health, Errachidia, Morocco. 2Department of Biology, Faculty of Sciences and Techniques Errachidia, Moulay Ismail University of Meknes, Errachidia, Morocco. 3Laboratoire d'Epidémiologie, Recherche Clinique et de Santé communautaire, Faculté de Médecine et de Pharmacie Fès, Fès, Maroc. 4 Research Team “Biology, Environment and Health,” Department of Biology, Faculty of Sciences and Techniques Errachidia, Moulay Ismail University of Meknes, Errachidia, Morocco.
INTRODUCTION
It is estimated that undernutrition is still an underlying factor in nearly half of deaths among children under 5 years [Black et al., 2013; World Health Organization, United Nations Children's Fund (UNICEF), 2019]. Developing countries are called upon to confront both rising obesity (Black et al., 2013) and the persistence of acute malnutrition as a major health burden (Müller and Krawinkel, 2005). In health care settings, malnutrition is a common finding during hospitalization, particularly in pediatric wards (Joosten and Hulst, 2011). Malnutrition may be aggravated by inadequate practices such as the lack of nutritional screening. Indeed, the Academy of Nutrition and Dietetics and the American Society for Parenteral and Enteral Nutrition recommend the assessment of pediatric malnutrition in the hospital setting (Becker et al., 2014; Mehta et al., 2013). The WHO recommends the use of conventional nutritional tools, based on growth standards, to assess malnutrition (WHO, 2006). These tools are z-score indices based on weight, height/length, and Mid-Upper Arm Circumference (MUAC), adjusted for age and sex. This form of nutritional assessment has been widely used in communities, in health settings, and at the individual clinical level. Meijers et al. (2010) emphasized that the complexity of defining malnutrition makes it difficult to establish an appropriate method to assess nutritional status and thus to determine adequately the prevalence of malnutrition in children under 5 years. The literature shows that the prevalence of malnutrition in hospitalized children is highly variable, oscillating between 6%-37% in developed countries and 6.1%-31.8% in developing countries, depending on the diagnostic criteria (Daskalou et al., 2016; Marginean et al., 2014; Pawellek et al., 2008; Hankard et al., 2001; Öztürk et al., 2003). Becker et al. (2014) reported that this variation in the prevalence of malnutrition is due to the variability of diagnostic tools and their cut-off points. Consequently, it is difficult to determine the exact prevalence of malnutrition because of the lack of standardized diagnostic criteria. Studies have confirmed that differences in malnutrition prevalence are explained by significant variations among the anthropometric parameters used, notably between WFH-Z and MUAC (Bilukha and Leidman, 2018; Dukhi et al., 2017) or between MUAC z-score (MUAC-Z) and MUAC. Moreover, in Morocco, malnutrition remains a public health problem despite the efforts devoted by the government over the last five decades to fight child undernutrition (Barouaca, 2012). In addition, household survey results (ENPSF, 2018) showed a notable improvement in child undernutrition, alongside the emergence of overweight and obesity and the persistence of micronutrient deficiencies, particularly in children under 5 years. It is noteworthy that the health costs and loss of productivity linked to malnutrition and micronutrient deficiencies amount to 5% of the Moroccan Gross Domestic Product (Ministère de la Santé, 2012). According to Ministry of Health data, endocrine, nutritional, and metabolic diseases, as well as severe acute malnutrition, rank among the important causes of hospital morbidity, at 7.1% and 0.9%, respectively (Ministère de la Santé, 2012). Furthermore, even if primary malnutrition prevalence has decreased at the community level in children under 5 years (ENPSF, 2018), its impact on hospital pediatric care settings is still unknown. The paucity of data on the nutritional status of children at admission, or at risk of malnutrition, in Moroccan hospitals prompted us to set up this study. Thus, the goal of this study was to assess malnutrition prevalence in Moroccan children under 5 years admitted to a regional hospital.
A further aim of the present study was to address this variability in malnutrition prevalence by analyzing the agreement between WFH-Z, body mass index-for-age z-score (BMI-Z), MUAC, and MUAC-Z in the assessment of acute malnutrition, and between WFH-Z and BMI-Z for obesity.
Design
We performed a longitudinal observational study between February 1, 2018, and May 30, 2019.
Setting
The survey was carried out in the pediatric unit of the Moulay Ali Chérif Regional Hospital Center in Errachidia City. This regional hospital is located in the Drâa-Tafilalet region, in southeastern Morocco. The population of this region has a low socioeconomic background, with poverty affecting 14.5% of the urban and 19.5% of the rural areas. Economic activity is essentially based on agriculture (90%) and related activities (MIDGCL, 2015).
Inclusion and exclusion criteria
All children aged 1 to 60 months admitted to the pediatric wards for at least 48 hours were eligible to participate in the study. Exclusion criteria were age of less than 1 month, birth weight of less than 2,500 g or a history of prematurity, and refusal of the guardian or the children's parents to participate in the study.
Data collection
Data were collected using a structured questionnaire comprising items on sociodemographic characteristics and anthropometric measures. The diseases diagnosed at admission and the results of biological analyses were collected from the medical file of each participant. The anthropometric measurements, notably weight, height or length (for children aged ≤ 2 years), and MUAC, were taken upon admission. Participants were weighed with no shoes and minimal clothing using a calibrated digital scale (KINLEE-20, precision 5 g). For infants/toddlers (age ≤ 2 years), weight was taken without clothes and nappies, while length was measured with an infantometer to the nearest millimeter. In children who could stand, height was measured with a wall-mounted stadiometer to the nearest millimeter. MUAC was measured with a plastic tape to the nearest 0.1 cm. The anthropometric measures, together with the sex and age (in months) of each child, were then exported to WHO Anthro software (version 3.2.2) and converted into sex- and age-specific z-scores of the conventional indices of malnutrition. The z-score values are based on the WHO growth standards for children between birth and 60 months (WHO, 2006). Nutritional status was assessed by conventional classification of the anthropometric indices [weight-for-age z-score (WFA-Z), height-for-age z-score (HFA-Z), WFH-Z, BMI-Z, MUAC, and MUAC-for-age z-score (MUAC-Z)] according to the WHO cut-off points (De Onis, 2015).
All these conventional classifications were used to assess the prevalence of malnutrition, defined as severe wasting (WFH-Z < -3SD, MUAC-Z < -3SD, and MUAC < 11.5 cm); wasting (or acute malnutrition, WFH-Z < -2SD and MUAC < 12.5 cm); severe underweight (WFA-Z < -3SD); underweight (WFA-Z < -2SD); severe stunting (chronic malnutrition, HFA-Z < -3SD); stunting (HFA-Z < -2SD); severe undernutrition (MUAC-Z < -2SD); undernutrition (MUAC-Z < -1SD); risk of overweight (BMI-Z > 1SD; WFH-Z > 1SD); overweight (BMI-Z > 2SD; WFH-Z > 2SD); and obesity (WFH-Z > 3SD; BMI-Z > 3SD). Data were collected by two trained nurses during enrollment, using a face-to-face anonymous questionnaire administered to the parents.
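The cut-off scheme above amounts to a simple threshold classifier. As a minimal sketch (hypothetical helper, not part of the study's software), the WFH-Z categories listed in the text can be encoded as follows; the band between -2SD and +1SD is not labeled in the text and is treated here as "normal":

```python
def classify_wfh(wfh_z: float) -> str:
    """Map a weight-for-height z-score (WFH-Z) to the WHO-based
    categories used in the text (illustrative thresholds only)."""
    if wfh_z < -3:
        return "severe wasting"
    if wfh_z < -2:
        return "wasting (acute malnutrition)"
    if wfh_z > 3:
        return "obese"
    if wfh_z > 2:
        return "overweight"
    if wfh_z > 1:
        return "risk of overweight"
    return "normal"  # unlabeled band in the text: -2SD <= z <= 1SD
```

The same pattern applies to BMI-Z, with MUAC using absolute cut-offs (11.5 cm, 12.5 cm) rather than z-scores.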
Data management
The data were entered and double-checked in Excel and then analyzed with MedCalc® (version 9.3.0.0, http://www.medcal.be). Student's t-test was used to compare mean differences between groups. The chi-square test was used to compare the gender groups and to compare the prevalence levels derived from pairs of anthropometric indices. Cohen's Kappa coefficient was used to assess the degree of agreement of WFH-Z with BMI-Z, MUAC-Z, and MUAC in screening for acute malnutrition, and between WFH-Z and BMI-Z in screening for obesity (Cohen, 1968). Kappa values were interpreted according to the scores proposed by Landis and Koch (1977) for strength of agreement: no agreement (< 0); poor agreement (0-0.19); mild agreement (0.20-0.39); moderate agreement (0.40-0.59); substantial agreement (0.60-0.79); and almost perfect agreement (0.80-1.00). A p-value < 0.05 was considered statistically significant for all tests. ROC plots were used to examine the accuracy of a diagnostic test (Zweig and Campbell, 1993; Hanley and McNeil, 1982). The ROC curve plots sensitivity against 1-specificity across a range of values of BMI-Z, MUAC, and MUAC-Z to assess their diagnostic performance for acute malnutrition. Validity was evaluated by the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of each test (Parikh et al., 2008). An area under the ROC curve (AUC) of 0.5 suggests no discrimination, 0.7-0.8 is considered acceptable, 0.8-0.9 is considered excellent, and more than 0.9 suggests outstanding accuracy (Hosmer and Lemeshow, 2000).
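To illustrate the agreement and accuracy measures described here, the following sketch (hypothetical helpers, not the MedCalc implementation) computes an unweighted Cohen's Kappa from a 2×2 agreement table and maps Kappa and AUC values to the Landis-Koch and Hosmer-Lemeshow bands quoted above:

```python
def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Unweighted Cohen's kappa from 2x2 agreement counts:
    a = both tests positive, b and c = discordant, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def landis_koch(kappa: float) -> str:
    """Strength-of-agreement label per Landis and Koch (1977)."""
    if kappa < 0:
        return "no agreement"
    bands = [(0.20, "poor"), (0.40, "mild"), (0.60, "moderate"),
             (0.80, "substantial"), (1.01, "almost perfect")]
    return next(label for upper, label in bands if kappa < upper)

def auc_band(auc: float) -> str:
    """Accuracy label per Hosmer and Lemeshow (2000); the 0.5-0.7
    range is not labeled in the text, so 'poor' is a placeholder."""
    if auc <= 0.5:
        return "no discrimination"
    if auc < 0.7:
        return "poor"
    if auc < 0.8:
        return "acceptable"
    if auc < 0.9:
        return "excellent"
    return "outstanding"
```

For example, the reported Kappa of 0.872 for BMI-Z versus WFH-Z falls in the "almost perfect" band, and the AUC of 0.98 in the "outstanding" band.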
Regulatory and ethical aspects
The goal of the study was explained in detail to the children's parents or guardians in the local language before beginning the patient selection process. Permission to conduct the study was obtained from the Errachidia Regional Health Directorate and from the director of the Moulay Ali Chérif Hospital. The study was approved by the local medical ethics committee of the Health Ministry of Morocco, regional direction of Drâa-Tafilalet, under ethical approval number 04-07-17-285. Written informed consent was obtained from the guardian or parents of each child, with confidentiality preserved in accordance with the Declaration of Helsinki (World Medical Association, 2013). Participation or refusal did not affect the medical treatment of the children during their hospital stay.
RESULTS AND DISCUSSION
Three hundred thirty-seven children with a mean age of 16.5 ± 14.74 months were enrolled in the present survey (Table 1). As shown in Table 2, the overall prevalence of wasting (acute malnutrition) among the 337 admitted children, as represented by WFH-Z < -2SD, MUAC < 12.5 cm, MUAC-Z < -2SD, and BMI-Z < -2SD, was 41 (12.17%), 28 (8.30%), 18 (5.71%), and 49 (14.54%), respectively. The prevalence of severe wasting, as depicted by WFH-Z < -3SD, MUAC < 11.5 cm, MUAC-Z < -3SD, and BMI-Z < -3SD, was 14 (4.15%), 8 (2.37%), 4 (1.27%), and 14 (4.15%), respectively, with higher values in female than in male participants for WFH-Z and BMI-Z. The prevalence of severe underweight (WFA-Z < -3SD) was low at 4 (1.19%), with significantly (p < 0.0001) higher values in female than in male participants. The overall prevalence of stunting (HFA-Z < -2SD) was 45 (13.35%), whereas that of its severe form was 18 (5.34%), with higher values in female than in male patients for the severe form. As shown in Table 2, the prevalence of obesity (BMI-Z > 3SD or WFH-Z > 3SD) was 12 (3.56%), with higher values in males than in females (p = 0.039). Of all the children, 40 (10.39%) were considered overweight when expressed as WFH-Z > 2SD and 32 (9.50%) as expressed by BMI-Z > 2SD. The prevalence of risk of overweight depicted by BMI-Z > 1SD and by WFH-Z > 1SD was similar [82 (24.33%) and 81 (24.03%)] and higher than all other prevalence estimates (Table 2). Table 3 displays the agreement between the anthropometric indices assessed by Cohen's Kappa coefficient and their respective diagnostic accuracy evaluated by the ROC curve. The highest level of agreement was achieved between BMI-Z and WFH-Z in assessing acute malnutrition (Kappa = 0.872) and obesity (Kappa = 0.827).
The ROC curve of BMI-Z was found to have an excellent performance ability with higher sensitivity and specificity to discriminate children with acute malnutrition and obesity (Fig. 1).
To the authors' knowledge, this study is the first to describe in detail malnutrition prevalence using standard nutritional assessment tools in Moroccan children aged from 1 month to 5 years at hospital admission. The overall prevalence of wasting (WFH-Z < -2SD), underweight (WFA-Z < -2SD), and stunting (HFA-Z < -2SD) was 12.17%, 4.75%, and 13.35%, respectively. These high values show clearly that children admitted to this health care setting were more undernourished than community children at the national level (2.6%, 2.9%, and 15.1%) or at the urban level (2.5%, 2.0%, and 10.4%), respectively (ENPSF, 2018). The wasting prevalence found in the present study is in accordance with previous studies (Moy et al., 1990; Sissaoui et al., 2013) and lower than the 27.7% reported by Doğan et al. (2005) in regional Turkish hospitals. Although there was a tendency toward a difference in malnutrition prevalence between males and females, the difference was not statistically significant. The high stunting prevalence (13.35%) observed here demonstrates the endemic persistence of chronic child malnutrition despite the efforts made by the Moroccan government over the last five decades (Ministère de la Santé, 2011).
More interestingly, we found that 3.56% of children were obese as determined by BMI-Z > 3SD or WFH-Z > 3SD, and 24% were at risk of overweight as depicted by either WFH-Z > 1SD or BMI-Z > 1SD, whereas the prevalence of overweight at admission was 9.50% as indicated by BMI-Z > 2SD and 11.39% as represented by WFH-Z > 2SD. The prevalence of obesity observed in the present study agrees with that reported previously (ENPSF, 2018). The prevalence of overweight is equivalent to that found at both the national level (10.8%) and in the Drâa-Tafilalet region (9.8%). In terms of epidemiological relevance, the coexistence of a high prevalence of wasting (12.17%) and stunting (13.35%) with a high prevalence of overweight (9.50%) and obesity (3.56%), together with micronutrient deficiencies (Ministère de la Santé, 2011), points to a triple burden of malnutrition and should be considered a serious health problem (WHO, 1995). This paradoxical situation challenges health professionals to prioritize care for malnutrition, as Morocco is committed to the Sustainable Development Goals, set by the United Nations in 2015, for identifying, preventing, and controlling noncommunicable diseases.
The WHO recommends the use of MUAC or WFH-Z in the assessment of acute malnutrition (WHO, 2013). However, in the present study we found heterogeneity in acute malnutrition prevalence between MUAC and WFH-Z. The prevalence of moderate wasting depicted by MUAC < 12.5 cm [28 (8.30%)] was considerably lower than that indicated by WFH-Z < -2SD [41 (12.70%)]. In the same way, the severe wasting prevalence depicted by MUAC < 11.5 cm [8 (2.37%)] was clearly lower than that indicated by WFH-Z < -3SD [14 (4.15%)]. Interestingly, the value for BMI-Z < -3SD [14 (4.15%)] was equivalent to that for WFH-Z < -3SD [14 (4.15%)]. These results are in concordance with those reported previously (Dukhi et al., 2017; Wieringa et al., 2018). In fact, the present study demonstrates disagreement between WFH-Z, MUAC, and MUAC-Z in assessing acute malnutrition, as indicated by the Kappa indices [Kappa (WFH-Z vs. MUAC) = 0.019; Kappa (WFH-Z vs. MUAC-Z) = -0.041] (Table 3). These results agree with those reported previously (Bélanger et al., 2019; Bilukha and Leidman, 2018; Hossain et al., 2017; Tadesse et al., 2017). This variation in malnutrition prevalence was also present between the gender groups. The present study shows that severe wasting prevalence in males and females is higher when represented by WFH-Z < -3SD (3.26% and 0.89%) than when indicated by MUAC < 11.5 cm (0.59% and 1.78%), respectively (Table 2). These results are in concordance with previous studies (Tessema et al., 2020). Hence, the use of MUAC < 11.5 cm overlooks about 50% of males with severe wasting compared to WFH-Z < -3SD. These results are in concordance with those of Grellety and Golden (2016). MUAC is known to be an easy and preferred tool for assessing global moderate/severe acute malnutrition in the community (Sachdeva et al., 2016). However, Fiorentino et al. (2016) reported that MUAC is not convincing as a single criterion for screening the real prevalence of either moderate or severe acute malnutrition. Moreover, the fact that malnutrition prevalence represented by MUAC values was lower than that indicated by WFH-Z < -3SD or WFH-Z < -2SD suggests that undernourished children could remain undiagnosed. One may thus fail to determine the real malnutrition prevalence and to treat these vulnerable patients, which could lead to an increased risk of death. To date, no single tool has been established for pediatric malnutrition from childhood to adulthood. In this regard, McCarthy et al. (2019) pointed out the lack of consistency in the type of tools and their cut-off limits for assessing the extent of malnutrition prevalence. Meijers et al. (2010) argued that the most appropriate and suitable criteria to define malnutrition remain under debate. The lack of consistency in the type of measures and their cut-off values prevents estimation of the true prevalence of the overall burden of malnutrition in children. This heterogeneity obliges us not only to standardize the indicators used to diagnose pediatric malnutrition but also to refine the malnutrition thresholds, in order to maximize our ability to correctly identify malnourished children at high risk of death and to avoid missing their treatment. In this context, BMI-Z seems to ensure coherence in malnutrition prevalence from birth to adulthood, avoiding the weaknesses of individual anthropometric indices and thus the heterogeneity in malnutrition prevalence. Indeed, the WHO has updated the BMI-for-age growth reference to include children < 2 years (WHO, 2006).
As indicated in Table 3, BMI-for-age and WFH-Z give an equal prevalence when assessing acute malnutrition or obesity, with almost perfect agreement in diagnosing acute malnutrition (BMI-Z < -2SD, Kappa = 0.872) and obesity (BMI-Z > 3SD, Kappa = 0.827). These results are in accordance with the findings of Furlong et al. (2016), who showed that BMI and WFH-Z present high agreement in the determination of acute malnutrition. In addition, Zong et al. (2017) reported that the BMI and WFH-Z indicators show high concordance in the assessment of wasting, overweight, and obesity.
In the present study, we found almost perfect agreement between WFH-Z and BMI-Z, as reflected by the outstanding diagnostic accuracy for acute malnutrition (AUC = 0.98) and obesity (AUC = 0.99) (Table 4). Moreover, the high sensitivity and specificity indices as well as the high PPV and NPV values (Table 4) support the idea that BMI-Z may be considered a suitable indicator of acute malnutrition and obesity. These results are in agreement with those reported previously by Chiabi et al. (2017). Hence, we suggest that, in clinical practice, the use of BMI-Z may facilitate the assessment of atypical growth patterns (wasting and obesity) from birth to adulthood. This may avoid inter- and intra-variability in malnutrition prevalence, facilitating comparison and helping to counter the rising incidence of obesity in early life, which has reached alarming proportions in many countries (Ng et al., 2013). We therefore consider BMI-for-age a suitable and feasible indicator for assessing the overall prevalence of wasting and obesity in children under 5 years.
CONCLUSION
The results of the present study showed that Moroccan children under 5 years from the Drâa-Tafilalet area present a high prevalence of wasting (12.17%), stunting (13.35%), risk of overweight (24.64%), and overweight (11.39%), and a lower prevalence of underweight (4.75%) and obesity (3.56%). BMI-Z showed almost perfect agreement in diagnosing acute malnutrition (Kappa = 0.872) and obesity (Kappa = 0.827), with high sensitivity and specificity indices as well as high PPV and NPV. The present study also demonstrated that the AUC of BMI-Z showed outstanding diagnostic accuracy for assessing acute malnutrition (AUC = 0.98) and obesity (AUC = 0.99) compared with MUAC (AUC = 0.75) and MUAC-Z (AUC = 0.69). Thus, the present study suggests that BMI-Z could be a good alternative indicator for assessing acute malnutrition and obesity in children aged from 1 month to 5 years. These findings emphasize that the high child malnutrition prevalence observed in the Drâa-Tafilalet region places responsibility on clinical providers to identify, treat, and prevent malnutrition early, with locally appropriate measures, in the pediatric care setting.
ACKNOWLEDGMENTS
The authors would like to thank all participating children and their parents for taking part in this study and the dietician and physicians for their cooperation.
AUTHOR'S CONTRIBUTION
Hassan Barouaca conceived and designed the study. Hassan Barouaca and Adil El Midaoui analyzed the data. Nabil Tachfouti, Bachir El Bouhali, and Dalal Ben Loubir contributed to the discussion of the manuscript. Hassan Barouaca drafted the paper. Adil El Midaoui edited the final version of the paper. All authors approved the final manuscript.
DATA AVAILABILITY STATEMENT
The data used to support the findings of this study are available from the corresponding author upon request.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interests regarding the publication of this paper.
FUNDING
We have received no source of funding that supports this study.
PUBLISHER'S NOTE
This journal remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Differential sensitivity of the 2020 revised comprehensive diagnostic criteria and the 2019 ACR/EULAR classification criteria across IgG4-related disease phenotypes: results from a Norwegian cohort
Background We investigated the sensitivity of the 2020 Revised Comprehensive Diagnostic Criteria (RCD) and the 2019 ACR/EULAR classification criteria across the four identified IgG4-related disease (IgG4-RD) phenotypes: “Pancreato-Hepato-Biliary”, “Retroperitoneum and Aorta”, “Head and Neck-limited” and “Mikulicz’ and Systemic” in a well-characterized patient cohort. Methods We included adult patients diagnosed with IgG4-RD after comprehensive clinical assessment at Oslo University Hospital in Norway. We assigned patients to IgG4-RD phenotypes based on pattern of organ involvement and assessed fulfillment of the RCD and the 2019 ACR/EULAR classification criteria. Differences between phenotype groups were analyzed using one-way ANOVA for continuous variables and contingency tables for categorical variables. Results The study cohort included 79 IgG4-RD patients assigned to the “Pancreato-Hepato-Biliary” (22.8%), “Retroperitoneum and Aorta” (22.8%), “Head and Neck-limited” (29.1%), and “Mikulicz’ and Systemic” (25.3%) phenotype groups, respectively. While 72/79 (91.1%) patients in total fulfilled the RCD, the proportion differed across phenotype groups and was lowest in the “Retroperitoneum and Aorta” group (66.7%, p < 0.001). Among the 57 (72.2%) patients meeting the 2019 ACR/EULAR classification criteria, the proportion was again lowest in the “Retroperitoneum and Aorta” group (27.8%, p < 0.001). Conclusion The results from this study indicate that IgG4-RD patients with the “Retroperitoneum and Aorta” phenotype fulfill diagnostic criteria and classification criteria less often than patients with other IgG4-RD phenotypes. Accordingly, this phenotype is at risk of being systematically selected against in observational studies and randomized clinical trials, with potential implications for patients, caregivers and future definitions of IgG4-RD.
Introduction
IgG4-related disease (IgG4-RD) is a fibroinflammatory systemic disease that can involve nearly any organ. Core features include tissue infiltration of IgG4-positive plasma cells causing tumefactive lesions and/or organomegaly, frequently accompanied by elevated serum IgG4 concentration [1].
IgG4-RD is a diagnostic challenge, owing to its heterogeneous presentations and lack of pathognomonic features. Diagnosis requires correlation of clinical, serological, radiological and/or histopathological findings [2]. Comprehensive Diagnostic Criteria (CDC) were devised in 2011 [3] and revised in 2020 (revised CDC, RCD) [4] to aid diagnosis. The CDC and RCD focus on core disease features, but their sensitivity and specificity have not been systematically evaluated [4]. Therefore, the diagnosis of IgG4-RD currently rests on expert clinical assessment.
The unsettled status of diagnostic criteria for IgG4-RD is not unexpected. It reflects that the development of accurate diagnostic criteria for complex diseases that overlap with mimicking conditions is inherently challenging, as evident from the near complete absence of diagnostic criteria in rheumatology [5]. Instead, the ACR and EULAR have invested major resources in the development of classification criteria for research purposes [5]. In general, classification criteria aim to select homogeneous cases from patient cohorts clinically diagnosed with the disease in question. As this purpose, by definition, requires high specificity, a potential weakness of classification criteria is that they may need to sacrifice sensitivity to optimize specificity. Though not intended, low sensitivity may introduce biases, including skewed representation of disease phenotypes. If low sensitivity of classification criteria skews phenotype distribution, research output will suffer from the same bias.
The 2019 ACR/EULAR IgG4-RD classification criteria were developed by an international expert group. In the two separate validation cohorts, the reported sensitivities of the criteria were 85.5% and 82.0%, respectively, while specificities were 99.2% and 97.8% [2].
Following publication of the 2019 ACR/EULAR classification criteria, Wallace et al. used data from the validation cohorts to identify four distinct clinical phenotypes of IgG4-RD with different patterns of organ involvement: (i) “Pancreato-Hepato-Biliary”; (ii) “Retroperitoneum and Aorta”; (iii) “Head and Neck-limited”; and (iv) “Mikulicz’ and Systemic” [6]. Importantly, in addition to differing organ involvement, the phenotypes differed in demographic features and serum IgG4 concentrations, indicating biological differences which may impact disease course. To date, there are no results from independent IgG4-RD cohorts showing how well the RCD and the 2019 ACR/EULAR classification criteria perform across the four phenotypes.
Here, we aimed to assess the sensitivity of the RCD and the 2019 ACR/EULAR classification criteria across the four phenotypes. We included a well-characterized Norwegian cohort of patients with IgG4-RD diagnosed by expert clinical assessment, stratified them by phenotype, and assessed criteria performance. As our study cohort did not include patients diagnosed with mimicking conditions, we were not able to assess the specificity of the criteria.
Methods
At the Department of Rheumatology at Oslo University Hospital (OUH), we consecutively include consenting adult patients (≥ 18 years) diagnosed with IgG4-RD by expert clinical assessment in the Norwegian systemic connective tissue disease and vasculitis registry (NOSVAR) [7]. For this study, we included IgG4-RD patients from NOSVAR diagnosed between 2001 and 2022. Data were retrieved from NOSVAR and the electronic medical records.
Elevated serum IgG4 levels were defined as > 1.35 g/L for the CDC and RCD criteria [3,4], and > 2.01 g/L (upper limit of normal range at the OUH laboratory) for the 2019 ACR/EULAR classification criteria [2], as per the criteria's definitions.
Organ involvement was determined by clinical, histopathological and/or radiological findings, where other causes were deemed unlikely. Multi-organ involvement was defined as ≥ 2 involved organs. Two rheumatologists (JV, ØMi) assessed fulfilment of the CDC, RCD and 2019 ACR/EULAR classification criteria, and assigned patients to one of four phenotypes based on the pattern of organ involvement [6].
Written informed consent was given by the included IgG4-RD patients as a requirement for inclusion in NOSVAR. The study was conducted in compliance with the Helsinki Declaration and approved by the regional ethics committee (REK #342136).
Assessment of CDC and RCD
Both the CDC and RCD include three variables: (i) clinical and radiological findings suggestive of IgG4-RD; (ii) elevated serum IgG4 level (defined as > 1.35 g/L); and (iii) compatible histopathological findings [3,4]. According to the CDC and RCD statements, patients were designated as "definite" (i + ii + iii), "probable" (i + iii) or "possible" (i + ii) IgG4-RD cases. Fulfilment of the histopathological domain of the CDC requires both (a) lymphoplasmacytic infiltration and fibrosis and (b) > 10 IgG4-positive (IgG4+) plasma cells per high power field (hpf) and a ratio of IgG4+/IgG+ plasma cells > 0.40 [3]. The histopathological domain of the RCD includes the same two variables, but also (c) typical tissue fibrosis, particularly storiform fibrosis, or obliterative phlebitis, and fulfilment requires at least two of the three [(a), (b) and/or (c)] [4]. Exclusion criteria for the CDC and RCD are listed in the original documents [3,4] and include mimicking conditions such as granulomatosis with polyangiitis and eosinophilic granulomatosis with polyangiitis.
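The designation logic and the two-of-three RCD histopathology rule described above can be expressed compactly; the following is a sketch under the stated rules (hypothetical function names, booleans standing in for the clinical assessments):

```python
def rcd_histopathology(infiltrate_fibrosis: bool,
                       igg4_counts: bool,
                       storiform_or_phlebitis: bool) -> bool:
    """RCD histopathological domain: fulfilled if at least two of
    (a) lymphoplasmacytic infiltration and fibrosis,
    (b) >10 IgG4+ cells/hpf with IgG4+/IgG+ ratio >0.40,
    (c) storiform fibrosis or obliterative phlebitis, are present."""
    return sum([infiltrate_fibrosis, igg4_counts, storiform_or_phlebitis]) >= 2

def rcd_designation(clinical_radiological: bool,
                    igg4_elevated: bool,
                    histopathology_fulfilled: bool) -> str:
    """'Definite'/'probable'/'possible' designation per the CDC/RCD
    scheme (domains i, ii, iii), assuming no exclusion criteria apply."""
    if not clinical_radiological:
        return "not fulfilled"
    if igg4_elevated and histopathology_fulfilled:
        return "definite"
    if histopathology_fulfilled:
        return "probable"
    if igg4_elevated:
        return "possible"
    return "not fulfilled"
```

Note that a patient with (a) and (c) but a ratio < 0.40 fulfils the RCD histopathology domain but not the CDC's, which is exactly the discrepancy observed in two patients in the Results.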
Assessment of 2019 ACR/EULAR classification criteria
The 2019 ACR/EULAR classification criteria employ a three-step approach, which includes (i) an obligatory entry criterion (involvement of a typical organ with compatible clinical and/or histopathological features); (ii) a set of exclusion criteria; and (iii) a list of classification items with weighted scores assigned to various clinical, serological, and histopathological features. Following exclusion of mimickers, we classified patients as IgG4-RD cases if they (i) met the entry criterion, (ii) had no exclusion criteria and (iii) scored ≥ 20 points by the defined classification items [2].
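The three-step approach can be sketched as a short gate function (illustrative only; the weighted classification items themselves are not reproduced here, only the threshold):

```python
def acr_eular_classify(entry_criterion_met: bool,
                       exclusion_present: bool,
                       item_score: int) -> bool:
    """2019 ACR/EULAR three-step classification as described above:
    (i) entry criterion, (ii) no exclusion criteria,
    (iii) weighted item score of at least 20 points."""
    if not entry_criterion_met:
        return False          # step (i): entry criterion not met
    if exclusion_present:
        return False          # step (ii): an exclusion criterion applies
    return item_score >= 20   # step (iii): threshold on weighted items
```

The three failure modes of this function correspond to the three reasons for non-classification tallied in the Results (n = 3, n = 5 and n = 14, respectively).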
Outcome measures
In this cohort of well-characterized patients diagnosed with IgG4-RD based on expert clinical assessment, we aimed to describe, both at the group and phenotype level: • Fulfilment of the CDC, RCD and 2019 ACR/EULAR classification criteria • Reasons for failure to fulfil the criteria
Statistics
Descriptive statistics were applied using IBM SPSS version 26 for Windows (Armonk, NY: IBM Corp.). Continuous variables are reported as means and standard deviations, and between-group differences were analyzed using one-way ANOVA. Categorical variables are reported as absolute numbers and percentages, and between-group differences were analyzed using contingency tables.
Baseline characteristics, phenotypes, and fulfilment of criteria
The IgG4 study cohort included 79 patients (Table 1). In the “Head and Neck-limited” group, patients were younger (p = 0.002), more often female (p = 0.024), and demonstrated a trend toward a higher proportion of non-white patients.
The "Retroperitoneum and Aorta" group had the highest mean CRP (p < 0.001) and ESR (p = 0.001) and was characterized by the lowest mean serum IgG4 concentration, less frequent multi-organ disease (p = 0.03), and fewer biopsies (p < 0.001).The "Mikulicz' and Systemic" group had the highest mean serum IgG4 concentration and mean number of involved organs (p < 0.001 for both).In total, 72 patients (91.1%) fulfilled the CDC and RCD.Discrepancy between CDC and RCD only occurred twice: two patients deemed "possible" IgG4-RD by CDC were considered "definite" by RCD.This discrepancy related to the histopathological domain of these criteria.Both patients had dense lymphoplasmacytic infiltrate with fibrosis, and > 10 IgG4 + plasma cells per hpf.The tissue IgG4 + /IgG + plasma cell ratio was < 0.40 (hence, "possible" by CDC), but there was evidence of storiform fibrosis and obliterative phlebitis (hence, "definite" by RCD).Given these minor differences, we decided to focus on RCD for all further analyses.Fulfilment of RCD was lower in the "Retroperitoneum and Aorta" group (66.7%) than in the remaining groups: 100% in "Pancreato-Hepato-Biliary", 100% in "Head and Neck-limited" and 95.0% in "Mikulicz' and Systemic" phenotype.The between-group difference was statistically significant (p < 0.001).
Fifty-seven patients (72.2%) in the IgG4-RD cohort fulfilled the 2019 ACR/EULAR classification criteria, with 100% meeting the criteria in both the "Pancreato-Hepato-Biliary" and "Mikulicz' and Systemic" groups. The percentage of patients fulfilling the classification criteria was lower in the "Retroperitoneum and Aorta" group (27.8%) and the "Head and Neck-limited" group (60.9%) (p < 0.001).
Reasons for failure to fulfil the 2019 ACR/EULAR classification criteria
The reasons why the 22 patients did not meet the 2019 ACR/EULAR classification criteria are summarized in Fig. 1 and Tables 2 and 3. Reasons for failure to fulfil the criteria included (i) failure to meet the entry criterion (n = 3), (ii) fulfilment of one or more exclusion criteria (n = 5) or (iii) failure to achieve the required 20 points (n = 14).
Of the 13 patients in the "Retroperitoneum and Aorta" group who failed to fulfil the 2019 ACR/EULAR classification criteria (Table 2), 1 had isolated coronary artery involvement, while the remaining 12 had retroperitoneal fibrosis in a typical distribution (i.e., anterolateral (or circumferential) fibrosis involving the infrarenal aorta, often extending to the iliac arteries). In all the latter 12 cases, the reason for failure to fulfil the classification criteria was the inability to achieve the required 20 points in the final domain of the criteria. Of these 12 cases, (i) 11 patients (91.7%) had retroperitoneal fibrosis (with or without concomitant aortitis and/or inflammatory abdominal aortic aneurysm) as the only manifestation of the disease; (ii) 6 patients (50.0%) had elevated serum IgG4 (> 2.01 g/L); and (iii) none had a representative biopsy.
Of the 9 patients in the "Head and Neck-limited" group who failed to fulfil the 2019 ACR/EULAR classification criteria (Table 3), the clinical presentations and reasons for failure to achieve the criteria were more diverse than in the "Retroperitoneum and Aorta" group. Biopsy had been performed in all 9 cases, 7 patients (77.8%) had elevated serum IgG4 concentration (> 2.01 g/L), and 8 patients (88.9%) had multiorgan involvement. Two patients (22.2%) failed to fulfil the entry criterion (with disease limited to oropharynx and nasal septum, respectively), but were presumed to have IgG4-RD based on histopathological findings, serum IgG4 concentrations, and lack of a clear and definite alternative cause. Five (55.6%) fulfilled one or more exclusion criteria: fever (n = 1), positive anti-RNP (n = 1) or positive MPO-ANCA (n = 3). In the MPO-ANCA positive group, 2 patients were presumed to have coexisting IgG4-RD and microscopic polyangiitis. Of the 7 patients who failed to fulfil the entry criterion and/or fulfilled an exclusion criterion, 5 (71.4%) achieved the required 20 points in the subsequent domain of the classification criteria.
Cases of discrepancy between RCD and the 2019 ACR/EULAR classification criteria fulfilment
Among the 22 patients who did not fulfil the 2019 ACR/EULAR classification criteria, 16 (72.7%) fulfilled the RCD, with 5, 2 and 9 patients considered "definite", "probable" and "possible" IgG4-RD, respectively (Fig. 2). Among the 7 patients who did not fulfil RCD, one fulfilled the 2019 ACR/EULAR classification criteria. This was a patient with "Mikulicz' and Systemic" phenotype, with characteristic and extensive multiorgan involvement and normal serum IgG4 level, where biopsy was deemed unnecessary for diagnosis. As the current study population did not include patients diagnosed with mimicking conditions, we were not able to calculate the specificity of the criteria.
Discussion
The performance of diagnostic and classification criteria of IgG4-RD across phenotypes is not well studied. Here, we addressed this issue using data from a well-characterized Norwegian cohort diagnosed with IgG4-RD at a tertiary referral center. The key finding in this study is the low sensitivity of the 2019 ACR/EULAR classification criteria for the "Retroperitoneum and Aorta" and "Head and Neck-limited" phenotypes of IgG4-RD. Additionally, we found that a lower proportion of patients with the "Retroperitoneum and Aorta" phenotype met the RCD compared to the other phenotypes.
To our knowledge, our study is the first to describe fulfilment of RCD and 2019 ACR/EULAR classification criteria across the four phenotypes, highlighting potentially important differences across phenotypes. Fulfilment of classification criteria is usually a prerequisite for inclusion in studies in the field of rheumatology. Hence, the subset of patients fulfilling such criteria largely shapes our understanding of a disease over time [5]. Importantly, if classification criteria do not fully capture distinct clinical phenotypes which constitute a substantial proportion of patients and differ in clinically important features (such as prognosis), the net result may be lost opportunities for treatment of individual patients and a skewed apprehension of disease features.
Our cohort demonstrated similar disease characteristics and phenotypic distribution as the multinational phenotype derivation cohort [6], and most patients fulfilled RCD. Despite this, only a proportion of patients in the "Retroperitoneum and Aorta" (27.8%) and "Head and Neck-limited" (60.9%) phenotypes fulfilled the 2019 ACR/EULAR classification criteria. This contrasts with the findings in the phenotype derivation cohort, where the fulfilment of 2019 ACR/EULAR classification criteria in these two groups was 77% and 84%, respectively [6]. The reasons for the lower sensitivity in our cohort are not clear. It may reflect differences in case selection, possibly reflecting differences in assessment of retroperitoneal fibrosis (biopsy versus imaging). Also, it may reflect disease expression, i.e., Norwegian patients in the "Retroperitoneum and Aorta" and "Head and Neck-limited" groups could potentially have fewer additional manifestations and/or lower serum IgG4 than other cohorts, limiting their accrual of additional points in the classification criteria. Alternatively, one could argue that some patients in our cohort were misdiagnosed as IgG4-RD. In patients with retroperitoneal fibrosis with no other organ manifestations, normal serum IgG4, and no (conclusive) biopsy, a presumptive clinical diagnosis of possible IgG4-RD was made based on demography and radiological findings (i.e., distribution of the fibrosis), if other causes were deemed less likely, albeit with the recognition that distinction between IgG4-RD and "idiopathic retroperitoneal fibrosis" in such scenarios is difficult. The diagnosis of IgG4-RD can also be debated in some of the patients in the "Head and Neck-limited" phenotype. In general, we base the diagnosis on a compatible clinical presentation (slowly progressive, painless tumefactive lesion(s) or gross organomegaly), with compatible histopathological findings, frequently accompanied by elevated serum IgG4, and absence of a definite alternative cause. While patients
with overlapping features of ANCA-associated vasculitis (AAV) and IgG4-RD represent a diagnostic challenge, we considered the three patients included in this study to have coexisting AAV and IgG4-RD.
Considering the inherent ambiguity when diagnosing a complex and heterogeneous disease, we chose to describe the patients not fulfilling the 2019 ACR/EULAR classification criteria, for transparency and to allow recalculation based on alternative interpretations by the readers.
The most common reason for not fulfilling the 2019 ACR/EULAR classification criteria in our IgG4-RD cohort was inability to achieve the required 20 points in the final step of the criteria [2]. It is possible that this relates to the low numeric weight assigned to typical manifestations in both the "Retroperitoneum and Aorta" and "Head and Neck-limited" phenotypes. For instance, retroperitoneal fibrosis in a typical distribution, a finding highly suggestive of IgG4-RD, yields only 8 points [2]. These patients frequently have normal or only mildly elevated serum IgG4 concentration, no other organ involvement, and are often poor candidates for biopsy due to the periaortic disease distribution [8]. This was also demonstrated in our study, with the "Retroperitoneum and Aorta" group having the lowest mean serum IgG4 level, fewer involved organs, and rarely having undergone biopsy. Similarly, orbital pseudotumor, a typical manifestation of the "Head and Neck-limited" group, does not yield any points in the classification criteria [2].
Importantly, clinical experience indicates that the "Retroperitoneum and Aorta" and "Head and Neck-limited" phenotypes are more treatment refractory than the remaining groups [8]. Taken together, these observations may indicate that the 2019 ACR/EULAR classification criteria could disfavor subsets of IgG4-RD patients with more treatment-resistant disease.
As we did not have access to patients with mimicking conditions in this study, we were unable to calculate the specificity of any criteria. It is reasonable to assume that RCD has a low specificity for IgG4-RD, as it focuses on largely nonspecific features of the disease. This is particularly true for cases designated as "possible" IgG4-RD, a designation which largely rests on elevated serum IgG4, a finding seen in many inflammatory conditions. Accordingly, we do not suggest the superiority of these criteria, nor do we support favoring their use to identify patients for clinical trials. Rather, the main finding in our study is the potential limitation of the 2019 ACR/EULAR classification criteria for certain phenotypes, which may have implications for future research. Whether increasing the weighted score assigned to "typical" retroperitoneal fibrosis and/or including orbital pseudotumor as a weighted manifestation would alleviate this shortcoming without significantly sacrificing specificity is unclear but warrants further discussion. We encourage further research to evaluate the specificity of the criteria in large cohorts that include patients diagnosed with mimicking conditions.
The strength of our study is a well-described cohort followed at a tertiary referral center with rheumatologists, pathologists, radiologists, and other specialists experienced in IgG4-RD. Furthermore, the work-up included advanced imaging, including 18F-FDG PET/CT in many patients. Hence, it seems unlikely that the failure to achieve the required 20 points reflects an inability to capture additional, mild and/or asymptomatic disease manifestations.
The limitations of this study include its single-center design with partly retrospectively collected data and predominantly White patients. Another limitation is the lack of baseline (pre-treatment) serum IgG4 in some patients, the fact that some patients did not have a biopsy performed, and the inherent diagnostic ambiguity in this field.
Conclusion
Our study demonstrated that the 2019 ACR/EULAR classification criteria did not capture most patients with the "Retroperitoneum and Aorta" and "Head and Neck-limited" phenotypes of IgG4-RD. Hence, through a lower ability to capture these subgroups, results from studies based on these criteria may not be representative of the whole disease population.
Fig. 1
Fig. 1 Legend: Fulfillment of the 2019 ACR/EULAR classification criteria in the Norwegian IgG4-RD cohort
Fig. 2
Fig. 2 Legend: Discrepancy between fulfilment of the RCD and 2019 ACR/EULAR classification criteria
Table 1
Baseline characteristics and phenotypic distribution of the IgG4-RD study cohort. Continuous variables were analyzed by one-way ANOVA; categorical variables were analyzed by contingency tables. CRP, C-reactive protein; ESR, erythrocyte sedimentation rate; RCD, revised comprehensive diagnostic criteria. a Some patients did not measure serum IgG4 (s-IgG4) before initiation of immunosuppressive therapy. These were considered to have elevated baseline s-IgG4 if they had elevated levels after initiation of treatment, or excluded if they had normal s-IgG4 after initiation of treatment. Elevated s-IgG4 = above the upper limit of normal in the Oslo University Hospital laboratory assay (≥ 2.01 g/L). b Excluding
Table 2
Patients with the "Retroperitoneum and Aorta" phenotype, who failed to fulfil the 2019 ACR/EULAR classification criteria
Table 3
Patients with the "Head and Neck-Limited" phenotype, who failed to fulfil the 2019 ACR/EULAR classification criteria
"year": 2023,
"sha1": "032b4930964a79a7ccb9b933118ca4d8a2524ada",
"oa_license": "CCBY",
"oa_url": "https://arthritis-research.biomedcentral.com/counter/pdf/10.1186/s13075-023-03155-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63c28945771dcb98bb18cfd7bb2d6b43118de042",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Measurements of Nuclear Magnetic Shielding in Molecules
The origin of nuclear magnetic shielding in diamagnetic molecules is discussed, pointing out various contributions to the shielding from electrons and the effects of intra- and intermolecular interactions. In NMR practice, chemical shifts are determined first as the measure of shielding in observed samples. The descriptions of shielding and chemical shifts are not fully consistent. Gas phase studies permit the withdrawal of intermolecular contributions from shielding and the determination of magnetic shielding data in isolated molecules. The shielding determination in molecules is possible using at least three methods delivering the reference shielding standards for selected nuclei. The known shielding of one magnetic nucleus can be transferred to other nuclei if the appropriate nuclear magnetic moments are available with satisfactory accuracy. It is possible to determine the nuclear magnetic dipole moments using the most advanced ab initio shielding calculations jointly with NMR frequency measurements for small-sized isolated molecules. Helium-3 gas is postulated as the primary and universal reference standard of shielding for all molecules. It can be easily applied using common deuterium lock solvents as the secondary reference standards. The measurements of absolute shielding are available to everyone with the use of standard NMR spectrometers.
Introduction
Electrons always shield atomic nuclei in molecules from the influence of an external magnetic field. This physical phenomenon is described by the difference between the induction of the applied field B0 and its value Beff experienced by the nucleus, and for isotropic species is expressed as follows:

Beff = B0 (1 − σ)    (1)

where σ is the shielding parameter dependent on the total electronic structure of the observed molecular system. The above form of Equation (1) means σ = (1/3)(σxx + σyy + σzz), because in the general case shielding is a second-rank tensor and the induction of the magnetic field is represented by its vectors. Equation (1) is sufficient for spherical systems like atoms, or molecules in the gaseous and liquid states where molecular reorientation is not hindered. For atoms, the shielding is just described by the Lamb [1] equation [2]. The shielding theory for molecules is a bit more complex and was first formulated by Ramsey for diatomic molecules, especially for the hydrogen molecule [3,4]. In 1954, Saika and Slichter [5] noted that the magnetic shielding of a nucleus σi can be presented as the sum of three different components:

σi = σi^d + σi^p + Σ(j≠i) σj    (2)

In this equation, σi^d and σi^p are the local diamagnetic and paramagnetic parts of shielding, while the last term is responsible for all the modifications of σi arising from intra- and intermolecular effects. A more detailed description of the shielding in diamagnetic molecules was provided by Pople [6] and other researchers [7,8]. They explained the first two partial terms in Equation (2) as follows [9]:

σi^d = (μ0 e^2 / 12π me) ⟨0| Σn (1/rn) |0⟩    (3)

σi^p = −(μ0 e^2 / 6π me^2) Σ(k≠0) (Ek − E0)^−1 Re[⟨0| Σn Ln |k⟩ · ⟨k| Σn (Ln / rn^3) |0⟩]    (4)

Equations (3) and (4) contain constants: e, the electron charge; me, the electron mass; and μ0, the free-space permeability. Then, in the bra-ket notation, there are the described wave functions of the ground state of electrons (0) and all the excited states (k) with their appropriate energies E0 and Ek. L and r are the vectors that represent the orbital angular momentum and the distance from an arbitrary origin for the nth electron, respectively. The important features
of Equations (3) and (4) arise from the different signs of these two components of the total magnetic shielding: σi^d is positive (shielding effect) while σi^p is always negative (deshielding effect). It means that the total shielding effect for diamagnetic molecules can be positive (σi > 0) or negative (σi < 0), assuming Σ(j≠i) σj = 0 in Equation (2). The last term of Equation (2) is responsible for intra- and intermolecular effects that can be separately measured in the gas phase as described by Jameson [10,11]. All the possible components of Equation (2) are presented in Figure 1 using selected examples from multinuclear NMR spectra.
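As a numerical illustration of Equation (1) and of the isotropic tensor average, the following sketch computes σ from a diagonal shielding tensor and the field actually felt by the nucleus. All numbers are illustrative only, not data from any real molecule:

```python
import numpy as np

def isotropic_shielding(sigma_tensor):
    """Isotropic shielding sigma = (sigma_xx + sigma_yy + sigma_zz) / 3,
    i.e., one third of the trace of the second-rank shielding tensor."""
    return np.trace(sigma_tensor) / 3.0

def effective_field(b0_tesla, sigma_ppm):
    """Equation (1): B_eff = B0 * (1 - sigma), with sigma supplied in ppm."""
    return b0_tesla * (1.0 - sigma_ppm * 1e-6)

# Illustrative (made-up) diagonal tensor, in ppm, for an isotropic phase
sigma = np.diag([30.0, 32.0, 34.0])
sigma_iso = isotropic_shielding(sigma)
print(sigma_iso)                       # 32.0
print(effective_field(11.74, sigma_iso))
```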
Gas phase studies are crucial for the separation of molecular shielding parameters from all the large intermolecular contributions present in macroscopic samples.The same effects As seen in Figure 1, the magnetic properties of atoms and molecules are the function of their electronic structures.Atoms are usually more shielded, having more electrons, but atomic electronegativity is also an important feature.Molecules have a more complex distribution of electrons around atomic nuclei, and all the terms of Equation ( 2) must be remembered.The total shielding can be negative for some atomic nuclei; the F 2 molecule is a spectacular example of such a case but is not unique.There are many other examples of the total deshielding effects for diamagnetic molecules, especially observed in 15 N, 17 O, and 19 F NMR spectra.However, more electrons in the neighborhood usually lead to an increase in nuclear magnetic shielding in diamagnetic molecules.Molecular vibrations and rotation cause further changes in shielding because the interatomic distance is enlarged.It results in deshielding effects in the majority of molecules, but some exceptions from the above rule are also known, cf.PH 3 molecule shown in Figure 1.All intermolecular interactions also modify nuclear shielding in diamagnetic molecules; usually, the decrease of shielding is observed, but the reverse effects occur for selected atoms containing lone pairs of electrons.Figure 1 illustrates such a case for a 15 N nucleus in acetonitrile.
Gas phase studies are crucial for the separation of molecular shielding parameters from all the large intermolecular contributions present in macroscopic samples. The same effects are still present in the gaseous samples, but they are much smaller and can be eliminated by the extrapolation of shielding measurements to the zero-density limit [10,11]. This gives the measurement of shielding equivalent to isolated molecules, denoted as σ0. Our review is mostly focused on the shielding in isolated molecules because only such experimental results can be connected with the most advanced calculations of the same parameters. Let us add that the description of nuclear magnetic shielding given by Equations (2)-(4) is very useful for a qualitative understanding of shielding in diamagnetic molecules, but such an approximation is old-fashioned and is not applicable to modern quantum chemical calculations. The present state-of-the-art calculations of shielding [12][13][14][15][16][17][18] deliver such excellent results that they can often be treated as the source of the best data even in experimental NMR work, as shown in the recent comparison of experimental and calculated NMR parameters in CH4−nFn molecules [19]. It is nothing unusual, but verifying all available calculated results by experiment is also welcomed because many different approximate quantum chemistry methods are widely applied to shielding calculations.
In molecules, atomic nuclei are always surrounded by electrons, and so far, there is no possibility for the straightforward measurement of the molecular shielding, σi. NMR spectroscopy offers only a reading of the shielding difference between two macroscopic shielded objects, known as a chemical shift. This parameter is extremely helpful for the qualitative analysis of chemical compounds but contains rather limited information on shielding. Nevertheless, chemical shifts are also exploited to determine nuclear magnetic shielding, as the first step, when the absolute shielding of at least one reference standard is known satisfactorily. The problem of absolute shielding is not easy because it usually requires additional information from experiments other than NMR or from quantum chemical calculations, and, often, it must be solved individually for each kind of magnetic nucleus. At least three methods are applied to the above investigations and are discussed in the present review. All the experimental attempts at shielding determination are equally important and precious because they allow for the reliable verification of the determined σ0 data in molecules.
NMR Chemical Shifts
NMR spectra are most often applied in the qualitative analyses of liquid chemical compounds, and the shielding parameters are observed as the NMR chemical shifts (δi) measured relative to selected reference standards:

δi = (σref − σi)/(1 − σref) ≈ σref − σi    (5)

where σref and σi are the shielding values of reference and investigated compounds, respectively. This is an excellent method for analytical chemistry but requires strict standardization of chemical shift measurements, as described by Harris et al. [20,21]. It is accepted in the scientific literature that NMR chemical shifts were discovered by Proctor and Yu when separate 14N signals from NH4+ and NO3− ions were observed at different resonance frequencies with a constant magnetic field [22]. The same issue of Physical Review also contains Dickinson's publication [23], which describes the observation of 19F signals of fluorine compounds with various magnetic fields at the fixed resonance frequency. Both research notes report the same finding of chemical shifts because the NMR experiment can be completed by either changing resonance frequencies with a stable magnetic field (B0) or having a constant electromagnetic frequency (ν0). One can use the variation of a frequency (or a magnetic field) as shown by Equations (6) and (7):

δi = [(νi − νref)/νref] × 10^6    (6)

δi = [(Bref − Bi)/Bref] × 10^6    (7)
δi is the parameter earlier defined by Equation (5), and the above equations only show how the NMR measurement should be performed. The indexes "i" and "ref" are for the observed and reference nuclei. Chemical shifts are usually in the range of 10^−6, and their values are always presented in "parts per million" (ppm). In the first 25-30 years of NMR history, Equation (7) was mostly used for the determination of chemical shifts. Later, superconducting magnets were introduced to standard NMR spectrometers, and Equation (6) became the formula recommended by IUPAC [20,21].
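A minimal numerical sketch of the frequency-based definition of the chemical shift (the Equation (6)-style form; the 500 MHz spectrometer numbers are illustrative, not from the text):

```python
def chemical_shift_ppm(nu_i_hz, nu_ref_hz):
    """delta_i = (nu_i - nu_ref) / nu_ref * 1e6, measured at constant B0;
    a less shielded nucleus resonates at higher frequency, so delta > 0."""
    return (nu_i_hz - nu_ref_hz) / nu_ref_hz * 1e6

# A signal 1500 Hz above the reference line on a 500 MHz (1H) spectrometer
print(round(chemical_shift_ppm(500_000_000 + 1500, 500_000_000), 6))  # 3.0
```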
Let us note that Equations (6) and (7) have different orders of the "i" and "ref" indexes in the formulas. This is true because the increase of radiofrequency (B0 = const.) is observed for less shielded nuclei, while the magnetic field (ν0 = const.) and magnetic shielding change in the same direction. It arises from the basic description of an NMR experiment, when the hν quantum of radio-frequency energy is absorbed by a single magnetic nucleus:

hν = gX μN B0 (1 − σ)    (8)

where gX is the g-factor of the observed nucleus (gX = μX/(IX μN)), which describes its spin IX and magnetic moment μX, and μN is the nuclear magneton. Over one hundred stable atomic nuclei are magnetic, and each kind of them has its own magnetic moment and the appropriate gX value. Consequently, there are as many NMR spectroscopic methods as the number of magnetic nuclei. Some of them, e.g., 1H, 13C, 15N, 17O, 19F, and 31P NMR, are especially important in organic chemistry. The others are also intensively explored in experimental studies of chemistry and physics. Table 1 presents only a modest choice of magnetic nuclei and their NMR spectral parameters used for the discussed methods of shielding measurements. Table 1 footnotes: a ref. [20]; b ref. [24]; c gX = μX/(IX μN), μN = 5.050 783 53 × 10^−27 J T^−1; d ref. [25]; e estimated based on 1H results and the actual 2H resonance frequency, ΞD (TMS-d12 in CDCl3); f ref. [26]; g ref. [25]; h ref. [27]; i ref. [28,29]; j ref. [30]; k ref. [31]; l ref. [32].
As seen in Table 1, the selected well-known nuclides mostly have a spin number IX equal to ½. Such nuclei are spherical and have no electric quadrupole moment. Their NMR signals are usually sharp and permit more precise measurements of chemical shifts. Nevertheless, Table 1 also includes oxygen-17 and deuterium because of their importance in NMR spectroscopy and for the presentation of some problems discussed in this review. The next column of Table 1 gives the natural abundance of the magnetic nuclei. Usually, a natural abundance above one percent guarantees good NMR spectra if a modern FT spectrometer is applied. A sample containing a much lower percentage of magnetic nuclei cannot be easily observed at natural abundance, especially in the gas phase. The magnetic moments and gX factors describe the magnitude of nuclear magnetic properties; their improved values are given in [24].
There is also one more interesting column in Table 1, which reveals the absolute resonance frequencies of selected reference standards if the various NMR experiments are performed when the external magnetic field (B0) is fixed, giving the 1H NMR signal of 1% TMS in CDCl3 precisely at ΞH = 100.000000 MHz [20]. This idea of the absolute chemical shift comes from early double resonance experiments [33,34] and is very interesting because it unifies the important NMR parameter (chemical shift) for all magnetic nuclei. Unfortunately, it requires the double resonance method and involves so many other experimental problems that the concept of ΞX has never been popular in everyday NMR practice. Limiting the discussion only to proton spectra, we should write the following ΞH values for isolated molecules of methane (100.0002847 MHz), ethane (100.0003593 MHz), and ethylene (100.0008017 MHz) based on existing δi results [35]. It is not so bad when everything is limited only to 1H NMR experiments. However, too many digits present at every NMR measurement make such a description of experimental data impractical and unpopular in everyday NMR experimental work. It is also important that for all nuclei other than protons, the ΞX measurement requires the simultaneous observation of a 1H NMR experiment because, from the definition, ΞH (1% TMS in CDCl3) must be equal to 100.000000 MHz in every case. This cannot be achieved using a standard NMR spectrometer.
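Under the simple linear relation Ξ = Ξref·(1 + δ·10^−6), with Ξref(1H) = 100.000000 MHz exactly by definition, converting between Ξ and δ is a one-liner. The sketch below is our own reading of the unified scale; the molecular ΞH values are taken from the text above, and the printed δ values are simply the shifts implied by this relation:

```python
XI_REF_MHZ = 100.000000  # 1H of 1% TMS in CDCl3, exactly, by definition

def xi_from_delta(delta_ppm):
    """Unified-scale absolute frequency (MHz) from a 1H chemical shift."""
    return XI_REF_MHZ * (1.0 + delta_ppm * 1e-6)

def delta_from_xi(xi_mhz):
    """1H chemical shift (ppm) back-calculated from a Xi value."""
    return (xi_mhz / XI_REF_MHZ - 1.0) * 1e6

for name, xi in [("methane", 100.0002847), ("ethane", 100.0003593),
                 ("ethylene", 100.0008017)]:
    print(name, round(delta_from_xi(xi), 4), "ppm")
```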
General Insight
The range of chemical shifts depends on the electronic structure of atoms in molecules and is different for each kind of magnetic nucleus. Figure 2 illustrates the approximate area of chemical shifts on the scale of the nuclear magnetic shielding for the most popular nuclei and the positions of the recommended reference standards. The diagram refers to Table 1 but lacks two nuclei: 2H and 3He. The range of 2H NMR chemical shifts is practically the same as for 1H NMR if the minimal 2H/1H isotope effects in shielding are neglected [36]. Helium-3 is not present here because it does not form any chemical compound. There is only the special case of 3He NMR chemical shifts when helium atoms are encapsulated in fullerenes; then, the helium-3 shifts are −6.4 ppm for 3He@C60 and −27.9 ppm for 3He@C70 systems relative to pure 3He gas [37,38]. Theoretical models of helium-3 encapsulated in more complex carbon nanostructures predict even larger effects in 3He shielding [39,40], and all recent results on helium-3 NMR were overviewed by Kupka [41] and Krivdin [42].
Figure 2 has been prepared mostly using the available data on chemical shifts given by the Bruker Almanac 2010 [43] and the shielding parameters of reference standards shown in Table 1. The blue bars approximately cover the ranges of 1H, 13C, 15N, 17O, 19F, 29Si, 31P, and 77Se magnetic shielding for diamagnetic chemical compounds. There are also marked three selected special points of shielding: σH(HI) = 43.92 ppm (cf. δH = −10.44 ppm for HI in 1H NMR [35,44]), σC(CI4) = 478.7 ppm (δC = −292.3 ppm for 13CI4 [45,46]), and σSi(SiI4) = 730.7 ppm (δSi = −351.7 ppm for 29SiI4 [47]). All the latter chemical shifts are measured relative to TMS and illustrate the unusual increase of shielding due to the presence of so-called "heavy atoms", an iodide in this case [48,49]. The effect is especially enormous for 13C NMR spectra, where the signal of carbon tetraiodide (CI4) is far away from the normal range of carbon-13 chemical shifts. It is so strong that even solvent molecules containing halogen atoms produce extremely large intermolecular shifts in 13C NMR spectra [50,51].
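The conversions behind these numbers follow the first-order relation σi ≈ σref − δi. A small sketch (the 33.48 and 186.4 ppm reference shieldings for 1H and 13C TMS used here are back-calculated from the σ/δ pairs quoted above, not taken from an external table):

```python
def absolute_shielding(sigma_ref_ppm, delta_ppm):
    """sigma_i ~ sigma_ref - delta_i (both in ppm); the exact relation
    divides by (1 - sigma_ref), a negligible correction at the ppm scale."""
    return sigma_ref_ppm - delta_ppm

# 1H: sigma_H(HI) from delta_H(HI) = -10.44 ppm vs. TMS
print(round(absolute_shielding(33.48, -10.44), 2))   # 43.92
# 13C: sigma_C(CI4) from delta_C(CI4) = -292.3 ppm vs. TMS
print(round(absolute_shielding(186.4, -292.3), 1))   # 478.7
```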
Figure 2.
On the scale of nuclear magnetic shielding, the blue bars represent the ranges of chemical shifts for selected nuclei [43]. The values of chemical shifts can be positive (+) or negative (−) depending on the position of the reference standard (I). Special shielding effects are also highlighted by black triangles for three compounds containing iodine atoms: HI in 1 H, CI 4 in 13 C, and SiI 4 in 29 Si NMR [35,[44][45][46][47].
Figure 2 shows only a modest representation of the over 100 stable magnetic nuclei which can be observed by the NMR method, and it proves how different the features of shielding for various magnetic nuclei are. First, the range of shielding is different for each element: only several ppm for proton spectra, almost three thousand ppm for 77 Se, and even more for heavier nuclei like 199 Hg or 205 Pb. Second, the shielding of each nucleus belongs to its own characteristic region; compare, for example, the scales of 17 O and 77 Se shielding, less positive for oxygen-17 and more positive for selenium-77. Third, the reference standard is usually different for each magnetic nucleus, with one significant exception: tetramethylsilane (TMS) was introduced as the internal reference standard for 1 H NMR by Tiers in 1958 [52] and later accepted also for the referencing of 13 C and 29 Si spectra. Multinuclear NMR experiments also vary in many other details, such as the natural abundance of each nucleus or the magnitude of its nuclear magnetic moment. As seen in Figure 2, the whole of multinuclear NMR spectroscopy is well described and unified by one common parameter: nuclear magnetic shielding.
Therefore, NMR shielding is an important parameter of molecules and can be determined from chemical shifts using Equation (5) if the shielding of at least one small molecule (σ ref ) for each magnetic nuclide is known with satisfactory accuracy. There is a fundamental question of how a reliable σ ref value for a given nucleus can be determined. It seems that the most direct route is offered by advanced quantum chemical calculations.
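Equation (5) itself is not reproduced in the extracted text. Assuming the standard relation between shift and shielding, σ_i = σ_ref − δ_i(1 − σ_ref), the conversion can be sketched as below; the value σ_ref(13C, TMS) ≈ 186.4 ppm is an illustrative assumption, back-calculated from the σ_C(CI 4 ) = 478.7 ppm and δ_C = −292.3 ppm pair quoted above, not an authoritative constant.

```python
def shielding_from_shift(delta_ppm, sigma_ref_ppm):
    """Absolute shielding (ppm) from a chemical shift (ppm), given the
    reference shielding sigma_ref (ppm). The (1 - sigma_ref) factor is
    the small exact correction to the usual sigma ~ sigma_ref - delta."""
    return sigma_ref_ppm - delta_ppm * (1.0 - sigma_ref_ppm * 1e-6)

# 13C in CI4: delta_C = -292.3 ppm vs. TMS; sigma_ref(13C, TMS) taken
# here as ~186.4 ppm (illustrative, consistent with the quoted data).
sigma_CI4 = shielding_from_shift(-292.3, 186.4)
print(round(sigma_CI4, 1))  # close to the quoted 478.7 ppm
```

Note that for light nuclei the (1 − σ_ref) correction is of order 10⁻⁴ ppm and is often neglected in practice.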
Theoretical Approach to Shielding
The shielding ab initio calculations usually start with a molecule's computed equilibrium geometry. Large basis sets are required for a satisfactory description of electron correlation even in small molecules. The common solution to the gauge problem applies the gauge-including atomic orbitals (GIAO) approach [12,13]. Various approximate methods are used for the calculation of shielding parameters (frequently, if improperly, called shielding constants): from HF (Hartree-Fock) to FCI (Full Configuration Interaction), MCSCF (Multi-Configuration Self-Consistent Field), the CC (Coupled Cluster) approximation, and MP (Møller-Plesset) perturbation theory [17]. The reference NMR molecule is usually the smallest one, and for this purpose, we should look for the most advanced ab initio methods with perturbation-dependent basis sets. Electron correlation effects should be calculated at the CCSD (Coupled Cluster Singles and Doubles) or CCSD(T) (Coupled Cluster Singles and Doubles with Perturbative Triple Corrections) levels of theory [14,15]. The modern state-of-the-art magnetic shielding calculations are fairly advanced at the non-relativistic level [53]. They include all the intra- and intermolecular contributions to shielding [54], as such effects are always present in NMR experiments. To obtain calculated results that are ready for comparison with experiment, it is necessary to consider the strong dependence of shielding on molecular geometry. It can be accounted for at the ZPV (zero-point vibration) level or, even better, by including temperature effects up to the temperature of the experimental work. Finally, the relativistic effects in shielding should be considered, as they are mainly responsible for the extensive shielding of heavy atoms. This requires methods designed for the relativistic description of electron interactions in molecules (four-component Dirac-Coulomb-Breit) [55]. Relativistic calculations are more expensive than any non-relativistic methods, so less advanced descriptions of electrons are usually applied, like HF or density functional theory (DFT) approximations. The relativistic contributions to shielding are mainly responsible for the extensive shielding of 1 H in HI, 13 C in CI 4 , and 29 Si in SiI 4 , as shown in Figure 2. This is the so-called HALA effect (heavy-atom effect on light atoms).
Intermolecular effects are difficult for theoretical treatment [17,54], and for this reason, it is better to avoid such effects by performing NMR experiments in the gas phase. Recently, a precise comparison of experimental and calculated shielding for the small molecules CH 4−n F n has been presented [19]. Earlier, a similar comparison was performed for the NF 3 , PF 3 , and AsF 3 compounds, where a change of the absolute 19 F shielding in liquid CFCl 3 to 197.07 ppm was suggested [56]. This is questionable and requires new experimental proof, because the most recent measurement gave 190.0 ppm if the bulk susceptibility correction is excluded [57,58]. The recent comparison of 19 F shielding in CH 4−n F n [19] rather confirms the old experimental result for liquid CFCl 3 .
The new methods of shielding calculations are powerful for small molecules, but let us compare the first approximation of proton shielding in the H 2 molecule given by Ramsey in 1950 [3] (26.8 ppm) with the available calculated results: the CCSD(T) value of Sundholm and Gauss is 26.2983 ppm at T = 298 K [59], and the CCSD shielding of Jaszuński et al. is 26.2980 ppm at T = 300 K [60]. Ramsey's prediction of 1 H shielding in the H 2 molecule is overestimated by only 1.9 percent. The above example of magnetic shielding in the hydrogen molecule also illustrates the importance of rovibrational effects in shielding [54], which were included in the new calculations and effectively diminished the final H 2 result [59,60].
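The quoted 1.9 percent overestimate follows directly from the two numbers above; a quick arithmetic check:

```python
# Ramsey's 1950 estimate of 1H shielding in H2 (26.8 ppm) vs. the
# CCSD(T) value of Sundholm and Gauss (26.2983 ppm at 298 K).
ramsey = 26.8
ccsd_t = 26.2983
overestimate_pct = 100.0 * (ramsey - ccsd_t) / ccsd_t
print(f"{overestimate_pct:.1f}%")  # -> 1.9%
```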
We are mostly interested in the nuclear magnetic shielding observed in small molecules, but quantum chemical calculations are also possible for larger molecular objects if some further approximations are applied. In such a case, the most popular method is DFT (density functional theory) [16], which can also be applied to studies of shielding at the four-component relativistic level [46]. Another approach is offered by the ONIOM method, in which the shielding calculations are performed with different approximation levels for selected layers of electrons in a molecule [61]. It permits more precise calculations of shielding in larger molecules while saving computer time. Other mixed methods of shielding calculations are applied to solids. Recently, it has been shown that a quantum mechanics/molecular mechanics (QM/MM) method can predict the solid-state NMR shielding of molecular crystals [62].
The Ramsey-Flygare Method
Equations (2) and (3) reveal that the magnetic shielding in a molecule contains the positive diamagnetic part (σ dia ), which depends only on the ground electronic state of the molecule. This shielding part can be relatively easily calculated using modern quantum chemical methods. The second term of shielding, described by Equation (4), is much more complex for calculations but is related to the microwave nuclear spin-rotation tensor (C), which is reduced to a coupling constant (c I ) for linear molecules. This happens because, in such a case, the paramagnetic shielding part parallel to the bond axis (σ ∥ p ) is equal to zero.
What remains is the perpendicular part of the paramagnetic shielding (σ ⊥ p ), which, as shown by Flygare [63,64], is related to the spin-rotation constant for a nucleus in a diatomic molecule, as given by Equation (9). In that equation, m p is the proton mass, B is the rotational constant of the molecule, Z is the atomic number of the neighbor atom in the molecule, and r is the internuclear distance. The rest of the parameters (m e , µ 0 , e, and g X ) are the same as previously defined in Equations (3), (4), and (8), respectively. As seen in Equation (9), there is a link between the spin-rotation interaction of microwave spectroscopy and the absolute nuclear magnetic shielding observed in NMR [65]. However, the procedure of its use requires additional work: first, the experimental spin-rotation constant must be refined from its vibrational corrections; then, the equilibrium value of shielding is obtained as shown in Equation (9); and finally, the rovibrational corrections should be added to the shielding to obtain the σ ref value at the given temperature (usually 300 K). This relationship was exploited for the determination of absolute nuclear shielding in hydrogen fluoride (HF), delivering σ ref ( 19 F) [57], and in carbon monoxide molecules ( 13 C 16 O and 12 C 17 O), delivering the important σ ref ( 13 C) [66][67][68] and σ ref ( 17 O) values [69,70], respectively. The oxygen-17 case contains an interesting feature: it was initially based on the rotational constant of the 12 C 17 O molecule obtained from the observation of the J = 0 ← 1 transition in the rotational spectrum observed from interstellar space [71], until a more accurate measurement of the same rotational constant became available from a laboratory experiment in 2002 [72]. It is important to note that the Ramsey-Flygare method is not limited to linear and diatomic molecules [64]. It was possible to obtain the σ ref ( 15 N) value for ammonia enriched in nitrogen-15 [73] and σ ref ( 31 P) for PH 3 [74].
Methods Based on 1 H NMR Signal of Liquid Water and Shielding Transfers
The second method of shielding determination is based on the proton reference signal from liquid water in a spherical sample at a temperature of 34.7 °C, σ ref ( 1 H 2 O liq.sph ) = 25.790(14) ppm [75]. Two experiments standardized this 1 H signal: first, the simultaneous observation of the frequencies of an electronic and a proton transition in atomic hydrogen [76]; second, the simultaneous reading of 1 H NMR signals from atomic hydrogen and pure water, both in spherical samples [77]. The σ ref ( 1 H) value can be directly applied to the scale of proton shielding, but the use of a spherical sample at an elevated constant temperature is rather inconvenient in everyday NMR experimental practice, as the 1 H signal of H 2 O is extremely dependent on temperature. Figure 3 presents the positions of three isolated molecules (H 2 , H 2 O, and TMS) and of the sample containing 1% of TMS in CDCl 3 on the shielding scale relative to the σ ref ( 1 H) reference signal. It permits fast measurements of absolute shielding in 1 H NMR spectra for gaseous and liquid compounds using 1 H chemical shifts.
The actual σ ref ( 1 H) parameter can also be used as a universal reference standard for the shielding of nuclei other than protons, but this requires knowledge of the other nuclear magnetic dipole moments and double-resonance experiments. There is also another possibility of transferring the shielding scale from one nucleus to another present in the same molecule, exploiting relaxation T 1 time measurements in the gas phase. The latter method is described in detail by Jameson [75] and was applied for the determination of the σ ref ( 29 Si) parameter in a gaseous mixture of SiH 4 and SiF 4 molecules [78], and the σ ref ( 77 Se) parameter in H 2 Se and SeF 6 molecules [32].
Helium-3 Atom as the Universal Shielding Reference
For many reasons, an isolated helium-3 atom is probably the best candidate for the universal reference standard of magnetic shielding in multinuclear NMR spectroscopy. First, gas-phase 3 He NMR experiments have delivered the resonance frequency of an isolated helium-3 atom at a stable external magnetic field [79]. Second, the 3 He measurement is independent of temperature, and no rovibrational corrections are needed for a further increase in accuracy. Third, quantum chemical calculations deliver the most accurate shielding value for the 3 He atom, and this result can be accepted as the σ ref ( 3 He) reference standard, equal to 59.967 029 (23) ppm [26].
The final step requires just the transfer of shielding from helium-3 to experiments on other magnetic nuclei, and such a measurement can be taken using double-resonance methods like those previously explored by McFarlane [33,34]. The comparison of Equation (8) for 3 He and another X nucleus in the same magnetic field (B) leads to Equation (10) and means the transfer of shielding from the helium-3 to the X nucleus. The experiment, according to Equation (10), requires only an NMR spectrometer and the nuclear dipole moments (µ He and µ X ) [80][81][82]; I X and I He are the nuclear spin numbers of the X and helium-3 nuclei, respectively. As seen in Equation (10), the knowledge of nuclear magnetic moments is crucial for the determination of shielding, and this problem is discussed in the next section.
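Equation (10) is not reproduced in the extracted text. Assuming the standard resonance condition of Equation (8), ν ∝ µB(1 − σ)/I, taking the frequency ratio for the X and 3 He nuclei in the same field B gives a relation of the following form (a reconstruction from the surrounding definitions, not a verbatim copy of the original equation):

```latex
\sigma_X \;=\; 1 \;-\;
\frac{\nu_X\,\mu_{\mathrm{He}}\,I_X}{\nu_{\mathrm{He}}\,\mu_X\,I_{\mathrm{He}}}\,
\bigl(1-\sigma_{\mathrm{He}}\bigr)
```

Here ν X and ν He are the measured resonance frequencies, µ X and µ He the nuclear magnetic moments, and I X and I He the nuclear spin numbers, as in the text.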
Nuclear Magnetic Dipole Moments
As mentioned in the Introduction, the present state-of-the-art calculations of shielding are very powerful [12][13][14][15][16][17][18] and can often be used to improve our knowledge of nuclear magnetic shielding in molecules. The best theoretical results of magnetic shielding in molecules were also applied to the improvement of nuclear magnetic moments [83]. This was necessary because the results of the International Atomic Energy Agency (IAEA) existing at that time [84] were not reliable for the heavier nuclei beyond hydrogen and helium-3. The determination of more accurate values of nuclear dipole moments was performed using the best available calculated results of shielding, preferably in one molecule (σ X and σ Y ), and the gas-phase measurements of resonance frequencies for the same isolated molecule (ν X and ν Y ), where the index X refers to the reference nucleus and Y to the studied nucleus, as expressed by Equation (11). This way, the nuclear magnetic moment of the X reference nucleus was transferred to the other Y nucleus using NMR results for isolated small molecules and state-of-the-art shielding calculations performed for the same molecules. Protons and helium-3 were mostly used as the reference nuclei because their magnetic moments were repeatedly verified [26,80,81,[84][85][86][87][88][89][90][91][92][93][94]. In some cases, the reference molecules could not contain protons, and other nuclei served as the reference X nuclei. The results obtained from the application of Equation (11) to isolated molecules are summarized in Table 2.
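Equation (11) is likewise missing from the extracted text. Under the same assumed resonance condition as above, solving the frequency ratio for the unknown moment gives a relation presumably of the form (a reconstruction, consistent term by term with the reconstruction of Equation (10)):

```latex
\mu_Y \;=\; \mu_X\,
\frac{\nu_Y\,I_Y\,\bigl(1-\sigma_X\bigr)}{\nu_X\,I_X\,\bigl(1-\sigma_Y\bigr)}
```

where X labels the reference nucleus and Y the studied nucleus, σ X and σ Y are the calculated shieldings, and ν X and ν Y are the measured frequencies for the same isolated molecule.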
Table 2. Nuclear magnetic dipole moments determined from gas-phase NMR measurements and quantum chemical calculations of shielding.
The shielding transfer with helium-3 is precious, and later, it was also exploited for measurements of the magnetic moments of rare gases [93,94].
Recently, the search for better and more accurate nuclear magnetic moments has continued. The most important study was published by Harding et al. [100]. The authors determined the nuclear magnetic moment with much higher accuracy for the unstable 26 Na nucleus (1.1 s half-life) using an improved version of the β-NMR technique combined with ab initio calculations of nuclear magnetic shielding performed for the stable 23 Na reference. New studies and further possible improvements in the determination of magnetic moments for stable nuclei have also appeared [101][102][103][104], and three review papers on the same subject have been published [105][106][107].
Universal Approach to Shielding Measurements
Nuclear magnetic shielding in molecules can be determined using a few different methods, as shown in Section 4. This is very helpful for the cross-checking of available experimental results and permits a better understanding of nuclear magnetic shielding. In our opinion, the application of Equation (10) is very promising because the number of reliable data for nuclear magnetic moments is growing quickly [83]. We also have an excellent reference standard of shielding which is independent of temperature: the isolated helium-3 atom. However, the idea of the multinuclear experiments requires the simultaneous measurement of two different types of nuclei, the studied (X) and the reference nucleus ( 3 He). Fortunately, it can be achieved using any standard NMR spectrometer with the deuterium lock system: first, the selected 2 H NMR signal of the lock solvent is calibrated by applying Equation (10), and then the deuterium lock solvent is used as the secondary reference of nuclear magnetic shielding [82]. Let us note that, this way, all the shielding measurements remain referenced to an isolated helium-3 atom, σ ref ( 3 He) = 59.9670 ppm [26]. In our opinion, an isolated helium-3 atom is the best choice for the primary reference standard of shielding in multinuclear NMR experiments. The above method is schematically presented in Figure 4.
Figure 4.
The absolute magnetic shielding known for an isolated helium-3 atom, σ ref ( 3 He), can be encoded into popular deuterated lock solvents [82,107]. Then, a standard NMR spectrometer is used as a double-resonance device, which permits the measurement of shielding for other magnetic nuclei. Such an experiment requires the exact reading of two resonance frequencies: ν X for the observed nuclei and ν D for the deuterium lock solvent. Importantly, the nuclear magnetic dipole moments must be known with satisfactory accuracy for the above shielding measurements.
Let us note that the new method of shielding measurements is well calibrated with an isolated helium-3 atom and permits using any standard NMR spectrometer in double-resonance experiments according to Equation (12), where ν D , µ D , and σ D are the frequency, the 2 H magnetic moment, and the calibrated 2 H shielding of the liquid lock solvent, respectively; I D = 1.
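Equation (12) itself is not reproduced in the extracted text. By analogy with Equation (10), replacing the helium-3 reference with the calibrated deuterium lock signal presumably gives σ X = 1 − (ν X µ D I X )/(ν D µ X I D )(1 − σ D ). The sketch below implements this assumed form; all the numeric inputs are placeholders for illustration, not measured values.

```python
def shielding_from_lock(nu_X, nu_D, mu_X, mu_D, I_X, sigma_D, I_D=1.0):
    """Assumed Equation (12) form: absolute shielding of nucleus X from
    its resonance frequency nu_X and the deuterium lock frequency nu_D,
    given the magnetic moments, spin numbers, and the calibrated
    lock-solvent shielding sigma_D (shieldings are dimensionless)."""
    return 1.0 - (nu_X * mu_D * I_X) / (nu_D * mu_X * I_D) * (1.0 - sigma_D)

# Round-trip self-check with placeholder numbers: choose a "true"
# shielding, synthesize the frequency a spectrometer would read by
# inverting the formula, then recover the shielding.
mu_X, mu_D, I_X, I_D = 2.7928, 0.8574, 0.5, 1.0  # illustrative moments/spins
sigma_D = 100.0e-6                                # assumed lock shielding
nu_D = 76.77e6                                    # placeholder lock frequency, Hz
sigma_true = 33.48e-6
nu_X = (1.0 - sigma_true) / (1.0 - sigma_D) * nu_D * mu_X * I_D / (mu_D * I_X)
recovered = shielding_from_lock(nu_X, nu_D, mu_X, mu_D, I_X, sigma_D)
print(abs(recovered - sigma_true) < 1e-12)  # -> True
```

Only the ratio of the magnetic moments enters the formula, so any consistent unit for µ may be used.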
Equation (12) can be generally used for all NMR samples if the lock solvent is separated from the observed substance. It only replaces the measurement of δ i in Equation (5) with the direct reading of shielding (σ i in Equation (5)). Both parameters (δ i and σ i ) are based on the same observation of experimental frequencies, represented by ν i in Equation (6). In the present method of shielding measurements, nothing is modified inside the sample. Therefore, this method, described by Equation (12), can be safely used for the measurements of shielding in paramagnetic samples.
The σ D values of Equation (12) were calibrated for numerous signals from liquid lock solvents and published in the original papers [82,107]. Table 3 below shows only the most popular solvents that are frequently used in NMR laboratories; the σ D parameters were measured in 5 mm o.d. spinning NMR sample tubes at 300 K, and more σ D parameters are available in ref. [107].
Equation (12) contains many constants that can be consolidated into one number for practical applications, for example, in the most popular 1 H and 13 C NMR experiments. The problem is then simplified to the reading of absolute frequencies, (ν H ) for protons or (ν C ) for carbon-13, and simultaneously that of the deuterium nuclei (ν D ) in the lock solvent, as written in Equations (13) and (14) [82]. At this point, everything is ready for the recording of multinuclear NMR spectra with the scale of magnetic shielding instead of chemical shifts. We apply Equations (13) and (14) and the σ D parameters of Table 3 to get such a spectrum using a standard NMR spectrometer with a deuterium lock system. It is illustrated in Figure 5, where the 13 C and 1 H spectra of ethyl crotonate (CH 3 CH=CHCOOC 2 H 5 ) dissolved in CDCl 3 are shown as examples. It was possible to record them automatically due to a small modification of the software in the computer of our 500 MHz Varian INOVA NMR spectrometer.
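Equations (13) and (14) are not reproduced in the extracted text. Assuming they consolidate the constants of Equation (12) into a single nucleus-specific factor, they presumably take a form such as (a reconstruction; the exact published numerical factors are not given here):

```latex
\sigma_{\mathrm{H}} = 1 - k_{\mathrm{H}}\,\frac{\nu_{\mathrm{H}}}{\nu_{\mathrm{D}}},
\qquad
\sigma_{\mathrm{C}} = 1 - k_{\mathrm{C}}\,\frac{\nu_{\mathrm{C}}}{\nu_{\mathrm{D}}},
\qquad
k_X = \frac{\mu_{\mathrm{D}}\,I_X}{\mu_X\,I_{\mathrm{D}}}\bigl(1-\sigma_{\mathrm{D}}\bigr)
```

with X = H or C, so that only the two frequency readings ν X and ν D remain to be measured for each spectrum.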
As shown, the measurements of nuclear magnetic shielding with the application of 1 H and 13 C NMR spectra are easy and comfortable for everyone. They do not require any special equipment and deliver valuable information on nuclear magnetic shielding. One feature of the shielding measurements is especially interesting: the new method allows the measurement of the first-order isotope effects in the shielding of hydrogen isotopologues [108], which was impossible before [109]. All the isotope effects of hydrogen are illustrated by the original results presented in Table 4.
Unexpectedly, the primary isotope effects in shielding, 0 ∆( 2/1 H), observed for the H 2 , HD, and D 2 molecules are stronger than the secondary isotope effects, 1 ∆( 2/1 H), for the same molecules. This is an important measurement because standard NMR methods cannot be used to determine the primary isotope effects in shielding. In this case, the direct measurements of 1 H and 2 H shielding were performed for gaseous mixtures of hydrogen molecules with helium-3. All the frequency measurements were extrapolated to the zero-density limit, and Equation (10) was applied for the determination of all the 1 H and 2 H shielding data in the H 2 , HD, and D 2 molecules [108].
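The ∆ notation is not defined in the extracted text. In the usual convention (an assumption here, not taken from the original), the primary effect compares the shielding of the resonant nucleus upon its own isotopic substitution, while the secondary effect tracks the shielding change of a neighboring nucleus, for example:

```latex
{}^{0}\Delta\bigl({}^{2/1}\mathrm{H}\bigr)
  = \sigma\bigl({}^{2}\mathrm{H};\,\mathrm{D}_2\bigr)
  - \sigma\bigl({}^{1}\mathrm{H};\,\mathrm{H}_2\bigr),
\qquad
{}^{1}\Delta\bigl({}^{2/1}\mathrm{H}\bigr)
  = \sigma\bigl({}^{1}\mathrm{H};\,\mathrm{HD}\bigr)
  - \sigma\bigl({}^{1}\mathrm{H};\,\mathrm{H}_2\bigr)
```

Only a method that measures 1 H and 2 H shielding on one common absolute scale, such as the helium-3-referenced approach above, can access the primary effect, since it compares resonances of two different nuclides.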
The measurements of 13 C magnetic shielding relative to σ ref ( 3 He) were also extended to solid samples in MAS (magic angle spinning) NMR spectra [111]. It was possible to use spherical samples of liquid TMS, a solution of 1% TMS in CDCl 3 , and solid fullerene C 60 for 13 C shielding measurements in standard NMR experiments, and the same samples were also observed by the MAS NMR method. Then, the 13 C shielding values of popular MAS references like glycine, hexamethylbenzene, and adamantane were obtained by reading the carbon-13 chemical shifts for solids [111].
Recently, the 1 H, 13 C, and 14 N magnetic shielding parameters were measured for emodin and chuanxiongzine, plant products that have pharmacological properties and are frequently used in traditional Chinese medicine [112]. The direct measurements of 1 H and 13 C shielding were also applied in studies of daidzein and puerarin, which have natural antioxidant properties [113].
Conclusions
The origin of nuclear magnetic shielding in diamagnetic molecules is briefly discussed in the present review article. As shown, these important properties of molecules can be observed via chemical shifts in NMR spectra or can be calculated using advanced quantum chemical methods. The relations between shielding parameters and chemical shifts are rather complex because chemical shifts are separately defined for various magnetic nuclei by their reference standards. On the other hand, the measurements of shielding are badly needed for the direct comparison of experimental and calculated shielding values, also known as shielding constants. An isolated helium-3 atom is the natural choice for the universal reference standard of shielding. It can be easily applied in multinuclear NMR spectroscopy when its shielding is encoded into pure deuterated liquids that are used for the precise stabilization of the external magnetic field (the lock system in NMR spectrometers). The method of shielding measurements is complete and can be applied to any liquid or gaseous NMR sample.
As shown in the previous section, chemical shifts can in the future be completely replaced by the measurement of shielding parameters, and this alternative method of standardization of NMR spectra has numerous experimental advantages. The most important features of the new method are as follows. First, it unifies multinuclear methods into one NMR spectroscopy because the values of magnetic shielding have the same meaning independently of the observed nuclei. Second, there is no need to use any additional reference standard if the NMR experiment is carried out with a calibrated 2 H solvent, as shown in Table 3, because the same original reference standard of shielding is always preserved: an isolated helium-3 atom [82]. Third, the new method allows for the first time the measurement of first-order isotope effects in shielding, as has already been shown for hydrogen isotopologues [108]. Fourth, the measurements of 13 C shielding relative to σ ref ( 3 He) can be extended to solid samples in MAS NMR spectra, as shown in ref. [111]. Fifth, the measurements of shielding values are performed with the same precision as the standard determination of chemical shifts because they are based on the same reading of resonance frequencies, cf. Equation (6) vs. Equations (12)-(14). Last but not least, the determined shielding parameters can always be converted back into chemical shifts using Equation (5) if necessary, without the use of any additional reference standard. The simultaneous measurements of both the shielding parameters and the corresponding chemical shifts are always available from the same single NMR experiment.
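As a reminder of the relation underlying this back-conversion, the standard textbook expressions connecting chemical shift and shielding can be written as follows (this is the generic form, not a verbatim reproduction of the review's numbered equations):

```latex
% Chemical shift defined from resonance frequencies of sample and reference:
\delta \;=\; \frac{\nu_{\mathrm{sample}} - \nu_{\mathrm{ref}}}{\nu_{\mathrm{ref}}}
% Equivalent expression in terms of shielding constants,
% with the usual approximation valid for small reference shielding:
\delta \;=\; \frac{\sigma_{\mathrm{ref}} - \sigma_{\mathrm{sample}}}{1 - \sigma_{\mathrm{ref}}}
\;\approx\; \sigma_{\mathrm{ref}} - \sigma_{\mathrm{sample}}
\qquad (\sigma_{\mathrm{ref}} \ll 1)
```

Because the reference shielding enters only as an additive constant in the approximate form, a measured shielding value can be converted to a chemical shift against any chosen reference without a second experiment.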
To summarize, direct measurements of nuclear magnetic shielding are already available for selected light nuclei, and they are relatively easy to apply if calibrated lock solvents are used as secondary reference standards. This permits the use of any NMR spectrometer equipped with a 2 H lock system for the required double-resonance experiments. We believe that knowledge of accurate nuclear moments will continue to grow quickly, and the exact measurement of shielding will soon be available for more and more magnetic nuclei.
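The double-resonance scheme described above can be sketched numerically. The sketch below assumes only the simple Larmor proportionality ν ∝ γB(1 − σ), so the field cancels in the ratio of the two measured frequencies; the function name and the numbers in the usage example are illustrative, not taken from the cited work.

```python
def shielding_from_lock(nu_x_hz, nu_d_hz, gamma_ratio_x_d, sigma_d):
    """Estimate the absolute shielding sigma_X of an observed nucleus X.

    At a common magnetic field B, each resonance frequency obeys
    nu = (gamma / (2*pi)) * B * (1 - sigma), so B cancels in the ratio:
        nu_x / nu_d = (gamma_x / gamma_d) * (1 - sigma_x) / (1 - sigma_d)

    Parameters (all hypothetical names):
    nu_x_hz, nu_d_hz  -- measured frequencies of X and of the 2H lock signal
    gamma_ratio_x_d   -- gamma_x / gamma_d, which requires accurately known
                         nuclear magnetic dipole moments
    sigma_d           -- calibrated 2H shielding of the deuterated lock solvent
    """
    return 1.0 - (nu_x_hz / nu_d_hz) * (1.0 - sigma_d) / gamma_ratio_x_d


# Illustrative check: if the frequency ratio exactly equals the gamma ratio
# and the lock solvent were unshielded, the observed nucleus is unshielded too.
print(shielding_from_lock(100.0, 50.0, 2.0, 0.0))  # 0.0
```

The practical burden is entirely in the two calibrated inputs: σ_D of the lock solvent and the moment-derived gamma ratio; the frequency reading itself is the same operation as a routine chemical-shift measurement.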
Figure 1 .
Figure 1. Selected examples from multinuclear NMR spectroscopy qualitatively illustrate the origin and further modifications of nuclear magnetic shielding in diamagnetic molecules. Black arrows indicate increased shielding, and red arrows represent deshielding effects.
Figure 2 .
Figure 2. On the scale of nuclear magnetic shielding, the blue bars represent the range of chemical shifts for selected nuclei [43]. The values of chemical shifts can be positive (+) or negative (−) depending on the position of the reference standard (I). Special shielding effects are also highlighted by black triangles for three compounds containing iodine atoms: HI in 1 H, CI 4 in 13 C, and SiI 4 in 29 Si NMR [35,44-47].
Figure 4 .
Figure 4. The absolute magnetic shielding known for an isolated helium-3 atom σ ref ( 3 He) can be encoded into popular deuterated lock solvents [82,107]. Then, a standard NMR spectrometer is used as a double-resonance device, which permits the measurement of shielding for other magnetic nuclei. Such an experiment requires the exact reading of two resonance frequencies: ν X for the observed nuclei and ν D for the deuterium lock solvent. Importantly, the nuclear magnetic dipole moments must be known with satisfactory accuracy for the above shielding measurements.
Figure 5 .
Figure 5. 1 H and 13 C NMR spectra of liquid ethyl crotonate in CDCl 3 on a 500 MHz Varian INOVA spectrometer. The 2 H solvent signal of CDCl 3 is used as the internal reference standard of nuclear magnetic shielding. The measurements contain all the components of shielding attributed to the particular nuclei in this sample.
Table 1 .
Spectral NMR parameters and recommended reference standards for selected magnetic nuclei.
* For liquids observed at the external parallel magnetic field (B ǁ ) in 5 mm o.d. spinning NMR sample tubes at 300 K. More σ D parameters are available in ref. [107].
"year": 2024,
"sha1": "5f569117c89e7b16769409e14d4fbfb13e3c4064",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/29/11/2617/pdf?version=1717379276",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3de705f569abe12bee79a28ea3ccc021746e2be4",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Trabeculectomy vs Non-penetrating Deep Sclerectomy for the Surgical Treatment of Open-Angle Glaucoma: A Long-Term Report of 201 Eyes
Introduction Glaucoma is the second leading cause of vision loss worldwide. The reduction of intraocular pressure remains the backbone of its therapy. Among surgical techniques for its treatment, deep non-penetrating sclerectomy is the most widely practiced non-penetrating surgery. The purpose of this study was to evaluate the long-term efficacy and safety of deep non-penetrating sclerectomy compared to standard trabeculectomy in patients with open-angle glaucoma. Patients and methods Retrospective study including 201 eyes with open-angle glaucoma. Closed-angle and neovascular cases were excluded. Absolute success was considered when an intraocular pressure under 18 mmHg, or a reduction of at least 20% in patients with a baseline intraocular pressure below 22 mmHg, was obtained after 24 months without the use of medication. Qualified success was considered when those targets were met with or without the use of hypotensive medication. Results Deep non-penetrating sclerectomy showed a slightly lower long-term hypotensive effect compared to standard trabeculectomy, with significant differences at 12 months, but not at 24 months of follow-up. The absolute and qualified success rates were 51.85% and 65.43% for the trabeculectomy group and 50.83% and 60.83% for the deep non-penetrating sclerectomy group, without significant differences. Postoperative complications, mainly due to postoperative hypotony or related to the filtration bleb, differed significantly between groups, with rates of 10.8% and 24.7% in the deep non-penetrating sclerectomy and trabeculectomy groups, respectively. Conclusion Deep non-penetrating sclerectomy seems to be an effective and safe surgical option for patients with open-angle glaucoma unable to be controlled by non-invasive strategies.
Data suggests that the intraocular pressure-lowering effect of this technique may be marginally lower than that of trabeculectomy, but the achieved efficacy outcomes were similar, with a significantly lower risk of complications.
Introduction
Glaucoma is the second leading cause of vision loss worldwide, 1 and its prevalence is increasing with the global aging population. Projections predict that by 2040, approximately 112 million patients will be diagnosed with glaucoma, disproportionately affecting people residing in Asia and Africa. 2 The risk and subtypes of glaucoma vary among races and countries, with primary open-angle glaucoma (POAG) being more prevalent in individuals of African descent than in Caucasians. 2 The reduction of intraocular pressure (IOP) remains the backbone of glaucoma therapy. While in many cases medical and laser techniques provide reliable, long-term IOP control, surgery is advisable when optimal medical/laser therapy fails to sufficiently lower IOP, or there is evidence that the patient cannot comply with those therapeutic strategies. 3 Trabeculectomy (TB) is considered the standard penetrating surgical procedure for the treatment of glaucoma and has been widely used for over 50 years. IOP is lowered by creating a fistula between the inner compartments of the eye and the subconjunctival space, requiring full-thickness penetration of the anterior chamber, under a partial-thickness scleral flap. 4 This alternative path allows aqueous humour to accumulate under the conjunctiva and form a filtering bleb. 5 The fistula is covered by a scleral flap that provides some resistance to the outflow, preventing profound hypotony. However, it is often associated with the development of important postoperative complications attributable to excessive aqueous humour filtration. 6 Non-penetrating surgical techniques for glaucoma have been developed to improve the safety of conventional filtering procedures. Deep non-penetrating sclerectomy (DNPS) is the most widely practiced technique and is often associated with a hydrophilic implant under the scleral flap to improve aqueous humour filtration.
It prevents the sudden hypotony that occurs after penetrating surgery by creating progressive filtration of aqueous humour without perforating the eye, through preservation of the thin trabeculo-Descemet membrane. Nevertheless, it is technically demanding, with a long learning curve, and many authors argue that it is less effective than TB for intraocular pressure reduction over the medium and long term. [7][8][9][10][11][12][13] To date, only a few studies have directly compared TB and DNPS regarding which technique provides the best results in terms of efficacy and safety, 14 and most of them are outdated, have relatively small samples, and do not consider the modern surgical technique, with the use of intrascleral implants and augmentation with antimetabolites. [15][16][17][18][19][20] The purpose of this study was to evaluate the long-term efficacy and safety of DNPS compared to standard TB in patients with open-angle glaucoma.
Methods
A retrospective study was conducted, including 201 eyes that underwent glaucoma surgery in Hospital Pedro Hispano -Unidade Local de Saúde de Matosinhos between January 2012 and August 2019.
Data included patients diagnosed with open-angle glaucoma, regardless of age, race, or sex. The diagnosis was defined based on: typical optic disc damage, with glaucomatous cupping and loss of the neuroretinal rim; visual field defects compatible with glaucomatous optic neuropathy; and an open angle confirmed on gonioscopy. Cases of pseudoexfoliation glaucoma (PEXG) and pigmentary glaucoma (PG) were considered eligible. Patients diagnosed with closed-angle or neovascular glaucoma were excluded.
Considered interventions were TB and DNPS, with or without intraoperative antimetabolite augmentation. DNPS was always done with the implantation of an intrascleral implant, whether Aquaflow ® or Esnoper ® .
Patient data were collected preoperatively and 1 week, 1, 6, 12, and 24 months after the procedure, regarding IOP, measured by applanation tonometry, visual acuity (VA), hypotensive medication, and postoperative complications. The adopted outcome measures, reported after the follow-up period of 2 years, consisted of absolute success, when IOPs below 18 mmHg were reached or, when the baseline IOP was ≤ 21 mmHg, a reduction of at least 20% was obtained, without the usage of hypotensive medication. Qualified success was considered when IOPs below 18 mmHg or, when the baseline IOP was ≤ 21 mmHg, a reduction of at least 20% was obtained with or without medication.
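The success definitions above can be expressed as a small decision function. The sketch below is ours for illustration (function name and encoding are hypothetical, not part of the study's analysis); it encodes the stated criteria: the IOP target is met when the final IOP is below 18 mmHg or, for a baseline IOP ≤ 21 mmHg, when a reduction of at least 20% is achieved, with "absolute" additionally requiring no hypotensive medication.

```python
def classify_success(baseline_iop, final_iop, on_medication):
    """Classify a surgical outcome per the criteria stated above (sketch).

    Returns 'absolute', 'qualified', or 'failure'.
    """
    # Fractional IOP reduction relative to the preoperative baseline.
    reduction = (baseline_iop - final_iop) / baseline_iop
    # Target met: final IOP < 18 mmHg, or >= 20% reduction when baseline <= 21.
    target_met = final_iop < 18 or (baseline_iop <= 21 and reduction >= 0.20)
    if not target_met:
        return "failure"
    # Absolute success additionally requires no hypotensive medication.
    return "qualified" if on_medication else "absolute"


# Illustrative outcomes (hypothetical patient values):
print(classify_success(30.0, 15.0, False))  # absolute
print(classify_success(30.0, 17.0, True))   # qualified
print(classify_success(25.0, 19.0, False))  # failure
```

Note that under this reading, an eye with a high baseline (> 21 mmHg) must reach an absolute IOP below 18 mmHg; the percentage-reduction route applies only to eyes starting at or below 21 mmHg.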
This work was carried out with the agreement of the ethics committee of Hospital Pedro Hispano. Informed consent was obtained from the study participants, and the guidelines outlined in the Declaration of Helsinki were followed.
Statistical analysis was conducted using Statistical Package for the Social Sciences (SPSS) version 23.0 for Macintosh. The assumption of normality of distribution and homogeneity of variance were tested by the Kolmogorov-Smirnov test. When these assumptions were verified, a t-test for paired/independent samples was used. When those assumptions were not proved, the Mann-Whitney test for independent samples was used. Statistical significance was defined as p < 0.05.
Results
A total of 201 eyes were included in this study. PEXG accounted for 34.8% of the cases and PG for 2%. Regarding the surgical technique, TB was performed in 40.3% (n=81) and DNPS in 59.7% (n=120). Both techniques were combined with cataract surgery in most cases: 74.1% in the TB and 79.2% in the DNPS groups. Surgeries were augmented with the use of mitomycin C in 66.2% of the cases (58% in the TB group). The baseline IOP values were significantly different between groups (p=0.006). There were no statistically significant differences in mean IOP values after 1, 6, 12, or 24 months. The mean IOP reduction achieved was −10.17 mmHg in the eyes that underwent TB and −8.37 mmHg in the eyes that underwent DNPS. Table 2 shows the IOP outcome differences between several subgroups. Comparing the effects of mitomycin C augmentation within each surgical technique, no statistically significant differences in IOP were found. Regarding glaucoma subtype, the TB group showed a tendency towards higher IOP values in the PEXG subgroup, but statistical differences were only noted in the measurement 1 month after the surgery (p=0.046). In the DNPS group, the values were 11.73 ± 5.72; 12.54 ± 3.78; 13.01 ± 3.39; 13.94 ± 3.08; 14.67 ± 2.93, and 12.70 ± 7.34; 12.11 ± 4.25; 13.24 ± 4.10; 12.30 ± 3.36; 13.05 ± 3.12, in the POAG and PEXG subgroups, respectively, with statistically significant differences at 12 months (p=0.018) and 24 months (p=0.004). A direct comparison between the two surgical techniques, regarding the outcome in each of the glaucoma types, showed similar IOP values in the POAG patients, with significant differences found only in the 1-week measurement (p=0.025). In the PEXG group, values were also similar, with significant differences found in the 24-month measurement (p=0.015). Between the isolated glaucoma surgery and combined surgery subgroups, significant differences were found only in the measurement taken after 12 months.
The raw mean IOP reduction, calculated from the measurements 6, 12, and 24 months after surgery, was 41.16%, 39.12%, and 34.07% in the TB group and 35.76%, 31.11%, and 27.45% in the DNPS group, with a trend towards a higher reduction in the TB group that reached significance only in the 12-month measurement (p=0.052; p=0.016; p=0.053, respectively).
Absolute success was achieved in 51.85% and 50.83% of the TB and DNPS groups, respectively, without significant differences between groups (p=0.887). Qualified success was achieved in 65.43% and 60.83% of the TB and DNPS groups, respectively, also without significant differences between groups (p=0.508) (Figure 2).
Mean values of best-corrected visual acuity (BCVA) before and 2 years after each procedure were 0.63 and 0.75 in the TB group, and 0.70 and 0.78 in the DNPS group. The average VA improvement was 1.2 lines in the TB group (p=0.002) and 0.8 lines in the DNPS group (p<0.001). Differences in VA improvement between the TB and DNPS groups were not statistically significant.
Regarding postoperative complications, as shown in Table 3, they were registered in 24.7% (n=20) of the eyes that underwent TB and in 10.8% (n=13) of the eyes that underwent DNPS, a significant difference between groups (p=0.007). The most serious complications were observed in the TB group, with 1 case of severe hyphema, 1 case of malignant glaucoma, and 1 case of choroidal detachment in the immediate postoperative period.
The average number of hypotensive medications taken by the patients decreased from 2.53 to 0.52 in the TB group and from 2.32 to 0.41 in the DNPS group after 2 years of follow-up, without significant differences between groups at baseline (p=0.204) or after 2 years (p=0.609) (Figure 3).
Discussion and Conclusion
According to our results, DNPS showed a slightly lower long-term hypotensive effect when compared to TB. The differences in raw mean percentage IOP reduction were significantly higher for the TB group in the measurements taken 12 months after the surgery (p=0.016) but did not reach statistical significance in the measurements taken 6 and 24 months after the procedure (p=0.052; p=0.053). The absolute and qualified success rates of 51.85% and 65.43% for the TB and 50.83% and 60.83% for the DNPS groups again showed a slight difference between techniques but did not reach statistical significance (p=0.887; p=0.508). Across the literature, conclusions about the efficacy of both types of surgery are variable. Some authors suggest that TB leads to a greater decrease in IOP. Rulli et al, in a meta-analysis published in 2012, involving 18 studies and 945 eyes, concluded that penetrating surgery had a slightly higher, but significant, IOP-lowering effect after a follow-up of 1 year, which was considered potentially relevant in patients requiring a greater IOP reduction. 14 On the other hand, Leszczyński and colleagues, in a study involving 78 eyes, found no significant differences regarding the efficacy of both types of surgical techniques during a 24-month follow-up period. 20 These findings were also corroborated by Sayyad et al, in a 78-eye study in which each patient required bilateral glaucoma surgery and underwent TB in one eye and DNPS in the other. 19 Russo et al showed, in a sample of 93 eyes, similar IOP-lowering effects when DNPS was augmented with mitomycin C and an SK-GEL scleral implant. 15 Two randomized clinical trials with a limited number of patients, and without using the modern surgical technique, compared the efficacy of both procedures. Chiselita compared 34 eyes of 17 patients, where one eye underwent TB and the other DNPS, and found a significantly higher hypotensive effect after TB. 18 However, neither surgery was performed with antimetabolites, and DNPS did not include the implantation of a drainage device, which may have an important influence on the final results. On the other hand, Cillino et al 21 analyzed both procedures in a study involving 65 eyes, where they found no significant differences between groups, even considering that no adjuvants, such as antimetabolites or intrascleral implants, were used. In recent years, all surgeries done in our center have been performed with mitomycin C augmentation. The reported procedures without the use of antimetabolites correspond to surgeries performed mainly before 2015, prior to its universal adoption. The use of antimetabolites as adjuvants (mitomycin or 5-fluorouracil) theoretically inhibits conjunctival healing, which improves the surgical outcome and minimizes bleb-related complications. Despite this, the results did not show statistically significant differences in IOP reduction concerning antimetabolite augmentation.
Regarding the analysis of the effects of the procedures on patient subgroups, no statistically significant differences were found in the hypotensive efficacy between different types of open-angle glaucoma, namely POAG and PEXG. Also, no significant differences were found between surgical techniques when performed in isolation or in combination with cataract surgery. A recent multicenter retrospective study involving a total of 117,697 eyes concluded that combined surgery was associated with lower reoperation rates than isolated procedures, but stand-alone procedures resulted in slightly greater IOP reduction. 22 Our data did not allow the evaluation of the impact of drainage devices in DNPS, as all were done with the implantation of either Esnoper ® or Aquaflow ® . Nevertheless, evidence is not entirely clear regarding how much they augment the efficacy of the procedure, which has important clinical implications, because they significantly increase the cost of the surgery, which is otherwise the same as standard TB, as highlighted in the previously cited meta-analysis. 14 As expected, TB was associated with a higher number and severity of postoperative complications. Regarding our data, the difference between groups was statistically significant, favoring DNPS as the safer option. The most frequent postoperative complications in both groups included transitory hypotony, often accompanied by a shallow or flat anterior chamber, transitory choroidal effusion, and bleb-related complications, consisting mainly of Seidel leakage, bleb failure, and encapsulation. All the complications registered in the DNPS group were transitory. The most serious complications were observed in the TB group, with 1 case of severe hyphema, 1 case of malignant glaucoma, and 1 case of persistent choroidal detachment. This difference in safety is corroborated in every study that compared the rate of postoperative complications between penetrating and non-penetrating techniques. 14,18,21 Rulli's meta-analysis reported a higher incidence of short- and long-term complications in TB, concluding that deep sclerectomy seems to be a clinically reasonable compromise due to its higher safety profile. 14 Regarding VA after 2 years of follow-up, both procedures were associated with a significant visual improvement, with a mean VA improvement of 1.2 lines for TB and 0.8 lines for DNPS, which is explained by the fact that most cases were combined with cataract surgery.
The comparison of data and results from different studies regarding efficacy and safety is made difficult by the fact that most differ in methodology, sample size, population characteristics, follow-up time, use of intrascleral implants, and anti-metabolites. Surprisingly, studies that provide an analysis of the modern technique with the use of intrascleral devices and antimetabolites are scarce.
The limitations of this study include its retrospective design, the fact that the surgeries were performed by 5 different surgeons, the use of antimetabolites in most, but not all, surgeries, and the use of 2 different implants in DNPS (Aquaflow ® or Esnoper ® ).
In conclusion, DNPS seems to be a valid and safe alternative for treating patients with open-angle glaucoma who cannot be controlled by non-invasive strategies. Although it is more difficult to perform, requires a long learning curve, and is usually more expensive, our results suggest that its IOP-lowering effect may be only marginally lower than that of TB, as shown by the absence of significant differences in the absolute or qualified success rates, with a definitely lower risk of complications. We believe that we present a robust sample, broader than that of most studies conducted in this area, which shows the potential efficacy of non-penetrating surgery when performed in an experienced center.
"year": 2023,
"sha1": "589572d12d8b29093cca92320cf562c36522b2e0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2147/opth.s405837",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "589572d12d8b29093cca92320cf562c36522b2e0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Spinal cord involvement in patients with cirrhosis
A severe spinal cord involvement may rarely occur in patients with cirrhosis and other chronic liver diseases; this complication is usually associated with overt liver failure and surgical or spontaneous porto-systemic shunt. Hepatic myelopathy (HM) is characterized by progressive weakness and spasticity of the lower extremities, while sensory and sphincter disturbances have rarely been described and are usually less important. The diagnosis is assigned in the appropriate clinical setting on clinical grounds after the exclusion of other clinical entities leading to spastic paraparesis. Magnetic resonance imaging is often unremarkable; however, intracerebral corticospinal tract abnormalities have also been reported recently. The study of motor evoked potentials may disclose central conduction abnormalities even before HM is clinically manifest. HM responds poorly to blood ammonia-lowering and other conservative medical therapy. Liver transplantation represents a potentially definitive treatment for HM in patients with decompensated cirrhosis of Child-Pugh B and C grades. Other surgical treatment options in HM include surgical ligation, shunt reduction, or occlusion by interventional procedures.
INTRODUCTION
Patients with chronic liver disease frequently experience neurological problems, with hepatic encephalopathy (HE) being the most common. Involvement of the spinal cord is comparatively rare; the so-called hepatic myelopathy (HM) is usually associated with an extensive portosystemic shunt (PSS) of blood, either surgically created or occurring spontaneously.
In this review we will focus on the studies that have investigated the pathophysiology and the therapeutic strategies of this important, but likely often overlooked, neurological complication of chronic liver diseases. The MEDLINE (accessed via PubMed; 1966-August 2013) and EMBASE (1980-August 2013) electronic databases were searched using the medical subject headings (MeSH) "hepatic myelopathy", "liver cirrhosis", "spastic paraparesis", "chronic liver disease", "therapy", and "liver transplantation".
Two review authors (YH and SL) screened the titles and abstracts of the initially identified studies to determine if they satisfied the selection criteria. Any disagreement was resolved through consensus. Full-text articles were retrieved for the selected titles, and reference lists of the retrieved articles were searched for additional publications.
The two reviewers independently assessed the methodological quality of each study and risk of bias, focusing on blinding and other potential sources of bias. The search strategy described above yielded 44 results. We excluded 2 studies after reading the full published papers; thus, 42 studies contributed to this review: the earliest was published in 1949 and the most recent in 2013.
The most characteristic and distinctive feature in HM is a progressive lower extremity corticospinal tract deficit. Involvement of the upper extremities has rarely been described [3,19] .
There are only a few reports of sensory or sphincter impairment [1,4,9,20] . Moreover, a delayed onset posterior column dysfunction (proprioception and vibratory sensory loss) and a small fiber length-dependent axonal polyneuropathy has been recently documented [16] , both progressing concomitantly with the motor deficits. Most HM patients display normal or minimal sensory findings [5] , but some patients exhibit more significant sensory deficits [1,4,9] .
Since the first description of HM in 1949, there have been approximately 90 cases reported in the literature [14][15][16][17][18][21][22][23][24][25][26][27][28][29][30][31][32]. In rare instances, HM may be a presenting sign of liver disease [17]: Ben Amor et al [18] recently reported two patients who had no history of previous liver cirrhosis. In most of the reported cases, episodes of overt HE have preceded the development of the myelopathy [4,8,10,11]. HM can occur before or after HE, but patients without any episodes of HE have also been reported [18]. In the vast majority of the reported cases, the patients were males in the 5th decade of life at the time of their presentation with HM [14]. The median age of onset is reported as 47 years.
The first reports of HM patients occurred when surgical shunting was more commonly performed. Some authors [11] have hypothesized that the incidence would eventually decrease as shunting became replaced by other less invasive treatments. However, with the transjugular intrahepatic portosystemic shunt (TIPS) becoming the standard procedure for refractory variceal bleeding, increasing reports of HM have emerged [15] . To date, there has been one case report describing reversal of HM by occlusion of TIPS [15] .
HISTOLOGY
The histology of HM consists of symmetrical loss of myelin in the lateral pyramidal tracts, with demyelination beginning in the cervical spine, becoming more intense at lower levels, and occasionally being associated with axonal loss [5,33]. In the early stages, demyelination seems to predominate, but as the disease progresses, axonal loss occurs, and this is likely to be irreversible [4,11]. Occasionally, demyelination has also been found in the ventral pyramidal tracts, in the posterior columns, and in the spinocerebellar tracts. These pathological findings in the posterior columns of patients with HM were first described by Leigh et al [20] in 1949 and by Mendoza et al [11] in 1994. These findings raise the possibility that posterior column spinal cord pathology may be more common in HM than previously realized. Although the lesions typically appear within the spinal cord, there are occasional reports of lesions within the brainstem without involvement of other tracts [20]. Additionally, Alzheimer type-Ⅱ cells and spongiform degeneration in the cerebral cortex have been described in HM [22].
PATHOPHYSIOLOGY
The pathophysiology of HM is not yet completely understood. There is a close relationship between an extensive PSS and the occurrence of HM, even in the absence of liver dysfunction [10]. This observation supports the hypothesis that the shunting of blood may allow nitrogenous breakdown products or a neurotoxin to bypass the liver and damage the spinal cord. In particular, nitrogenous products such as ammonia have been identified as a major contributor to the development of HM [14]. Portacaval shunts or, less commonly, splenorenal shunts seem to play a substantial role in the occurrence of HM-associated neurologic disturbances [34]. Shunts can occur spontaneously, after surgery, or due to "functional shunting", i.e., filtration of portal blood through a dysfunctional liver [14]. Impairment of neurological function in the form of encephalopathy was recognized in the early 20th century in patients undergoing surgical shunting or portacaval anastomosis (PCA) and was later described as "portal-systemic encephalopathy" by Sherlock et al [35]. Because some of the earliest reported patients with HM had undergone PCA, shunting was considered [36] a possible explanation for the development of myelopathy, and a similar mechanism causing both HE and HM has been postulated. However, in most of the reported cases, episodes of overt HE have preceded the development of the myelopathy [33]. It has been suggested that a nutritional deficiency may underlie HM as a result of dietary restriction in patients with precedent episodes of HE [11]. However, this hypothesis is unlikely, because there are also reports of patients in whom HE never occurred [33] and who had been following a normal diet. Moreover, in contrast to HE, HM usually does not respond to blood ammonia-lowering therapies [26]. Therefore, the pathophysiology is most likely different in HM and HE.
Protein restriction, as well as the use of lactulose and neomycin treatments, was not found to be beneficial for HM [1,[5][6][7]10] . Moreover, surgical colonic diversion is helpful for HE but does not reverse HM [37] . Treatments with lactulose, xifaxan, gabapentin, and pentoxifylline were also attempted in the interesting case reported by Caldwell et al [16] , and none of the treatments was successful. Moreover, none of the HM patients who had eventually undergone LT had any reported neurological improvement in response to standard medical therapies for PSE [26][27][28][29] .
Reversal of HM by occlusion of TIPS, as reported by Conn et al [15] , lends support to some mechanism inherent to the presence of PSS. Approximately 20% of the patients (3/15) in the recent review by Caldwell et al [16] had no demonstrable evidence of PSS vs 10% of HM patients previously reported in the literature [15] .
In addition to the possibility of a putative neurotoxin causing HM in patients with PSS, other etiological factors should be considered, including nutritional deficiencies [14] and metabolic abnormalities. Nutritional deficiency was first considered by Leigh et al [20] as a possible cause of the permanent spinal cord abnormalities observed in their patients. Vitamin B12 deficiency was taken into consideration in the two gastrectomy patients who later developed HM. However, the hematological profiles and vitamin B12 levels in these individuals were normal [1] . Serum vitamin B12 levels were normal in previously published HM patients [24][25][26][27][28][29] , as well as in the patient reported by Caldwell et al [16] , in whom serum vitamin B12, folate, and methylmalonic acid levels were within normal limits.
It has been suggested that altered circulation could increase spinal cord susceptibility to injury in HM. This was discussed in the context of portal hypertension [26,38] , perhaps occurring in individuals with anatomic variants.
The topography of the spinal cord lesions in HM suggests that HM may be related to hemodynamic factors, as the observed lesions are located precisely within those spinal segments that lack an extensive collateral circulation [38,39] .
HM can occur in patients with congenital hepatic fibrosis [10] and with focal nodular hyperplasia [40] , and this observation underscores the point that the severity of HM does not necessarily parallel the degree of hepatic dysfunction.
DIAGNOSIS
Diagnosing HM is often difficult, but it can be achieved after the exclusion of other causes of spastic paraparesis in the appropriate clinical setting. A detailed history, along with an accurate neurological examination including appropriate neurophysiological tests and neuroimaging procedures, is of crucial importance for the early detection of the disease. Other myelopathies with normal spine imaging should be included in the neurological differential diagnosis. These are listed in the algorithm proposed by Caldwell et al [16] : metabolic/nutritional diseases (renal disease, vitamin B/E or copper deficiency, lathyrism); vascular events (arteriovenous malformation, infarct, vasculitis); spirochetal (Lyme, syphilis) or fungal (Cryptococcus, Aspergillus) infections; postinfectious conditions (transverse myelitis); autoimmunity (systemic lupus erythematosus, sarcoidosis, Sjögren's); neoplasm (lymphoma, paraneoplastic syndrome); toxicity (chemotherapy, radiation); genetic factors (leukodystrophy, Friedreich's ataxia); and motor neuron disease (amyotrophic lateral sclerosis). Magnetic resonance imaging (MRI) of the entire spinal cord and, when indicated, the brain is essential in the evaluation of HM. Infectious myelopathies can be assessed by patient history, spinal fluid analysis, imaging procedures, and serologies/cultures [26] . Infectious etiologies to consider include human immunodeficiency virus, human T-lymphotropic virus type-1, syphilis, and Lyme disease. A demyelinating syndrome that was recently reported in a patient with hepatitis B virus (HBV) manifested as a recurrent transverse myelitis with paraparesis and urinary retention [41] . Similarly, hepatitis A has also been implicated as a possible etiology for transverse myelitis [42] . However, none of the hepatotropic viruses (hepatitis A virus, HBV) has been implicated in HM.
In the review by Caldwell et al [16] regarding HM patients after liver transplantation (LT), all 3 HCV patients exhibited reversal of the myelopathy, despite persistent viremia [25,26] . A paraneoplastic syndrome is another possible differential diagnostic consideration in the workup of HM, even if it has not been reported in the literature. The liver explanted by Caldwell et al [16] contained a 1-cm hepatocellular carcinoma (HCC). One of the 2 patients with HM from the group of Koo and colleagues [28] also had HCC but had not undergone LT. That patient was a 64-year-old man with a 2.5-cm HCC who had undergone successful radiofrequency ablation of the lesion but exhibited no clinical or electrophysiological improvement up to 16 mo after treatment. HCC has been associated with necrotizing myelopathy in one case report [43] , but in the most recent comprehensive review of HM, none of the 61 patients had an underlying diagnosis of HCC [15] . Table 1 shows the differential diagnosis and Table 2 the recommended diagnostic evaluation of patients with HM.
NEUROPHYSIOLOGICAL FINDINGS
To determine the frequency and gravity of HM, Nardone et al [27] performed a study examining motor evoked potentials (MEP) elicited by transcranial magnetic stimulation in thirteen patients with liver cirrhosis associated with PSS.
The six patients with clinical signs of spinal cord involvement exhibited severe neurophysiological abnormalities, more precisely a prolonged central motor conduction time (CMCT), whereas, interestingly, milder but unequivocal MEP abnormalities were found in four of the seven patients with a normal clinical examination. These findings indicate that the electrophysiological evaluation of central motor conduction may disclose an impairment of the corticospinal pathways even before HM is clinically manifest. The clinical and neurophysiological features of patients with slight MEP abnormalities improved after LT, whereas the patients with a more advanced stage of disease (severe MEP abnormalities) did not.
Utku et al [22] performed a MEP study in two patients and found an absence of cortical MEPs in both the lower and upper extremities, and normal MEP values with radicular magnetic stimulation, suggesting that the lesion was localized within the cervical levels of the spinal cord.
However, they could not perform any neuropathological investigations to corroborate this diagnosis.
Nardone et al [27] found an abnormal CMCT to the lower lumbar spinal segments and a normal CMCT to the upper cervical spinal segments, thus supporting localization to the thoracic spinal cord. Additionally, a MEP study of HM patients from Seoul [28] indicated that the sites of higher vulnerability are located between the upper thoracic and lumbar spinal cord.
The findings of Nardone et al [27] and Utku et al [22] support the potential value of evaluating CMCT in the preclinical and early stages of HM. Patients who undergo transplantation with preclinical or early HM detected by MEP/CMCT appear to have a greater likelihood of recovery both clinically and electrophysiologically [27] . It is thus possible that MEP/CMCT have greater sensitivity in detecting preclinical or early HM and in assigning a prognosis for recovery after LT. Although a larger study comparing the sensitivity, specificity, and predictive value of MEP/CMCT has yet to be conducted, central motor system neurophysiological studies are an important consideration in the workup of patients with HM.
Further MEP studies may not only provide a means for an early diagnosis, but also shed light on the spinal topography of HM.
NEUROIMAGING FINDINGS
Most case reports have not documented MRI abnormalities in the spinal cord. This suggests that MRI may be less sensitive than MEP/CMCT in the early detection of HM or that, to date, abnormal corticospinal tract signals on MRI may have been underappreciated.
Negative spinal cord MRI findings support HM in the differential diagnosis, because MRI is essential to rule out compression of the spinal cord or myelitis [31] .
However, abnormal spinal cord and even brain MRI imaging has been reported in HM patients [19,44] . In particular, the MRI finding of intracerebral corticospinal tract abnormalities in a recently reported patient [16] suggests the occurrence of HM-related pathology above the level of the foramen magnum. In fact, an increased FLAIR signal in the subcortical white matter and subcortical spinal tracts was reported. This is the first report of an abnormal MRI intracerebral corticospinal tract FLAIR signal in HM, and it indicates that the pathology of HM may not be confined to the spinal cord or that it may be tied to preclinical PSE. A similar abnormal FLAIR signal has also been described in PSE and cirrhosis [45] . Hyperintensity of the putamen and globus pallidus on T1-weighted MRI, attributed to manganese deposition in these nuclei, is not unique to HM and has been noted in other patients with chronic liver failure [46,47] . Although not specific to HM, these radiological findings correlated with the clinical findings in that patient. Interestingly, the improvement in abnormal brain imaging findings parallels the clinical improvement in spastic paraparesis after LT.
THERAPY
HM has a poor prognosis because of its progressive and irreversible nature. Today, no therapy for this disorder has been established. Conservative treatment strategies for HM include liver protection, neurotropic drugs, and measures to control blood ammonia concentration. However, as previously mentioned, HM responds poorly to conservative medical therapy [15,48] . In particular, in contrast to HE, HM usually does not respond to blood ammonialowering therapies [26] . Surgical treatment options in HM currently include LT, surgical ligation, shunt reduction, or occlusion by interventional procedures. Surgical ligation has been reported to be effective, but is only used occasionally [22] .
Endovascular interventional procedures
Interventional endovascular shunt occlusion has been commonly used to treat encephalopathy due to postsurgical shunt and post-TIPS [15,48] . By contrast, the usefulness of this technique for post-surgical shunt HM has not yet been determined.
Recently, Wang et al [17] first reported reversal of HM by occlusion of a surgical splenorenal shunt using an AVP. In this interesting case, an impaired gait and a progressive decline in mobility were observed 14 mo after surgical splenorenal shunt. The patient had no history of HE, and his laboratory findings showed no liver dysfunction (with the exception of an increase in his serum ammonia level). Therefore, occlusion of the splenorenal shunt represented an alternative therapeutic option, and the large splenorenal shunt was successfully occluded using an AVP. Other possible materials for the embolization of the PSS are coils and detachable balloons. AVP implantation for this patient was chosen due to the relatively large size of the surgical splenorenal shunt. Moreover, coil migration can occur when coils are used in short shunt tracts [49][50][51][52] . AVPs were recently found to be effective for the occlusion of internal iliac arteries [51] , the treatment of pulmonary arteriovenous malformations [52] and the occlusion of a splenorenal shunt arising after TIPS [50][51][52][53] . AVPs have an advantage over coils in that AVPs can be more precisely placed within the vessel and can be repositioned or removed, if necessary.
Following AVP embolization, a gradual improvement in leg strength and balance was observed; seven months after AVP embolization, the patient was able to walk 1 to 2 km aided by crutches, with only mild residual spasticity of the lower extremities.
After PSS embolization a sudden increase in portal pressure may constitute a severe complication, resulting in aggravation of esophageal varices or even development of new varices [15,54,55] . Therefore, embolization should be performed only in patients with absent or mild esophageal varices and without signs of hepatic failure (i.e., ascites or jaundice) [56] . Moreover, routine periprocedural endoscopy is recommended to minimize the incidence of embolization-related complications. Wang et al [17] used an occlusion balloon catheter initially to occlude the surgical shunt. Because further monitoring of the patient over a few days revealed no evidence of induced varices or ascites, an AVP was used to enable closure of the shunt.
Thus, Wang et al [17] are the first to report a surgical shunt related-HM successfully embolized with an AVP, which resulted in an immediate improvement in intrahepatic portal perfusion, a normalization of blood ammonia, and a gradual improvement of HM-related symptoms. The authors were also able to document a temporary balloon occlusion of the surgical shunt prior to permanent embolization, which also may be used to predict clinical and laboratory improvement.
Liver transplantation
Campellone et al [13] were the first to suggest the use of urgent LT for HM because of the progressive and irreversible nature of the disease.
Until the advent of LT, slow progression of spastic paraparesis over several years inevitably caused HM patients to become wheelchair bound. In the reviewed literature, nearly all patients with symptomatic HM who eventually underwent LT had severe paraparesis before the operation and required either a cane or a wheelchair [24,26,28,29] .
LT appears to be the only promising effective treatment modality for HM, as supported by several previously published reports [16,[25][26][27][28][29][30] . In particular, outcomes for those patients who had undergone LT sooner after being diagnosed with HM suggest a potential neurological benefit [16,26,57] . In the case reported by Counsell and Warlow [24] , LT was performed at least 18 mo after the onset of the myelopathy, and there was no improvement. In fact, LT earlier during the clinical course of HM and/or in the absence of marked abnormalities in MEP/CMCT is important in achieving satisfactory reversal of the neurological motor deficit.
It should be considered that, in HM patients with established cirrhosis, the degree of spastic paraparesis and the risk of permanency are discordant with the Child-Pugh score.
Interestingly, Caldwell et al [16] introduced the first use of Model for End-Stage Liver Disease (MELD) exception points for the condition of HM to enable early LT, resulting in the reversal of marked spastic paraparesis. The patient underwent LT approximately 1.5 years after being diagnosed with HM. In this case, there was no overt HE. Both the spastic paraparesis and the posterior column deficits rapidly and markedly improved within 3 mo after successful orthotopic LT. Expedited orthotopic LT may lead to a favorable neurological outcome after the granting of MELD exception points for HM as the primary indication for LT. However, the MELD system does not automatically prioritize these patients for LT, and submission of an appeal is necessary. Increased awareness will aid earlier diagnosis of HM, and because good neurological outcomes can be achieved by prompt LT, the transplant community should consider early and rapid transplant evaluation for those patients with HM. On the basis of their review, Caldwell et al [16] concluded that patients with HM should be prioritized for LT with the consideration of MELD exception points.
CONCLUSION
HM should always be considered in the differential diagnosis in patients with spastic paraparesis in the setting of chronic liver disease and/or portosystemic shunt. The diagnosis of HM should be established as early as possible to enhance the chance of a complete recovery of the spinal cord. Importantly, MEP studies may be suitable for the early diagnosis of HM, even in patients with preclinical stages of the disease. Although HM is thought to be related to the increased shunting of portal venous toxins to the systemic circulation, conservative therapies are, unlike for HE, usually ineffective.
An early diagnosis of HM should prompt recognition of predisposing factors such as PSS or TIPS, which can be considered for shunt occlusion by interventional procedures. However, in most cases, LT represents the only option for patients with HM. In particular, LT remains a potentially definitive treatment for HM in patients with decompensated cirrhosis of Child-Pugh B and C grades [16,26,57] , while for patients with normal liver function or Child-Pugh A grade cirrhosis the choice of LT vs other treatments remains debatable [15,22] . In these patients, shunt occlusion may represent a suitable alternative therapy to LT, and occlusion can help to relieve shunt-induced HM symptoms. In fact, in the case described by Wang et al [17] , a large surgical splenorenal shunt was successfully occluded using an AVP, which resulted in significant clinical improvement of the shunt-induced HM symptoms. This technique represents a viable alternative to surgery or coil embolization, although further research is necessary. In addition, trial balloon occlusion of the shunt prior to performing permanent embolization can be used to predict clinical and laboratory improvement.
In conclusion, HM is a rare cause of spastic paraparesis, but clinical history, along with appropriate laboratory, neurophysiological and neuroimaging findings, may allow an early diagnosis in patients with chronic liver diseases.
We provide a comprehensive and updated review of the most relevant pathophysiological and clinical aspects of HM. Moreover, we also discuss the appropriate and effective treatments for this possibly underrecognized neurological complication of liver cirrhosis.
Validation of Fresnel–Kirchhoff Integral Method for the Study of Volume Dielectric Bodies
In this work, we test a nondestructive optical method based on the Fresnel–Kirchhoff integral, which could be applied to different fields of engineering, such as detection of small cracks in structures, determination of dimensions for small components, analysis of composition of materials, etc. The basic idea is to apply the Fresnel–Kirchhoff integral method to the study of the properties of small-volume dielectric objects. In this work, we study the validity of this method. To do this, the results obtained by using this technique were compared to those obtained by rigorously solving the Helmholtz equation for a dielectric cylinder of circular cross-section. As an example of the precision of the method, the Fresnel–Kirchhoff integral method was applied to obtain the refractive index of a hair by fitting the theoretical curve to the experimental results of the diffraction pattern of the hair measured with a CCD camera. In the same manner, the method was also applied to obtain the dimensions of a crack artificially created in a piece of plastic.
Introduction
There is no doubt that optical methods have been applied with success in different fields of engineering such as experimental solid mechanics, fracture mechanics, civil engineering, etc. Fiber-optic sensor technology, for instance, has been widely used by civil engineers for performance monitoring of civil infrastructures. This is basically due to the fact that fiber-optic sensors have the advantages of small dimensions and good resolution and accuracy [1]. Digital image correlation (DIC) has also been employed for the measurement of small deformations, such as those occurring during fluid-structure interaction [2]. Moiré interferometry has also been a valuable experimental technique for the understanding of the mechanical behavior of materials and structures [3][4][5]. Photoelasticity [6] has also been applied to evaluate the stress and strain field around cracks [7], etc.
In this work, we introduce another technique to evaluate properties of defects, deformations, cracks, and in general any small-volume structure that can be modeled as a volume dielectric body. The method is based on the Fresnel-Kirchhoff integral [8], which in principle was intended to simulate planar structures. Nonetheless, it also has been applied to simulate volumetric dielectric structures as well [9][10][11]. To do this, the structure must be treated as a two-dimensional object. This is achieved by taking into account the phase accumulated by an incident plane wave after passing through the object [12]. The amplitude of the input field in the Fresnel-Kirchhoff integral is assumed to have this phase, and the calculation of the integral provides the field in the output plane. In this work, the accuracy of this method will be analyzed by comparing the results provided by the Fresnel-Kirchhoff approximation for a dielectric cylinder of circular cross-section to the results obtained by rigorously solving the Helmholtz equation.
Next, the method will be applied to obtain the refractive index of a body such as a human hair. To do this, the normalized intensity distribution of the diffraction pattern of the hair will be measured by using a CCD camera. Then, the theoretical curve will be fitted to the experimental results by the Fresnel-Kirchhoff method to test the validity of the method. Finally, to demonstrate the potential of the method, it will be applied to obtain the dimensions of a crack artificially created in a piece of plastic.
Fresnel Integral Method
The basic ideas of the proposed method are explained in this section. We are interested in obtaining some particular parameters, such as the refractive index or the size, of a particular object. The parameters to be evaluated must have the ability to change the phase of the incident light. In Figure 1, a basic scheme for the situation proposed is shown. We will assume an object with a refractive index n and a thickness d, allowing both to vary inside the object. If plane B is described with coordinates (x',y'), and the amplitude of the incident light is U_A, then the amplitude of light at plane B can be calculated as U_A exp(iknd) = U(x',y'), where k is the wavenumber, related to the wavelength λ as k = 2π/λ. Once the amplitude U(x',y') is evaluated at plane B, the field U(P) at a point P(x,y) (Figure 2) can be obtained by applying the Fresnel approximation to the Fresnel-Kirchhoff integral [8]: The intensity of the field at point P(x,y) is computed as I(x,y) = |U(x,y)|². By fitting the theoretical curve of the intensity to experimental data, one can extract information from the object under study. We will demonstrate this in Section 3. It is noticeable that in this method, the features of a volumetric object are described by a phase function, which is introduced into Equation (1). Therefore, the three-dimensional information of the object is incorporated in a two-dimensional integral. This, in our opinion, is an advantage of this method over other numerical methods existing in the literature [13], since we have reduced a three-dimensional problem to a two-dimensional one, with the corresponding saving of computational time. On the other hand, since the method is based on the evaluation of the integral (1), which is closely related to the Fresnel transform [14], it is well suited for solving inverse problems such as in [14].
It has nonetheless some limitations; for instance, since the method makes use of the Fresnel-Kirchhoff integral, it could be inaccurate for large values of the Fresnel number, defined in Section 3.
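The propagation step described above can be sketched numerically. Equation (1) itself is not reproduced in this excerpt, so the sketch below assumes the standard one-dimensional form of the Fresnel approximation, U(x) = e^{ikz}(iλz)^{-1/2} ∫ U(x′) exp[ik(x − x′)²/(2z)] dx′, evaluated by direct quadrature; the wavelength, grid, and propagation distance are illustrative choices, not values from the paper.

```python
import numpy as np

def fresnel_propagate_1d(u_in, x_in, x_out, wavelength, z):
    """Evaluate the 1-D Fresnel diffraction integral by direct quadrature.

    u_in:  complex field sampled at input-plane coordinates x_in
    x_out: observation coordinates on a screen at distance z
    Returns the complex field at x_out (textbook Fresnel form).
    """
    k = 2 * np.pi / wavelength
    dx = x_in[1] - x_in[0]
    # Quadratic phase kernel exp(ik (x - x')^2 / (2z)) for every (x, x') pair
    kernel = np.exp(1j * k * (x_out[:, None] - x_in[None, :]) ** 2 / (2 * z))
    prefactor = np.exp(1j * k * z) / np.sqrt(1j * wavelength * z)
    return prefactor * np.sum(kernel * u_in[None, :], axis=1) * dx

# Sanity check: a wide uniform field should stay nearly uniform near the axis.
lam = 633e-9                          # He-Ne wavelength (illustrative choice)
x_in = np.linspace(-2e-3, 2e-3, 4001)
u_in = np.ones_like(x_in, dtype=complex)
x_out = np.linspace(-50e-6, 50e-6, 11)
u_out = fresnel_propagate_1d(u_in, x_in, x_out, lam, z=0.1)
intensity = np.abs(u_out) ** 2
```

For a wide, uniform input field the on-axis intensity should stay close to the incident value, which provides a quick check of the sampling before any phase screen is introduced.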
For the particular case of a dielectric circular cylinder, we will assume that the phases accumulated by light from a plane just in front of the cylinder, A, to another just after it, B, are [12]: where n_c is the refractive index of the cylinder and a is the radius of a circle representing the cross-section of the cylinder. ϕ_a takes into account the phase accumulated in air, and ϕ_c is the phase accumulated by the cylinder. Now, assuming that a unit amplitude wave is incident onto the cylinder, U(x') can be evaluated as U(x') = exp(ikϕ), where ϕ takes into account at each point x' the different contributions of ϕ_a and ϕ_c. The expression of the amplitude of the wave field at a point P(x,y) of the output plane (Figure 2) is finally: where FI(α,β) depends on the Fresnel integrals of α and β as: being: and: where B depends on the wavelength and the distance of the input plane to the output one: and K'(z) is:
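The cylinder phases in Equation (2) are images that did not survive extraction, so the following sketch reconstructs them from the geometric-path argument in the text: within a slab of thickness 2a, a ray at transverse position x′ traverses a chord of length 2√(a² − x′²) inside the cylinder, with the remainder of the path in air. This chord form, and the radius and index values, are assumptions for illustration.

```python
import numpy as np

def cylinder_phase_screen(x, a, n_c):
    """Optical path across a slab of thickness 2a containing a circular
    cylinder of radius a and index n_c (geometric-path sketch).

    Inside |x| < a the ray crosses a glass chord of length 2*sqrt(a^2 - x^2)
    plus the remaining air path; outside it sees only air of length 2a.
    """
    chord = np.where(np.abs(x) < a,
                     2.0 * np.sqrt(np.clip(a**2 - x**2, 0.0, None)), 0.0)
    phi_c = n_c * chord              # path inside the cylinder (phi_c role)
    phi_a = 2.0 * a - chord          # path in the surrounding air (phi_a role)
    return phi_a + phi_c

a = 40e-6          # cylinder radius (e.g. a human hair; illustrative value)
n_c = 1.55         # assumed refractive index
x = np.array([-2 * a, 0.0, 2 * a])
phi = cylinder_phase_screen(x, a, n_c)
# On axis the whole slab path is glass, phi = 2*a*n_c; far away it is pure air, 2*a.
```

Multiplying this path by k and exponentiating gives the input field U(x′) = exp(ikϕ) that enters the Fresnel integral.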
Rigorous Solution
The aim of this section is to solve Maxwell equations for the particular problem of the scattering of an infinite circular dielectric cylinder. These solutions are obtained without any approximation and will serve to test the validity of the Fresnel approximation made in Section 2.1. It is important to say that solutions of this kind exist in the literature [15,16], but in this work we have chosen another route to obtain the analytical expressions for the scattering coefficients, and we believe that this derivation is interesting on its own for the scientific community. Needless to say, although the scattering coefficients are different in this work from those obtained in other derivations, the final scattering and internal electric fields are the same.
In this derivation, the starting point is the scalar Helmholtz wave equation: Once scalar solutions of the previous equation are obtained, vector solutions of Maxwell equations can be found in terms of the scalar solutions by building the so-called vector harmonics: Here, ê is an arbitrary vector. In our derivation, we chose this arbitrary vector to be the unit vector ê_ρ, according to Figure 3, whereas in other works [15,16] the unit vector was chosen to be ê_z. Although ê_z is a natural choice for axially symmetric objects, we believe that the choice of ê_ρ is a more general option, allowing for the simulation of dielectric bodies with other shapes. In this way, we give another expansion of the electric and magnetic fields in terms of the new vector harmonics calculated in this work (Equations (16) and (17)). In Figure 3, the axis of the cylinder under study was chosen to be the z axis.
From the particular configuration that we are treating here, it is clear that a proper choice of coordinates is the set of cylindrical coordinates (ρ, ϕ, z). In these coordinates, the Helmholtz equation takes the form: Separable solutions of this equation can be found in the form: where υ = 0, 1, 2, . . . and h is dictated by the form of the incident wave. In this work, we will assume that the electric field is incident parallel to the axis of the cylinder (in the z direction), so that we can consider that h = 0. On the other hand, Z_υ(kρ) satisfy the following Bessel equation [17]: For each solution of type (14), we can apply Equations (11) and (12) to calculate the corresponding vector harmonics, giving: where Z′_υ(kρ) = dZ_υ(kρ)/d(kρ); that is, the prime denotes the derivative of Z_υ with respect to its argument. Now the electric and magnetic fields can be expanded in terms of the cylindrical harmonics. Outside the cylinder, the electric and magnetic fields are obtained as the sum of the scattered (denoted by the subscript "s") and the incident (subscript "i") fields: We will also denote the fields inside the cylinder with the subscript (1).
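The Bessel equation satisfied by the generating functions Z_υ(kρ) (Equation (15) of the original, which did not survive extraction) has the standard form u²Z″ + uZ′ + (u² − υ²)Z = 0 in the variable u = kρ. A numerical spot check that J_υ satisfies it, using SciPy's exact derivative routines:

```python
import numpy as np
from scipy.special import jv, jvp

def bessel_residual(nu, u):
    """Residual of u^2 Z'' + u Z' + (u^2 - nu^2) Z for Z = J_nu.

    jvp(nu, u, n=1) and jvp(nu, u, n=2) are the first and second
    derivatives of J_nu, computed from recurrence relations.
    """
    z = jv(nu, u)
    zp = jvp(nu, u, n=1)
    zpp = jvp(nu, u, n=2)
    return u**2 * zpp + u * zp + (u**2 - nu**2) * z

u = np.linspace(0.5, 20.0, 200)
# Maximum residual over the grid for the first few integer orders
res = np.array([np.max(np.abs(bessel_residual(nu, u))) for nu in range(5)])
```

The residual should be at the level of floating-point round-off, confirming that these functions are legitimate generators for the vector harmonics.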
We will consider three expansions for the electric field: incident, scattered, and inside the cylinder. The proper Bessel functions Z υ (kρ) will be chosen accordingly for each case. In particular, since the electric field must be finite at the origin, the Bessel functions of the first kind J υ (kρ) will be chosen in the cases of the incident and the internal field. In the case of the scattered field, Hankel functions H υ (kρ) will be chosen as the generating functions, since their asymptotic behavior is that of a decaying wave at large distances. The expansion of the electric field is: where j = i, 1, s for the incident, internal, and scattered fields, respectively.
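The choice Z_υ = J_υ for the incident field is consistent with the fact that a unit plane wave has an exact expansion in Bessel functions — the Jacobi–Anger identity, exp(ikρ cos ϕ) = Σ_υ i^υ J_υ(kρ) e^{iυϕ}, which underlies Equation (25) of the text. A short numerical check of a truncated sum:

```python
import numpy as np
from scipy.special import jv

def plane_wave_expansion(k_rho, phi, n_max):
    """Partial sum of the Jacobi-Anger identity
    exp(i*k_rho*cos(phi)) = sum_n i^n J_n(k_rho) e^{i n phi},
    truncated at |n| <= n_max."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum((1j ** n) * jv(n, k_rho) * np.exp(1j * n * phi))

# Arbitrary test point; the series converges rapidly once n_max > k_rho
k_rho, phi = 7.3, 0.9
exact = np.exp(1j * k_rho * np.cos(phi))
approx = plane_wave_expansion(k_rho, phi, n_max=40)
err = abs(exact - approx)
```

Since J_n(k_rho) decays super-exponentially for n ≫ k_rho, a modest truncation order already reproduces the plane wave to machine precision.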
The corresponding magnetic fields are obtained analogously from these expansions. In this particular work, we will assume that light is polarized in the z axis direction. For this particular case, it is easy to see that the incident electric field depends only on the M⃗_υ harmonics and the magnetic field only on the N⃗_υ harmonics; the corresponding expansions for the incident electric and magnetic fields are Equations (22) and (23). On the other hand, assuming a unit incident plane wave, the incident electric field can also be expressed in the form E⃗_i = e^(ikρ cos ϕ) e⃗_z (Equation (24)). Now, making use of the expansion of the exponential function in terms of Bessel functions [17], e^(ikρ cos ϕ) = Σ_{υ=−∞}^{+∞} i^υ J_υ(kρ) e^(iυϕ) (Equation (25)), and comparing Equations (22) and (24) with the aid of Equations (16) and (25), the expansion coefficients for E⃗_i and H⃗_i can be calculated. For the case of the scattered and internal fields, the values of the expansion coefficients are obtained by imposing the boundary conditions n⃗ × (E⃗_i + E⃗_s − E⃗_1) = 0 and n⃗ × (H⃗_i + H⃗_s − H⃗_1) = 0 at the surface of the cylinder, where n⃗ is a unit vector perpendicular to the surface of the cylinder and directed outward, which in this case coincides with e⃗_ρ. This is equivalent to saying that the z and ϕ components of the electric and magnetic fields are continuous at the surface of the cylinder. Since in this work we are interested in the field outside the cylinder, we give only the results obtained for the expansion coefficients of the scattered field (Equations (29) and (30)), where x = ka and n is the refractive index of the cylinder. By using Equations (16), (20), (29), and (30), the scattered field of an infinite dielectric cylinder can be obtained when a plane wave is incident perpendicular to the axis of the cylinder. Finally, with the aid of Equations (18) and (24), the total electric field at any point of the space outside the cylinder can be calculated.
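For reference, the boundary-condition algebra above can be carried out numerically. The following sketch implements a TM (E parallel to the axis) scattering coefficient for a lossless cylinder at normal incidence; the sign convention and normalization may differ from the paper's Equations (29) and (30), but two physical checks hold in any convention: the coefficient vanishes when n = 1 (no optical contrast), and its modulus is bounded by 1 for a real refractive index.

```python
import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

def b_tm(v, x, n):
    """TM scattering coefficient of an infinite dielectric cylinder at normal
    incidence; x = ka is the size parameter, n the refractive index. Obtained
    from continuity of E_z and H_phi at rho = a (sign convention may vary)."""
    num = jv(v, n * x) * jvp(v, x) - n * jvp(v, n * x) * jv(v, x)
    den = jv(v, n * x) * h1vp(v, x) - n * jvp(v, n * x) * hankel1(v, x)
    return num / den

x, n = 5.0, 1.85
coeffs = np.array([b_tm(v, x, n) for v in range(12)])
assert np.all(np.abs(coeffs) <= 1.0 + 1e-12)   # bounded for a lossless cylinder
assert abs(b_tm(0, x, 1.0)) < 1e-12            # no contrast -> no scattering
print(np.abs(coeffs).round(4))
```

The coefficients decay rapidly for orders υ well above the size parameter ka, which is what makes truncating the harmonic sum practical.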
Validation of the Fresnel Method by Comparison with the Rigorous Solution
The aim of this section is the validation of the more general Fresnel method for volume objects described in Section 2.1, which can be applied to a great number of situations, by comparison with the rigorous solution for the dielectric cylinder obtained in Section 2.2. To make a proper comparison, the parameters in the formalism of Section 2.2 must be set adequately. We present in Figure 4 the geometric scheme we considered. In this case, we want to obtain the intensity pattern created by the cylinder on a screen positioned at a distance z_p from it, when light impinges perpendicularly to its axis. For the case of the rigorous solution, the intensity is obtained as I = E_T², where E_T is the modulus of the total electric field of Equation (20).
The intensity for the Fresnel approach is obtained from Equation (4), by multiplying the amplitude by its complex conjugate.
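The Fresnel-integral route is not reproduced in detail in this excerpt, so the following is a sketch under an assumed thin-object (projection) approximation: the cylinder is modeled as a pure phase screen with phase k(n − 1)L(x), where L(x) = 2√(a² − x²) is the chord length, and the field is propagated with a direct 1D Fresnel integral. Parameter choices mirror the simulations discussed below; this is illustrative and not necessarily the paper's exact Equation (4).

```python
import numpy as np

# Illustrative 1D Fresnel propagation of the field behind a cylinder, under an
# ASSUMED thin-object (projection) approximation: the cylinder acts as a phase
# screen with phase k*(n-1)*L(x), where L(x) = 2*sqrt(a^2 - x^2) is the chord.
lam = 633e-9                      # wavelength (m)
k = 2 * np.pi / lam
a = 30e-6                         # cylinder radius (m)
n = 1.5                           # refractive index (illustrative value)
z = 20e-3                         # cylinder-to-screen distance (m)

x = np.linspace(-200e-6, 200e-6, 4000)        # object-plane coordinate
dx = x[1] - x[0]
chord = np.where(np.abs(x) < a, 2.0 * np.sqrt(np.maximum(a**2 - x**2, 0.0)), 0.0)
t = np.exp(1j * k * (n - 1) * chord)          # transmission (phase) function

xp = np.linspace(-200e-6, 200e-6, 400)        # screen-plane coordinate
kernel = np.exp(1j * k * (xp[:, None] - x[None, :])**2 / (2.0 * z))
U = np.sqrt(1.0 / (1j * lam * z)) * (kernel * t).sum(axis=1) * dx
I = np.abs(U)**2          # intensity: amplitude times its complex conjugate
print(I.max(), I.min())
```

The grid spacing is chosen so that the quadratic kernel phase changes by much less than a radian between samples, which keeps the direct quadrature well resolved.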
Appl. Sci. 2021, 11, x FOR PEER REVIEW

Figures 5 and 6 show the comparison of both theories for a circular cylinder.
The intensity pattern was calculated at a screen positioned at an axial distance that was varied in the simulations. The wavelength of the incident light was chosen to be 633 nm, while the radius of the cylinder was 30 µm in the case of Figure 5 and 80 µm in the case of Figure 6. It can be observed that both curves, one obtained by using the Fresnel approximation and the other by using the rigorous solution, behaved in the same manner as a function of the axial distance. Both models gave basically the same results, but it was clear that the higher the distance to the screen, the better the agreement between both models. This is due to the fact that the Fresnel approximation works better for lower values of the Fresnel number (N_F) [8], which is defined as N_F = a²/(λz), where a is the dimension parameter of the object under study (the radius of the cylinder in this case), λ is the wavelength of light, and z is the distance to the screen. Increasing values of z give lower values of N_F, and therefore a better behavior of the Fresnel approximation. An increase of the refractive index also worsens the results of the Fresnel method slightly, which can be observed when comparing parts (a) and (b) of Figures 5 and 6; here, (a) corresponds to a refractive index n = 1.85, whereas (b) corresponds to n = 3.4, which is considerably higher.
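Assuming the standard definition N_F = a²/(λz) (consistent with the dependence on a, λ, and z described in the text), the Fresnel numbers for the simulated configurations are easy to tabulate:

```python
# Fresnel number N_F = a^2 / (lambda * z) for the configurations used in the
# text: lambda = 633 nm, cylinder radii a = 30 um and 80 um, and screen
# distances up to the 6 mm maximum of Figures 5-6 and the 50 mm of Figures 7-8
# (2 mm is just an additional sample point within the simulated range).
lam = 633e-9
for a in (30e-6, 80e-6):
    for z in (2e-3, 6e-3, 50e-3):
        nf = a**2 / (lam * z)
        print(f"a = {a * 1e6:.0f} um, z = {z * 1e3:.0f} mm -> N_F = {nf:.3f}")
```

At a fixed distance, the 80 µm cylinder has a Fresnel number (80/30)² ≈ 7 times larger than the 30 µm one, consistent with the larger Fresnel-method error reported below for larger radii.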
Despite the slight disagreement observed between both methods in the range of values considered, it must be said that the distances considered in the simulations of Figures 5 and 6 were rather conservative, since typical measuring distances from the object to the camera (CCD) are higher than 6 mm, which was the maximum distance considered in the simulations. Figures 7 and 8 show the diffraction pattern observed at a screen situated 50 mm from the cylinder for a radius of 30 µm and 80 µm, respectively. The agreement of both theories was clear in this case, thus validating the proposed method for the simulation of volume dielectric bodies. This is also clear from Figure 9, where the relative error of the Fresnel method with respect to the rigorous one is depicted for refractive indices from 1.3 to 4 and cylinder radii from 30 to 90 µm. In order to calculate the error, the solutions given by the Fresnel method and the rigorous one were calculated at each pair of values (a, n). The error was calculated at the target plane for every tested (a, n) pair using the L2-norm of the error under constant illumination, and was then divided by the intensity obtained by the rigorous method integrated at the target plane. From the figure, two conclusions can be obtained: on the one hand, the error in the range of values evaluated mainly increased with the radius of the cylinder; on the other hand, it can be seen that only small differences existed between the two methods.
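The exact normalization of the error is not spelled out in the text; one plausible reading, used in the sketch below (function name and normalization are ours), is the L2-norm of the intensity difference at the target plane divided by the integrated intensity of the rigorous solution:

```python
import numpy as np

# One plausible implementation of the error metric described in the text:
# L2-norm of the difference between the two intensity profiles at the target
# plane, normalized by the integrated intensity of the rigorous solution.
def relative_l2_error(i_fresnel, i_rigorous, dx=1.0):
    diff_norm = np.sqrt(np.sum((i_fresnel - i_rigorous) ** 2) * dx)
    return diff_norm / (np.sum(i_rigorous) * dx)

# Identical profiles give zero error; a uniform 1% offset gives a small one.
i_ref = 1.0 + 0.3 * np.cos(np.linspace(0.0, 10.0, 200))
assert relative_l2_error(i_ref, i_ref) == 0.0
print(relative_l2_error(1.01 * i_ref, i_ref))
```

Evaluating this function on the (a, n) grid described above would reproduce an error map of the kind shown in Figure 9.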
To evaluate the sensitivity of the method to changes in the refractive index and radius, simulations were carried out starting with the rigorous solution, with values for the refractive index of n = 1.5 and a = 60 µm. Then simulations were performed using the Fresnel method, slightly varying the radius and the refractive index. Figure 10 shows the intensity profile at a distance of 20 mm from the cylinder.
The rigorous solution was calculated for a refractive index of n = 1.5 and a = 60 µm, and the Fresnel method was performed for radius values of 60, 62, 64, 66, 68, and 70 µm. From the figure, it is clear that variations of 2 µm in the radius created visible changes in the shape of the curves obtained using the Fresnel method. The distance from the maximum value to the minimum value of the curve also changed with an increase or decrease of 2 µm in the radius of the cylinder.
Figure 11 also shows the intensity profile at a distance of 20 mm from the cylinder, but in this case the modified parameter was the refractive index. The rigorous solution was calculated for a refractive index of n = 1.5 and a = 60 µm, and the Fresnel method was performed for values of the refractive index of 1.50, 1.51, 1.52, 1.53, 1.54, and 1.55. As in Figure 10, variations of 0.01 created visible changes in the shape of the curves and also in the difference between the maximum and minimum values of the curve. On the other hand, it is interesting to note that small changes in the radius provoked different, although subtle, variations in the theoretical curve than changes in the refractive index did. For instance, if one looks at the behavior of the curves in the range x: (−130, −70) µm, or symmetrically in the range x: (70, 130) µm, one can see that, in the transformation of the curve at the starting values (red curve) to that at the final values (light blue curve), the curves transforming through changes in the radius possessed a local maximum and a local minimum, whereas those transforming through changes in the refractive index did not.
Figure 11. Intensity of the diffraction pattern as a function of the distance to the center of the diffraction pattern for a dielectric cylinder with internal radius a = 60 µm and refractive indices of 1.50, 1.51, 1.52, 1.53, 1.54, and 1.55.
Experimental Validation
Although the Fresnel method described was validated in Section 3.1, it is interesting to observe its ability to extract information from a determined volumetric dielectric object. In this section, in order to use the expressions of Section 2.1, a small cylindrical object will be studied, which in this case was chosen to be a human hair (which can be considered nearly cylindrical). Figure 12 shows the experimental setup used to obtain the diffraction pattern of the hair. The light coming from a He-Ne laser (633 nm) was collimated by using a system of lenses; the sample (hair) was placed between the laser and a CCD connected to a personal computer, which was used to process the data. Figure 13 shows the diffraction pattern obtained from the hair by using this setup.
In Figure 14, the normalized intensity, obtained by dividing the intensity captured by the CCD by its maximum value, is shown as a function of the distance to the center of the screen. In order to fit the theoretical function to the experimental data, we used the "lsqcurvefit" function of MATLAB, which implements the "trust region reflective" algorithm [18]. The starting values for the algorithm were a radius a = 50 µm and a refractive index n = 1.7; the fitting of the theoretical curve was obtained by using the method of Section 2.1. The fit to the experimental data provided a refractive index of 1.55 and an internal radius of 25.4 µm. The size of the hair was also measured using an optical microscope, which gave a radius of 25 ± 2 µm, whereas the standard value of the refractive index of human hair given in the literature is 1.55 [19]. The figure therefore also constitutes an experimental validation of the proposed method. Finally, in order to demonstrate the potential of the method, it was applied to obtain the dimensions of a crack artificially created in a piece of plastic.
Figure 15 shows an image obtained from an optical microscope of the piece of plastic with the artificially created crack (a), and the diffraction pattern observed (b). Figure 16 shows the experimental data (extracted from the diffraction pattern) and a theoretical fit made by using the method described in Section 2.1. In this case, Equations (2) and (3) were changed to account for an elliptical crack with refractive index 1 (air), and the refractive index of the surroundings was set to 1.51 (the refractive index of the plastic).
The varying parameter in this case was the width of the crack. As in the case of the hair, we used the "lsqcurvefit" function of MATLAB with an initial guess of 100 µm. The width of the crack obtained with the aid of the microscope was 175 ± 5 µm, whereas the fit gave a value of 172 µm.
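MATLAB's `lsqcurvefit` with the trust-region-reflective algorithm has a direct analogue in SciPy's `least_squares(method="trf")`. The sketch below illustrates the same parameter-recovery workflow on a toy fringe model (the model and values are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import least_squares

# SciPy analogue of MATLAB's "lsqcurvefit" with the trust-region-reflective
# ("trf") algorithm, illustrated on a TOY fringe model: recover a width
# parameter from noisy data. Not the paper's diffraction model.
def model(width, x):
    return np.sinc(x / width) ** 2        # placeholder diffraction-like pattern

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 201)
y_obs = model(1.75, x) + 0.01 * rng.normal(size=x.size)

res = least_squares(lambda p: model(p[0], x) - y_obs,
                    x0=[1.0], method="trf", bounds=(0.1, 10.0))
print(res.x[0])                           # should recover a value near 1.75
```

In the paper's setting, `model` would be replaced by the full Fresnel calculation of Section 2.1, with the radius (or crack width) and refractive index as free parameters.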
Conclusions
In this work, a nondestructive optical method based on the Fresnel-Kirchhoff integral was tested. The method is suitable for volume dielectric bodies of arbitrary shapes. In particular, we studied the validity of the method by comparing it with a rigorous one for the particular case of a circular cylinder, for which the Maxwell equations were solved exactly. The results demonstrated good agreement between the theories. Finally, the method was experimentally tested by observing the diffraction pattern of a human hair on a CCD camera, which also demonstrated good agreement between the theoretical model and the experimental data. To demonstrate the potential of the method, it was applied to obtain the dimensions of a crack artificially created in a piece of plastic.
"year": 2021,
"sha1": "7e67853c0123dd1ace6d1b7957d4323dd8220e90",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/9/3800/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5496046008a3f121323bcba8c54ac4e486f863b8",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
22795846 | pes2o/s2orc | v3-fos-license | Purinergic signaling during Porphyromonas gingivalis infection
Despite recent advances unraveling mechanisms of host–pathogen interactions in innate immunity, the participation of purinergic signaling in infection-driven inflammation remains an emerging research field with many unanswered questions. As one of the most-studied oral pathogens, Porphyromonas gingivalis is considered as a keystone pathogen with a central role in development of periodontal disease. This pathogen needs to evade immune-mediated defense mechanisms and tolerate inflammation in order to survive in the host. In this review, we summarize evidence showing that purinergic signaling modulates P. gingivalis survival and cellular immune responses, and discuss the role played by inflammasome activation and cell death during P. gingivalis infection.
Keywords: P2X7 receptor; Oral microbes; Inflammasome
Innate immunity and oral microbes
The host organism is always ready to respond to foreign stimuli, such as infection by pathogens. The first response of the host immune system is carried out by innate immunity. Unlike the adaptive response (which includes specific antibodies and lymphocytes), the repertoire of the innate response is common among all normal and healthy individuals. This response involves cellular and humoral activities, as well as chemical (e.g. acidic stomach pH, saliva and tears) and anatomical barriers (e.g. epithelial cells throughout the body).
Cellular responses to pathogens depend on the recognition of evolutionarily conserved structures that are typically present in microbes but not in the host. These molecules, called "Pathogen-Associated Molecular Patterns" (PAMPs), are recognized by innate immune cells through "Pattern Recognition Receptors" (PRRs). PRRs include Toll-like receptors (TLRs), nucleotide-binding and oligomerization domain (NOD)-like receptors (NLRs), retinoic-acid-inducible gene I (RIG-I)-like receptors (RLRs), and the C-type lectin receptors (CLRs), as well as DNA receptors (cytosolic sensors for DNA) [1–3]. Different PAMPs can be recognized by PRRs, whose ligation leads to activation of transcription factors, such as activator protein 1 (AP-1) and nuclear factor kappa B (NF-κB), which, in turn, modulate gene transcription of pro-inflammatory cytokines and chemokines [4]. This response is important for the host to control infections and prevent disease.
Knowledge about the human microbiota and its relationship with the host is also crucial for better understanding the mechanisms of immunity, since the microbiota can stimulate and modulate the immune system. The human body contains 10 times more prokaryotic than eukaryotic cells, and humans and microbes have evolved to gradually become dependent on one another [5]. The oral mucosa is associated with hundreds of different viruses and bacterial, archaeal, fungal, and protozoan species, many of which can interact to form biofilms conferring resistance to the microorganisms against mechanical and chemical stress [5,6]. In this context, bacterial communities found in the oral mucosa are highly complex (comprising around 1000 species) and are one of the most complex in the whole body [6]. Most of the oral microbes live as commensals within the host, but some species can become pathogenic in response to the host genotype, stress, diet or behavior (e.g. smoking) [7].
Not surprisingly, some oral microbes have been related to oral disorders, including periodontal disease (or periodontitis), which is commonly manifested as a chronic and inflammatory condition induced by biofilms and pathogens closely associated with the periodontium (the structures that protect and support teeth, such as gingiva, periodontal ligament and alveolar bone). When the periodontium is damaged, a process of uncontrolled bone resorption is initiated and can lead to tooth loss as a consequence [7,8]. Socransky and colleagues identified several bacteria present in subgingival biofilms from individuals with periodontitis and grouped these bacteria in complexes according to their association with disease [9]. Among these complexes, the one most strongly associated with periodontitis is the so-called "red complex" bacteria. The "red complex" consists of Tannerela forsythia, Treponema denticola and Porphyromonas gingivalis, which are related to human periodontitis because of their strong association with the diseased sites [9,10].
P. gingivalis is an anaerobic, asaccharolytic, black-pigmented, non-motile and non-spore-forming Gram-negative bacterium, which exists as different strains with variable virulence [11–13]. P. gingivalis is mainly found during diseased states but can also be found in healthy individuals, with a prevalence of around 25% in healthy individuals and 79% in individuals with periodontitis [14]. Recent studies have suggested a new paradigm of periodontal pathogenesis, which assigns a larger role to certain bacteria of the oral microbiota and to host susceptibility in the development of disease. Usually, pathogenic bacteria lead to inflammation by direct infection and by dysregulation of the commensal microbiota. P. gingivalis has been described as a "keystone pathogen" due to its ability to induce dysbiosis and inflammation even at relatively low abundance. Using a murine model of experimental periodontitis, it was shown that, even at low abundance after inoculation, P. gingivalis can infect the oral mucosa and promote changes in the numbers and composition of the oral commensal microbiota, leading to a dysbiotic environment [15]. Dysbiosis is characterized by an imbalance in the relative abundance of species within the microbiota that is related to disease induction [7]. This study also showed that P. gingivalis inoculation into the oral cavity led to bone loss in specific pathogen-free (SPF) mice, whereas P. gingivalis alone could infect but did not induce bone loss in germ-free mice. These data gave rise to the "keystone pathogen hypothesis" of periodontal disease [15].
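The carriage figures above can be converted into a rough effect size. The snippet below is purely illustrative arithmetic on the prevalences reported in [14]; the odds ratio itself is not stated in the source, and the function names are ours:

```python
def odds(p):
    """Convert a probability (prevalence) to odds."""
    return p / (1.0 - p)

def odds_ratio(p_case, p_control):
    """Odds of carriage in cases relative to controls."""
    return odds(p_case) / odds(p_control)

# ~79% carriage in periodontitis vs. ~25% in healthy individuals [14]
print(f"odds ratio: {odds_ratio(0.79, 0.25):.1f}")  # roughly 11-fold higher odds
```

On these figures, the odds of carrying P. gingivalis are about eleven times higher in individuals with periodontitis than in healthy individuals, consistent with its strong association with diseased sites.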
Besides being considered a keystone pathogen, this bacterium is also associated with an increased risk of diverse systemic diseases. Oral health is important for overall health, since it has been shown that periodontitis increases patients' risk for atherosclerosis, rheumatoid arthritis, and cancer [16–20]. In fact, during periodontal disease, bacteria temporarily reach blood vessels, and this transient bacteremia is responsible for spreading bacteria to distant sites of the organism, such as atherosclerotic plaques (as detected by PCR [17]), where they can accelerate pathogenesis.
P. gingivalis has evolved several virulence factors to evade innate and adaptive immunity and cause disease. There is a large body of evidence describing the ambivalent behavior of this microorganism, which on one hand needs to escape immune-mediated detection and on the other hand tolerates and perpetuates inflammation to survive in the host. It was shown that P. gingivalis fimbriae strongly activate human monocytes but only weakly activate epithelial cells with regard to interleukin (IL)-6, IL-8, macrophage colony-stimulating factor (M-CSF) and tumor necrosis factor (TNF)-α responses, reflecting the different strategies used by this bacterium when interacting with distinct host cell types [21]. As another example, P. gingivalis can manipulate host neutrophils through complement C5a receptor and TLR2 pathways in a cooperative crosstalk involving downstream adaptor molecules such as MyD88 [22]. This study showed higher bacterial survival in infected MyD88-deficient mice compared with wild-type mice. In vitro, MyD88-depleted human periodontal fibroblasts showed decreased expression of major inflammatory cytokines such as IL-6 and IL-8 compared to control cells when stimulated with P. gingivalis LPS [23]. Moreover, cysteine proteinases from P. gingivalis called gingipains can influence the composition of polymicrobial biofilms [24]. This work revealed an interdependency between the gingipains of P. gingivalis and T. forsythia or T. denticola, suggesting supported survival and virulence of the biofilm community as a whole. The capacity of P. gingivalis to interact with other periodontal pathogens such as Fusobacterium nucleatum was also demonstrated in the case of P. gingivalis' ability to suppress inflammasome activity [25], which will be discussed later in this review. Finally, P. gingivalis expresses hemagglutinins, as well as an atypical and less immunogenic LPS, which can act as an antagonist of TLR4 ligation; a serine phosphatase B (SerB), which inhibits IL-8 synthesis by gingival epithelial cells (GECs); fimbriae, which are involved in bacterial adhesion; and a nucleoside-diphosphate kinase (NDK), which hydrolyzes extracellular ATP (eATP) and will be discussed in more detail later in this review.
P. gingivalis can also evade different mechanisms of adaptive immunity. Human neutrophils, peripheral blood mononuclear cells (PBMCs) and GECs infected with P. gingivalis produced IL-1β but not the T-cell chemokine CXCL10 [26]. Thus, P. gingivalis infection inhibited interferon (IFN)-γ-induced and F. nucleatum-induced CXCL10 secretion by epithelial cells. Moreover, P. gingivalis adhesion induced morphological changes, reactive oxygen species (ROS) production and increased intracellular Ca2+ levels in T cells [27]. This periodontal pathogen also inhibited AP-1 and NF-κB activity, as well as IL-2 accumulation, by means of its gingipains. Another study demonstrated that gingipains of P. gingivalis cleave immunoglobulin G1 (IgG1) in the gingival crevicular fluid of patients and may suppress antibody-dependent antibacterial activity in vivo [28]. Also, it was shown that, upon initial encounter with P. gingivalis in vivo, murine splenic T cells and CD11b+ cells produced IL-10, and this cytokine suppressed IFN-γ T cell responses [29]. Furthermore, it was demonstrated that supernatants of human immune cells infected with two different strains of P. gingivalis induced T helper cell polarization toward a Th17 profile instead of a Th1 profile [30]. Moreover, P. gingivalis favored the generation of Th17-related cytokines such as IL-1β, IL-6 and IL-23 but not the Th1-related cytokine IL-12. Interestingly, another laboratory showed that subcutaneous vaccination with formalin-killed P. gingivalis protected mice from alveolar bone resorption and inflammation through downregulation of Th17 cells and IL-17A production, while promoting upregulation of regulatory T (Treg) cells, IL-10 and transforming growth factor-β1 (TGF-β1) [31].
Purinergic signaling in the context of infection and inflammation
ATP is traditionally associated with cellular energy metabolism in all prokaryotic and eukaryotic cell types, but it is also recognized that ATP and other nucleotides are released from cells following stress or injury [32,33]. Both controlled and uncontrolled mechanisms of ATP release to the extracellular space take place during cellular stress, death or tissue injury. It has been demonstrated thus far that ATP is released from necrotic cells via pannexin channels, connexin hemichannels and also via the P2X7 receptor [34,35]. In the extracellular compartment, nucleotides can be recognized by the host immune system as danger signals and can promote several biological activities in different immune cell types. Examples include: maturation of immature dendritic cells, secretion of pro-inflammatory cytokines by macrophages, chemotaxis and IL-8 production by eosinophils, and costimulation for antigenic stimulation by T and B cells [35,36]. These molecules bind to purinergic receptors expressed on virtually all immune cell types [33].
Purinergic receptors are divided into two families: P1 and P2 receptors [32]. The G-protein-coupled metabotropic P1 receptors recognize exclusively adenosine and can be subdivided into A1, A2A, A2B and A3 receptors, each with a different binding affinity for adenosine [32,37]. The A1 and A3 receptors are coupled to Gi proteins, whereas A2A and A2B are associated with Gs proteins [37,38]. The P2 receptors can be subdivided into two subtypes: the ionotropic P2X receptors, non-selective ligand-gated ion channels that recognize ATP, and the G-protein-coupled P2Y receptors, which recognize ATP, ADP, UTP, UDP and UDP-glucose [35,36]. To date, seven P2X receptors (P2X1 to P2X7) with different affinities for ATP have been described. Among these receptors, the P2X7 receptor has a low affinity for ATP (requiring around 100 µM to be activated, while the others can be activated at lower concentrations) and has been associated with immune responses and inflammation, such as inhibition of infection by intracellular pathogens and activation of the inflammasome, which will be discussed later in this review [4,39–41].
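The classification above can be summarized in a small lookup table. This is simply our own reorganization of the subtypes, ligands and G-protein couplings named in the text, not an exhaustive pharmacological reference:

```python
# P1: metabotropic, G-protein coupled, recognize adenosine exclusively.
P1_G_PROTEIN = {"A1": "Gi", "A3": "Gi", "A2A": "Gs", "A2B": "Gs"}

# P2X: ionotropic ligand-gated channels activated by ATP (seven subtypes).
P2X_SUBTYPES = [f"P2X{i}" for i in range(1, 8)]

# P2Y: G-protein coupled; recognize ATP, ADP, UTP, UDP and UDP-glucose.
P2Y_LIGANDS = {"ATP", "ADP", "UTP", "UDP", "UDP-glucose"}

print(P1_G_PROTEIN["A2A"])  # Gs
print(len(P2X_SUBTYPES))    # 7
```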
ATP ligation of the P2X7 receptor leads to the opening of a transmembrane non-selective cationic channel that allows K+ efflux and Na+ and Ca2+ influx and promotes cytoplasmic membrane depolarization [35]. P2X7 receptor activation is associated with pore formation, which depends on the concentration and duration of ATP treatment [41]. Continuous stimulation of the P2X7 receptor with ATP can induce cell death either by necrosis or apoptosis [42], as well as lead to the opening of a pore that allows the passage of molecules up to 900 Da [43].
Since recognition of high levels of eATP results in modulation of immune responses, the host has a sophisticated and sensitive mechanism to regulate the composition, duration, intensity and magnitude of purinergic signaling, as reviewed elsewhere [41]. Immune and non-immune cells utilize a group of nucleotide-hydrolyzing enzymes called ecto-nucleotidases to control exacerbated levels of nucleotides and maintain steady-state conditions. The ecto-nucleoside triphosphate diphosphohydrolases (E-NTPDases, which degrade extracellular tri- and diphosphonucleosides to monophosphonucleosides), the ecto-nucleotide pyrophosphatase/phosphodiesterases (E-NPPs, which hydrolyze pyrophosphate and phosphodiester bonds in a wide range of substrates), and ecto-5′-nucleotidase (which degrades AMP) are the most relevant ecto-nucleotidases in the context of innate immunity [41]. These enzymes are responsible for maintaining healthy and stable levels of eATP [36] and for generating adenosine, a metabolite of ATP breakdown [41].
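The sequential hydrolysis described here (E-NTPDases: ATP → ADP → AMP; ecto-5′-nucleotidase: AMP → adenosine) can be sketched as a toy first-order kinetic model. The rate constants, time scale and initial pool below are arbitrary illustrative values, not measured parameters:

```python
def simulate_cascade(atp0=100.0, k1=0.5, k2=0.5, k3=0.3, dt=0.01, t_end=20.0):
    """Euler integration of ATP -> ADP -> AMP -> adenosine (arbitrary units)."""
    atp, adp, amp, ado = atp0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d_atp = -k1 * atp                # E-NTPDase: ATP -> ADP
        d_adp = k1 * atp - k2 * adp      # E-NTPDase: ADP -> AMP
        d_amp = k2 * adp - k3 * amp      # ecto-5'-nucleotidase: AMP -> adenosine
        d_ado = k3 * amp
        atp += d_atp * dt
        adp += d_adp * dt
        amp += d_amp * dt
        ado += d_ado * dt
    return {"ATP": atp, "ADP": adp, "AMP": amp, "adenosine": ado}

pools = simulate_cascade()
# Total mass is conserved; nearly all of the initial eATP ends up as adenosine.
print({k: round(v, 2) for k, v in pools.items()})
```

The point of the sketch is qualitative: an intact ecto-nucleotidase cascade steadily converts a pro-inflammatory eATP signal into an adenosine signal, which, as discussed below, has largely opposite effects on immunity.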
While ATP exhibits pro-inflammatory and stimulatory effects on the immune system and physiology [35,44–51], adenosine has primarily anti-inflammatory and inhibitory effects [37,52–56]. Therefore, the balance between ATP and adenosine levels is important in modulating cellular immune responses and pathogen survival [Fig. 1].
ATP effects in intracellular pathogen infection
Immune and epithelial cells activate microbicidal pathways and contribute to inflammation after ligation of purinergic P2 receptors by eATP [Fig. 1]. Facultative and obligate intracellular microbes survive inside the host cell, where they acquire the nutrients needed for microbial replication and propagation of the infection [53,57]. For intracellular pathogens, it is advantageous to be able to prevent or delay apoptosis of the host cell in order to promote survival and growth of the pathogen. In this regard, several pathogens have evolved different mechanisms to promote their own growth inside the host cell, for example Mycobacterium tuberculosis, Chlamydia trachomatis, Leishmania species, Toxoplasma gondii and also P. gingivalis.
Unlike activation of surface CD95 (Fas receptor), leading to host cell apoptosis, or complement-mediated host cell lysis, neither of which induces mycobacterial death [50], eATP treatment of macrophages enhances their antimicrobial properties in a P2X7 receptor-dependent manner. We and others showed that eATP-related killing of M. tuberculosis and C. trachomatis within human and murine macrophages is mediated by phospholipase D, which is associated with mobilization of intracellular Ca2+ and, consequently, lysosomal fusion and acidification of the phagosomes containing the pathogen [44,48,49]. In another study, it was shown that adenine nucleotides (adenosine, AMP and ATP) can inhibit C. trachomatis growth in epithelial cells [58]. Moreover, millimolar eATP concentrations inhibit chlamydial infection via the P2X7 receptor in macrophages [48], while micromolar eATP concentrations reversibly inhibit chlamydial infection via the P2X4 receptor in epithelial cells [58].
Infection with the protozoan parasite Leishmania amazonensis is also controlled by eATP treatment. Murine macrophages infected with L. amazonensis showed enhanced P2X7 receptor expression in vitro and were more responsive to eATP activation in vitro and in vivo, where cells from established cutaneous lesions were more sensitive to eATP than cells from uninfected mice [46]. Additionally, elimination of Leishmania via eATP ligation of the P2X7 receptor involves leukotriene B4 production in a 5-lipoxygenase-dependent manner [45,46]. Another study showed that UTP, but not UDP, inhibits L. amazonensis infection in murine macrophages, inducing morphological damage in the intracellular parasite, promoting apoptosis of macrophages and production of reactive oxygen and nitrogen species, and increasing intracellular Ca2+ concentrations [59]. These effects are believed to occur via the P2Y2 and P2Y4 receptors following their upregulation during L. amazonensis infection. Periodate-oxidized ATP also induces morphological changes directly in the parasite, dampening the attachment and entry of the protozoa into murine macrophages [60].
Another intracellular protozoan parasite affected by purinergic signaling is T. gondii. eATP treatment of infected macrophages promotes T. gondii elimination via the P2X7 receptor through acidification of the parasitophorous vacuole and ROS production [47,51]. Additionally, our group recently demonstrated that UTP and UDP treatment of murine macrophages infected with T. gondii promotes 90% elimination of the parasite, without inducing NO, ROS or apoptosis in the host cell [61]. Interestingly, UTP and UDP induced parasite egress from the host cell via P2Y2, P2Y4 and P2Y6, thus compromising the infectivity and replication of the egressed parasites.

[Fig. 1 caption, right panel: eATP released from stressed, dying or infected cells binds to P2 receptors (for example, P2X7) and leads to pathogen elimination through several pathways: (1) host cell death; (2) inflammasome activation and IL-1β secretion; (3) ROS and NO production; or (4) phospholipase D activation, promoting lysosome and phagosome fusion. Importantly, ecto-nucleotidases (E-NTPDases) from several pathogens inhibit pathogen elimination by eATP cleavage and/or favor microbial survival by generating extracellular adenosine.]
The expression of the P2X2, P2X4, P2X5, P2X6 and P2X7 receptors has also been reported in GECs, and eATP, unlike other extracellular nucleotides such as UTP and ADP, induced apoptosis of these cells [62]. These studies also demonstrated that P. gingivalis infection inhibits eATP-induced apoptosis in GECs through a mechanism that depends on a homolog of nucleoside-diphosphate kinase (NDK), which is secreted by P. gingivalis [62,63]. NDK is a ubiquitous enzyme that is highly conserved in prokaryotes and eukaryotes, including plants [64,65]. The main role of NDK is to catalyze the transfer of terminal phosphate groups from 5′-triphosphate to 5′-diphosphate nucleotides [64–66]. Thus, NDK has the ability to hydrolyze (and also synthesize) any NTP/dNTP [65,67]. In this context, the NDK of P. gingivalis can hydrolyze eATP after infection of human GECs. The enzyme can thus (1) diminish eATP-induced apoptosis [62]; (2) inhibit eATP-induced ROS via P2X7/NADPH oxidase signaling [63]; and (3) attenuate eATP-induced inflammasome activation, thereby impairing IL-1β release [68]. In contrast, the presence of NDK from Pseudomonas aeruginosa [69] or from P. gingivalis during infection of murine macrophages contributes to pro-IL-1β production/stability and induces IL-1β secretion [unpublished data; Fig. 2], thus demonstrating differences in the role of P. gingivalis NDK after infection of different cell types.
Adenosine effects on P. gingivalis infection
Adenosine is also considered a danger signal, which can be released from stressed, necrotic or dying cells, or can be generated via dephosphorylation of eATP by the ecto-nucleotidases CD39 and CD73, as reviewed elsewhere [41,70]. Consistent with the role of adenosine in downregulating immune responses and inflammation [37,52,53,55,56] [Fig. 1], a recent study found that an adenosine receptor agonist, when added exogenously in vitro, stimulates the growth of P. gingivalis in GECs [54]. This work demonstrated that GECs express all the adenosine receptors, and that stimulation of the A2A receptor with the specific agonist CGS21680 could enhance proliferation of P. gingivalis. On the other hand, the high-affinity adenosine receptor agonist NECA, or adenosine itself, could reversibly inhibit growth of the intracellular bacterium C. trachomatis via A2B receptors [71]. Adenosine is also important for the survival of protozoan parasites, such as the obligate intracellular pathogen T. gondii. CD73-generated extracellular adenosine is important for T. gondii survival because this pathogen cannot make its own [53]. Thus, CD73-deficient mice infected with T. gondii have lower parasite levels and are protected from chronic infection compared with wild-type mice. Since mice lacking adenosine receptors show no change in cyst formation, CD73 expression is thought to promote T. gondii differentiation and cyst formation by a mechanism dependent on adenosine generation but independent of adenosine receptor signaling [53], suggesting that the parasite utilizes adenosine as a substrate for its own metabolism. Thus, the effects of adenosine on cells infected with intracellular pathogens depend on the specific pathogen and may depend on which adenosine receptor is stimulated [Fig. 1].

[Fig. 2 caption: Schematic of the effects of P1 receptors, P2 receptors, and ecto-nucleotidases on P. gingivalis-induced inflammasome activation. (1) As the first signal required for inflammasome activation, P. gingivalis is recognized by the host cell through TLR2, and (2) reaches the cytosolic compartment. (3) Recognition of the bacterium via TLR2 promotes translocation of the transcription factor NF-κB to the nucleus. (4) Once in the nucleus, NF-κB induces pro-inflammatory cytokines and transcription of inflammasome components (5) and promotes pro-IL-1β synthesis. (6) Concomitantly, P. gingivalis secretes NDK, which may be released to the extracellular compartment. (7) In addition, after infection, this oral bacterium induces ATP release from the host cell to the extracellular space. As a second signal for inflammasome activation, (8) ATP ligation to the P2X7 receptor can promote K+ efflux, ROS generation and/or lysosome damage, (9) which can activate the NLRP3 inflammasome. The NLRP3 inflammasome converts procaspase-1 to mature caspase-1, and (10) this enzyme, in turn, proteolytically processes pro-IL-1β into IL-1β, (11) which is released from the cell. (12) NDK from P. gingivalis hydrolyzes eATP, generating metabolites such as ADP, (13) which are recognized and cleaved by host ecto-nucleotidases such as CD39, generating AMP. (14) AMP binds to CD73 on the host cell, which generates adenosine. (15) Adenosine interaction via adenosine receptors (16) can promote pro-IL-1β stability and thus support IL-1β secretion.]

[Biomedical Journal 39 (2016) 251–260]
Because of the role of adenosine in facilitating the survival of some microorganisms, some pathogens have evolved mechanisms to stimulate extracellular adenosine generation independently of the host. Staphylococcus aureus produces adenosine synthase A (AdsA) as a virulence factor, a cell-wall-anchored enzyme that allows the bacteria to escape phagocytic clearance and favors the formation of organ abscesses [52]. Moreover, the same study showed that other prominent bacteria from the oral cavity, including Enterococcus faecalis and Streptococcus mutans, possess uncharacterized homologs of adenosine synthase.
Furthermore, some pathogens have evolved extracellular nucleotide-hydrolyzing enzymes that mimic the ecto-nucleotidases expressed by the host [Fig. 1]. For example, the surface of Trypanosoma cruzi expresses a Mg2+-dependent ecto-ATPase activity, which is 20 times greater in trypomastigotes than in epimastigotes, suggesting a role for this enzyme in promoting infection in the vertebrate host [72]. A comparison of the Mg2+-ecto-ATPase activities of the three forms of T. cruzi showed that the noninfective epimastigotes are less efficient at hydrolyzing eATP than the infective trypomastigote and amastigote stages [73]. Another parasite with a Mg2+-dependent ecto-ATPase is L. amazonensis, in which avirulent promastigotes are less efficient than virulent promastigotes at hydrolyzing eATP, suggesting that virulent strains acquire adenosine and use it to their advantage [74].
Interestingly, some bacteria secrete ATP during their growth [75], which may play a role in bacterial physiology. In the oral cavity, it was found that Aggregatibacter actinomycetemcomitans, but not P. gingivalis, Prevotella intermedia, or F. nucleatum, secretes ATP into the culture supernatant during its growth [76]. Infection in general can induce release of ATP from the host cell [77], and P. gingivalis infection stimulates ATP release from GECs [63], THP-1 macrophages [78], and murine macrophages (unpublished data). It is tempting to speculate that eATP secreted during growth by some pathogens, such as the common bacteria Escherichia coli, Staphylococcus and Acinetobacter, may support the survival of other pathogens during infection, for example L. amazonensis, T. gondii and T. cruzi, owing to their ability to hydrolyze eATP and generate the adenosine that is necessary for their survival.
Inflammasomes and purinergic signaling associated with P. gingivalis infection

Inflammasomes, discovered in 2002, are multi-protein complexes assembled in the host cell in response to infection or cellular stress, leading to a type of cell death called pyroptosis and/or to the maturation and secretion of pro-inflammatory cytokines such as IL-1β and IL-18 [79–81]. Pyroptosis is a non-homeostatic and lytic cell death dependent on caspase-1 and/or caspase-11 activation [80,82]. The P2X7 receptor was shown to activate NLRP3 inflammasomes [83]; and recently, caspase-11-induced pyroptosis was shown to require pannexin-1 channels and the P2X7 receptor [84]. This kind of cell death shares similarities with necrosis, such as an increase in cytoplasmic volume and rupture of the plasma membrane. In the context of inflammasome activation, pro-inflammatory cytokines are important for eliminating pathogens [4,82]. Pyroptosis is important because cytokines, chemokines and DAMPs are released to the extracellular compartment, and also because this type of cell death exposes intracellular bacteria to extracellular immune surveillance, allowing their destruction by antimicrobial peptides, immunoglobulins, and the complement system, and their uptake by immune cells [79]. IL-1β affects virtually all cells and organs of the body and is one of the most important cytokines mediating autoimmunity, infections and degenerative diseases [85]. This cytokine acts in the central nervous system as an endogenous pyrogenic agent, and it can also induce inflammation, leukocyte recruitment, and Th17-profile immune responses [85,86]. Inflammasomes are usually studied in immune cells, but they can also be activated in several types of epithelial cells, including GECs [87].
Canonical inflammasomes convert procaspase-1 into the catalytically active enzyme caspase-1, whereas a still incompletely defined non-canonical inflammasome promotes activation of procaspase-11 [82,88]. Several canonical inflammasome complexes have been identified, depending on the receptor that recognizes the PAMPs (for example, NLRP1, NLRP3, AIM2, NLRC4), while the non-canonical inflammasome can be activated by cytosolic LPS derived from Gram-negative bacteria. One of the best-characterized inflammasomes contains the NLR member NLRP3, the adaptor protein apoptosis-associated speck-like protein containing a CARD (ASC), and the protease caspase-1. The NLRP3 inflammasome can be activated by different stimuli, such as bacterial, viral and fungal pathogens, pore-forming toxins, crystals, silica and DAMPs (for example, eATP) [82,89]. Activation of the canonical NLRP3 inflammasome typically requires two signals: (1) a PAMP, such as LPS, leading to activation of NF-κB and upregulation of genes encoding pro-inflammatory cytokines, chemokines and proteins of the inflammasome platform; and (2) a DAMP, such as eATP, which induces inflammasome activation after ligation to the P2X7 receptor [39]. Once activated, these complexes promote activation of the protease caspase-1, which cleaves pro-IL-1β and pro-IL-18 into their active forms, IL-1β and IL-18.
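The two-signal requirement can be written as a one-line predicate. This boolean sketch is our own schematic of the logic described above; it deliberately ignores signal-2 stimuli other than eATP and the non-canonical pathway:

```python
def mature_il1b_secreted(pamp_priming: bool, damp_eatp: bool) -> bool:
    """Signal 1 (a PAMP such as LPS, via NF-kB) primes pro-IL-1beta synthesis;
    signal 2 (a DAMP such as eATP, via P2X7) drives inflammasome assembly and
    caspase-1 activation. Mature IL-1beta is secreted only with both signals."""
    pro_il1b_available = pamp_priming
    caspase1_active = damp_eatp
    return pro_il1b_available and caspase1_active

# e.g. P. gingivalis infection of GECs supplies signal 1, but secretion
# still requires exogenous eATP as signal 2 [93]:
print(mature_il1b_secreted(pamp_priming=True, damp_eatp=False))  # False
print(mature_il1b_secreted(pamp_priming=True, damp_eatp=True))   # True
```

This AND-gate view also frames the NDK discussion above: by hydrolyzing eATP, P. gingivalis NDK removes signal 2 and thereby dampens IL-1β release.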
As reviewed elsewhere [79], inflammasome activation occurs in response to microbial invasion and is important for controlling infections. The murine NLRP1 inflammasome recognizes the cytosolic Bacillus anthracis lethal toxin, and mutations in the Nlrp1b gene confer susceptibility to anthrax lethal toxin-induced macrophage death. A defective NLRP3 inflammasome renders mice susceptible to Candida albicans and Aspergillus fumigatus infections [79]. Besides stimulating secretion of IL-1β and IL-18, the induction of pyroptosis is a critical in vivo mechanism by which the NLRC4 inflammasome clears flagellin-expressing bacteria, such as Legionella pneumophila and Burkholderia thailandensis. Moreover, Francisella tularensis activates the AIM2 inflammasome, as shown by the increased susceptibility of caspase-1-deficient mice to infection with this pathogen [79].
IL-1β regulates innate immune responses and is critical for host defense against bacterial infection. However, excessive production of IL-1β or of inflammasome components [78], as well as increased P2X7 receptor and NLRP3 mRNA levels [90,91], are linked to periodontal disease in human gingival tissues. Recently, our group demonstrated that, in murine macrophages, eATP-induced IL-1β secretion is impaired by P. gingivalis fimbriae in a P2X7-dependent manner [92]. In human macrophages, P. gingivalis induces IL-1β secretion and inflammatory cell death via caspase-1 activation (pyroptosis). Moreover, IL-1β secretion and pyroptotic cell death require activation of both the NLRP3 and AIM2 inflammasomes by this oral pathogen. P. gingivalis infection induces ATP release from macrophages, which mediates NLRP3 inflammasome activation via P2X7 receptor stimulation and lysosomal damage [78]. In addition, GECs express the inflammasome components NLRP3, NLRC4 and NLRP1 [93]. P. gingivalis stimulated expression of IL-1β mRNA and intracellular accumulation of pro-IL-1β, although IL-1β secretion required the addition of eATP in vitro. In GECs, eATP, but not P. gingivalis alone, induced caspase-1 activation [93]. Thus, P. gingivalis infection can provide the signals necessary for synthesis of pro-IL-1β, but an exogenous danger signal, such as eATP, must activate the inflammasome to allow the infected cell to secrete mature IL-1β. eATP induces ROS production through a complex consisting of the P2X4 receptor, the P2X7 receptor and pannexin-1, and P2X7-mediated ROS production can activate the NLRP3 inflammasome and caspase-1 [94]. Interestingly, P. gingivalis infection in GECs partially reduces NLRP3 mRNA levels compared with uninfected GECs [93], suggesting that P. gingivalis can inhibit inflammasome components to promote its own survival. Consistent with the idea that P. gingivalis suppresses immune responses, P. gingivalis also suppresses inflammasome activation induced by infection with another oral bacterium, F. nucleatum. This repression affects IL-1β and IL-18 processing and cell death in both human and murine macrophages. F. nucleatum activates IL-1β secretion via the NLRP3 inflammasome, but when macrophages are co-infected with F. nucleatum and P. gingivalis, activation of the inflammasome and caspase-1, as well as IL-1β secretion, are inhibited by P. gingivalis [25]. Since inflammasome activation is important for controlling infection, and P. gingivalis-induced inflammasome activation is linked to the induction of periodontitis, it remains unclear whether deficiencies in inflammasome activation would favor P. gingivalis infection or, instead, impair the periodontitis and alveolar bone loss induced by this periodontal pathogen.
Concluding remarks
Purinergic signaling can up- or down-modulate immune responses depending on the danger signals present, the purinergic receptor that is activated, and the cell type involved [Fig. 1]. A large body of evidence demonstrates that purinergic signaling affects the clearance or persistence of infection by P. gingivalis and other pathogens. P1 as well as P2 receptors modulate P. gingivalis infection as a function of the danger signal involved. Moreover, ecto-enzymes from the host cell or from pathogens can modulate the course of infection by influencing the availability of nucleotides in the microenvironment. Finally, the P2X7 receptor is involved in the activation of inflammasomes, and its activation can control different infections. Because purinergic signaling can modulate different intracellular infections, including those with the oral pathogen P. gingivalis, this field represents an important focus for future research on the survival and elimination of different pathogens.
Conflicts of interest
All authors have declared that there are no conflicts of interest.
Primary anthropogenic aerosol emission trends for China, 1990–2005
An inventory of anthropogenic primary aerosol emissions in China was developed for 1990–2005 using a technology-based approach. Taking into account changes in the technology penetration within industry sectors and improvements in emission controls driven by stricter emission standards, a dynamic methodology was derived and implemented to estimate inter-annual emission factors. Emission factors of PM2.5 decreased by 7%–69% from 1990 to 2005 in different industry sectors of China, and emission factors of TSP decreased by 18%–80% as well, with the measures of controlling PM emissions implemented. As a result, emissions of PM2.5 and TSP in 2005 were 11.0 Tg and 29.7 Tg, respectively, less than what they would have been without the adoption of these measures. Emissions of PM 2.5, PM10 and TSP presented similar trends: they increased in the first six years of 1990s and decreased until 2000, then increased again in the following years. Emissions of TSP peaked (35.5 Tg) in 1996, while the peak of PM 10 (18.8 Tg) and PM2.5 (12.7 Tg) emissions occurred in 2005. Although various emission trends were identified across sectors, the cement industry and biofuel combustion in the residential sector were consistently the largest sources of PM 2.5 emissions, accounting for 53%–62% of emissions over the study period. The non-metallic mineral product industry, including the cement, lime and brick industries, accounted for 54%–63% of national TSP emissions. There were no significant trends of BC and OC emissions until 2000, but the increase after 2000 Correspondence to: K. B. He (hekb@tsinghua.edu.cn) brought the peaks of BC (1.51 Tg) and OC (3.19 Tg) emissions in 2005. Although significant improvements in the estimation of primary aerosols are presented here, there still exist large uncertainties. 
More accurate and detailed activity information and emission factors based on local tests are essential to further improve emission estimates, especially for the brick and coke industries and for coal-burning stoves and biofuel usage in the residential sector.
Introduction
Understanding China's anthropogenic aerosol emission trends has considerable scientific importance due to the broad impact of aerosols on climate and air quality. Human-made aerosols affect the climate system directly by enhancing the scattering and absorption of solar radiation, and indirectly by providing the condensation nuclei for cloud drops and ice crystals (Ramanathan et al., 2001; Ramanathan and Carmichael, 2008). Atmospheric aerosol trends in China have been suggested as possible causes for many of the fundamental changes observed in regional climate. These include the decrease of surface temperature (Qian and Giorgi, 2000; Giorgi et al., 2002, 2003; Menon et al., 2002; Qian et al., 2003; Huang et al., 2006), changes in surface solar radiation trends (Kaiser and Qian, 2002; Che et al., 2005; Qian et al., 2006; Streets et al., 2006a, 2008, 2009; Xia et al., 2007), changes in cloud properties (Kawamoto et al., 2006; Qian et al., 2006), and the reduction of precipitation (Giorgi et al., 2003; Zhao et al., 2006; Huang et al.). Published by Copernicus Publications on behalf of the European Geosciences Union.
Aerosols degrade air quality and visibility, and damage human health (Pope et al., 1995). Heavy aerosol loadings have been reported throughout China, from the coast to the interior (e.g., He et al., 2001; Ho et al., 2003; Wang et al., 2006b; Cao et al., 2007; Li et al., 2007; Zhang et al., 2008a). Satellite observations have also indicated the possibility of significant health hazards due to aerosol pollution throughout the country (Carmichael et al., 2009). In recent Atmospheric Brown Cloud (ABC) observations, a number of Chinese mega-cities were identified as "aerosol hot spots" from satellite observations (Ramanathan et al., 2007). To date, particulate matter less than 10 µm in diameter (PM10) has been the main atmospheric pollutant exceeding the National Ambient Air Quality Standard (NAAQS) in major Chinese cities, and has been the focus of local and national government control efforts (He et al., 2002; Hao and Wang, 2005; Chan and Yao, 2008). Aerosols can also affect regional air quality through long-range transport. Modeling studies have indicated that Beijing's PM concentrations are significantly enhanced by anthropogenic emissions from surrounding provinces (Chen et al., 2007; Streets et al., 2007). It has even been argued that aerosol concentrations within the United States are enhanced by Asian emissions through trans-Pacific transport (Heald et al., 2006; Dunlea et al., 2009). In addition to their atmospheric effects, calcium and magnesium in aerosols also play important roles in the soil acidification process in China (Zhao et al., 2007).
A primary aerosol emission inventory for China with inter-annual trends is essential for both the atmospheric science community and China's stakeholders. Primary aerosol emission inventories that include data on particulate size ranges and inter-annual trends are available for certain developed countries through their national emission inventory systems, e.g., the USA (USEPA, 2004), Canada (EC, 2007), and most European countries (UNECE, 2003; Vestreng, 2006). This is not the case for developing countries like China. China's Ministry of Environmental Protection (MEP) reports annually the national total suspended particulate (TSP) emissions in the two categories of "smoke" (generated from combustion) and "dust" (generated from mechanical impact and grinding during industrial processes), but these statistics only include emissions from large industries (ECCEY, 1992–2006). Furthermore, sectoral information and the spatial distribution of emissions are not provided, so these reported statistics are insufficient for comprehensive scientific study.
China's carbonaceous aerosol emissions have previously been estimated within a national inventory (Streets et al., 2001, 2008; Streets and Aunan, 2005; Cao et al., 2006) or as part of regional (Streets et al., 2003; Ohara et al., 2007; Klimont et al., 2009) and global (Cooke et al., 1999; Bond et al., 2004, 2007) inventories, and emission trends have also been reported by some of these studies (i.e., Streets et al., 2008; Ohara et al., 2007). A few studies on emissions of base cations indicated that China's anthropogenic emissions of Ca and Mg might be larger than natural sources (Zhu et al., 2004), although significant emissions of mineral dusts come with sand storms. In our previous study, using a technology-based approach, we presented the first comprehensive estimates of primary aerosol emissions in China for the year 2001, covering three particulate size fractions, i.e., TSP, PM10 and fine particulate matter less than 2.5 µm in diameter (PM2.5), and four major components, i.e., black carbon (BC), organic carbon (OC), Ca and Mg (Zhang et al., 2006, 2007b). Using the same methodology, and as part of the INTEX-B Asian emission inventory, we then updated the estimates for the year 2006 (Zhang et al., 2009). However, the temporal coverage of the above work has been limited, and bottom-up inventory studies have not yet been used to gain insights into China's anthropogenic aerosol emission trends.
The purpose of this paper is to rectify this situation by developing a comprehensive view of China's anthropogenic aerosol emission trends using a bottom-up methodology. In this work, we apply model frameworks similar to those described in Zhang et al. (2006, 2007b), and we use a dynamic methodology similar to that of Zhang et al. (2007a) to reflect the dramatic change in China's aerosol emissions driven by energy growth and technology renewal. The dynamic methodology used in this study is detailed in Sect. 2. The inter-annual variations of net aerosol emission factors (EFs) derived from the dynamic methodology are then given in Sect. 3. The results, including inter-annual emissions of TSP, PM10, PM2.5, BC, OC, Ca and Mg, and gridded emissions, are reported in Sect. 4. We compare our estimates with other bottom-up and top-down studies in Sect. 5, and also discuss the uncertainties associated with our analysis in that section.
Methodology
To date, estimating primary aerosol emissions for China remains a challenge and is much more difficult than for other gaseous pollutants. Firstly, in addition to emissions from energy consumption, primary aerosols are widely emitted from various industrial processes and construction activities, some of which are fugitive, which makes accurate quantification of emissions from these sources very difficult. Secondly, the net aerosol emission rate from a specific sector is closely related to the degree of penetration of control technologies within that sector; an understanding of the utilization of various control technologies is therefore necessary for meaningful EF estimates. Finally, and most importantly for emission trends, net EFs can change dramatically in only a few years in China because new technologies are continually coming into the market. For example, the building of new, large coal-fired power plants to replace or augment older, smaller plants has dramatically altered the balance of power plant technologies in use, and has reduced the average NOx EF of the whole power sector by 16% in just 10 years (Zhang et al., 2007a). The same could be true for aerosol emissions.
Here we develop a dynamic, technology-based methodology to estimate primary aerosol emissions in China. A spreadsheet model was established to calculate the emissions. The geographical extent covers 31 provinces of mainland China (emissions from Hong Kong and Macao are not included because the detailed technology information for these cities is inadequate to support our analysis), and the temporal scope is 1990–2005. The key innovation of this method is the estimation of EFs on a year-by-year basis through careful examination of the uptake of new control technologies during the period, instead of using fixed EFs for all years.
Model structure and calculation method
Emissions were calculated from the combination of activity rate, technology distribution, unabated EFs, the penetration of emission control technologies and the removal efficiency of those technologies, using an approach similar to that of Klimont et al. (2002) and Zhang et al. (2007b). The emissions were estimated for three size fractions: PM2.5, PM2.5-10 (PM with diameter greater than 2.5 µm but less than 10 µm, coarse particles), and PM>10 (PM with diameter greater than 10 µm). The basic equation is:

E_{y,z} = Σ_i Σ_j Σ_k Σ_m A_{i,j,k,m,z} × X_{i,j,k,m,z} × F_{j,k,m,y,z}    (1)

For a given combustion/production technology m in sector j, the final EF for diameter range y was estimated by:

F_{j,k,m,y,z} = F_{TSP,j,k,m} × f_{y,j,k,m} × Σ_n C_{n,z} × (1 − η_{n,y})    (2)

where i represents the province (municipality, autonomous region); j represents the economic sector; k represents the fuel or product type; y represents the diameter range of PM; z represents the year; m represents the type of combustion and process technology; n represents the PM control technology; E_{y,z} is the emissions of PM in diameter range y in year z; A is the activity rate, such as fuel consumption or material production; X_m is the fraction of fuel or production for a sector consumed by a specific technology m; F is the net EF after abatement by control devices; F_TSP is the unabated EF of TSP before emission control; f_y is the mass proportion of PM in diameter range y relative to total PM; C_{n,z} is the penetration of PM control technology n in year z, with Σ_n C_n = 1; and η_{n,y} is the removal efficiency of control technology n for PM in diameter range y.
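The bookkeeping in Eqs. (1)–(2) can be sketched in a few lines of code. The sketch below is only illustrative: the activity levels, technology shares, removal efficiencies and EFs are made-up placeholder numbers, not values from this inventory.

```python
def net_ef(ef_tsp, f_y, controls):
    """Eq. (2): net EF for one size fraction y, from the unabated TSP EF,
    the mass fraction f_y of that size range, and a list of
    (penetration C_n, removal efficiency eta_n) pairs for control devices."""
    assert abs(sum(c for c, _ in controls) - 1.0) < 1e-9  # sum_n C_n = 1
    return ef_tsp * f_y * sum(c * (1.0 - eta) for c, eta in controls)

def emissions(activities):
    """Eq. (1): sum A * X * F over all source/technology combinations.
    Each entry is (activity A, technology share X, net EF F)."""
    return sum(a * x * f for a, x, f in activities)

# Hypothetical example: one sector with two combustion technologies,
# one of which has an unabated EF twice as high as the other.
f_pm25 = net_ef(ef_tsp=10.0, f_y=0.2, controls=[(0.7, 0.95), (0.3, 0.5)])
e_pm25 = emissions([(1000.0, 0.6, f_pm25), (1000.0, 0.4, 2.0 * f_pm25)])
```

In the full model the same two identities are evaluated per province, sector, fuel, technology and year, which is what makes the EFs dynamic: as the (C_n, η_n) pairs shift toward more efficient devices over time, the net EF for the same technology falls.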
In addition to total aerosol emissions, we also estimated the emissions of several chemical components of aerosols: BC, OC, Ca and Mg. EFs for BC and OC were calculated as the mass ratio of BC and OC to PM2.5 EFs, with the assumption that control technologies have the same removal efficiency for PM2.5, BC and OC. This assumption is not strictly realistic because the removal efficiencies for PM2.5 and carbonaceous particles usually differ. For example, some recent tests (Roden et al., 2006, 2009) showed that the BC/TC ratio of flue gas from traditional wood stoves is 0.2, whereas that from improved stoves with a chimney is 0.5. The difference in removal efficiency is mainly attributed to combustion conditions, which affect the formation of BC and OC in different ways. However, to date we lack adequate local tests to quantify the mass ratio of BC or OC to PM2.5 before and after control technologies. We therefore have no choice but to assume the same removal efficiency for PM2.5, BC and OC, despite the additional uncertainty this may introduce. Similarly, EFs for Ca and Mg were determined by their fraction in TSP emissions.
Emission sources are classified into three groups: stationary combustion, industrial processes, and mobile sources. The stationary combustion sources involve three sectors (power plants, industry, and residential) and seven types of fuel (coal, diesel, kerosene, fuel oil, gas, wood and crop residues). The industrial process sources cover 22 products/processes in the metallurgical, non-metallic mineral product and chemical industries, of which cement production, coke production and iron and steel production are the most important. The mobile emission sources include seven types of on-road mobile sources: light-duty gasoline vehicles (LDGV), light-duty gasoline trucks (LDGT1), mid-duty gasoline trucks (LDGT2), light-duty diesel vehicles (LDDV), heavy-duty gasoline trucks (HDGV), heavy-duty diesel trucks (HDDV), and motorcycles (MC); and six types of off-road mobile sources: rural vehicles, tractors, construction equipment, farming equipment, locomotives and vessels.
Activity rates (A)
We followed our previous approach to derive activity data from a wide variety of sources, with critical examination of data quality (Streets et al., 2006b; Zhang et al., 2007a). Generally, fuel consumption by sector and industrial production by product can be accessed from various statistics at the provincial level. In this study, fuel consumption in stationary combustion by sector and by province was derived from the China Energy Statistical Yearbook (except diesel, see below) (CESY, National Bureau of Statistics, 1992–2007); industrial production data were taken from official statistics (National Bureau of Statistics, 1991–2006a) and many unofficial statistics from industry associations (CISIA, 1995–2007; CBTIA, 2006; CLIA, 2006). Diesel consumption was broken down into three categories: industrial boilers, on-road vehicles, and off-road vehicles and machinery, following the method described in Zhang et al. (2007a) (see Sect. 3.3 of that paper for details). For on-road vehicles, the calculation of gasoline and diesel consumption by vehicle type was further refined using a fuel consumption model developed by He et al. (2005). For off-road vehicles and machinery, fuel consumption by tractors and rural vehicles was estimated from their population, fuel economy and annual travel mileage; diesel consumption by farming and construction machinery was estimated from their total power (National Bureau of Statistics, 1991–2006b) and their average number of working hours (Nian, 2004); and diesel consumption of trains and vessels was estimated from passenger and freight turnover for railways and inland waterways, respectively, fuel economy, and the distribution of transport modes (YHCTC, 1991–2006).
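The two off-road accounting identities described above (population × mileage × fuel economy for vehicles, and rated power × working hours × specific consumption for machinery) amount to simple products. A minimal sketch follows; all numbers are placeholders for illustration, not inventory values.

```python
def fleet_fuel_use(population, annual_km, litres_per_100km):
    """Fuel use of a vehicle category (litres/year) from fleet
    population, annual travel mileage and fuel economy."""
    return population * annual_km * litres_per_100km / 100.0

def machinery_fuel_use(total_power_kw, annual_hours, kg_per_kwh):
    """Diesel use of farming/construction machinery (kg/year) from
    total rated power and average annual working hours."""
    return total_power_kw * annual_hours * kg_per_kwh

# Placeholder inputs: one million tractors, 5000 km/yr, 12 L/100 km.
tractor_litres = fleet_fuel_use(1_000_000, 5_000, 12.0)
# Placeholder inputs: 50 GW of machinery, 500 h/yr, 0.25 kg diesel/kWh.
machinery_kg = machinery_fuel_use(50_000_000, 500, 0.25)
```

These per-category totals then feed the activity term A in Eq. (1) for the corresponding off-road source categories.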
Technology distributions (X)
Unabated PM emissions are largely determined by the technology used for combustion or in the industrial process. Over recent decades, the balance of technologies used has changed considerably in China. For instance, the percentage of cement produced by precalciner kilns increased from 20% in the mid-1990s to 65% in 2008 (Lei et al., 2011). The distribution of combustion technologies in each sector and of processing technologies for each industrial product is generally not available from national government statistics. We therefore collected these data from a wide range of published and unpublished statistics provided by various industrial associations and from technology reports. The detailed data sources for the main sectors are listed in Table 1.
Unabated EFs (F_TSP and f_y)
According to Eq. (2), net EFs for PM were determined by unabated EFs for TSP, the size distribution of PM, the penetration of PM control technologies and their removal efficiencies. Unabated EFs for TSP and the size distribution were considered constant for each specific technology in stationary emission sources, as listed in Table 2. Most of this information was derived from available measurements in China or from estimates based on the actual technology level and practice (SEPA, 1996a; Zhang et al., 2000, 2006; Lei et al., 2011). EFs for similar activities from the US AP-42 database (USEPA, 1995) and the RAINS-PM model (Klimont et al., 2002) were used where local information was lacking. The control measures for PM emissions from on-road vehicles differ from those for stationary sources. The EFs of each type of on-road vehicle under each emission standard were derived from Zhang et al. (2007b) and are listed in Table 3.
Penetration of PM control technologies (C)
There is little statistical information on the penetration of PM control technologies in China's emission sources, except for the power sector (discussed in Sect. 3.1). Recent studies (Klimont et al., 2009; Zhang et al., 2009) have estimated penetration based on legislation. Following this idea, where data were lacking we used an alternative method to estimate the penetration of PM control technologies. We considered the Chinese government's new emission standards to be the driving force for the implementation of advanced control technologies. Assuming that emission sources comply with the emission standards in force on the day they are built or retrofitted (stationary sources) or come onto the market (mobile sources), the typical penetration of PM control technologies in new emission sources was estimated for each year, based on the threshold value of the applicable emission standard (Wang et al., 2006a). The penetration of PM control technologies in each source category was thereby estimated for each year. An example of applying this approach to estimate inter-annual EFs in the cement industry is described in Sect. 2.3 of a related paper (Lei et al., 2011).
The emission standards considered in this work are listed in Table 4.
Fugitive dust control technologies were categorized into "normal practice" and "good practice". Klimont et al. (2002) summarized the removal efficiencies of these technologies based on practices in Europe and the US, but sub-optimal operation of the control devices would lead to lower removal efficiencies. The removal efficiencies that we used are listed in Table 5; they are mostly taken from the estimates of Klimont et al. (2002), with some changes based on local emission source tests made in China (Yi et al., 2006b).
EFs for BC, OC, Mg and Ca
BC and OC, formed during incomplete combustion, are mainly concentrated in the fine fractions. The United States Environmental Protection Agency (USEPA) has compiled the mass ratios of BC and OC in PM2.5 for major sources in SPECIATE, a source profile database. But there is little systematic research on source profiles of PM2.5, especially from boilers and kilns, in China. In this study, for most industrial process sources we used data from the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) model, deriving the mass ratios of BC and OC (Kupiainen and Klimont, 2004, 2007) in PM2.5 (Klimont, 2002). Note that although the emission factors in GAINS have recently been updated, they still rely on many assumptions and little measurement data.
For most stationary combustion sources and mobile sources, we used the mass ratios of BC and OC in PM1 from Bond et al. (2004) and converted them into ratios in PM2.5 by the following equation:

F_{BC/OC} = f_{BC/OC} × f_1 × EF_10 / EF_2.5    (3)

where F_{BC/OC} represents the mass ratio of BC or OC in PM2.5; f_{BC/OC} refers to the mass ratio of BC or OC in PM1 from Bond et al. (2004); f_1 refers to the mass ratio of PM1 in PM10 from Bond et al. (2004); and EF_10, EF_2.5-10 and EF_2.5 are the unabated EFs of PM10, PM2.5-10 and PM2.5, respectively, as listed in Table 2 (EF_10 = EF_2.5 + EF_2.5-10). The exception was residential coal stoves, because emission tests for fine PM, BC and OC have been conducted by Chinese researchers in recent years (Chen et al., 2005, 2006, 2009; Zhang et al., 2008b; Zhi et al., 2008, 2009). As such, we used the average EFs derived from the latest BC and OC emission test results (Chen et al., 2009). Although Li et al. (2009) calculated EFs for BC and OC from biofuel combustion based on local tests in China, their calculated BC/OC ratio is much higher than published results from other research. They attributed the high ratio both to the tested stoves having a better oxidization atmosphere, and hence improved combustion efficiency, and to the protocol used in BC and OC analysis. Since there is no evidence that stoves typically used in China have the relatively high combustion efficiency of Li et al.'s (2009) study, we did not use their BC and OC emission factors. The mass ratios of BC and OC to PM2.5 are listed in Table 6. Emissions of Ca and Mg in PM come from coal burning and from the raw materials used in industrial processes. Zhu et al. (2004) investigated the mass percentages of Ca and Mg in fly ash from coal combustion and in the raw materials used in non-metallic mineral product industries by province. Here we use the mass ratios of Ca and Mg derived from their study, as listed in Table 7.
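The PM1-to-PM2.5 ratio conversion can be written out directly, assuming (as the paragraph above does) that essentially all BC/OC resides in the PM1 fraction. The numeric inputs below are placeholders chosen for illustration, not ratios from Bond et al. (2004).

```python
def ratio_in_pm25(ratio_pm1, pm1_in_pm10, ef_pm25, ef_pm25_10):
    """Convert a BC (or OC) mass ratio reported relative to PM1 into a
    ratio relative to PM2.5.  ratio_pm1 is f_BC/OC, pm1_in_pm10 is f_1,
    and ef_pm25 / ef_pm25_10 are unabated EFs of PM2.5 and PM2.5-10."""
    ef_pm10 = ef_pm25 + ef_pm25_10      # EF_10 = EF_2.5 + EF_2.5-10
    return ratio_pm1 * pm1_in_pm10 * ef_pm10 / ef_pm25

# Placeholder values: half the PM1 mass is BC, PM1 is 40% of PM10,
# and the coarse fraction EF equals the fine fraction EF.
bc_in_pm25 = ratio_in_pm25(ratio_pm1=0.5, pm1_in_pm10=0.4,
                           ef_pm25=2.0, ef_pm25_10=2.0)
```

The BC (or OC) EF for a source is then this ratio multiplied by the source's PM2.5 EF from Eq. (2).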
a Average mass ratios of BC and OC to PM2.5 from coal stoves dropped as the share of briquettes in coal consumption increased. b Note that there is no EF for BC and OC from the brick-making industry in Kupiainen and Klimont (2004); here we apply the same OC ratio as for the coke industry and a slightly higher BC ratio.
Trends in net emission factors
Net EFs for PM are not only affected by the penetration of PM control technologies, but also by the balance of technologies employed within the emission sources.In this section, we focus on some emission sources (including power plants, the cement industry, the iron and steel industry, the coke industry, residential coal stoves and on-road vehicles) which may make a significant contribution to China's PM emissions, or which may show a significant change through time.
Power plant and industrial boilers
The power sector is the largest consumer of coal in China. China's thermal power generation increased from 0.49 trillion kWh in 1990 to 2.05 trillion kWh in 2005 (NBS, 1992–2007). Accordingly, coal consumption by China's power plants increased from 270 Tg to 1050 Tg (NBS, 1992–2007), an annual rate of increase of 9.4%, with its share of total coal consumption rising from 30% to 50%.
Pulverized coal boilers are the dominant combustion technology used in power plants, accounting for 92% of capacity in the power sector (SEPA, 1996a). Grate furnaces account for the remaining 8%, mostly used in small electricity generation units within industry self-supplied power plants. Electrostatic precipitators (ESP), wet scrubbers (WET) and cyclones (CYC) were widely used in power plants to mitigate PM emissions. In recent years, fabric filters (FAB) have increasingly been installed, but we do not consider them in our model because their share of the power sector before 2005 was negligible. Three emission standards for thermal power plants were published from 1990 to 2005. The first release gave various standard values for new power plants using coals with different ash contents (SEPA, 1991); the second release gave a single standard value for all new power plants (SEPA, 1996b), resulting in a phasing out of inefficient PM removal technologies such as CYC; and the third release gave a stricter standard value (SEPA, 2003), which not all power plants could meet without the use of ESP or FAB. In this work, based on the penetration rates in China of the three types of PM control technologies in the early 1990s (SEPA, 1996a) and after 2000 (China Electricity Council, 2004), we estimated the PM EFs from 1990 to 2005 by interpolating penetration rates of the PM control technologies across the three versions of the emission standards, as shown in Fig. 1a. The estimated net EFs of PM2.5, PM2.5-10 and PM>10 decreased by 67%, 65%, and 54% from 1990 to 2005, respectively.
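The interpolation of control-technology penetration between anchor years, which underlies the year-by-year EF trends described above, can be sketched as follows. The anchor shares below are hypothetical, not the SEPA (1996a) or China Electricity Council (2004) values.

```python
def interpolate_penetration(anchors, year):
    """Linearly interpolate technology penetration shares between anchor
    years.  anchors maps year -> {technology: share}; outside the anchor
    range the nearest anchor is held constant."""
    years = sorted(anchors)
    if year <= years[0]:
        return dict(anchors[years[0]])
    if year >= years[-1]:
        return dict(anchors[years[-1]])
    lo = max(y for y in years if y <= year)
    hi = min(y for y in years if y > year)
    w = (year - lo) / (hi - lo)
    return {t: anchors[lo][t] * (1 - w) + anchors[hi][t] * w
            for t in anchors[lo]}

# Hypothetical anchor shares for two control devices in the power sector.
anchors = {1990: {'ESP': 0.2, 'WET': 0.8}, 2000: {'ESP': 0.8, 'WET': 0.2}}
shares_1995 = interpolate_penetration(anchors, 1995)
```

The interpolated shares for each year play the role of C_{n,z} in Eq. (2); note the shares still sum to one in every year, as the equation requires.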
Coal consumption by industrial boilers increased at a lower rate than that of the power sector, from 250 Tg in 1990 to 540 Tg in 2005 (NBS, 1992–2007). Supplying heat and hot water for industrial processes, industrial boilers are mostly equipped with grate furnaces. Most industrial boilers are fitted with WET and CYC because they are generally much smaller in capacity and their unabated PM EFs are lower than those of power plant boilers. Using the same approach as for power plant boilers, we estimated the net EFs of PM from industrial boilers. The results show that the net EFs of PM2.5, PM2.5-10 and PM>10 decreased by 12%, 40%, and 70% from 1990 to 2005, respectively.
Cement industry
China's cement industry is a typical emission source that utilizes both new, advanced technologies and older, increasingly outmoded ones. Shaft kilns, which have been phased out in many industrially more advanced countries, played a major role in China's cement industry for a long period, accounting for over 80% of cement production in the mid-1990s. Precalciner kilns (generally known as "new-dry process kilns" in China) increased their cement production 11-fold between 2000 and 2008, and in 2006 exceeded production from shaft kilns (Lei et al., 2011). Unabated PM EFs differ among cement-producing processes, but what greatly increases the difference in net EFs is the quite different PM control technologies utilized within cement plants.
There have been three emission standards for the cement industry in China (SEPA, 1985, 1996c, 2004). CYC was applied to recycle raw material before publication of the first standard. After that, WET, ESP and FAB were gradually developed and introduced into the marketplace, enabling cement plants to reduce PM emissions. SEPA (1996a) calculated the net TSP EF to be 23.2 g kg−1 in the early 1990s by testing 264 cement production lines. The Chinese Research Academy of Environmental Sciences (CRAES, 2003) tested emissions from 90 cement plants utilizing advanced PM control devices and found the average net TSP EF to be approximately 2 g kg−1. Based on this information, we estimated PM EFs for different types of cement kilns in China for the period 1990 to 2008 (Lei et al., 2011). The penetration of PM control technologies, as well as the net PM EFs from 1990 to 2005, is shown in Fig. 1b. The net EFs of PM2.5, PM2.5-10 and PM>10 decreased by 69%, 72% and 75% from 1990 to 2005, respectively.
Coke industry
China is the largest coke producer in the world. Production of coke increased 3.5-fold during the period 1990–2005, driven by tremendous demand from the domestic iron and steel industries and by its high price on international markets. In industrially more advanced countries, coke plants are usually located within iron and steel plants and supply coke for iron smelting. In China, however, two-thirds of total coke production comes from individual coke companies, many of which are equipped with small-scale indigenous ("beehive") coke production facilities.
PM is emitted not only from coke ovens, but also by several processes such as coal crushing, coal feeding and coke quenching (USEPA, 1995). However, China has no emission standard for these processes, only for the direct emissions from coke ovens (SEPA, 1996d). PM control devices are installed in most large coke plants with mechanized coking facilities; few are installed in small plants with indigenous coking facilities. Using an approach similar to that described in the previous two sections, the penetration of PM control technologies and the net PM EFs were calculated from the annual production of mechanized and indigenous coking ovens, as shown in Fig. 1c. Our estimates indicate that EFs increased in the first half of the 1990s as the share of coke produced from indigenous ovens increased. However, this share decreased from 49% in 1995 to 18% in 2005, resulting in a decrease in PM EFs as well.
Iron and steel industry
The iron and steel industry involves a series of interrelated processes. Besides coke production (see above), the major release points of PM include sinter production, pig iron production, steel production and casting. There are three types of technology in steel production: open hearth furnace (OHF), basic oxygen furnace (BOF) and electric arc furnace (EAF). These processes/technologies were considered separately in our estimation of PM emissions from the iron and steel industry.
Prior to 2005, there were two emission standards for the iron and steel industry in China (SEPA, 1988). We assume that more efficient control technologies were promoted in most processes after the release of the 1996 standard, except for casting and OHF, which were gradually replaced by other processes after the mid-1990s. The penetration of PM control technologies before 1996 was derived from source test results (SEPA, 1996a), and the penetration after 1996 was calculated based on an investigation of key iron and steel companies (Sino-Steel TianCheng Environmental Protection Science and Technology Co., Ltd, 2007). The trends in TSP EFs in the iron and steel industry were then estimated for the different processes/technologies, as shown in Fig. 2. The EFs of TSP from sinter production, iron production, BOF and EAF decreased by 18% to 27% from 1996 to 2005, and EFs of PM2.5 decreased by 7% to 21%.
Residential coal stove combustion
It is believed that residential coal stoves are a major source of BC emissions in China (Streets et al., 2001; Bond et al., 2004). Recent experimental research conducted in China indicated that the following three factors can lead to one or two orders of magnitude difference in EFs for BC and OC: (1) the type of coal (e.g. bituminous or anthracite), (2) the shape of the coal when it is burned (e.g. chunk or briquette), and (3) the type of stove (e.g. traditional or improved) (Chen et al., 2009; Zhi et al., 2009). Using the EFs reported by Chen et al. (2009) as a function of the chunk/briquette ratio, we followed their approach and estimated EFs for BC and OC for the period 1990–2005 (Fig. 3), assuming the mix of chunk and briquette coal changed linearly from 1990. As the share of briquettes in coal consumption in residential coal stoves increased from 20% to 50%, average net EFs for BC and OC dropped by 34% and 10%, respectively.
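The linear chunk/briquette mixing assumption above reduces to a weighted average that can be sketched directly. The chunk and briquette EFs below are placeholder numbers (chunk coal taken as the much dirtier fuel form), not the Chen et al. (2009) test values.

```python
def briquette_share(year, start=(1990, 0.2), end=(2005, 0.5)):
    """Briquette share of residential stove coal, assumed to change
    linearly over the study period (20% in 1990 to 50% in 2005)."""
    (y0, s0), (y1, s1) = start, end
    return s0 + (s1 - s0) * (year - y0) / (y1 - y0)

def mix_ef(share_briq, ef_chunk, ef_briquette):
    """Average stove EF for a given briquette share of coal use."""
    return share_briq * ef_briquette + (1.0 - share_briq) * ef_chunk

# Placeholder EFs in g/kg for one pollutant (e.g. BC).
ef_1990 = mix_ef(briquette_share(1990), ef_chunk=4.0, ef_briquette=0.5)
ef_2005 = mix_ef(briquette_share(2005), ef_chunk=4.0, ef_briquette=0.5)
```

With real chunk/briquette EFs for BC and OC, the same two-line calculation yields the 34% and 10% declines quoted above.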
On-road vehicles
Net annual EFs of on-road vehicles were estimated from the population of newly sold vehicles and raw EFs, using a methodology similar to that described by Zhang et al. (2007a). The raw EF of newly sold vehicles was estimated from the emission standard in force at the time of manufacture. China began to implement emission control standards for on-road vehicles in 1999 (the standards are listed in Table 4). PM emissions from mobile sources were minor relative to other sources; however, their proportion of total PM2.5 emissions more than doubled over the 15-year study window (from 1.3% to 2.9%).
The lime and brick industries are more important in terms of emissions of larger particles. The non-metallic mineral product industry, including the cement, lime and brick industries, accounted for 55%–65% of national TSP emissions. This estimate is larger than the official statistical data (ECCEY, 1992–2006). We attribute the difference to the absence from the official data of emission estimates from small plants (including small industrial boilers and industrial processes). These small plants commonly lack emission control devices and, moreover, are generally not included in official emission statistics because of their diffuse distribution over rural China, away from cities.
Industrial boilers contributed less than 10% of PM emissions. Although TSP emissions did not change much during 1990–2005, PM2.5 emissions from industrial boilers increased by 90%. As industrial boilers are usually located in populated areas, more efficient PM control devices to reduce PM2.5 emissions, such as ESP, are needed for the benefit of public health.
Figure 6 shows the PM2.5 emissions by province in 1990, 1995, 2000 and 2005. Shandong, Hebei, Jiangsu, Henan, Guangdong and Sichuan combined accounted for about 40% of total PM2.5 emissions in China. Emissions of PM10 and TSP have a similar distribution across provinces to that of PM2.5. PM emissions from provinces that have more ad-
Carbonaceous aerosols
Emissions of BC increased from 1.1 Tg in 1990 to 1.5 Tg in 2005, and emissions of OC varied between 2.5 and 3.2 Tg over the same period, as shown in Table 9 and Fig. 7. A significant increase occurred for both BC and OC emissions during 2000–2005. Most of the increase (0.13 Tg of BC and 0.51 Tg of OC) was due to biofuel combustion, followed by the coke industry (0.09 Tg of BC and 0.11 Tg of OC) and mobile sources (0.04 Tg of BC and 0.02 Tg of OC). The residential sector is the largest contributor of carbonaceous aerosol emissions, accounting for 47%–69% of China's total BC emissions and 81%–92% of total OC emissions. The transportation sector is the dominant contributor to anthropogenic BC emissions in developed countries such as the United States (203 of 354 Gg) and OECD Europe (226 of 343 Gg) (Bond et al., 2004). However, total BC emissions from China's mobile sources, including on-road and off-road mobile sources, were 187 Gg in 2005, much less than those of the industrial (609 Gg) or residential (701 Gg) sectors. Compared with on-road vehicles (54 Gg in 2005), off-road mobile engines emitted much more BC (133 Gg in 2005) because fewer emission control policies apply to these sources. Figure 8 illustrates the large differences in BC emissions among sectors and provinces identified by our analysis. Industries such as coke and brick-making plants are the most significant contributors in northern China (Hebei, Shanxi, Shandong and Henan), while the residential sector is the dominant source of emissions in the south, and especially in the southwest (e.g. Guangxi, Chongqing and Sichuan), since much more coal and biofuel are used there.
Ca and Mg
Figure 9 shows the emission trends of Ca and Mg in China.
The cement and lime industries contribute 90% of total Ca emissions, while production of cement, iron, steel, lime and brick contributes 75% of total Mg emissions. Ca and Mg showed similar emission trends in the 1990s: an increase in the first six years followed by a decrease. After 2000, emissions of Ca were relatively stable, although they show a decrease in 2005. Emissions of Mg, however, increased further from 2000 to 2005, a trend that can mainly be attributed to increased emissions from the iron and steel industries. Our estimates of emissions in 2001 (6.11 Tg Ca and 0.29 Tg Mg) are higher than those of Zhang et al. (2007b), who estimated emissions of 4.52 Tg and 0.23 Tg, respectively. Further examination reveals that the discrepancy is due to the different data sources used for brick and lime production. There were more than 80 000 small brick workshops and about 5 000 small lime plants in China (Zhou, 2003), but there are no statistical data on brick and lime production in recent years. This situation therefore increases the uncertainty of any estimate of Ca and Mg emissions.
Note that these results may underestimate the anthropogenic emissions of Ca and Mg because construction activities are not included in our study. In addition to anthropogenic sources, natural sources, such as deserts, also contribute significant emissions of Ca and Mg.
Trends in several key sectors
Trends of PM emissions were found to differ by sector. Here we discuss seven key sectors that either emitted large amounts of PM or showed a sharp change in emissions.
Power plant boilers
PM emissions from power plants rose from 1990 to 1996, and then dropped until 2000. With significant increases in power generation, PM emissions rose again after 2000 and reached their peaks in 2005 (1.4 Tg PM 2.5 , 2.3 Tg PM 10 , and 3.1 Tg TSP).
Our estimates of PM emissions were compared with China's governmental statistical data (ECCEY, 1992-2006) in Fig. 10a. Our estimates are about 25% lower than the statistical data, but show a similar inter-annual trend. Since the government's statistics are mostly based on calculated emissions, not derived from monitored data, we attribute the difference between the government's estimates and our own to the different parameter values used in the calculations. We also compared our PM emissions in 2001 and 2003 with Zhang et al. (2007b) and Yi (2006a), and the differences are much smaller (approximately 2%).
Cement industry
As a major contributor of PM emissions, the cement industry accounts for about 30% of total emissions in China. Historically there have been two periods in which cement production increased very rapidly: 1990-1995, when the average annual rate of increase was 17.8%, and 2002-2005, when the average annual rate of increase was 12.4%. However, PM emissions show a different trend in these two periods, as shown in Fig. 10b. In the first period, PM emissions increased rapidly and reached their peaks in 1997, with 4.4 Tg PM 2.5 , 7.2 Tg PM 10 and 10.4 Tg TSP. With the implementation of a new emission standard released in 1996, and the slowing expansion of the cement industry, PM emissions dropped in the late 1990s. In spite of a rapid increase in cement production after 2000, PM emissions remained at around 8 Tg, because the widespread replacement of older shaft kilns by newer precalciner kilns offset any potential increase in PM emissions. From 2004 to 2005, cement production from shaft kilns decreased by 9% while that from precalciner kilns increased by 50%. This structural change within the cement industry led to a 5.4% decrease in PM emissions in just one year.
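The offsetting effect described here — rising production but a shifting technology mix — falls out of a technology-weighted emission sum. A minimal sketch (the EFs and production figures below are illustrative placeholders, not the study's values):

```python
# Sector emissions as a technology-weighted sum: E = sum_k production_k * net_EF_k.
# Shifting output from high-EF shaft kilns to low-EF precalciner kilns can cut
# emissions even while total production grows.

def total_emissions(production_tg, ef_g_per_kg):
    """Return emissions in Gg: production (Tg) times net EF (g/kg), summed over kiln types."""
    return sum(production_tg[k] * ef_g_per_kg[k] for k in production_tg)

ef = {"shaft_kiln": 10.0, "precalciner_kiln": 2.0}            # illustrative net EFs
prod_2004 = {"shaft_kiln": 500.0, "precalciner_kiln": 200.0}  # Tg cement
prod_2005 = {"shaft_kiln": 455.0, "precalciner_kiln": 300.0}  # -9% shaft, +50% precalciner

e_2004 = total_emissions(prod_2004, ef)  # Gg
e_2005 = total_emissions(prod_2005, ef)  # lower, despite higher total production
```

With these placeholder numbers, total production rises while the emission sum falls, which is the same mechanism the text describes for 2004-2005.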
Coke industry
The historical trend of PM emissions from the coke industry is shown in Fig. 10c. Annual PM emissions from the coke industry have been about 1 Tg since 1995, of which PM 2.5 accounts for more than half of the mass. Two emission peaks are identified, in 1995 and 2005, which are in accordance with the historical changes in coke production.
Thirty-six percent of national coke production came from Shanxi province over the period 1990-2005. Indigenous coke ovens were dominant in Shanxi in the 1990s, accounting for more than 80% of coke production (Polenske, 2006). Although the statistical data (National Bureau of Statistics, 2006) show that indigenous coke ovens were largely replaced by automatic, mechanized coke ovens after 2000, the rapid increase in coke production offset any decrease in PM emissions from the use of cleaner technologies. Our results show that the annual emissions of PM 2.5 from the coke industry in Shanxi have been above 200 Gg since 1994, accounting for more than one-third of total emissions in this province. Note that the estimates of emissions from indigenous coke ovens are highly uncertain because we did not find any reported emission measurements covering the whole coke producing process, and the unabated EFs were assumed to be the same as those of mechanized ovens.
Iron and steel industry
PM emissions from the iron and steel industry show a continuous increase over the period 1990-2005, as shown in Fig. 10d. Although EFs levelled off after 1996, production of steel increased from 130 Tg in 2000 to 360 Tg in 2005 and, as a result, TSP emissions from the industry doubled in those five years, from 1.2 Tg to 2.3 Tg. PM >10 accounts for about 60% of total PM emissions by mass. Our results show that 86% of PM >10 is fugitive dust from the sinter production and pig iron production processes. Note that fugitive dust emissions cannot be directly measured and the actual control practices vary considerably from one plant to another; thus the uncertainty of the emission estimates for this part could be very high. PM 2.5 emissions are dominated by three emission points: the beginning and end processes of the sinter machine, the casting facility in iron production, and the EAF in steel production, which combined account for more than 75% of total emissions.
Residential sector
As the largest contributor of PM 2.5 emissions, the residential sector emitted about 4 Tg of PM 2.5 annually from 1990 to 2005, as shown in Fig. 10e. Eighty percent of PM 2.5 emissions in this sector come from the combustion of biofuel (firewood and stalks) in rural households. As fuel for cooking and heating, firewood and stalks are usually combusted in indigenous stoves that have low thermal efficiency and high emissions. Biofuel will continue to play an important role in supplying energy to rural China in the near future (Zhou, 2003). Promotion of cleaner biomass stoves could be one way to reduce PM emissions from the residential sector.
Coal boilers and stoves contribute the remaining 20% of PM 2.5 emissions from this sector. On the one hand, coal as a cooking fuel is gradually being replaced by gas and electricity with urbanization and the general improvement in the quality of life across China; on the other hand, coal consumption for heating has grown very rapidly. As a result, coal consumption in the residential sector increased by 25% over the 1990-2005 study period, and correspondingly we calculate that PM 2.5 emissions have remained more or less constant at around 0.8 Tg.
As shown in Fig. 7, the residential sector is dominant in terms of BC and OC emissions. Although BC and OC emissions from residential coal combustion decreased by 41% and 19%, respectively, from 1990 to 2005, emissions from the sector as a whole did not change greatly because the emissions from biofuel combustion are relatively constant.
On-road vehicles
PM emissions from on-road vehicles were much lower than those from stationary sources; however, our findings show that they increased faster than in any other sector. PM 2.5 , accounting for 90% of total PM emissions from on-road vehicles, increased from 27.7 Gg in 1990 to 132.15 Gg in 2005, an average annual increase of 11%, as shown in Fig. 10f.
As discussed in Sect. 3.6, EFs of on-road vehicles have been decreasing since 1999 due to the implementation of stricter emission standards and regional regulations. However, PM emissions continued to increase for several years because many more vehicles came onto the market than were taken off the road. PM emissions decreased slightly in 2005, and this decrease or levelling off may be a feature of the near future as stricter emission standards come into effect.
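This tug-of-war between tighter per-vehicle standards and a fast-growing fleet can be sketched as a cohort sum: fleet emissions = Σ over standard-stage cohorts of vehicle count × annual mileage × EF. All numbers below are illustrative, not the study's fleet data:

```python
# Fleet PM emissions summed over emission-standard cohorts.
# Newer cohorts have lower EFs, but total emissions still rise while
# fleet growth outpaces the retirement of older vehicles.

def fleet_emissions_t(cohorts):
    """cohorts: (vehicle_count, km_per_year, ef_g_per_km) tuples; returns tonnes."""
    return sum(n * km * ef for n, km, ef in cohorts) / 1e6

fleet_old = [(5e6, 15000.0, 0.30)]                        # pre-standard fleet only
fleet_new = [(4e6, 15000.0, 0.30), (6e6, 15000.0, 0.10)]  # stricter-standard cohort added

e_old = fleet_emissions_t(fleet_old)
e_new = fleet_emissions_t(fleet_new)  # higher total despite lower per-vehicle EFs
```

Only when retirements of the high-EF cohort catch up with new registrations does the sum turn downward, which matches the levelling off described above.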
Off-road mobile sources
As large consumers of diesel oil, off-road mobile sources, including locomotives and inland waterway transportation, agricultural vehicles and machinery, and construction machinery, emitted much more PM than on-road vehicles. As shown in Fig. 10g, emissions of PM 2.5 from off-road mobile sources increased from 93.0 Gg in 1990 to 233.2 Gg in 2005 due to growing diesel consumption. However, emissions of coarse PM and PM >10 decreased sharply because steam locomotives, which are driven by coal-fired boilers, were gradually replaced by diesel and electric ones.
In China, control of PM emissions from off-road mobile sources has lagged behind that for on-road vehicles. The government did not release an emission standard for off-road mobile sources until 2005. As BC emissions from diesel engines can be considerable, and emission control on on-road vehicles is moving forward quickly, off-road sources need to be given more attention in China's future policy making.
Gridded emissions and data availability
Using a similar approach to that of Streets et al. (2003) and Woo et al. (2003), we mapped PM emissions onto a 30 min × 30 min grid using various spatial proxies. Figure 11 shows the mapped emissions of PM 10 , PM 2.5 , BC and OC in 1990 and 2005. A significant increase of PM 2.5 and PM 10 emissions during this period can be seen in northern China, especially over Shandong, Hebei, Henan and Jiangsu. The trend of BC emissions was similar to that of PM 2.5 and PM 10 , while OC emissions showed a somewhat different pattern. The most significant increase of OC emissions took place in Sichuan as biofuel was used more and more by rural residents; however, OC emissions in more developed provinces, such as Jiangsu and Zhejiang, decreased, possibly due to the gradual replacement of biofuel by cleaner fuels such as gas.
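The proxy-based allocation works cell by cell: a provincial (or sectoral) total is distributed in proportion to a gridded surrogate such as population for residential sources or road networks for transport. A minimal sketch (cell names and proxy values are illustrative):

```python
# Allocate a regional emission total to grid cells in proportion to a
# spatial proxy; each cell receives total * proxy_cell / sum(proxy).

def grid_emissions(region_total, proxy_by_cell):
    total_proxy = sum(proxy_by_cell.values())
    return {cell: region_total * v / total_proxy for cell, v in proxy_by_cell.items()}

# e.g. 120 Gg of residential PM2.5 spread over three 30-min cells by population
gridded = grid_emissions(120.0, {
    "36.5N_117.0E": 8e6,
    "36.5N_117.5E": 3e6,
    "37.0N_117.0E": 1e6,
})
```

By construction the gridded values sum back to the regional total, so the gridding step redistributes but never changes the inventory totals.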
All regional and gridded emission data sets can be downloaded from our web site (http://mic.greenresource.cn/China-aerosol-trends).Users can examine emissions by province and by sector from the summary tables.Gridded data include the emissions of PM 2.5 , PM 10 , BC and OC by sector (power, industry, residential, and transportation) at 30 min × 30 min resolution.
Improvements from our previous studies
Our previous work estimated the emissions of PM, BC, OC, Ca and Mg in 2001 and 2006 (Zhang et al., 2007b, 2009). By taking more technology information into account, both emission factors and activity data were updated in this study. As a result, although PM 10 emissions were similar (16.1 Tg), higher TSP emissions (30.3 Tg vs. 25.1 Tg) and lower PM 2.5 emissions (10.9 Tg vs. 11.7 Tg) were calculated in our updated estimates for 2001, and consequently our new results show higher emissions of Ca and Mg but lower emissions of BC and OC (Fig. 12).
Different emission estimates for the industrial sector are the main reason for the differences in total emissions. With updated information from various industry associations, emission factors of some industrial processes were adjusted in this study. Firstly, our previous study used unabated EFs from Europe (Klimont et al., 2002) for several industrial processes, while here we have been able to update them based on operational practices in China. Updated EFs for PM 2.5 are usually smaller, but those for TSP are usually larger, compared to the European EFs; for example, average unabated EFs of PM 2.5 , PM 10 and TSP for the cement industry changed from 23.4 g kg −1 , 54.6 g kg −1 and 130.0 g kg −1 to 16.6 g kg −1 , 51.3 g kg −1 and 191.5 g kg −1 , respectively. Secondly, updated penetration rates of PM removal technologies within the industrial sector also contribute to the differences in emission estimates.
The other big difference between this study and Zhang et al. (2007b) is the estimation of BC emissions from coal combustion in the residential sector. In our previous studies, the ratio of BC to PM 2.5 was assumed to be 0.50; however, recent local tests (Chen et al., 2005, 2006, 2009; Zhi et al., 2008, 2009) indicate that this ratio could in fact be much lower. Indeed, in this study the ratio was determined to be 0.17 in 2001, following the approach described in Sect. 3.5. Consequently, the estimate of BC emissions for this sub-sector was reduced by 66.5% to 127 Gg.
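The revision amounts to a one-line speciation step — BC for a source equals its PM 2.5 emissions times a BC/PM 2.5 mass ratio — so changing the ratio rescales the estimate proportionally. A sketch (the PM 2.5 figure is illustrative, chosen only so the numbers echo the 127 Gg quoted above):

```python
# Speciation: BC emissions = PM2.5 emissions * (BC / PM2.5 mass ratio).

def bc_from_pm25(pm25_gg, bc_to_pm25_ratio):
    return pm25_gg * bc_to_pm25_ratio

pm25_residential_coal = 747.0  # Gg, illustrative value (not from the study)
bc_old = bc_from_pm25(pm25_residential_coal, 0.50)  # previous assumption
bc_new = bc_from_pm25(pm25_residential_coal, 0.17)  # local-test-based ratio
reduction_pct = 100.0 * (1.0 - bc_new / bc_old)     # ~66%, independent of the PM2.5 level
```

The relative reduction depends only on the two ratios (1 − 0.17/0.50 = 66%), which is why revising the ratio cuts the sub-sector estimate by roughly two-thirds regardless of the underlying PM 2.5 total.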
PM emissions from power sector
Emissions from the power sector have been a hot topic because power plants have accounted for more than half of coal consumption in China in recent years. The estimates in this study are 38%, 24% and 12% higher than those of Zhao et al. (2008), who estimated power sector emissions in 2005 to be 994 Gg, 1842 Gg and 2774 Gg for PM 2.5 , PM 10 and TSP, respectively. The latest database of EFs for China's power plants incorporates recent test results and includes an analysis of the sources of uncertainty in determining the EFs (Zhao et al., 2010). That database assumed lower removal efficiencies for ESP (92% for PM 2.5 , 97% for PM 2.5−10 and 99.5% for PM >10 ) and thus yielded higher final EFs. Consequently, its estimates of PM 2.5 , PM 10 and TSP emissions would be 11%, 25% and 20% higher than this study if the same activity data were used.
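The sensitivity to ESP assumptions follows from the abated-EF relation, net EF = unabated EF × (1 − removal efficiency): the surviving fraction (1 − η) for fine PM is an order of magnitude larger than for coarse PM, so fine-PM estimates react most to the assumed efficiencies. A sketch using the efficiencies quoted above (the unabated EFs are illustrative):

```python
# Abated EF = unabated EF * (1 - size-dependent removal efficiency).

def abated_ef(unabated, removal_eff):
    return unabated * (1.0 - removal_eff)

unabated = {"PM2.5": 1.6, "PM2.5-10": 4.5, "PM>10": 10.0}     # g/kg coal, illustrative
eta      = {"PM2.5": 0.92, "PM2.5-10": 0.97, "PM>10": 0.995}  # ESP efficiencies quoted in the text

net_ef = {size: abated_ef(unabated[size], eta[size]) for size in unabated}
```

With these efficiencies, 8% of fine PM escapes the ESP versus only 0.5% of the coarsest fraction, so a few percentage points of assumed efficiency move the PM 2.5 EF far more than the TSP EF.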
TSP emissions
PM emissions have been little studied in China. China's national statistics for TSP emissions are based on calculations using a bottom-up approach, while information about PM 10 and PM 2.5 emissions is unavailable. TSP emissions from our estimates, as well as the government's statistical data, are shown in Fig. 13. The statistical data are systematically lower than our estimates because two important emission sources (small industries and the rural residential sector) are not taken into account in the government data. Statistical TSP emissions changed significantly in 1993-1994 and 1996-1997; the main reason for this is a change in the statistical approach over this time period. The two sets of data show similar trends in the late 1990s, when China's energy consumption decreased. However, according to our estimates, emissions increased after 2000, while the statistical data suggest that annual emissions remained at around 20 Tg. Our estimates may be more accurate because most sectors grew rapidly during this period, as discussed in previous sections. Our previous studies on CO (Streets et al., 2006b) and NO x (Zhang et al., 2007a) show similar increases in emissions.
BC and OC emissions
BC and OC emissions of this study were compared with the previous studies of Bond et al. (2004), Cao et al. (2006), Ohara et al. (2007), Klimont et al. (2009), Streets et al. (2003) and Zhang et al. (2009) in Fig. 14. All these studies show that the residential sector is the dominant source of BC and OC. Our estimate of BC emissions from the residential sector is 30% lower than those of the others because a much lower EF for briquette combustion was incorporated in the current study. As with our previous studies (Bond et al., 2004, and Zhang et al., 2009), this study estimates higher BC emissions from industry, because we consider small coke plants and brick plants to be potentially important sources, although there are large uncertainties in the estimates. Our estimate of OC emissions is close to those of Zhang et al. (2009) and Streets et al. (2003), with the differences mainly due to the different parameters used to calculate emissions from biofuel.
Estimates of BC emissions from industry are quite uncertain, especially for the coke and brick industries. For the coke industry we used mixed data sources to estimate the emissions. The unabated TSP EF was 13 g kg −1 , which is from local measurements. We then used PM 2.5 /BC/OC fractions from the GAINS model (Klimont et al., 2002; Kupiainen and Klimont, 2004) to obtain the final emission factors. However, both values are based on very limited measurements and are subject to high uncertainty. Another difference between our estimation and GAINS is that we used time- and province-dependent penetrations of different production technologies to make the regional assessment.
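This mixed-source construction is a chain of multipliers: a local unabated TSP EF scaled by a PM 2.5 /TSP mass fraction and then a BC/PM 2.5 fraction. The 13 g kg −1 comes from the text; both fractions below are illustrative stand-ins, not the actual GAINS values:

```python
# Species EF for coke ovens:
# EF_BC = EF_TSP * (PM2.5/TSP fraction) * (BC/PM2.5 fraction).

def species_ef(ef_tsp_g_per_kg, pm25_fraction, species_fraction):
    return ef_tsp_g_per_kg * pm25_fraction * species_fraction

ef_tsp = 13.0                          # g TSP per kg coke, from local measurements
ef_bc = species_ef(ef_tsp, 0.5, 0.3)   # illustrative fractions, not GAINS values
```

Because the factors multiply, their relative uncertainties compound: an error of ±50% in either fraction propagates directly into the BC EF, which is why the paragraph above flags these estimates as highly uncertain.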
Fig. 14. Comparison of BC and OC emission estimates among recent studies: Bond et al., 2004; Cao et al., 2006; Klimont et al., 2009; Ohara et al., 2007; Streets et al., 2003; Zhang et al., 2009.
The uncertainties in estimating emissions from brick kilns are mainly attributable to three factors. First, there are no reliable EFs due to the lack of emission tests on Chinese indigenous kilns. Second, according to information from the China Brick Association, a technology transformation from indigenous kilns to Hoffman kilns took place in China in the last two decades of the last century, but information is missing to understand the process and spatial characteristics of the transformation. Last but not least, new wall materials such as autoclaved brick and steamed brick have begun to come into the market recently. The process of producing such materials is quite different from traditional brick sintering and thus the EFs are much lower; however, the statistical data do not distinguish them from traditional bricks.
Ohara et al. (2007) estimated the emission trends of BC and OC with activity data for 1995, 2000 and 2003, assuming constant EFs. Klimont et al. (2009) projected BC and OC emissions up to 2030, taking improvement of technologies into consideration. Their studies presented stable or decreasing emissions. However, our study does not indicate the same trends, with the differences mainly being attributable to different sources of activity data, especially the biofuel used within the residential sector. Biofuel usage in this study is 14%, 13%, and 41% higher than that of Ohara et al. (2007) for 1995, 2000 and 2003, respectively.
Inter-annual change in the biofuel usage data dominates the trend of its emissions because we assumed a constant EF for this sector. This indicates that, in addition to EFs, uncertainty in biofuel consumption data could be another important source of error in the estimation of BC and OC emissions. This point is also noted and discussed in Klimont et al. (2009).
A comparison by Carmichael et al. (2003) of model calculations using the emission inventories of Streets et al. (2003) and using TRACE-P measurement data led to the conclusion that Streets et al.'s (2003) estimates of BC emissions are qualitatively correct.However, it is likely that BC emissions over southeast China were overestimated while those in northeast China were underestimated (Hakami et al., 2005).
Since our estimates are similar to the results of Streets et al. (2003), similar uncertainties may also exist in the spatial distribution of our BC emissions.
Comparison with ambient observations
Although our estimates show increasing PM emissions after 2001, most ground observation data show the opposite trend in ambient aerosol concentrations over Chinese cities, such as Beijing (Chan and Yao, 2008), Lanzhou (Xia et al., 2008) and cities in the Yangtze River Delta (Shi et al., 2008). Qu et al. (2010) found decreasing PM 10 concentrations over 16 northern and 11 central Chinese cities, but relatively constant PM 10 concentrations over southern cities, and attributed the divergence between emission and concentration trends of PM 10 partly to increasingly dispersed emission sources. As most urban monitoring sites are located in populated areas, the relocation of industrial plants from urban to rural areas could result in decreasing PM concentrations over these cities. Lin et al. (2010) reanalyzed the satellite-based AOD trend over eastern China and found a positive linear trend for 2004-2008, indicating that the regional aerosol load might nevertheless be increasing as total PM emissions rise.
Moreover, many processes, including transport, chemical reactions and deposition, play important roles in determining ambient aerosol concentrations. Lin et al. (2010) indicated that the formation of secondary aerosol could be an important reason for the inconsistency between the PM 10 trend captured by ground observations and the aerosol optical depth trend captured by satellites. Quantitative estimates of the contribution of secondary aerosol to China's aerosol loading should be addressed in further studies.
Effectiveness of PM emissions reduction in China
As discussed in Sect. 3, the implementation of advanced PM emission control technologies significantly lowered EFs during 1990-2005. To estimate the effectiveness of these technologies on total PM emissions, we developed a hypothetical scenario in which emissions were calculated assuming the EFs remained at 1990 levels, and compared them with the emission estimates introduced in Sect. 4. The results show that in 2005, the emissions of PM 2.5 , PM 10 and TSP were 11.0 Tg, 18.4 Tg and 29.7 Tg, respectively, less than what they would have been without the adoption of these control technologies. The inter-annual emission reductions of PM 2.5 and TSP are also broken down into sectors (Fig. 15). The cement industry and the power sector contributed more than 95% of the PM 2.5 emission reduction, attributable to a much higher penetration rate of EST and FAB. As noted in Sect. 2.2.4, new emission standards and regulations are the main driving forces of the implementation of PM emission control technologies. For instance, the latest standard for PM emissions from cement kilns is 50 mg m −3 (SEPA, 2004), roughly 6% of the standard released in 1985 (SEPA, 1985). The tightening of the standards resulted in rapid promotion of EST and FAB in the industry and considerable reductions of PM emissions. The emission reduction of PM 2.5 from other industrial sources, however, is much lower, whereas the reduction of TSP emissions is much more significant. This indicates that during 1990-2005, Chinese emission control regulations on PM were more effective for large particles for most anthropogenic sources. As fine PM has been proven to pose a higher risk to public health, the government needs to adjust the control regulations and focus more on fine PM.
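The avoided-emission figure is simply the gap between a counterfactual run (EFs frozen at 1990 levels) and the actual estimate, with both driven by the same 2005 activity. A sketch with illustrative single-sector numbers:

```python
# Avoided emissions = activity_2005 * EF_1990 - activity_2005 * EF_2005
#                   = activity_2005 * (EF_1990 - EF_2005)

def avoided_emissions(activity_2005, ef_1990, ef_2005):
    return activity_2005 * (ef_1990 - ef_2005)

# Illustrative: 1000 Tg of activity, EFs in g/kg -> avoided emissions in Gg
saved_gg = avoided_emissions(1000.0, 20.0, 9.0)
```

Because the same activity multiplies both terms, the comparison isolates the effect of the control technologies from the effect of economic growth.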
Uncertainties
A detailed uncertainty analysis was conducted by combining the uncertainties of both EFs and activity levels, following the approach described by Streets et al. (2003). As listed in Table 10, the uncertainties of our emission estimates are of a similar magnitude to those of Zhang et al. (2009). For most sectors, the uncertainties of emissions in 2005 are lower than those in 1990 because for the later date we are more confident about both the penetration of PM control technologies and the accuracy of activity data. Industry is the only exception; what increases its level of uncertainty is the fact that the contribution from industries whose emissions are less easily quantified (e.g. lime and brick production) is growing while emissions from the cement industry are significantly reduced. The breakdown results show that the uncertainties of emissions from the coke industry and the residential sector are much larger than those of the other sources. The uncertainties of emissions from off-road mobile sources are much higher than those from on-road vehicles (see Table 10), indicating that more studies should be focused on off-road sources. Both reliable activity data and local EFs derived from field tests are essential to reduce the uncertainty.
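One common way to combine EF and activity uncertainties, in the spirit of Streets et al. (2003), is a Monte Carlo draw over both factors. The sketch below samples each factor from a lognormal distribution and reads off an empirical 95% interval of the product; the medians and spreads are illustrative, and treating the spread parameter directly as the sigma of the underlying normal is a simplification:

```python
import math
import random

# Sample E = A * EF with both factors lognormally distributed and
# report the empirical 2.5th/97.5th percentiles of the product.

def emission_ci(a_median, a_sigma, ef_median, ef_sigma, n=20000, seed=1):
    rng = random.Random(seed)
    draws = sorted(
        rng.lognormvariate(math.log(a_median), a_sigma)
        * rng.lognormvariate(math.log(ef_median), ef_sigma)
        for _ in range(n)
    )
    return draws[int(0.025 * n)], draws[int(0.975 * n)]

low, high = emission_ci(100.0, 0.10, 5.0, 0.30)  # central value A * EF = 500
```

When the EF spread dominates (as it does here), the width of the interval is governed almost entirely by the EF uncertainty, which mirrors the finding that poorly measured sources such as coke ovens carry the widest bounds.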
Conclusions
We use a technology-based methodology to estimate historical PM emissions in China. With this methodology, we derive a 15-year trend of PM emission factors in China from 1990 to 2005, taking into account the change in technology structure within sectors and improvements in emission controls driven by emission standards. Our results show that emission factors of PM 2.5 and TSP from several industry sectors decreased by 7% to 69% and 18% to 80%, respectively, in China during the 15 years.
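The technology-based estimate referred to here has the generic bottom-up form E = Σ_tech activity × share_tech × unabated EF_tech × (1 − removal × penetration); tracking the shares and control penetrations over time is what produces the EF trends summarized above. A one-sector sketch with illustrative numbers:

```python
# Technology-based bottom-up emissions for one sector:
# E = sum over technologies of activity * share * unabated_EF * (1 - removal * penetration).

def tech_emissions(activity, shares, unabated_ef, removal, penetration):
    total = 0.0
    for tech in shares:
        net_ef = unabated_ef[tech] * (1.0 - removal[tech] * penetration[tech])
        total += activity * shares[tech] * net_ef
    return total

emissions = tech_emissions(
    activity=100.0,                                  # e.g. Tg of fuel burned
    shares={"grate_boiler": 0.6, "pc_boiler": 0.4},  # technology split of the activity
    unabated_ef={"grate_boiler": 8.0, "pc_boiler": 12.0},   # g/kg, illustrative
    removal={"grate_boiler": 0.90, "pc_boiler": 0.99},      # control device efficiency
    penetration={"grate_boiler": 0.5, "pc_boiler": 1.0},    # share of units controlled
)
```

Holding activity fixed and raising the penetrations reproduces, in miniature, the 15-year EF declines the study reports.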
Emissions of TSP, PM 10 , PM 2.5 , BC, OC, Ca and Mg during the 15-year period were estimated. The trends of PM emissions are similar to those of energy consumption in China during 1990-2005; that is, they increased in the first six years of the 1990s, decreased until 2000, and then increased again in the following years. Emissions of TSP reached a peak (35.5 Tg) in 1996, while emissions of PM 10 and PM 2.5 reached peaks in 2005 (18.5 Tg PM 10 and 12.7 Tg PM 2.5 ). With a significant increase of BC and OC emissions during 2000-2005, BC and OC emissions reached peaks in 2005 (1.51 Tg and 3.19 Tg, respectively). The cement industry and biofuel combustion in the residential sector were consistently the dominant sources of PM 2.5 emissions in China, accounting for 53% to 62% of emissions from 1990 to 2005. The non-metallic mineral production industry, including the cement, lime and brick industries, accounted for 54% to 63% of national TSP emissions. Despite the huge increase in activity levels, the successful implementation of control measures has led to a slowdown, or even reversal, of increasing PM emissions in some sectors, such as the cement industry, the power sector and on-road vehicles. As a result, emissions of PM 2.5 , PM 10 and TSP in 2005 were 11.0 Tg, 18.4 Tg and 29.7 Tg, respectively, less than what they would have been without the adoption of these control measures. However, the average PM 10 concentration in Chinese cities (approximately 100 µg m −3 , Lin et al., 2010) is still much higher than the WHO guideline, and more efforts have to be made to control the emissions of PM, especially fine PM, in China.
The careful consideration of technology details significantly improves the accuracy of emission inventories; however, large uncertainties remain in the estimation of primary aerosol emissions in China. More accurate and detailed activity information, coupled with measurements of emission factors from local tests, is essential to further improve the quality of emission estimates, especially for the brick and coke industries, as well as for coal-burning stoves and biofuel usage within the residential sector. Some other sources, such as off-road machinery and small boilers used in the industrial and residential sectors, also deserve more research on both activities and emission factors, because the PM emission controls on them are relatively weaker than those on on-road vehicles and power plant boilers.
Fig. 1 .
Fig. 1. As highly efficient PM control technologies were gradually promoted during 1990-2005, EFs of TSP from the (a) power sector, (b) cement industry, and (c) coke industry decreased. Bars represent the penetration rate of PM control technologies within the industries, and the line represents the net emission factor of TSP.
Fig. 2 .Fig. 3 .
Fig. 2. Trends of net EF of TSP from processes/technologies in the iron and steel industry. All data are normalized to the year 1996.
Fig. 5 .
Fig. 5. PM emissions from 1990 to 2005 (a) and the breakdown of emissions of (b) PM 2.5 , (c) PM 10 and (d) TSP by different sectors.
Fig. 12 .
Fig. 12. Comparison of emission estimates for 2001 in (a) this study and (b) our previous results from Zhang et al., 2007b.
Fig. 13 .
Fig. 13. Comparison of TSP emissions between the estimates of this study and China's government statistical data.
Fig. 15 .
Fig. 15. Reductions of (a) PM 2.5 and (b) TSP emissions due to improved emission control regulations and technologies in the power sector, cement industry and other sources. The dark solid line denotes our estimates of inter-annual PM 2.5 emissions in China, and the gray dashed line denotes the hypothetical PM 2.5 emissions if the penetration of PM control technologies had remained at the 1990 level.
Table 1 .
Data sources of technology distributions for the main PM-emitting sectors in China.
Table 2 .
Unabated EFs for PM from stationary sources.
Table 4 .
Emission standards for industry and on-road vehicles before 2005. Columns: industry sector/vehicle type, standard code, year published/revised. * Emission standards for some individual industries were replaced by this standard.
Table 5 .
Removal efficiency of different PM control technologies; numbers shown as percentages.
Table 6 .
Mass ratio of BC and OC to PM 2.5 from different emission sources; numbers shown as percentages.
Table 7 .
Mass ratio of Ca and Mg to TSP from different emission sources; numbers shown as percentages.
As shown in Table 8, Beijing and Shanghai implemented the standards in advance of the other provinces of China. In addition, some large cities such as Beijing, Shanghai and Guangzhou implemented regional regulations to reduce vehicle emissions. For instance, old, polluting vehicles (called Yellow Label Vehicles) were required to be banned or eliminated in advance. Such regional regulations resulted in a greater reduction of the average net PM EFs within those provinces as the proportion of new vehicles increased through time. Taking these factors into consideration, we calculated the EFs for different regions of China (Fig. 4). Our estimates show that from 1999 to 2005,
www.atmos-chem-phys.net/11/931/2011/ Atmos. Chem. Phys., 11, 931-954, 2011
940 Y. Lei et al.: Primary anthropogenic aerosol emission trends
TSP, PM 10 and PM 2.5
Figure 5 shows an overview of inter-annual trends of PM emissions by particle size, as well as the contribution of PM emissions by sector from 1990 to 2005. The breakdown of emissions of PM 2.5 , PM 10 and TSP by sector in 1990, 1995, 2000 and 2005 is listed in Table 9. PM emissions increased rapidly in the six years after 1990 and reached a high of 35.5 Tg for TSP in 1996. Rapid development of the economy and the rise in energy consumption were the major driving forces of this trend in emissions. From 1996 to 2000, the decrease in PM emissions can be attributed to a much reduced increase of energy consumption and industrial production, coupled with the implementation of several new emission standards. After 2000, industries with high PM emissions developed at an enormous speed. Production of steel,
Table 8 .
Starting date of implementation of China's Stage I and Stage II emission standards for vehicles.
Table 10 .
Uncertainty in emission estimates of TSP, PM 10 , PM 2.5 , BC, and OC in 1990 and 2005 (±95% confidence intervals); numbers shown as percentages.
* Uncertainties in emission estimates for 2006.
A Promising Insight: The Potential Influence and Therapeutic Value of the Gut Microbiota in GI GVHD
Allogeneic hematopoietic cell transplantation (allo-HSCT) is a reconstruction process of hematopoietic and immune functions that can be curative in patients with hematologic malignancies, but it carries risks of graft-versus-host disease (GVHD), thrombotic microangiopathy (TMA), Epstein–Barr virus (EBV) infection, cytomegalovirus infection, secondary hemophagocytic lymphohistiocytosis (sHLH), macrophage activation syndrome (MAS), bronchiolitis obliterans, and posterior reversible encephalopathy syndrome (PRES). Gastrointestinal graft-versus-host disease (GI GVHD), a common complication of allo-HSCT, is one of the leading causes of transplant-related death because of its high treatment difficulty, which is affected by preimplantation, antibiotic use, dietary changes, and intestinal inflammation. At present, human trials and animal studies have proven that a decrease in intestinal bacterial diversity is associated with the occurrence of GI GVHD. Metabolites produced by intestinal bacteria, such as lipopolysaccharides, short-chain fatty acids, and secondary bile acids, can affect the development of GVHD through direct or indirect interactions with immune cells. The targeted damage of GVHD on intestinal stem cells (ISCs) and Paneth cells results in intestinal dysbiosis or dysbacteriosis. Based on the effect of microbiota metabolites on the gastrointestinal tract, the clinical treatment of GI GVHD can be further optimized. In this review, we describe the mechanisms of GI GVHD and the damage it causes to intestinal cells and we summarize recent studies on the relationship between intestinal microbiota and GVHD in the gastrointestinal tract, highlighting the role of intestinal microbiota metabolites in GI GVHD. We hope to elucidate strategies for immunomodulatory combined microbiota targeting in the clinical treatment of GI GVHD.
Introduction
Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is considered to be an effective treatment for various hematological malignancies, bone marrow failure diseases, genetic metabolic diseases, and immunodeficiency diseases [1][2][3]. Hematopoietic stem cells are extracted from the bone marrow or peripheral blood of allogeneic donors and infused into recipients who have received myeloablative conditioning regimens. In addition to hematopoietic stem cells, the transplant also contains allogeneic donor T cells, which may attack the host tissue cells and lead to the occurrence of graft-versus-host disease (GVHD). Acute GVHD (aGVHD) is an immune-mediated process that occurs after allo-HSCT, characterized by the response of donor antigen-specific lymphocytes (mainly T cells) to host allogeneic antigens and inflammatory cytokine cascades. Its progress can be briefly summarized in three consecutive stages, namely, the activation of antigen-presenting cells; the activation, proliferation, differentiation, and migration of donor T cells, together with the release of inflammatory cytokines; and the cell-mediated destruction of target tissue [4].
Affecting 20% to 80% of allo-HSCT recipients despite long-term immunosuppressant prophylaxis, aGVHD is still the major complication after allo-HSCT, and it can affect the gut, liver, lungs, and skin. Primarily due to the increased use of unrelated and/or HLA-mismatched donors and granulocyte colony-stimulating factor, an increasing incidence of GVHD has been observed in recent years [5]. Severe GI GVHD, often associated with poor patient outcomes, is one of the leading causes of transplant-related death. GI GVHD breaks down the mucosal immune system by attacking the intestinal crypt and its stem cell niche, and the risk factors for mortality include corticosteroid resistance, age > 18 years, increased serum bilirubin, and overt gastrointestinal bleeding [6,7].
The human body is colonized by a large number of microorganisms, most of which are bacteria, but also including viruses, fungi, and archaea. These microorganisms are usually called the intestinal microbiota, and their related community is called the intestinal microbiome [8]. There are relative differences in the composition of microorganisms between different sites in the same individual, but they all change dynamically. The intestine is the main site for microbial colonization, with approximately 100-150 bacterial species forming 100 trillion cells. The structural characteristics of a healthy bacterial community are conducive to the development and maturity of the host immune system, digestion of food, synthesis of essential amino acids, short-chain fatty acids and vitamins, regulation of the immune response, and enhancement of the resistance to pathogen infection [9][10][11][12].
Gut bacteria are in equilibrium with the host innate immune system and help to maintain homeostasis, which, when altered, directly impacts host health. Recent studies have found that certain intestinal bacteria are associated with the outcomes of allo-HSCT transplantation, and disruption of the normal structure of the gut microbiota is related to the risk of GVHD [10,13]. Taur et al. examined the effect of intestinal diversity on posttransplant mortality. Fecal specimens collected from 80 recipients were divided into low-, medium-, and high-diversity groups, with 3-year overall survival rates of 36%, 60%, and 67%, respectively. The transplant-related mortality (TRM) rates were 53%, 23%, and 9%, respectively, suggesting that the intestinal flora may be a pivotal factor in the success or failure of allo-HSCT [14]. In a multicenter study, 8767 fecal samples obtained from 1362 patients undergoing allo-HSCT were characterized by a loss of diversity and domination by single taxa, identifying an association between a high diversity of intestinal microbiota at the time of neutrophil engraftment and low mortality [15]. Ingham et al. revealed that the diversity of the intestinal microbiota decreased within one month after transplantation, and the lowest alpha diversity (inverse Simpson) was observed from the day of HSCT to week +3 for the gut [16].
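The alpha-diversity measure cited above, the inverse Simpson index, is computed directly from taxon abundance counts. A minimal sketch in Python (the sample counts below are hypothetical, not data from the cited studies):

```python
def inverse_simpson(counts):
    """Inverse Simpson index: 1 / sum(p_i^2), where p_i is the
    relative abundance of taxon i. Higher values = more diversity."""
    total = sum(counts)
    if total == 0:
        raise ValueError("sample contains no reads")
    return 1.0 / sum((c / total) ** 2 for c in counts if c > 0)

# Hypothetical stool samples (per-taxon read counts):
even_community = [30, 25, 20, 15, 10]  # several taxa, fairly even spread
dominated = [95, 2, 1, 1, 1]           # single-taxon domination, as after antibiotics

print(round(inverse_simpson(even_community), 2))  # ~4.44
print(round(inverse_simpson(dominated), 2))       # ~1.11
```

An even community scores close to the number of taxa present, whereas a sample dominated by a single taxon scores close to 1, matching the low-diversity, high-mortality pattern described above.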
In this article, we introduce the role of the intestinal flora and its changes during the occurrence and development of GVHD, providing a theoretical basis for the prevention and treatment of intestinal GVHD, and we summarize the most current treatment strategies.
Intestinal Flora, Intestinal Mucosa, and GVHD
2.1. The Structure and Physiological Function of the Intestinal Epithelium. In general, the intestinal epithelium consists of a single layer of tightly connected cells that effectively insulates the intestinal contents from direct exchange with the body; this is called the "intestinal barrier" [17]. The presence of tight junctions in the intestinal epithelium effectively prevents microorganisms, intestinal contents, and antigen molecules from entering the body from the gastrointestinal lumen but allows digestive products and water to enter the body [18]. This is the physical intestinal barrier. In addition, the gut also has a mucus-antimicrobial peptide barrier that regulates epithelial permeability and an immune barrier composed of the body's innate and specific immune systems [19,20]. These three barriers work together to maintain homeostasis in the intestinal environment. If the intestinal barrier is damaged for any of various reasons, basic intestinal functions cannot be carried out, and the growth environment, distribution, and abundance of the intestinal flora change [21]. This will interfere with many normal physiological processes of the body, such as immunity and metabolism, resulting in undesirable consequences [22]. The human gastrointestinal system is the largest digestive and endocrine organ in the human body and has a close relationship with other systems in the body. A large number of cells distributed in the intestinal epithelium play important roles in regulating absorption, the secretion of substances, immune activation, and flora ecology. These include enterocytes, goblet cells, Paneth cells, tuft cells, enteroendocrine cells, and M cells. Generally, goblet cells secrete mucus to form a protective barrier. Paneth cells secrete antimicrobial peptides and growth factors that stimulate the production of epithelial stem cells [23].
Recent studies have shown that group 3 innate lymphoid cells (ILC3s) can be activated by cytokines produced by mononuclear phagocytes (IL-23, IL-1β, and TL1A) [24], and they support the production of antimicrobial peptides and mucins by intestinal epithelial cells (IECs) through the secretion of IL-22 [24], ensuring the spatial isolation of microorganisms from intestinal tissue [25,26]. This effect also preserves signal transduction between the intestinal microbiota and IECs, maintaining intestinal immune and metabolic homeostasis [27].
2.2. Intestinal Flora and Immunity. The intestinal flora has remained a heated topic in recent years because it seems to have both subtle and obvious influences on the physiological and pathological state of various organs and systems of the body. Microbes such as bifidobacteria, Lactobacillus, Escherichia coli, Enterococcus, Clostridium perfringens, and Pseudomonas have been proven to colonize our guts [28,29]. According to previous studies, the intestinal flora can not only directly promote the development of lymphocytes, such as B cells in gut-associated lymphoid tissue (GALT) [30], but also stimulate the expression of PRRs on intestinal mucosal epithelial cells or immune cells and induce the maturation of intestinal mucosal lymphoid tissue through pathogen-associated molecular patterns (PAMPs) [31,32].
2 Oxidative Medicine and Cellular Longevity
The balance of the intestinal flora has been linked to the prevention of immune-mediated diseases such as inflammatory bowel disease (IBD) and asthma. One study found an increased incidence of IBD and allergic asthma in germ-free (GF) mice relative to pathogen-free mice [33]. This is due to increased expression of the chemokine ligand CXCL16 in the gut and lungs in the absence of intestinal flora [34]. Invariant natural killer T (iNKT) cells accumulate in the lamina propria of the colon and lung, and the mice become less resistant to environmental exposures. Coombes et al. also demonstrated that the intestinal microbiota could promote the expansion of IFN-γ-producing CD4+ T cells in colitis, thereby exacerbating intestinal inflammation [35].
New research has shifted the focus from the effects of the gut microbiota on immune cells to proteins and genes, seeking to understand the nature of its effects on the immune system. The Gimap5 gene is one of the regulators of hematopoietic integrity and lymphocyte homeostasis. Barnes et al. verified that in Gimap5-deficient mice, T-cell loss and B-cell immobility aggravated the occurrence of microbiome-dependent wasting disease and intestinal inflammation [36]. In IECs, C. rodentium promoted the differentiation of Th17 cells in the colonic lamina propria by upregulating Nos2, Duoxa2, and Duox2 [37]. However, the impact of the gut flora on genes still requires much research and data to tease out the connections [39]. Treating Gimap5-knockout mice with antioxidants could rescue the defect [40]. Disruption of the intestinal flora activates intestinal epithelial cells. In Nod2- or ATG16L1-deficient mice, bacteria enter the intestinal epithelial cells and increase IL-8 production [39]. In other words, intracellular mitochondrial oxidative stress may change the bacterial survival rate and promote the occurrence and development of intestinal inflammatory diseases such as IBD.
In addition, Larabi et al. mentioned that the intestinal flora may also affect ROS production during the regulation of autophagy [41]. Yue et al. reported that trimethylamine N-oxide (TMAO), produced by the intestinal flora, could prime the NLRP3 inflammasome and be involved in the development of IBD by inhibiting ATG16L1, SQSTM1, and LC3-II and increasing ROS production in a dose- and time-dependent manner [42]. Autophagic regulation of mitochondrial ROS plays a key role in the regulation of macrophage migration inhibitory factor (MIF) secretion. In the presence of bacterial LPS, drug inhibition or siRNA silencing of Atg5 can enhance MIF secretion by monocytes and macrophages in a mitochondrial ROS-dependent manner [43].
Intestinal Flora and Infection.
If the permeability of intestinal epithelial cells changes or the tight junctions between cells are lost, the intestinal flora can enter the body through the intestinal wall. This can lead to ectopic intestinal flora and infection. In addition, abnormal secretion of antimicrobial peptides, abnormal immunity, changes in phage activity, and abnormal colonization of exogenous bacteria in the intestinal tract may change the growth environment and disrupt the balanced state of the normal intestinal flora, thus leading to infection [44,45]. Colicin FY is a bacterially produced antibacterial agent with a specific growth-inhibiting effect on Yersinia enterocolitica, the pathogen of gastrointestinal yersiniosis. Colicin FY-producing E. coli inhibited the growth of pathogenic Yersinia enterocolitica, and when mice were treated with streptomycin, the colicin FY-producing E. coli strain inhibited the progression of enterocolitis infection [46]. These results indicate that colicin FY has in vivo antibacterial activity and could be used for the treatment of enterocolitis infection.
The human intestinal flora can play certain immune-activating and antibacterial roles. However, overuse of antibiotics disrupts the abundance of the gut bacteria themselves, making it easier for infections to invade [44,47]. Becattini et al. demonstrated that infection with Listeria monocytogenes in immunodeficient or chemotherapy-treated mice could cause sepsis and meningitis, and taking antibiotics can worsen these infections. However, if symbiotic flora with in vitro antibacterial activity are transplanted into germ-free mice, they induce the mice to produce antibodies and mount an immune response, thus playing a defensive role against infection [45]. Research by Corr et al. also found that Lactobacillus salivarius UCC118 produces the bacteriocin Abp118 in vivo, which significantly protects mice from invasive Listeria monocytogenes infection; a nonproducing mutant of Lb. salivarius did not produce Abp118, nor did it protect mice from two strains of Listeria monocytogenes [48]. This finding indicates that bacteriocin production induced in the intestinal flora is an important protective mechanism.
GVHD Causes Damage to the Intestinal Mucosa.
Recent studies have demonstrated that ISCs are a target of GVHD; as GVHD develops, ISCs are impaired. Takashima et al. found that injection of the Wnt agonist R-spondin1 (R-SPO1) prevented ISC injury, enhanced the repair of damaged intestinal epithelial cells, and inhibited the subsequent inflammatory cytokine cascade [49]. IL-22 is an important regulator of tissue sensitivity to GVHD and a protective factor for ISCs in inflammatory bowel injury [23]. Hanash et al. showed that intestinal IL-22 is increased by pretransplant conditioning and is produced by IL-23-responsive innate lymphoid cells (ILCs) after bone marrow transplantation [50]. GVHD, however, reduced the ILC frequency and IL-22 abundance, leading to increased crypt cell apoptosis [50]. ISC depletion results in the loss of intestinal epithelial integrity.
L cells have also proven to be a target of GVHD. Acute GVHD reduced the levels of glucagon-like peptide-2 (GLP-2) produced by intestinal L cells in mice and in patients with graft-versus-host disease [51]. GLP-2 can promote the regeneration of ISCs and the production of antimicrobial peptides, reduce the expression of apoptosis-related genes, and cause changes in the intestinal microbial community [52,53]. Norona et al. demonstrated that in a mouse model, treatment with a GLP-2 agonist alleviated emergent acute GVHD [54].
Goblet cells are also reduced in GVHD. Goblet cell loss on patient biopsy is associated with severe GI GVHD and a poor prognosis [55,56]. Ara et al. showed that GVHD was aggravated in mice lacking the antimicrobial molecule Lypd8, and that IL-25 pretreatment preserved goblet cells, maintained the intestinal barrier, and attenuated GVHD [57]. This demonstrates that the loss of goblet cells destroys the mucus layer in the colon and allows bacterial translocation.
The Interaction of Gut Microbiota and GI GVHD
aGVHD can occur in each segment of the digestive tract. If it occurs in the oropharynx, the main manifestations are oral pain, pain upon swallowing, loss of appetite, blisters, aphthosis, and gingivitis [58]. The probability of occurrence in the esophagus is low, manifested predominantly by nausea and vomiting [59]. If it occurs in the stomach and duodenum, latent symptoms may appear first, including loss of appetite, bloating, indigestion, and weight loss; worsening symptoms can progress to nausea, persistent vomiting, upper abdominal pain, upper gastrointestinal tract bleeding, and melena. If it occurs in the small and large intestine, it mainly manifests as abdominal pain, diarrhea, and bloody stool. It is worth noting that the toxicity of chemotherapy drugs and opportunistic infections can also cause gastrointestinal symptoms similar to aGVHD before engraftment. In addition, drug toxicity, aGVHD, and infection can coexist at the same time, which adds much difficulty to the clinical diagnosis of intestinal GVHD [60]. Studies have shown that intestinal bacteria are inextricably linked to the occurrence of intestinal GVHD and infectious complications after allo-HSCT. Allo-HSCT is associated with a significant reduction in intestinal microbial diversity, which is considered to be the combined effect of multiple factors, such as conditioning, antibiotic use, dietary changes, and intestinal inflammation [13,61,62], and these factors may also be related to the occurrence of intestinal GVHD.
Here, we focus on reviewing the mechanisms of GI GVHD, changes in the gut microbiota after allo-HSCT, and the relationship between the gut microbiota and GVHD.
Related Mechanisms of aGVHD.
In the pretransplantation stage, host tissue damage caused by radiotherapy and chemotherapy can mediate the release of damage-associated molecular patterns (DAMPs), such as adenosine triphosphate (ATP) [63], high-mobility group protein B1 (HMGB1) [64], the NLRP3 inflammasome, uric acid [65], and heparan sulfate [66], as well as inflammatory cytokines such as IL-6 [67] and TNF [68]. Pathogen-associated molecular patterns (PAMPs) include bacterial degradation products (lipopolysaccharides, lipoproteins, peptidoglycans, and flagellin), fungal degradation products (such as β-glucan and α-mannan), and viral nucleic acids [69]. After these substances are released, they can enter the blood through the damaged gastrointestinal mucosa, activate the immune system, and trigger a cascade of inflammatory cytokines, resulting in increased expression of major histocompatibility complex (MHC) antigens and adhesion molecules, thereby improving the ability of donor T cells to recognize host allogeneic antigens [70]. In addition, DAMPs and PAMPs can activate antigen-presenting cells (APCs) derived from recipients and donors, such as dendritic cells [71] and macrophages [72].
Under the costimulatory signals provided by host antigen-presenting cells after allo-HSCT, donor T cells expressing TCRs are activated upon recognizing allogeneic antigens presented by APCs on HLA class I and class II molecules. Current research shows that donor CD8 T cells are mainly activated by the recipient's hematopoietic APCs, while donor CD4 T cells are activated by the recipient's gastrointestinal nonhematopoietic APCs [73,74]. The activated donor T cells then expand into the Th1, Th2, and Th17/Tc17 subtypes [75].
In the third stage of aGVHD, cytotoxic effects and cytokines mediate tissue damage directly and indirectly, respectively. Activated T cells migrate to the main GVHD target organs (intestine, liver, and skin), causing target tissue damage whose histological manifestation is epithelial cell apoptosis. CD4+ Th cells, CTLs, and NK cells exert cell-mediated cytotoxicity through Fas-FasL interactions, TNF, TNF-related apoptosis-inducing ligand, or the release of perforin and granzyme stored in the cells [76]. DAMPs and PAMPs stimulate recipient cells to secrete more cytokines (such as TNF, IL-1, IL-6, IL-33, IL-12, IL-23, type 1 IFN, and IFN-γ) and chemokines (such as CCL5 and CXCL2) to enhance allogeneic antigen presentation by recipient APCs and the expression of costimulatory molecules and cytokines, thereby initiating an inflammatory cascade [77]. In addition, bacteria that penetrate the intestinal wall can activate neutrophils and recruit them to the site of bacterial infection. Neutrophils can directly cause tissue injury by releasing ROS, sequentially damaging the gastrointestinal tract [78]. CXCL2 plays a critical role in the recruitment of macrophages to target organs during the occurrence of GVHD; the higher the proportion of M1/M2 macrophages, the higher the incidence of grades II-IV aGVHD [79,80].
How GVHD Damages the Intestinal Mucosa.
GI GVHD targets intestinal stem cells (ISCs), leading to epithelial cell apoptosis, and its most characteristic histological finding is apoptotic bodies in the regenerative crypt compartment. The histological severity of clinical GI GVHD is classified according to the degree of crypt damage: isolated apoptotic bodies without crypt loss (grade 1), crypt apoptosis with individual crypt loss (grade 2), crypt apoptosis with loss of two or more contiguous crypts (grade 3), and extensive crypt loss and epithelial denudation (grade 4) [81,82]. However, the endoscopic manifestations of GI GVHD are multifarious and can lack any characteristic changes, and different stages of progressive mucosal inflammation can be observed, including normal mucosa, mucosal edema and seepage, loss of blood vessels, erythematous lesions, mucosal erosions, superficial ulcers, white plaques, severe mucosal collapse, and mucosal peeling [60]. During the occurrence of GI GVHD, the destruction of the intestinal mucosal barrier mediated by immune disorders and intestinal dysfunction often manifests as severe diarrhea. Hence, daily diarrhea volume, intestinal obstruction, and bleeding symptoms are commonly used in clinical practice for grading [83].
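The crypt-damage grading just described is a simple ordinal scheme. As an illustration only (the function name and parameters are our own, and real grading is performed by a pathologist on biopsy slides), it can be sketched as:

```python
def histologic_grade(apoptotic_bodies, crypts_lost=0, epithelial_denudation=False):
    """Map crypt findings to the histological GI GVHD grade described
    in the text. Illustrative only -- not a clinical tool.
    apoptotic_bodies: apoptotic bodies seen in crypts (bool)
    crypts_lost: largest number of contiguous crypts lost (int)
    epithelial_denudation: extensive crypt loss with denudation (bool)
    """
    if epithelial_denudation:
        return 4  # extensive crypt loss and epithelial denudation
    if crypts_lost >= 2:
        return 3  # loss of two or more contiguous crypts
    if crypts_lost == 1:
        return 2  # crypt apoptosis with individual crypt loss
    if apoptotic_bodies:
        return 1  # isolated apoptotic bodies without crypt loss
    return 0      # no histological evidence of GI GVHD

print(histologic_grade(apoptotic_bodies=True))                 # 1
print(histologic_grade(apoptotic_bodies=True, crypts_lost=3))  # 3
```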
During the conditioning process of allo-HSCT, the balance of intestinal bacteria in the recipient is disrupted by total body irradiation or chemotherapy, resulting in impaired intestinal mucosal barrier function. Intestinal bacteria, microbe-associated molecular patterns (MAMPs), and PAMPs translocate to the lamina propria, where they are recognized by Toll-like receptors and presented to T cells by DCs, activating effector T cells and further aggravating the intestinal mucosal injury [84]. At the same time, the infiltration of neutrophils into the small intestine promotes the development of GI GVHD through the production of reactive oxygen species, resulting in increased tissue damage [85]. In addition, GVHD can also attack B cells, goblet cells, Paneth cells at the bottom of the crypts, and the mucus layer, aggravating intestinal inflammation and bacterial translocation and greatly increasing the recipient's susceptibility to infection [13,84] (Figure 1). Studies have shown that the loss of Paneth cells [86] and the significant reduction in the AMPs they secrete are related to the severity of GVHD. Meanwhile, severe GI GVHD and low Paneth cell counts are related to the loss of microbial diversity [87]. Paneth cells support Lgr5+ ISCs for normal epithelial regeneration through epidermal growth factor (EGF), WNT3, and Notch ligand signaling, while IL-22 produced by innate lymphoid cells (ILCs) after intestinal injury acts directly on ISCs, enhancing ISC-mediated epithelial regeneration [88].
In an ileal organoid model, an increased level of IL-22 limited the proliferation of ISCs but facilitated the proliferation of transit-amplifying (TA) progenitor cells [89]. Goblet cells and the mucin they secrete form a mucus barrier that prevents pathogens from invading the mucosa and causing intestinal inflammation (Figure 1). In a mouse model of allo-HSCT, GVHD targeting intestinal goblet cells destroys the double-layer structure of the colonic mucus and induces bacterial translocation. IL-25 can promote the differentiation and maturation of goblet cells in the large intestine. Administered prior to transplantation, IL-25, acting in a manner dependent on Lypd8 (an antibacterial molecule produced by colonic epithelial cells that inhibits flagellated bacteria), can reduce plasma IFN-γ and IL-6, protect goblet cells from GVHD, and prevent bacterial translocation. Clinically, a low number of goblet cells in the colon and severe GI GVHD are related to a poor transplant outcome [57]. Moreover, goblet cells can also present luminal antigens to CD103+ dendritic cells to induce adaptive immune responses [90], indicating that goblet cells may be involved in GI GVHD before they become targets of attack.
The Relationship between Intestinal Bacteria and GI GVHD.
The microbial communities that colonize healthy intestines comprise a large and diverse community of bacteria, such as the phyla Firmicutes, Bacteroidetes, Proteobacteria, Actinobacteria, Verrucomicrobia, and Fusobacteria, that have coevolved with the host intestinal immune system. In a healthy gut, obligate anaerobes such as Clostridium and Bacillus usually maintain an advantage over the facultatively anaerobic Enterobacteriaceae. Doki et al. showed a significantly higher abundance of the phylum Firmicutes and a trend toward lower Bacteroidetes in aGVHD patients than in non-aGVHD patients [91]. Diet plays a vital role in the composition of the gut microbiota, and dietary changes can lead to disturbances in the structure of the gut microbial community; in addition, different dietary habits may produce differences in the gut microbiota among healthy individuals [92][93][94]. During allo-HSCT, due to the influence of antibiotics, intestinal inflammation, and dietary changes, a significant decrease in the diversity of the gut microbiota can be observed [62]. The decrease in the diversity of the gut microbiota, colonization with multidrug-resistant bacteria, and infections are associated with an increased risk of GI GVHD [95]. It is worth noting that colonization with multidrug-resistant bacteria also significantly increases nonrelapse mortality and systemic infections, resulting in a decrease in the overall survival rate after allo-HSCT [96].
The gut commensal microbiota and its metabolites, such as lipopolysaccharides, short-chain fatty acids, and secondary bile acids, affect local and systemic immunity through direct and indirect interactions with immune cells, thereby aggravating or alleviating GVHD [97]. During allo-HSCT, Enterococcus, Streptococcus, and Proteobacteria can come to dominate the intestinal bacteria, with a significant decrease in diversity [14]. The expansion of enterococci, which belong to the order Lactobacillales, can damage the integrity of the intestinal epithelial barrier and stimulate macrophages to produce TNF [13], which may aggravate inflammation and damage the intestinal barrier. As the most important source of nutrition for enterococci, lactose can promote the expansion of enterococci after allo-HSCT; patients with lactose intolerance find it difficult to digest lactose, so it accumulates in their intestines, and Enterococcus can take advantage of the excess nutrients in the intestinal lumen to grow in large numbers [98]. After transplantation, if vancomycin-resistant enterococci become dominant, they are likely to cause VRE bacteremia, and the subsequent bacterial bloodstream infection greatly increases the posttransplant mortality rate [99].
Clostridium species can produce the SCFA butyrate, which upregulates Treg cells to exert anti-inflammatory effects [100,101]. At the same time, butyrate promotes the recovery of IECs from damage after allo-HSCT, reduces cell apoptosis, and relieves GVHD [102]. However, recent studies have shown that after the onset of GVHD, butyrate-producing bacteria are associated with the development of steroid-refractory GVHD: colonic inflammation leads to the loss of the intestinal barrier or crypts, the proliferation of colonic IECs exposed to microbially produced butyric acid is inhibited, and the recovery of the colonic mucosa is delayed. Butyrate-producing bacteria may thus help prevent the occurrence of aGVHD, but once aGVHD occurs in the intestine, they may impair the recovery of the intestinal mucosa, depending on the status of the colonic mucosa [103,104]. Additionally, an increase in the abundance of Blautia, a genus within the class Clostridia, is related to reduced lethal GVHD and improved overall survival, while the loss of Blautia is related to the use of antibiotics that inhibit anaerobes and to long-term total parenteral nutrition [105]. A decrease in Lachnospiraceae and Ruminococcaceae and an increase in Enterobacteriaceae are related to a Treg/Th17 imbalance, which induces the occurrence of aGVHD [106]. Wu et al. showed that TMAO enhanced M1 macrophage polarization via NLRP3 inflammasome activation, and the polarized inflammatory macrophages promoted Th1 and Th17 differentiation, which aggravated GVHD [107].
Indoles and indole derivatives, produced by commensal bacteria that metabolize tryptophan, may enhance the integrity of the IEC epithelial barrier and reduce inflammation, and they are a source of normal human fecal odor. Decreased urinary indoxyl sulfate (IS) levels in early post-allo-HSCT recipients (day +1 to day +10) were significantly associated with transplant-related mortality 1 year after transplantation; thus, IS may serve as a urine marker for monitoring GVHD [84,108]. Swimm et al. demonstrated that indoles produced from intestinal tryptophan had a protective effect against GVHD in a mouse model [109].
In a healthy host immune system, the diversity of the intestinal microbiota is maintained, and SIgA secreted by B cells can neutralize microbial antigens and cooperate with antimicrobial peptides (AMPs) produced by Paneth cells to prevent the overgrowth of pathogens. SCFAs, metabolites produced by the intestinal microbiota, can regulate the differentiation, recruitment, and activation of immune cells and improve the repair capacity and protective function of intestinal epithelial cells. Both chemotherapy and total body irradiation can destroy intestinal epithelial cells, leading to breakdown of the intestinal barrier. DAMPs released after intestinal epithelial cell death, bacterial translocation, and PAMPs can activate host APCs such as dendritic cells, leading to the release of proinflammatory cytokines and the activation of donor T cells, thereby promoting the occurrence of GVHD. Bacteria that penetrate the intestinal wall activate and recruit neutrophils to the site of bacterial infection; the neutrophils then damage other intestinal epithelial cells by releasing ROS. B cells, Paneth cells, and the mucus layer are thought to be targets of GVHD, and the destruction of these cells exacerbates the disruption of intestinal barrier function, leading to increased mortality after allogeneic hematopoietic stem cell transplantation.
Prophylaxis and Treatment of GI GVHD after HSCT
During allo-HSCT, GI GVHD prophylaxis is based on the rational use of various immunosuppressive agents and chemotherapeutic drugs (Figure 2). If diarrhea or abdominal pain is present, the application of glucocorticoids should be considered. Clinically, the diagnosis of GI GVHD still requires ruling out intestinal infections caused by specific pathogens (e.g., primary or reactivated cytomegalovirus infection) and drug-related side effects (e.g., MMF) [82,110].
Immunosuppressors Combined with Chemotherapeutic Drugs.
Tacrolimus (Tac) and CsA have been widely used for the prophylaxis of GVHD in HSCT recipients. Tacrolimus plus methotrexate and cyclosporine plus methotrexate have similar GVHD and survival rates, and they can be used in the setting of sibling or matched unrelated donor transplants. After Tac or CsA is applied, the blood concentration of the drug needs to be monitored regularly. MMF can limit the proliferation of T and B lymphocytes and suppress the immune system, and it may be an effective alternative to MTX for patients who have contraindications to methotrexate or require rapid engraftment (for example, patients with Aspergillus infection) receiving MAC treatment [111].
Corticosteroids.
With lympholytic and anti-inflammatory properties, corticosteroids are the first-line treatment for grades II to IV aGVHD or moderate to severe cGVHD, as well as newly diagnosed chronic GVHD [112].
The first-line treatment of aGVHD is methylprednisolone at an initial dose of 2 mg/kg per day. Grade II aGVHD with isolated skin or upper gastrointestinal tract manifestations can be treated with lower steroid doses, such as 1 mg/kg per day of methylprednisolone or prednisone [111]. REACH2, a phase III multicenter trial, treated 309 patients with grades II-IV steroid-refractory (SR) GVHD with ruxolitinib 10 mg twice daily or the investigator's choice of therapy and confirmed significant improvements in the efficacy outcomes of SR GVHD: ruxolitinib had a significantly higher Day 28 overall response rate (62% vs. 39%, p < 0.001) and durable overall response at Day 56 (40% vs. 22%, p < 0.001) [115]. Vedolizumab is a monoclonal antibody that blocks the α4β7 integrin, which mediates the migration of T cells to the gastrointestinal (GI) endothelium and gut-associated lymphoid tissue. In 29 patients who received 1 to 10 doses (median 3 doses) of vedolizumab 300 mg by intravenous injection as treatment for SR GI aGVHD, the overall response rate after 6 to 10 weeks was 64%, and the overall response rate at 6 months was 54%. There were 29 SAEs, including 12 infections; 3 SAEs were thought to be related to vedolizumab, and 2 of them were infections. Among the 8 patients with confirmed gastrointestinal infections, the timing of the infection had no clear pattern relative to the start of vedolizumab treatment [116].
Fecal Microbiota Transplantation.
Figure 2: Clinical interventions used for preventing, treating, and predicting GI GVHD. Clinically, DPP-4 inhibitors and probiotics are available for the prevention of GI GVHD, and targeted therapy for the treatment. Compared with parenteral nutrition, enteral nutrition can prevent GI GVHD and causes fewer complications. In addition, immunosuppressors, chemotherapeutics, corticosteroids, FMT, and MSC therapy can be used for both prevention and treatment of GI GVHD. Microbiota metabolism analyses and reduction in microbiota diversity can predict outcomes after transplantation and GI GVHD.

FMT is a process of transferring feces from a healthy donor to a recipient whose gastrointestinal microbiota balance has been disrupted. By introducing a healthy microflora, the microbiota structure
of the recipient returns to balance. FMT is a very effective method for the treatment of recurrent Clostridioides difficile infection (CDI). It can be administered by colonoscopy, nasogastric/nasoduodenal tube, capsule, or enema [117]. Kakihana et al. performed FMT for the treatment of 4 patients with steroid-resistant (n = 3) or steroid-dependent gut aGVHD (n = 1), with 3 complete responses and 1 partial response [118]. In a single-group pilot trial (NCT02733744), 13 patients who underwent allogeneic hematopoietic stem cell transplantation received third-party FMT capsules no later than 4 weeks after neutrophil engraftment. There was one treatment-related serious adverse event (abdominal pain); two patients subsequently developed acute gastrointestinal graft-versus-host disease (GI GVHD), and one patient also developed bacteremia. The median follow-up for survivors was 15 months (range, 13-20 months) [119]. In a prospective, single-center, single-arm study, after receiving FMT through a nasoduodenal tube to treat GI GVHD, 10 patients experienced a complete clinical response within 1 month after treatment [120]. For steroid-refractory or steroid-dependent GVHD, FMT has great therapeutic potential, but it remains to be seen whether the patient's gastrointestinal environment after transplantation can adapt to the gut microbiota of a healthy donor.
Cellular Therapy.
Mesenchymal stromal cells (MSCs) are a type of pluripotent stem cell. When MSCs are exposed to a proinflammatory environment, they produce more anti-inflammatory cytokines, such as transforming growth factor-β and interleukin-10, so MSCs from the bone marrow can play an important role in regulating immune tolerance and autoimmunity and can be used to treat GVHD [121]. In a pilot study, six children were treated with decidua stromal cells (DSCs) for steroid-refractory aGVHD. A complete response was observed in four children, and a partial response was observed in two children at 6 months [122]. A meta-analysis showed that infusion of MSCs prevents GVHD. The overall survival rate of the patients (95% CI, 1.02~1.33) was 17% higher than that of the control group, and the overall survival rate of the GVHD patients was positively correlated with the dose of bone marrow MSCs (p = 0.0214) [123]. Zhang et al. showed that mesenchymal stem cell-derived extracellular vesicles (MSC-EVs) enhanced Treg production, in an EV- and APC-dose-dependent manner, through an APC-mediated pathway in vitro and in vivo, and MSC-EVs alleviated GVHD symptoms and increased survival in a mouse model [124]. As soluble factors secreted by MSCs, MSC-EVs play an immunomodulatory role and may influence the bone marrow microenvironment through paracrine mechanisms [125,126]. MSC-EVs appear to be a promising noncellular therapy for the prophylaxis and treatment of GVHD after allo-HSCT in the future.
4.6. Dipeptidyl Peptidase 4 Inhibitor.
Dipeptidyl peptidase 4 (DPP-4 or CD26) is a costimulatory molecule expressed on T cells with a costimulatory function in T-cell activation, and downregulation of CD26 prevented GVHD but preserved graft-versus-leukemia effects in a mouse model [127]. In a phase 2 nonrandomized trial using sitagliptin, a DPP-4 inhibitor, in combination with tacrolimus and sirolimus, aGVHD occurred in 2 of 36 patients by Day 100, and nonrelapse mortality was zero at 1 year, suggesting that DPP-4 is a viable target for the prevention of aGVHD [128].
4.7. Probiotic Supplement.
As the most frequently used alternative treatments, Lactobacillus spp. and Bifidobacterium spp. have been proven effective against a variety of diarrheal diseases, including antibiotic-associated diarrhea, infectious diarrhea or gastroenteritis, irritable bowel syndrome, ulcerative colitis, necrotizing enterocolitis, constipation, and cystitis, and can also be used to prevent Clostridioides difficile infection [129]. Currently, the safety and feasibility of live biotherapeutic products (LBPs) in HSCT patients, including high-risk children with impaired intestinal mucosal integrity, have been verified experimentally: no cases of LBP bacteremia were observed in a total of 40 cases in two trials [130,131]. Studies in mice have shown that administration of the probiotic Lactobacillus rhamnosus GG reduced the incidence of GVHD after HSCT [132]. However, the effectiveness of probiotics in the prevention or treatment of GVHD still needs to be verified by more clinical studies, because current studies have small sample sizes and low frequencies of outcomes.
Microbial Metabolites.
Indole and indole-3-carboxaldehyde (ICA) produced by tryptophan metabolism in the intestinal microbiota can limit the development of GI GVHD via type I interferon signaling while preserving antitumor responses. Swimm et al. observed that the incidence and mortality of GVHD in allo-HSCT recipient mice colonized with tryptophanase-positive strains of Escherichia coli or given ICA were greatly reduced. In addition, colons from ICA-treated mice at Day 21 posttransplant showed fewer pathological changes, such as crypt loss, apoptosis, and inflammation [109]. Administration of exogenous butyrate or colonization by butyrate-producing bacteria can restore butyrate levels and promote histone acetylation in intestinal epithelial cells (IECs), which results in increased expression of antiapoptotic proteins involved in barrier integrity, thereby mitigating GVHD [133]. ICA and butyrate treatment promise to be therapeutic options for posttransplant patients at risk for GVHD, but more studies are required to demonstrate the safety of ICA and butyrate before they can be used in clinical trials.
Conclusions
Graft-versus-host disease (GVHD) is a disease caused by T lymphocytes in allogeneic donor grafts after transplantation: stimulated by a "cytokine storm" initiated in the recipient, these cells greatly enhance their immune response to recipient antigens and launch cytotoxic attacks on recipient target cells. At present, increasing evidence suggests that the gut commensal microbiota and its metabolites, along with changes in the intestinal environment, play positive or negative roles in GI GVHD immunology. Therefore, intestinal microecology should not be ignored when studying its interactions with immune cells.
Research on the role of the gut microbiota in the intestinal mucosa has gradually improved. Meanwhile, FMT and probiotic supplements have been tested in clinical trials with promising results. In the future, we should devote more effort to understanding the effects of bacterial metabolites on the intestinal mucosa to develop more effective methods for the prophylaxis and treatment of GI GVHD.
Data Availability
The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding authors.
"year": 2022,
"sha1": "7d2631c7376554e7c8d504b1ad458f83910bb05f",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/omcl/2022/2124627.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dec084865574de012bccc7ff6c730efc4fc88e3b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Bioactivity‐guided isolation of anti‐inflammatory limonins from Chukrasia tabularis
Abstract Chukrasia tabularis is an economically important tree widely cultivated in southeastern China. Its bark, leaves, and fruits are consumed as a traditional medicine and perceived as a valuable source of bioactive limonin compounds. The extracts from root barks of C. tabularis showed a significant anti‐inflammatory effect. The aim of this research was to explore the material basis of the anti‐inflammatory activity of C. tabularis, and to purify and identify the anti‐inflammatory active ingredients. Bioassay‐guided isolation of the dichloromethane fraction afforded two novel phragmalin limonins, Chukrasitin D and E (1 and 2), together with 12 known limonins (3–14). The chemical structures of these compounds were determined on the basis of extensive spectral analysis and chemical reactivity. In addition, the activities of the isolated limonins on the production of nitric oxide (NO), tumor necrosis factor alpha (TNF‐α), and nuclear factor kappa B (NF‐κB) in RAW264.7 cells induced by lipopolysaccharide (LPS) were evaluated. Limonins 1 and 2 showed significant anti‐inflammatory activity with IC50 values of 6.24 and 6.13 μM. Compound 1 notably inhibited the production of NF‐κB, TNF‐α, and interleukin 6 (IL‐6) in macrophages. The present results suggest that the root barks of C. tabularis exhibit an anti‐inflammatory effect and that the limonins may be responsible for this activity.
Previous chemical research on this plant has provided a series of phragmalin limonins. Limonins are nortriterpenoids with diversified structures and a wide range of bioactivities, such as insect antifeedant, antimalarial, and anticancer activities (Liao et al., 2009; Tan & Luo, 2011; Fang et al., 2011).
In the physiological responses to infection or damage, macrophages have a special impact on the progress of inflammatory processes (Alivernini et al., 2020). Both the production of pro-inflammatory mediators and the aggravation of inflammation are inseparable from the action of macrophages (Eissa et al., 2018). Many pro-inflammatory cytokines, such as tumor necrosis factor alpha (TNFα), interleukin 1β (IL-1β), and interleukin 6 (IL-6), originate from macrophages. Nuclear factor kappa B (NF-κB) is an example of signal transduction and gene modulation associated with macrophage infiltration and activation (Sae-Tan et al., 2020). When activated, NF-κB causes the production of pro-inflammatory cytokines such as TNFα, IL-6, and IL-1β. Given the potential relevance of inflammation and macrophages, it is important to find ways to modulate the expression of inflammatory cytokines and control the activation of macrophages.
Lipopolysaccharide (LPS) has been widely used to stimulate macrophages in inflammatory models in experiments on anti-inflammatory mechanisms. After LPS stimulation, NF-κB signaling cascade was activated, resulting in changes in related protein expression (Ren et al., 2020).
In recent years, the anti-inflammatory, antitumor, and antioxidant activities of Chukrasia tabularis have been widely reported (Kaur et al., 2011). In our studies on the anti-inflammatory constituents of Meliaceae plants, two new phragmalin limonoid orthoesters, Chukrasitin D and E (1 and 2) (Figure 1), were isolated and identified from the root barks of C. tabularis, together with 12 known limonoids (3–14). In this study, we report the separation, structure identification, and bioassay results of the extracts and isolated compounds. The in vitro anti-inflammatory assay of compounds 1-14 on LPS-stimulated macrophages showed that limonins 1 and 2 displayed a significant inhibitory effect. In addition, the effects of limonin 1 on the production of nitric oxide, NF-κB, and TNFα in RAW 264.7 cells induced by LPS and its possible anti-inflammatory mechanisms were also evaluated. Therefore, the current study focused on the anti-inflammatory evaluation of C. tabularis extracts and isolated limonins.
| Reagents and materials
The optical rotation was obtained using a JASCO P-1020 polarimeter. Infrared (IR) spectra were measured on a Nicolet 170SX FT-IR spectrometer, and ultraviolet (UV) spectra were recorded on a 210A UV spectrometer. The nuclear magnetic resonance (NMR) spectra were recorded on a 400 MHz Bruker spectrometer. Electrospray ionization mass spectrometry (ESIMS) and high-resolution electrospray ionization mass spectrometry (HRESIMS) data were measured on a 2020 LCMS spectrometer and a Bruker APEX II mass spectrometer, respectively.
| Preparation of extracts from C. tabularis and bioassay-guided separation
The anti-inflammatory test of xylene-induced ear edema in mice showed that the dichloromethane extract had significant antiinflammatory activity (Table 1), so the dichloromethane phase was selected for further separation. Subfractions of dichloromethane extracts Fr.C and Fr.D showed significant anti-inflammatory activity by mouse xylene auricle swelling experiments (Table 1), so isolation and purification focused on these two fractions.
The chipped root bark of C. tabularis (5.6 kg) was extracted three times with MeOH at room temperature, for 7 days each time (20 L each). The combined solution was evaporated in vacuo to give a brownish extract (890 g). The residue was suspended in H2O and partitioned successively with petroleum ether (PE), dichloromethane (CH2Cl2), ethyl acetate, and n-butanol.
| Laboratory animals
Male Institute of Cancer Research (ICR) mice (18 ± 2 g), specific pathogen free, were supplied by the experimental animal center of Fujian
| Xylene-induced ear edema in mice
The extracts were dissolved in 0.5% CMC-Na (sodium carboxymethyl cellulose), and aspirin was applied as a positive control. One hour after gavage of the extracts or control, the right ear of each mouse was treated with 40 μl of xylene solution, with the left ear serving as a control. One hour after xylene treatment, each mouse was euthanized by cervical dislocation. A circular section 6 mm in diameter was punched from each ear and weighed on an electronic analytical balance, and the inhibitory activity on ear edema was calculated (Table 1).
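The ear-edema inhibition described above reduces to a simple difference-and-ratio calculation: swelling is the treated (right) ear-disc mass minus the untreated (left) ear-disc mass, and inhibition compares group means against the vehicle control. A minimal sketch — all masses below are hypothetical, not data from the study:

```python
def ear_swelling(right_mg, left_mg):
    """Edema mass of one mouse: treated (right) ear disc minus control (left) ear disc."""
    return right_mg - left_mg

def inhibition_rate(control_swellings, treated_swellings):
    """Percent inhibition of ear edema relative to the vehicle control group."""
    mean = lambda xs: sum(xs) / len(xs)
    c, t = mean(control_swellings), mean(treated_swellings)
    return (c - t) / c * 100.0

# Hypothetical ear-disc masses (mg) for a vehicle group and an extract group
control = [ear_swelling(18.2, 9.1), ear_swelling(17.5, 9.0), ear_swelling(18.9, 9.3)]
treated = [ear_swelling(14.0, 9.2), ear_swelling(13.6, 9.0), ear_swelling(14.4, 9.1)]
print(round(inhibition_rate(control, treated), 1))
```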
| In vitro anti-inflammatory activities
RAW 264.7 cells obtained from the China Center for Cultivated Studies (Shanghai, China) were maintained in DMEM supplemented with 1% penicillin-streptomycin and 10% fetal bovine serum under 5% CO2 at 37°C. Cells were stimulated with LPS. In brief, cells were seeded in 96-well plates (1 × 10^5 cells/well). After 2 h of preincubation, LPS (2 μg/ml) and the test compounds were added, and the samples were incubated for 24 h. The cell culture supernatant was collected 24 h later, and NO was detected with the Griess reagent (Gasparotto et al., 2013).
| Measurement of NF-κB, IL-6, and TNFα production
The levels of NF-κB, TNFα, and IL-6 were determined by enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's protocol. The standard solution and the antibody-bearing sample were incubated at 37°C for 60 min, the working solution was added and incubated at 37°C for 30 min, and the wells were washed. Tetramethylbenzidine (TMB) was then added, and the TMB termination solution was added after 20 min. Finally, the absorbance at 450 nm was recorded.
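Reading cytokine levels off an ELISA amounts to inverting a calibration fit of absorbance at 450 nm against known standards. A minimal least-squares sketch (not the kit's software; all standard concentrations and readings below are invented, and a simple linear curve is assumed rather than the four-parameter fits many kits use):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def absorbance_to_conc(a450, slope, intercept):
    """Invert the calibration line: concentration = (A450 - intercept) / slope."""
    return (a450 - intercept) / slope

# Hypothetical TNF-alpha standards (pg/ml) and their blank-corrected A450 readings
std_conc = [0, 31.25, 62.5, 125, 250, 500]
std_a450 = [0.05, 0.12, 0.20, 0.36, 0.68, 1.32]
m, b = linear_fit(std_conc, std_a450)
print(absorbance_to_conc(0.50, m, b))  # pg/ml for an unknown well
```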
| Statistical analysis
The data obtained were expressed as mean ± SD. All experiments were performed in triplicate. The t-test was used to assess differences between groups using IBM SPSS Statistics 24.
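The group comparison above can be sketched with a Welch (unequal-variance) t statistic. Note the paper used SPSS for this; the implementation and the NO readings below are purely illustrative:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's unequal-variance t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances (n-1)
    se2 = va / na + vb / nb
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical NO readings (uM) from triplicate wells: LPS alone vs. LPS + limonin 1
lps = [42.1, 40.8, 43.5]
treated = [18.9, 20.2, 19.4]
t, df = welch_t(lps, treated)
print(t, df)
```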
| Bioactivity-guided extraction and isolation of active components
The anti-inflammatory activities of the methanolic, petroleum ether, dichloromethane (CH2Cl2), EtOAc, and n-butanol extracts and fractions from the root barks of C. tabularis were assessed in vivo by xylene-induced ear edema in mice. The results showed that the dichloromethane extract displayed significant anti-inflammatory activity, with an inhibition rate of 42.41% (400 mg/kg) (Table 1).
The subfractions of dichloromethane extract Fr.C and Fr.D exhibited significant anti-inflammatory activities with inhibitory values of 43.65% and 42.93% (400 mg/kg). Two novel phragmalin limonins, Chukrasitin D (1) and E (2), together with 12 known limonins (3-14), were separated and identified from Fr.C and Fr.D (Figure 1).

| Structural elucidation of isolated compounds

The NMR data of 1 revealed 10 methyl groups (three methoxys), seven methylene groups, five methine groups (two oxygenated), and 13 quaternary carbons (five oxygenated). In addition, a comprehensive analysis of its 1H NMR (proton nuclear magnetic resonance) and 13C NMR (carbon nuclear magnetic resonance) data (Table 2) showed the presence of three methyl esters, one orthoacetate moiety, one propanoyl group, and one 3-methylbutyryl group. Molecule 1 has 11 degrees of unsaturation, of which 5 are accounted for by the five ester carbonyls; the remaining 6 degrees of unsaturation require 1 to possess a hexacyclic core. The foregoing data indicated that 1 was a limonoid orthoester of the phragmalin type (Lin et al., 2009).
Extensive 2D NMR (two-dimensional nuclear magnetic resonance) analysis distinguished among the possible 8,9,14-, 8,9,30-, and 1,8,9-orthoacetate linkages (Lin et al., 2011; Silva et al., 2008; Zhang, Fan, et al., 2008). On the basis of the above results, the relative configuration of 1 was assigned as shown in Figure 1. The absolute configuration of 1 was finally proved by comparing experimental and computational ECD data (Figure 2), a suitable method for solving the absolute configuration of natural products. Thus, the stereochemistry of 1 was established as shown in Figure 1 (Figure S1).

Table 2: 1H NMR (proton nuclear magnetic resonance, 400 MHz) and 13C NMR (carbon nuclear magnetic resonance, 100 MHz) spectroscopic data for 1 and 2.
Chukrasitin E (2) was found to possess a quasimolecular ion peak in its mass spectrum, and its stereochemistry was established as shown in Figure 1 (Figure S9).
| Anti-inflammatory effects of separated limonins from C. tabularis
Nitric oxide is a major signaling molecule with dual roles as a biological messenger and a cytotoxic agent. It is involved in the pathogenesis of inflammation, is overproduced in LPS-stimulated macrophages, and serves as an indicator of inflammation (Jeon et al., 2016; Keisuke et al., 2021). The in vitro anti-inflammatory effects of limonins (1-14) were determined in LPS-stimulated RAW 264.7 cells by evaluating the production of NO. Cell viability determination showed that limonins (1-14) had no toxicity to RAW 264.7 cells at a concentration of 100 μM. To determine whether limonins (1-14) suppressed NO production by LPS-stimulated RAW 264.7 cells, the concentration of NO in medium containing these limonins was evaluated. As shown in Table 3, all 14 limonins exhibited anti-inflammatory effects at the tested concentrations. The results showed that the D-ring-opened phragmalin limonoid orthoesters (1-2) displayed strong NO inhibitory activities, while limonins (3-14) showed potent to moderate activity. Limonins 1-2 displayed significant anti-inflammatory activities
with IC50 values of 6.24 and 6.13 μM. Limonoids 3-14 showed effective anti-inflammatory effects, with IC50 values between 12.30 and 50.19 μM. Considering that anti-inflammatory components are found in the root bark of C. tabularis, it can be regarded as a source of natural anti-inflammatory molecules. It is worth noting that limonins 1 and 2 showed the strongest anti-inflammatory activity. Therefore, the potential anti-inflammatory activity and molecular mechanism of compound 1 were further studied.

Figure 2: Calculated and experimental electronic circular dichroism (ECD) spectra of 1.
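IC50 values such as those reported here are typically read off dose-response curves. As a rough illustration (not the authors' actual fitting procedure — all doses and inhibition values below are invented), the 50% crossing can be interpolated on a log-concentration scale between the two bracketing doses:

```python
import math

def ic50(concs_uM, inhibition_pct):
    """Interpolate the concentration giving 50% inhibition on a log-concentration scale.

    Assumes inhibition increases monotonically with dose and brackets 50%.
    """
    for (c1, i1), (c2, i2) in zip(zip(concs_uM, inhibition_pct),
                                  zip(concs_uM[1:], inhibition_pct[1:])):
        if i1 <= 50.0 <= i2:
            frac = (50.0 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical NO-inhibition data for one limonin
doses = [1.0, 3.0, 10.0, 30.0]     # uM
inhib = [12.0, 35.0, 68.0, 90.0]   # % inhibition of LPS-induced NO
print(round(ic50(doses, inhib), 2))
```

In practice a four-parameter logistic fit over the full curve is more robust than two-point interpolation, but the bracketing step above is the same idea.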
Excessive production of pro-inflammatory cytokines exacerbates a variety of diseases, including allergies, autoimmune diseases, and cancer (Benedetto et al., 2019; Guo et al., 2019). We investigated the effect of limonin 1 on LPS-stimulated pro-inflammatory cytokine production in RAW 264.7 cells. The results in Figure 3 show that the levels of NF-κB, IL-6, and TNFα in the LPS group were notably higher than those in the control group. As shown in Figure 3a-c, the addition of limonin 1 notably suppressed the production of NF-κB, IL-6, and TNFα in a dose-dependent manner. These results indicate that limonin 1 suppresses the expression of NF-κB, IL-6, and TNFα in LPS-stimulated macrophages, thereby achieving its anti-inflammatory activity.
This anti-inflammatory regulation of macrophages may partly underlie the protective activity of Chukrasia tabularis against inflammatory diseases.
| DISCUSSION
Macrophages are involved in most inflammatory responses, including those to LPS stimulation, and secrete pro-inflammatory mediators such as NF-κB, TNFα, and IL-6 (Kim et al., 2020). Modern studies have suggested that natural products may inhibit inflammation by modulating NO or inflammatory factors in macrophages (Fang et al., 2008).
Among these natural products, thorough in-depth research on homologous medicinal and edible plants has found limonin to be a main anti-inflammatory active ingredient (Fan et al., 2019), acting mainly through inhibition of inflammatory mediators in the NF-κB signaling cascade (Jin et al., 2018). C. tabularis bark and fruit extracts have been proven to have anti-inflammatory activities by inhibiting pro-inflammatory mediators such as NO, TNFα, and IL-6 (Perianayagam et al., 2004).
However, few reports have focused on anti-inflammatory activities and mechanism of limonins in C. tabularis extracts (Yang et al., 2020).
In this study, a combination of octadecyl silica gel, Sephadex LH-20, and HPLC was used to separate anti-inflammatory limonins from C. tabularis extracts. LPS is an effective trigger for inflammatory responses (Pandher et al., 2021).
Inflammation is the main risk factor for many diseases, and macrophages are the primary immune cells and the first line of defense against pathogen invasion (Leseigneur et al., 2020). In the process of inflammation, macrophages produce excess inducible nitric oxide synthase as an inflammatory mediator, along with pro-inflammatory cytokines such as TNFα, IL-6, and IL-1β (Huang et al., 2019). NO is a biological signaling and effector element that modulates the expression of pro-inflammatory cytokines (Zamora et al., 2000). Inflammatory damage is thought to be caused by the excessive production of NO-induced pro-inflammatory cytokines (Kany et al., 2019; Zhang et al., 2017).
Excessive production of nitric oxide occurs in inflammation and various diseases, where NO plays a cytotoxic role in the pathological process (Lea et al., 2020; Shao et al., 2013). Therefore, suppression of NO production is important for the prevention of inflammatory disease. Among inflammatory stimulants, LPS induces macrophage activation, leading to the release of pro-inflammatory cytokines in the inflammatory response (Bonizzi & Karin, 2004; Ronchetti et al., 2017). Cytokines can cause fever, shock, and various inflammatory diseases. Therefore, it is essential to suppress their overproduction. In this work, two novel limonins (1 and 2) have been identified in C. tabularis root bark and proved to have strong anti-inflammatory activities, providing a theoretical basis for the anti-inflammatory application of C. tabularis.
| CON CLUS IONS
Screening for anti-inflammatory activity of Chukrasia tabularis root bark extracts led to the separation of 14 limonins, including two novel phragmalin limonoids (1-2) and 12 known limonoids (3-14). All isolated compounds were evaluated for inhibition of NO production in LPS-stimulated RAW 264.7 cells. The results showed that the D-ring-opened phragmalin limonoid orthoesters (1-2) displayed strong NO inhibitory activities, while limonins (3-14) showed potent to moderate activity. Limonins 1-2 displayed significant anti-inflammatory activities with IC50 values of 6.24 and 6.13 μM. Compound 1 inhibited the production of NO and TNFα in stimulated cells, reducing NO and TNFα levels during the inflammatory process.
These results provide basic information for further research on utilizing C. tabularis as natural anti-inflammatory resource.
ACKNOWLEDGEMENT
This work was financially supported by the National Natural
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT
The data presented in this study are available in the Supplementary Materials.
"year": 2022,
"sha1": "2eb2430eb7627420f9eb4d9c248f09c29490bcc7",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/fsn3.3015",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c94538dd057155ab39b3c2d11f031c7fae968b5d",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Uncertainty Quantification of Thermophysical Property Measurement in Space and on Earth: A Study of Liquid Platinum Using Electrostatic Levitation
A study of uncertainty analysis was conducted on four key thermophysical properties of molten Platinum using a non-contacting levitation technique. More specifically, this work demonstrates detailed reporting of the uncertainties associated with the density, volumetric thermal expansion coefficient, surface tension, and viscosity measurements at high temperatures for a widely used refractory metal, Platinum, using electrostatic levitation (ESL). The microgravity experiments were conducted using JAXA's Electrostatic Levitation Furnace (ELF) facility on the International Space Station, and the terrestrial experiments were conducted using NASA's Marshall Space Flight Center ESL facility. The performance of these two facilities was then quantified based on measurement precision and accuracy using the metrological International Standards Organization's Guide to the Expression of Uncertainty in Measurement (GUM) principles.
INTRODUCTION
Quantifying uncertainty in thermophysical properties of metals and alloys is a prerequisite for the manufacturing of reliable products and, more importantly, for the development of new high-performance alloys. Yet, rigorous uncertainty quantification is rarely found in the literature values reported for these properties. The present norm is to report the mean and standard deviation of experimental values, which fails to provide any insight into the underlying distribution of unknown residuals. Due to this poor reporting of variability, the true extent of error propagation during uncertainty quantification is often overlooked, which hinders full utilization of the significant potential of metal additive manufacturing. Systematic reporting of uncertainty quantification and uncertainty management can lead to understanding of the variation in the quality of manufactured parts.
A systematic facility performance evaluation (FPE) based on accuracy and precision, developed by the author (Nawer & Matson 2023), was used to evaluate the reliability of the reported properties. Accuracy can be measured with respect to an accepted literature value, and precision can be measured from the uncertainties associated with each measurement. Uncertainty analysis was conducted using the widely accepted metrological International Standards Organization's Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM/WG1 2008) rules. This FPE technique has successfully been utilized to evaluate the performance of several levitation facilities for pure Zirconium (Nawer & Matson 2023), pure Gold (Nawer et al. 2023a), and the Ni-based superalloy CMSX-4 Plus (Nawer et al. 2023b). The FPE technique was extended here to evaluate the accuracy and precision of density, volumetric thermal expansion coefficient (CTE), surface tension, and viscosity measurements for another pure metal, Platinum.
Platinum has been widely used in various industrial and commercial applications due to its inert nature, high corrosion and oxidation resistance, and high melting point, coupled with good catalytic activity and biocompatibility (Knapton 1979). Apart from its vast usage in the chemical industry as a catalyst and its numerous medical and jewelry applications, Platinum-based superalloys are used in high-temperature space applications (Behrends et al. 2000). Platinum has been extensively studied using several ground-based levitation facilities due to its excellent physical characteristics and chemical stability (Wilthan et al. 2004; Ishikawa et al. 2006; Paradis et al. 2014; Watanabe et al. 2020), which makes it a perfect candidate for microgravity experiments. Microgravity testing is used to provide enhanced fidelity (Egry 2004) of results, purportedly due to three main effects. First, without strong gravitational accelerations and with reduced levitation forces, a more spherical sample allows for better analysis of experimental behavior. Second, better control of convection in space results in higher measurement precision. Third, sedimentation and buoyancy-induced segregation are eliminated in reduced gravity. These unique advantages of the microgravity environment provide researchers with an excellent opportunity to quantify FPE across various levitation platforms. Platinum was chosen as a baseline reference material for this study, along with Gold and Zirconium, as part of the Microgravity Physical Sciences Community's international collaborations on property measurements (Matson et al. 2016). A series of tests has been conducted using the containerless electrostatic levitation (ESL) technique to study the uncertainties associated with thermophysical property measurements of this noble metal. This paper provides insights into the FPE of two space- and ground-based ESL facilities, with an aim to encourage appropriate reporting of uncertainties across the scientific community.
Experimental
Two ESL facilities were used for investigating the thermophysical properties of pure Platinum (99.95%, Lot number 16470, Surepure Chemetals Inc.) for this study. Microgravity testing was conducted using the Japan Aerospace Exploration Agency's (JAXA) Electrostatic Levitation Furnace (ELF) facility on the KIBO module of the International Space Station (ISS) (Tamaru et al. 2018). Experiments were conducted from the command control station at the JAXA Tsukuba Space Center in Tsukuba, Japan. In parallel, terrestrial experiments were conducted using the NASA Marshall Space Flight Center's (MSFC) ESL facility in Alabama, USA (Nawer et al. 2020a). Both facilities use laser heating systems, and both monitor sample surface temperature using infrared (IR) pyrometers. Sample shadow images were used for density and CTE measurement during free cooling of the samples in both facilities. Platinum has a high work function, similar to Gold (Sachtler et al. 1966), which made it quite difficult to remove electrons from the solid surface during heating and to achieve stable levitation during space testing. During ground testing under controlled slow heating conditions, droplet oscillation was successfully induced on the molten sample during several temperature holds. High-speed digital video recordings from these tests were used for surface tension and viscosity analysis. Fig. 1 shows typical images of the post-processed sample surfaces, one from each facility, obtained using scanning electron microscopy (SEM) at the Institute of Materials Physics in Space, German Aerospace Center (DLR), Cologne.
Data Analysis
Evaluation of video data of the molten sample was performed using a customized edge detection algorithm with sub-pixel precision. The radius (r) of the near-spherical sample was first fitted with a 6th-order Legendre polynomial function (Bradshaw et al. 2005), and the volume (V) was then measured about the vertical axis of symmetry as a function of polar angle (θ):

V = (2π/3) ∫₀^π r(θ)³ sin θ dθ

Dynamic sample mass was estimated after accounting for mass loss due to evaporation using Langmuir's equation (Nawer et al. 2020b). Density of the liquid sample was then measured from the dynamic mass (m) and apparent volume (V). Volumetric CTE (β) at constant pressure can also be measured from the volume of the liquid at melting by considering the change in the slope of the volume versus temperature curve. Droplet oscillation was induced on the molten sample by superimposing a sinusoidal voltage on the electrical field for a short duration (Cheng 1985), and the sample's oscillatory motion was then allowed to be damped out by its viscosity once the excitation was removed. The natural frequency (f_n) of the sample at mode l = 2,0 was identified by conducting frequency sweeps during Faraday forcing (Douady 1990) at a steady thermal hold, as shown in Fig. 2(a). Two methods were used to confirm the findings from the frequency sweeps. The first is the maximum amplitude (MA) method (Egry et al. 1995), in which the largest sample deformation, δ = ΔA / 6A₀, is observed with increasing applied forcing frequency. The second is the frequency crossover (FC) method (Nawer & Matson 2023), which identifies the natural frequency at the intersection of the linear fits of forced and damped frequencies. From the frequency sweep shown in Fig. 2(a), the natural frequency identified by the MA method is 154.39 Hz and by the FC method is 154.81 Hz. These two values are in excellent agreement.
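The volume step above can be sketched numerically: evaluate the Legendre-series radius by the standard three-term recurrence and integrate the axisymmetric volume relation with a trapezoidal rule. This is not the authors' code; the perfect-sphere check at the end is purely illustrative.

```python
import math

def legendre_radius(theta, coeffs):
    """Radius r(theta) from Legendre coefficients c0..c6 (recurrence evaluation)."""
    x = math.cos(theta)
    p_prev, p = 1.0, x                     # P0, P1
    r = coeffs[0] * p_prev + coeffs[1] * p
    for n in range(1, len(coeffs) - 1):    # build P2..P6 via (n+1)P_{n+1} = (2n+1)xP_n - nP_{n-1}
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
        r += coeffs[n + 1] * p
    return r

def droplet_volume(coeffs, n_steps=2000):
    """V = (2*pi/3) * integral_0^pi r(theta)^3 sin(theta) dtheta (trapezoidal rule)."""
    h = math.pi / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        theta = i * h
        f = legendre_radius(theta, coeffs) ** 3 * math.sin(theta)
        total += f * (0.5 if i in (0, n_steps) else 1.0)
    return (2 * math.pi / 3) * total * h

# Sanity check: a perfect sphere of radius 1 (only the c0 term nonzero) gives 4*pi/3
sphere = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(droplet_volume(sphere))
```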
Surface tension (σ) of the liquid sample was measured using the identified mode l = 2,0 frequencies. Viscosity (η) was measured from the sample density, the unperturbed radius (r₀), and the time constant (τ_n) of the decaying signal once the forcing ceased, as shown in Fig. 2(b).

Fig. 2.
Droplet oscillation data analysis. (a) Frequency sweep conducted at an excitation amplitude of 3.0 V at a hold temperature of 2,164.71 ± 6.29 K and (b) damped oscillation of the sample observed at an applied frequency of 154 Hz.
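The mode l = 2,0 relations commonly used in ESL work can be sketched as below: the Rayleigh relation for the surface tension of an inviscid free droplet and the Lamb relation for viscous damping. These formulas are the standard ones for this technique, not quoted from this paper, and the numerical inputs are illustrative rather than the measured campaign values.

```python
import math

def rayleigh_surface_tension(mass_kg, f_n_hz):
    """Rayleigh relation for mode l = 2 of an inviscid free droplet:
    f_n^2 = 8*sigma / (3*pi*m)  ->  sigma = 3*pi*m*f_n^2 / 8."""
    return 3.0 * math.pi * mass_kg * f_n_hz**2 / 8.0

def lamb_viscosity(density_kg_m3, r0_m, tau_s):
    """Lamb damping relation for mode l = 2:
    tau = rho*r0^2 / (5*eta)  ->  eta = rho*r0^2 / (5*tau)."""
    return density_kg_m3 * r0_m**2 / (5.0 * tau_s)

# illustrative inputs: ~61 mg droplet, f_n ~ 154.6 Hz, r0 ~ 1.2 mm, tau ~ 0.84 s
sigma = rayleigh_surface_tension(6.1e-5, 154.6)   # ~1.7 N/m scale
eta = lamb_viscosity(19200.0, 1.2e-3, 0.84)       # ~6.6 mPa*s scale
```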
Density and Thermal Expansion Coefficient (CTE) Measurement
Platinum was successfully levitated and melted in both the ground and space facilities. Densities in the liquid and undercooled phases measured at these facilities are shown in Fig. 3. The density at the melting point measured with the ISS-ELF is within 1% of that measured at the NASA MSFC ESL. Both values are within 1% of the ground-based ESL values reported by Ishikawa et al. (2006), and both are also within 3% of the sessile-drop values reported by Dubinin et al. (1975) and the pulse-heating values of Hixson & Winkler (1993) and Gathers et al. (1978). The slopes of the individual datasets from the NASA MSFC ESL show a slight deviation, which can be attributed to the accuracy of the pyrometer data. The density measured in the ISS-ELF deviates from the expected linear behavior above 2,200 K, which is attributed to noise from sample movement during the experiment.
Surface Tension and Viscosity Measurements
Droplet oscillation was successfully conducted at the NASA MSFC ESL facility. Surface tension was measured using the natural frequencies identified by both the FC and MA methods, as shown in Fig. 4(a). Unlike for other metallic melts (Nawer & Matson 2023), the linear trend of the observed damped frequencies during the frequency sweep did not change significantly with the applied frequency, as shown in Fig. 2(a). Thus, a third measurement approach, the damped response (DR) method, can be proposed for surface tension measurement, as shown in Fig. 4(a). The apparent scatter seen in Fig. 2(a) indicates that the DR values are of lower confidence than those obtained with the FC method. Intercept regression error may be used to quantify the systematic error introduced by a specific analysis technique; the coefficient of variation (the ratio of the standard deviation to the population mean) was 11.8% and 12.55% for the DR and FC methods, respectively. The surface tension values are in good agreement with the literature values measured by Ishikawa et al. (2006) using ground-based ESL. Trends in surface tension were not quantified because of the small dataset available in the current work.
Viscosity of liquid platinum was measured at the NASA MSFC ESL and is shown in Fig. 4(b). The measured viscosity values are in excellent agreement with the values reported by Zhuchenko et al. (1977) and Ishikawa et al. (2006).
Facility Performance Evaluation (FPE) through Uncertainty Quantification
A detailed uncertainty analysis was conducted on the measured properties from both ESL facilities following the GUM guidelines, and the results of this evaluation are listed in Table 1. Platinum samples processed on the ground exhibited less noise in the recorded temperature than the space samples because the larger applied electrostatic force better restricted sample movement. Ground tests were conducted in an ultra-high-vacuum environment, which resulted in higher evaporation than in the space tests, where the samples were processed in a shielding argon atmosphere that reduced mass loss; this contributed to a higher mass uncertainty for the ground tests. ISS-ELF samples exhibited slightly higher uncertainty in the sample radius and volume measurements obtained from the image analysis. Uncertainty in density was evaluated by combining the uncertainties from the mass and volume measurements. The estimated melting-point density is 19,234.62 ± 294.84 kg·m⁻³ for the ISS-ELF and 19,196.34 ± 287.7 kg·m⁻³ for the NASA MSFC ESL. The uncertainty in the CTE measured with the ISS-ELF is estimated to be higher than that of the NASA MSFC ESL because of the higher uncertainties in volume and slope. Uncertainty in surface tension was estimated by combining the uncertainties from mass and frequency; the melting-point surface tension, estimated from a linear fit to the measured values, is 1.73 ± 0.02 N·m⁻¹. The melting-point viscosity is estimated to be 6.63 ± 0.22 mPa·s, with its uncertainty obtained by combining the uncertainties from the radius, density, and time constant.
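For independent inputs in a product or quotient model such as ρ = m/V, the GUM combination reduces to quadrature of the relative standard uncertainties. A minimal sketch (the 1.0% and 1.1% figures below are illustrative, not the campaign's actual input uncertainties):

```python
import math

def combined_relative_uncertainty(*rel_u):
    """GUM quadrature for a product/quotient model, e.g. rho = m/V:
    u_rho/rho = sqrt((u_m/m)^2 + (u_V/V)^2)."""
    return math.sqrt(sum(u * u for u in rel_u))

# e.g. density from illustrative 1.0% mass and 1.1% volume uncertainties
u_rel_density = combined_relative_uncertainty(0.010, 0.011)
```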
The reported uncertainties from Table 1 were used to calculate the coefficient of variation, which represents precision, while the deviation from the literature value represents accuracy: CV = u/x̄ and D = (x̄ − x_lit)/x_lit. Accuracy and precision of the measured properties can be visualized using a measurement of accuracy and precision (MAP) plot. Accuracy depends on the choice of literature value; previously reported values obtained using levitation were used for consistency. A smaller deviation indicates higher accuracy, and the sign of the deviation indicates whether the measured property is higher or lower than the literature value. Precision is estimated from the experimental values obtained in this study and is recommended for FPE. As with accuracy, lower values of the coefficient of variation indicate higher measurement precision. Hence, lower values of deviation and coefficient of variation indicate higher accuracy and precision.
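The two MAP-plot metrics can be written out directly. The density value and uncertainty below are the ISS-ELF figures quoted above; the literature reference value in the deviation example is illustrative only.

```python
def coefficient_of_variation(u, mean):
    """Precision metric: standard uncertainty divided by the mean."""
    return u / abs(mean)

def relative_deviation(measured, literature):
    """Accuracy metric: signed relative deviation from the literature value."""
    return (measured - literature) / literature

# ISS-ELF melting-point density quoted above: 19,234.62 +/- 294.84 kg/m^3
cv = coefficient_of_variation(294.84, 19234.62)   # ~1.5% precision
```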
In Fig. 5, MAP plots of platinum for the four measured properties at its melting point are shown along with previously reported zirconium measurements in the ISS-ELF, ISS electromagnetic levitation (EML), and NASA MSFC ESL facilities (Nawer & Matson 2023). Accuracy of density and surface tension for platinum is displayed using Ishikawa's ESL measurements (Ishikawa et al. 2006) as the reference literature values. Accuracy of the CTE is displayed using Dubinin's sessile-drop value of 0.000121 K⁻¹ (Dubinin et al. 1975) as the reference, and accuracy of viscosity is displayed using Ishikawa et al. (2012) as the reference. Density measurements from both facilities have similar accuracy and precision, as shown in Fig. 5(a), as do the CTE measurements in Fig. 5(b); they also have precision similar to the ground-based Zr tests. Accuracy and precision of the surface tension measurements are shown in Fig. 5(c): the NASA MSFC ESL has higher accuracy and precision than the Zr tested in the ground and space ESL facilities, and similar precision to the Zr processed using the ISS-EML. Viscosity measured using the NASA MSFC ESL has higher precision, but lower accuracy, than the Zr processed both on the ground and in space, as shown in Fig. 5(d).
SUMMARY
Density, CTE, surface tension, and viscosity of platinum in the liquid and undercooled states were successfully studied using the ISS-ELF and NASA MSFC ESL facilities. Densities measured at both facilities are in excellent agreement with reported literature values. CTE values measured at both facilities are higher than the reported literature value. Surface tension and viscosity values are in good agreement with the reported literature values. The uncertainties associated with all four property measurements were reported following the GUM guidelines. The performance of both facilities was evaluated numerically using the FPE benchmarking MAP plots. In the future, this FPE will be extended to other levitation facilities to better understand the effectiveness of the measurements from these two facilities. This helps researchers determine which classes of materials are best measured in a specific facility, since each has unique strengths and weaknesses.
Fig. 3.
Density of liquid platinum as a function of temperature. ISS, International Space Station; ELF, Electrostatic Levitation Furnace; MSFC, Marshall Space Flight Center; ESL, electrostatic levitation.
Fig. 4.
Droplet oscillation results of liquid platinum. (a) Surface tension and (b) viscosity of liquid platinum as a function of temperature. MSFC, Marshall Space Flight Center; ESL, electrostatic levitation; DR, damped response; MA, maximum amplitude; FC, frequency crossover.
Table 1.
Relative and combined standard uncertainty of platinum at the melting point
Are we there yet? A machine learning architecture to predict organotropic metastases
Background & Aims Cancer metastasis into distant organs is an evolutionarily selective process. A better understanding of the driving forces endowing proliferative plasticity of tumor seeds in distant soils is required to develop and adapt better treatment systems for this lethal stage of the disease. To this end, we aimed to utilize transcript expression profiling features to predict the site-specific metastases of primary tumors and second, to identify the determinants of tissue specific progression. Methods We used statistical machine learning for transcript feature selection to optimize classification and built tree-based classifiers to predict tissue specific sites of metastatic progression. Results We developed a novel machine learning architecture that analyzes 33 types of RNA transcriptome profiles from The Cancer Genome Atlas (TCGA) database. Our classifier identifies the tumor type, derives synthetic instances of primary tumors metastasizing to distant organs and classifies the site-specific metastases in 16 types of cancers metastasizing to 12 locations. Conclusions We have demonstrated that site specific metastatic progression is predictable using transcriptomic profiling data from primary tumors and that the overrepresented biological processes in tumors metastasizing to congruent distant loci are highly overlapping. These results indicate site-specific progression was organotropic and core features of biological signaling pathways are identifiable that may describe proliferative plasticity in distant soils. Supplementary Information The online version contains supplementary material available at 10.1186/s12920-021-01122-7.
Background
Metastasis accounts for 90% of cancer-associated mortality [1]. While disease spread is a definitive turning point in patient pathology, metastasis is a long, arduous, and inefficient process for a primary tumor [1,2]. To establish an overt colonization in a distant organ, metastasis proceeds through multiple restrictive bottlenecks. Shed tumor cells must first retain membrane integrity during a violent intravasation and successfully navigate the circulatory vasculature. Arriving in the new settlement, cells must elude the immune response, retain activation of growth signals, and survive radiotherapies or putative ablation via chemotherapeutics [3][4][5]. The possible organ sites of metastasis are tumor-type specific, determined in part by the anatomic location of the primary lesion, intratumor metabolic reprogramming, augmented protein functions, and disrupted biological pathways driving tumor cell fitness in the distant organs [6][7][8][9][10]. The dissemination of successful metastases is an organized process known as metastatic organotropism.
Metastatic organotropism is a long-standing problem in cancer research, and characterizing the metastatic patterns of primary tumors is a critical step towards treating patients with advanced disease [11,12]. Experimentally driven investigations have focused on characterizing the biological underpinnings of organotropic metastasis, while computational approaches have developed tools attempting to predict the sites of metastases. Previous research has described the patterns of bone, liver, and lung tropisms. Bone tropisms arise primarily from breast and prostate cancers [13]. In prostate cancers, three major clusters of pathologies have evolved, one of which shows high androgen receptor signaling and high bone tropism compared to the other clusters [14,15]. Liver tropisms primarily arise from breast, lung, and gastrointestinal cancers [13]. A 17-gene signature has been shown to indicate adverse outcomes for breast cancer patients, with some correlative evidence suggesting liver progression from breast tumors [16]. Lung tropisms are observed most commonly in breast, melanoma, and thyroid cancers [13,17]. Similarly, a 54-gene expression signature has been developed that correlates with organotropic metastasis of breast tumors progressing to the lung [18].
Studies using molecular information for retrospective analyses of tumor metastatic sites have largely been xenograft selection studies that extrapolated organotropic features from metastasis microarray data. Studies leveraging RNA transcript profiling data have been designed for a single tumor type progressing to a single site. We found no significant study classifying site-specific metastasis from human primary tumor transcriptomic profiling data [5,[19][20][21][22][23][24][25][26][27][28]. The most recent work investigating organotropic progression used no molecular data; instead, it applied deep data mining of patient clinical data to model temporal patterns of tumor-type site-specific progression and established a powerful co-occurrence-based network, but did not extract any biological determinants of tumor plasticity in distant organs [24].
Despite the significant progress made from previous modeling methods, a unified approach to predict site specific metastasis in multiple cancer types that learns the biological determinants of dissemination has not been resolved. We have leveraged the publicly available omics data and clinical annotations in the TCGA database to investigate metastatic organotropisms of multiple cancers. In this study, we build off the previous work and establish a machine learning architecture that models organotropic metastases by distinguishing the tumor type and in multiple cancer types predicts the loci of distant tumor metastases. We detail a migration from the canonical pipelines using differential expression for feature assessment and use statistical machine learning for feature selection to optimize classification. Our model systematically predicts site-specific metastases of primary tumors and our methods captured conserved core biological processes overrepresented in tumors of varying origin that seeded in concordant anatomic locations.
Review of data download of TCGA transcriptomic and clinical annotation data
The TCGA data portal hosts clinical data commons that are publicly available for data mining [29]. These data are accessible in multiple ways, including bulk/batch API access, the TCGAbiolinks software via Bioconductor, and cart-building on the portal website in a patient-by-patient search [29]. Currently, no unified patient disease-progression information is directly available for bulk data mining on the portal website. Our progression annotation was therefore built by text-mining clinical progression annotations, project by project, using the batch query function in the TCGAbiolinks package. Each patient has multiple unique identifiers; in a project-by-project manner, each case ID was cataloged. Each case ID query produced a case UUID that was used across the data types, including the gene expression counts, VCF files, FASTQ files, slide images, and clinical annotations for each experiment for each patient. Each UUID produces a patient summary, broken down into data category, experimental strategy, clinical annotations, and clinical supplemental files. The transcriptome counts files for each project were downloaded, normalized, and analyzed. Each project has between 53 and 261 clinical annotation columns; the stringr and dplyr software packages were used for clinical annotation, data cleaning, and anatomical annotation [30]. Metastatic tumors identified in the clinical annotation file were drawn from the "metastatic tissue", "sites of metastases", or "metastatic tissue site" column(s). Tumor progression labeled as "synchronous" was not included in the metastatic data, as the clinical timeline of diagnosis was ambiguous: depending on the tumor type, tumors may be classified as synchronous anywhere between the time of diagnosis and 6 months following the diagnosis.
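The column-based extraction step can be sketched with pandas. This is a hypothetical illustration: the site column names follow those quoted above, but the toy table, the `progression_type` column, and the exact schema are assumptions, since the original cleaning was done in R with stringr/dplyr.

```python
import pandas as pd

# toy stand-in for one project's clinical annotation table
clin = pd.DataFrame({
    "case_id": ["TCGA-01", "TCGA-02", "TCGA-03"],
    "metastatic_tissue_site": ["Lung", "Bone", None],
    "progression_type": ["metachronous", "synchronous", None],
})

# locate whichever metastatic-site column(s) this project uses
site_cols = [c for c in clin.columns
             if c in ("metastatic_tissue", "sites_of_metastases",
                      "metastatic_tissue_site")]

# keep cases with an annotated metastatic site, excluding the ambiguous
# "synchronous" progressions described above
mets = clin.dropna(subset=site_cols)
mets = mets[mets["progression_type"].str.lower() != "synchronous"]
```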
Review of synthetic sample generation
Synthetic samples were generated to balance the positive and negative classes using the SMOTE algorithm, where positive classes were tumors that developed a metastasis in the tested location and negative classes were tumors that did not [31]. Briefly, the Synthetic Minority Oversampling Technique (SMOTE) is an algorithm for increasing the representation of a minority class in machine learning classification problems. The approach builds on a distance-based k-nearest-neighbors (KNN) algorithm. The technique begins by selecting a minority class instance and finding that instance's k nearest neighbors. One of the minority class neighbors is chosen at random, a line is drawn between the two instances, and a synthetic sample is generated along the line as a convex combination of the two real instances. This process repeats until the desired number of synthetic samples has been created. The number of synthetic samples generated was specific to each binary comparison. The SMOTE authors suggest that the algorithm can generate a large number of representative synthetic samples, but how large that number can be without overfitting the model is unknown. We therefore employed an overfit-prevention method during sample balancing: we measured 80% of the majority class and increased the representation of the minority class to match approximately 80% of the majority class, rounded to the nearest integer.
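The convex-combination step at the heart of SMOTE can be sketched in a few lines of numpy. This is a minimal reimplementation for illustration only; production use would go through an established implementation such as imbalanced-learn's SMOTE class.

```python
import numpy as np

def smote_samples(minority, n_new, k=5, rng=None):
    """Generate n_new synthetic rows: pick a minority instance, pick one of
    its k nearest minority-class neighbors, and interpolate along the line
    between them (a convex combination of the two real instances)."""
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    # pairwise distances within the minority class; exclude self-matches
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest neighbors per row
    out = np.empty((n_new, X.shape[1]))
    for j in range(n_new):
        i = rng.integers(len(X))
        nbr = nn[i, rng.integers(min(k, len(X) - 1))]
        lam = rng.random()                       # convex-combination weight
        out[j] = X[i] + lam * (X[nbr] - X[i])
    return out

# toy example: collinear minority points stay on the line y = x
syn = smote_samples([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]], 5, k=2, rng=0)
```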
Review of feature selection
Feature selection is a method in model building for reducing the dimensionality of a dataset. Overfitting can occur when the number of columns (features) outnumbers the rows (instances) available for the model. To reduce the dimensionality of the problem we employed three kinds of feature selection methods: filter-based, wrapper-based, and embedded feature selection. Chi-square filtering computes the chi-square statistic between each numerical variable and the target and retains only the features with the largest chi-square values; the SelectKBest, chi2, and MinMaxScaler classes from Sklearn were used [32]. A recursive feature elimination (RFE) estimator iteratively reduced the dimensionality of the dataset by recursively considering smaller and smaller subsets of each feature block. The RFE was trained on each initial block of features, and the importance of each feature was obtained through the feature_importances attribute; the RFE and LogisticRegression classes from the Sklearn feature_selection and linear_model modules were used [32]. For embedded methods, random forest classification, random forest regression, and lasso regression (a logistic regression estimator with an L1 penalty) were employed; these algorithms have embedded feature selection to stratify and rank features, and the SelectFromModel, RfC, and RfR classes were imported from Sklearn [32,33]. We cross-validated these approaches by extracting support values from each method using the get_support methods, summing the true feature-support Booleans for each feature in each block across all five methods, and sorting features by selection support.
Iterative feature selection was conducted by splitting the 60,483 transcript features into 100 blocks of approximately 600 features each, to be assessed by the above algorithms. We extracted support values for each feature from each selection method; each block was assessed independently of all other blocks in each classification. Transcripts were filtered for features that showed the highest cross-validated support across multiple or all algorithms. Dimensionality was further reduced by filtering out collinear features. Approximately the top 10% of the highest-scoring features were kept from each block, for a total of approximately 5,000 candidate transcripts (50 transcripts × 100 blocks). The retained transcripts were used as the input features in each binary classification. Tree-based models were selected as the best fit for the classification, to accommodate the variable number of selected features in each classification and to allow model attributions to be extracted post hoc.
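The support-voting scheme for one block can be sketched as follows. The selector choices mirror those named above (SelectKBest/chi2, RFE with logistic regression, SelectFromModel with a random forest), but the block size, vote threshold, and toy data are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler

def block_support(X, y, k=10):
    """Sum boolean support across filter, wrapper, and embedded selectors
    for one block of features; more votes = stronger cross-validated support."""
    Xs = MinMaxScaler().fit_transform(X)          # chi2 needs non-negative input
    selectors = [
        SelectKBest(chi2, k=k).fit(Xs, y),
        RFE(LogisticRegression(max_iter=1000), n_features_to_select=k).fit(X, y),
        SelectFromModel(RandomForestClassifier(n_estimators=50,
                                               random_state=0)).fit(X, y),
    ]
    votes = np.zeros(X.shape[1], dtype=int)
    for s in selectors:
        votes += s.get_support().astype(int)      # True where feature retained
    return votes

# toy block: only features 0 and 1 carry signal
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
votes = block_support(X, y)
```

In the study's pipeline this voting would run once per ~600-feature block, with the top-supported transcripts from each block pooled into the candidate set.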
Review of model building
Random forest and gradient-boosted tree classifiers were built to classify site-specific progression from primary tumors. The selected features in each binary classification were used as the input attributes for model classification. The model reports a rounded class label by default but can also return posterior class probabilities. The code and the pretrained models are available through the documented GitHub repository, and model building and usage are documented on the GitHub wiki page.
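The rounded-label-versus-posterior behavior can be sketched with a tree-based binary classifier. The hyperparameters and toy labels below are illustrative; the released models on the project's GitHub carry the trained settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# toy stand-in: one binary "metastasizes to site" target
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = (X[:, 0] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

labels = clf.predict(X[:5])                   # rounded class calls
posteriors = clf.predict_proba(X[:5])[:, 1]   # class likelihoods, usable for
                                              # investigation-specific calibration
```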
Review of feature recapture
Feature recapture was the final phase of model building and analysis. Testing the statistical significance of feature recapture in independently generated lists following bioinformatic analysis is an indirect but well-documented technique for detecting non-random enrichment [34]. Two sets of feature-recapture tests were conducted and are displayed in Additional file 1: Table S7: within-cancer-class comparisons across seeding loci, and between-cancer-class comparisons for tumors metastasizing to matching locations. Fisher's exact test was used to evaluate the significance of recapture between lists, as the significance of deviation from the null hypothesis can be calculated directly. Our null hypothesis was that feature recapture across matched seeding locations in different cancer types occurs by chance, so that no biological meaning can be drawn from the phenomenon. Our alternative hypothesis was that recapture of features within a class, or between matching seeding locations, indicates similar distant metastatic potential and offers candidate biomarkers for organotropic metastasis, respectively. The background of the contingency table was set to the search space of the information-gain algorithm: because every binary comparison initially considered all 60,483 transcripts, and each set of selected features was independently generated, the total transcriptome remained the background for all tests. In list A of each contingency table we placed the top 1,000 features for each classification of primary tumor seeding location; in list B we placed the feature list for a second primary tumor type and/or metastatic location. We then tested the significance of the intersection of the two lists given the list sizes, background, and overlap in the contingency table. The GeneOverlap package on Bioconductor was used to conduct the Fisher's exact tests [35].
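The recapture test reduces to a Fisher's exact test on a 2×2 overlap table. A sketch with scipy (the study used R's GeneOverlap; the list sizes and overlap below are illustrative, while the 60,483-transcript background matches the text):

```python
from scipy.stats import fisher_exact

def overlap_test(n_background, len_a, len_b, n_overlap):
    """2x2 table for two gene lists drawn from a shared background:
    [[in A and B, in A only], [in B only, in neither]]."""
    table = [[n_overlap, len_a - n_overlap],
             [len_b - n_overlap, n_background - len_a - len_b + n_overlap]]
    return fisher_exact(table, alternative="greater")

# two 1,000-feature lists sharing 120 transcripts out of 60,483
# (expected overlap under the null is only ~16.5)
odds, p = overlap_test(60483, 1000, 1000, 120)
```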
Gene set overrepresentation and semantic analysis
The clusterProfiler package was used to conduct an overrepresentation test against the GO database [36]. The selected features for each metastatic location in each cancer type were translated into their associated GO biological process IDs using the bitr function in the clusterProfiler package [36]. The overrepresented GO biological pathways were passed into the GOSemSim and simplifyEnrichment packages [37]. A similarity matrix of biological functions was built using the simplifyEnrichment package in R [38], and a heatmap was produced by clustering the similarity scores of the biological functions using the package's default binary cut function. A Fisher's exact test was conducted using the GeneOverlap package in R [35]; the background was changed from the human transcriptome to the GO database to account for the change in the search space [39]. The UpSetR package in R was used to display the bar graph of overlapping biological processes in the tumors seeding in matched locations [40]. All overlaps were tested between cancers metastasizing to similar organs.
Data availability and code
We used public data sets drawn from the TCGA database using the GDC data commons for this project and its analyses [41,42]. We have provided all the custom computer code to produce these models.
Our code is currently available for view and use in a public GitHub repository: https://github.com/michaelSkaro/Classification_of_organotropic_metastases. The docker image containing all relevant environment variables, dependencies, and a demo test dataset is also publicly available on Docker Hub and integrated into the GitHub Actions. A documented wiki page is available that demonstrates the installation, displays visualizations, and describes script usage within the pipeline. We have provided a general-usage script that runs the entire metastatic classification pipeline; at the command line it can be run using the metastasis_pipeline.py script within the built docker container. We have also provided a general-usage feature selection pipeline, Feature_selection.py. The organotropic feature sets for all cancer types selected in this study are provided in the Additional file 1 data tables, and all enrichment and recapture code is provided in the source code.
Classification of tumor type
Each tumor type is unique, and the potential metastatic sites of progression are limited by the tumor gene expression profile, anatomic location, and blood circulation [24]. We hypothesized that each tumor type has subsets of features associated with tissue-specific progression; classifying tumor type was therefore considered a critical step towards extracting patterns of organotropic metastasis. Thirty-three tumor types were considered by the model and are annotated by their four-letter code in the tumor-type column in all figures and tables. Figure 1 displays the confusion matrix of the model as a heatmap and reports the model precision, recall, and F1-score, with performance normalized for population size, when classifying the 33 cancer types in the TCGA database. Our model performs in the excellent range on thirty of the cancer classes; cholangiocarcinoma (CHOL) showed the worst performance, as its population of 45 was too small to develop a strong model for cancer type classification. Esophageal carcinoma and stomach adenocarcinoma showed some misclassification between the two types; given that these tumors have been shown to be pathologically very similar in previous research, this was unsurprising [43]. Colon adenocarcinoma (COAD) showed considerable misclassification, specifically misidentifying COAD as rectum adenocarcinoma (READ) and vice versa. The COAD and READ classes are combined in the UCSC genome browser database, and we combined COAD and READ in further analyses, as their metastatic progressions showed considerable overlap.
Overall, the cancer type classification model performed in the excellent range, with a macro-average precision of 94.2, macro-average recall of 91.98, and macro-average F1 score of 92.77. The classification results were carried forward for site-specific metastasis prediction. Classifying the primary tumor type significantly decreased the complexity of predicting possible sites of metastatic progression for each primary tumor. We annotated 125 metastatic locations in the ten thousand patient samples spread across twenty-three TCGA projects containing transcriptomic and clinical data (Fig. 2). The most commonly observed sites of metastasis were bone, liver, lung, and lymph node (Fig. 2). We filtered for metastatic sites with at least eight clinical annotations of progression for a given site and an overall total population of over fifty patients with documented non-synchronous progression of disease arising from the primary tumor. After filtering, we were able to analyze 35 tumor-metastatic site pairs.
Classification of organotropic progression
Thirty-three cancer types in TCGA were analyzed in this study, based on the availability of annotated metastatic progression in the TCGA clinical data. For sixteen cancer types, we predicted site-specific organotropic metastases. The classification of the organotropic metastases in the sixteen cancer types occurred in three phases: synthetic sample generation, followed by feature selection, and finally classification of progression. Synthetic sample generation was used to increase the representation of tumors that metastasized to each of the tested locations. Feature selection was used to reduce the dimensionality of the data and to find transcripts that best separated the tumors that metastasized to a tested location from the negative cases. We combined five feature selection algorithms to assess feature value in discriminating between positive and negative classes in each classification, independent of all other comparisons [44].
In Fig. 3 we show the performance of classification in the sixteen cancer types. We report four metrics for the classification of site-specific progression in each cancer: precision, recall, F1 measure, and model accuracy. We observed an overall average precision of 0.82, average recall of 0.82, average F1 measure of 0.82, and average accuracy of 0.82 considering all sites and all predictions. We performed in the excellent range on twenty-six of the 35 classification pairs. The projects with the fewest errors were the larger projects: bladder cancer, breast cancer, colorectal cancers, and lung cancers. The sites with the strongest model support for prediction were bone, liver, lung, and lymph node. Cancer-type-specific performance, considering all progressions for each cancer type, is detailed in Table 1.
After the classification of the organotropic metastases, we predicted that tumors metastasizing to congruent loci may exhibit similar biological changes in the primary tumor endowing proliferative plasticity in the distant organ locations. To this end, we used the top 1,000 selected features from each feature selection to conduct pathway enrichment. In Fig. 4A, we simulated the number of biological processes expected to overlap if 1,000 randomly selected transcripts were enriched in the GO database. Ensembl transcript IDs are known to map to multiple GO biological process IDs, so there is a high probability of false discovery due to random chance. To establish that our observed overlaps between lists of GO BP IDs were significant, we modified previously published gene-overlap protocols and conducted a weighted simulation of our feature selection methods in which IDs with the fewest matching GO IDs are given priority over IDs with many matches [31]. The weighted simulation was conducted by randomly selecting two sets of 1,000 transcript features, conducting a GO overrepresentation test within each list, filtering for significantly overrepresented processes in the feature sets, and then testing the simulated overlap of the two independently generated GO:ID lists.

Fig. 1. Classification of tumor type. The confusion matrix detailing sample-type-specific performance for the GBT classification of tumor transcriptomes. Thirty-three cancer types were considered by the model, annotated by their four-letter TCGA codes. The scale bar on the right-hand vertical axis denotes the density for each tile, where dark tiles indicate low numbers of predicted values and red/white tiles indicate high numbers of predicted values. The major diagonal denotes the cancer-type match between predicted and true labels, where true labels are annotated along the left-side vertical axis and predicted labels are annotated across the horizontal axis.
We conducted this simulation a total of 750,000 times, using 50,000 simulations for each possible intersection combination. We tested all pairwise combinations of 5 possible lengths of GO:ID lists, ranging from 100 GO:IDs to 500 GO:IDs. The simulated results are stratified by the colored lines in Fig. 4A. Our simulation shows that the feature selection method consistently produced significantly higher overlap than random simulation. In Fig. 4B-E we show the number of overrepresented biological processes in the tumors metastasizing to bone, liver, lung, and lymph node, respectively. We report the list overlaps, odds ratios, and Bonferroni-adjusted p-values in Additional file 1: Table S7.
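The overlap-significance test described above can be sketched as an unweighted Monte Carlo simulation. This is a simplification: the paper's version weights IDs by their number of GO matches, and the universe size and counts below are illustrative, not the study's values:

```python
# Minimal sketch: does the overlap between two GO-term lists exceed random
# expectation? Repeatedly draw two random lists of the same sizes from the
# GO universe and count how often the simulated overlap reaches the observed.
import random

def overlap_pvalue(universe_size, n_a, n_b, observed_overlap,
                   n_sim=10_000, seed=0):
    rng = random.Random(seed)
    universe = range(universe_size)
    hits = 0
    for _ in range(n_sim):
        a = set(rng.sample(universe, n_a))
        b = set(rng.sample(universe, n_b))
        if len(a & b) >= observed_overlap:
            hits += 1
    return (hits + 1) / (n_sim + 1)   # pseudo-count avoids p = 0

# Example: 300 and 400 GO IDs from a universe of ~15,000 biological processes;
# expected overlap by chance is ~300*400/15000 = 8, so 60 shared IDs is extreme.
p = overlap_pvalue(15_000, 300, 400, observed_overlap=60, n_sim=2_000)
print(p)
```

A weighted variant would bias each draw toward transcripts with few GO mappings, as in the protocol the authors adapted from [31].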
In Fig. 5A-D we cluster the semantic similarity of the GO:ID terms that passed the selection and filtering. We display four heatmaps that describe the biological processes found to be overrepresented in primary tumors metastasizing to concordant locations. The largest cluster common among all the comparisons was regulation of morphogenesis and migration. This is a significant result, as collective cell migration is a hallmark of metastatic cancer, and it further suggests that progressive tumors may be identified by their expression profiles [45].

Fig. 2 Observed sites of metastatic progression in the TCGA database. Thirty-three cancers in the TCGA database have recorded RNA sequencing data. Within twenty-three projects, 125 anatomic locations have clinically annotated metastatic progression. Unique metastatic sites of progression found within the population are annotated on the vertical axis. The cancer-type four-letter codes are annotated on the horizontal axis. The heatmaps are stratified by log frequency of occurrence in the data set. The right heatmap shows locations with the greatest frequency amongst all sites. COAD and READ have been combined in this section of the analysis.
Discussion
The capacity to accurately determine the site-specific metastases of patients' primary tumors is directly applicable to clinical actions for patients. Following tumor resection, transcriptomic analysis of a patient's tumor can provide valuable insight into disease progression and can aid clinicians' treatment interventions [46]. We present an accurate and precise machine learning architecture that can classify the tumor type and can identify if and where a primary tumor will metastasize. Embedded in our model, we offer potential users the opportunity to report the locations of the metastases and additionally retain the posterior probabilities of metastatic progression to each location. This offers users the ability to integrate investigation-specific calibration for their data and report the confidence of the classification in the clinical setting.
The model improves on previous work in two fundamental ways. It increases the scope and performance relative to previous work modeling either a single cancer type or a single metastatic location, and it identifies biological feature determinants of organotropic metastasis from unified transcript profiling data. The model was shown to be broadly applicable in 16 different cancer types. Our feature selection method is uncommon amongst canonical bioinformatics or biomedical pipelines. The differentiation of the positive-class feature space was only discernible from the negative-class feature space following statistical machine-learning-centered feature selection methods. The features represented in the Additional file 1 data tables were produced by cross-validating five feature selection methods and extracting model attribution support for the best features in each comparison. Our model is not without clear limitations. By breaking down a multi-label, multi-output experiment into NxM binary classification experiments, we sacrificed detecting possible features that may be present in non-mutually exclusive progression. An example of this breakdown occurs when one patient's tumor metastasizes to both the liver and the lung: the model will fail to find features that may be dictating the multi-organ expansion of the patient's disease. We justify this sacrifice with an opportunity cost. While we will not find these coalescent features, as there are not enough coalescent cases to properly model these phenomena, we do produce a model with very high sensitivity and specificity to detect if and where both metastases will arise in a given case. Further, the model is built in a way that, upon receipt of more data, we can make the necessary modifications from a list of binary comparisons to an All vs. All classification. The transition to an All vs. All classification presents the clear second limitation of this model: the very costly overhead of data production.
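The NxM binary decomposition described above can be illustrated with a minimal sketch. The threshold "classifier" below is a trivial stand-in for the paper's gradient-boosted trees, and all data are invented:

```python
# Hedged sketch of the N x M binary decomposition: one binary classifier per
# (cancer type, metastatic site) pair. A scalar-threshold stub replaces the
# gradient-boosted trees used in the actual study.

def train_pairwise(labels_by_pair):
    """labels_by_pair: {(cancer, site): [(feature, 0/1 label), ...]}.
    Returns one stub decision rule per pair: a threshold halfway between
    the positive- and negative-class means of a single scalar feature."""
    models = {}
    for pair, samples in labels_by_pair.items():
        pos = [f for f, y in samples if y == 1]
        neg = [f for f, y in samples if y == 0]
        models[pair] = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return models

def predict(models, cancer, feature):
    """Report every site whose binary model fires for this tumor."""
    return sorted(site for (c, site), thr in models.items()
                  if c == cancer and feature > thr)

data = {
    ("BRCA", "Bone"): [(0.90, 1), (0.80, 1), (0.20, 0), (0.10, 0)],
    ("BRCA", "Liver"): [(0.95, 1), (0.98, 1), (0.80, 0), (0.85, 0)],
}
models = train_pairwise(data)
print(predict(models, "BRCA", 0.85))   # fires the Bone model only
```

Because each pair is modeled independently, a tumor can fire several site models at once, but shared features driving genuinely coupled multi-organ spread are invisible to any single binary model, which is exactly the limitation discussed above.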
Our model relies on the largest unified conglomerate of tumor transcriptome data yet assembled to produce the level of precision and recall we achieved on only 16 of the 33 TCGA projects we investigated. The model is reliant on the high-quality data production pipeline in TCGA: the transcript profiling data for each tumor were produced from sequencing of patient tumors of extremely high purity, which is very uncommon in most studies. If this model is to be broadly incorporated into the medical community, it will need a very deep and diverse set of transcriptomes to train on that is much larger than our current TCGA dataset.
Next steps
Our next steps will be to include more cancer types. As the publicly available data continue to grow as a superset of TCGA and the International Cancer Genome Consortium (ICGC), more projects will have clinically annotated tumor and normal transcriptomes. Further, the TCGA database documentation has become more unified and is continuously growing in its clarity. This will allow us to incorporate multiple data types into a multiomic approach that may illuminate genetic, genomic, epigenetic, and transcriptomic features working to provide proliferative plasticity in metastatic soils. Finally, if the public data grow by a significant margin, we can approach characterizing organotropic metastasis with an All vs. All model.
Conclusion
Our machine learning architecture expands the understanding of cancer metastasis. The leading cause of cancer-associated death is metastatic progression of disease; incorporating this tool into the clinical timelines for patients may offer clinicians opportunities for pre-metastatic therapeutic interventions. We demonstrate that our model can detect if and where metastases will arise. Our methods of synthetic sample generation and feature selection produced a clear and concise biological data-based model of metastatic progression in multiple tumor types. Our recaptured features are offered as candidate biomarkers of site-specific metastatic organotropism.
Synthesis and Characterization of Silver–Strontium (Ag-Sr)-Doped Mesoporous Bioactive Glass Nanoparticles
Biomedical implants are a pressing need of this era due to the increasing number of accidents and follow-up surgeries. Different types of bone diseases, such as osteoarthritis, osteomalacia, and bone cancer, are increasing globally. Mesoporous bioactive glass nanoparticles (MBGNs) are used in biomedical devices due to their osteointegration and bioactive properties. In this study, silver (Ag)- and strontium (Sr)-doped mesoporous bioactive glass nanoparticles (Ag-Sr MBGNs) were prepared by a modified Stöber process, in which Ag+ and Sr2+ ions were co-substituted into pure MBGNs to harness the antibacterial properties of Ag ions as well as the pro-osteogenic potential of Sr2+ ions. The effect of the two ions' concentration on morphology, surface charge, composition, antibacterial ability, and in-vitro bioactivity was studied. Scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR) confirmed the doping of Sr and Ag in MBGNs. SEM and EDX analyses confirmed the spherical morphology and typical composition of MBGNs, respectively. The Ag-Sr MBGNs showed a strong antibacterial effect against Staphylococcus carnosus and Escherichia coli, determined via the turbidity and disc diffusion methods. Moreover, the synthesized Ag-Sr MBGNs develop apatite-like crystals upon immersion in simulated body fluid (SBF), which suggests that the addition of Sr improved the in vitro bioactivity. The Ag-Sr MBGNs synthesized in this study can be used for the preparation of scaffolds or as a filler material in composite coatings for bone tissue engineering.
Introduction
Millions of medical devices are nowadays implanted in patients in connection with bone diseases and accidental surgeries, thanks to the advancement of biomaterials. During the last few decades, biomaterials research has focused on the following issues: (a) establishing a material with suitable mechanical strength, and (b) improving in-vitro activity by increasing the surface area of the bio-ceramics [1,2]. The number of accidents worldwide is increasing steadily. Moreover, the percentage of people over 50 years of age suffering from diseases like osteoporosis, osteoarthritis, osteomalacia, bone cancer, and other musculoskeletal diseases has increased [3]. It is stated that in the USA alone, more than 500,000 primary arthroplasties, including total joint replacement, total hip arthroplasty (THA), and total knee arthroplasty (TKA), are performed annually, and more than 1.3 million people live with artificial joints [4].
Bioactive glasses (BGs), a new generation of bio-ceramics, are preferred biomaterials in a wide range of biomedical applications such as the regeneration of hard tissues (bones). […] formation of hydroxyapatite crystals upon immersion in simulated body fluid (SBF). The controlled release of Ag-Sr ions also induced the antibacterial characteristics without affecting the bioactivity of MBGNs. The antibacterial effect correlates with the release of metallic ions at a critical concentration (Ag), which works against the relevant pathogen or bacteria in physiological conditions. Therefore, the results presented in this article are anticipated to offer a way forward in the development of third-generation biomaterials, with the application of Ag-Sr-based MBGNs for scaffold fabrication as well as antibacterial coatings on metallic substrates. Osteogenic properties of other ions like Cu, Mn, and Zn, as well as their antibacterial studies, have also been studied, and their cytotoxic effects are also highlighted in the literature [20].
Morphological Characterization
The morphology of the synthesized MBGNs was investigated by SEM analysis. Figure 1 shows that all types of MBGNs have spherical morphology regardless of the addition of metallic precursor. Figure 1 also depicts that the average particle size of the synthesized MBGNs was 130 ± 15 nm. The microemulsion-assisted sol-gel method favors the dispersion of nanoparticles, which explains the homogeneous size and shape of the obtained nanoparticles [28].

Figure 2 (right) depicts the nitrogen adsorption and desorption isotherms of Sr MBGNs, Ag MBGNs, and Ag-Sr MBGNs. The textural properties of Ag, Sr, and Ag-Sr MBGNs derived from nitrogen adsorption-desorption isotherm analysis depict a type IV isotherm according to IUPAC (International Union of Pure and Applied Chemistry), which confirms the mesoporous structure [26]. Uptake of a high amount of nitrogen at relative pressure (P/Po) ≈ 0.99 indicates the nano-sized particles. It was deduced from Figure 2 that all the particles exhibit a wide pore size range with an average pore size of ~2.8 nm. The relatively high porosity in the synthesized MBGNs may lead to the high surface area [29]. This porous nature opens up other biomedical applications such as drug delivery and microbial cell encapsulation [25].
Compositional Analysis
The EDX analysis was conducted to confirm the addition of Ag and Sr in MBGNs. For EDX analysis, powder samples were used: MBGNs powder was dispersed in ethanol and then ultra-sonicated for half an hour in order to avoid agglomerates. After drying, EDX analysis was conducted. Figure 3A presents the peaks of Ag and Sr, which confirmed the substitution of Ag and Sr in MBGNs. Figure 3B presents the qualitative elemental EDX analysis of the synthesized MBGNs prior to the substitution of Ag and Sr ions. It was observed that Ca and Si peaks are present in MBGNs, which indicated the formation of MBGNs [30]. The results of the EDX analysis are in good qualitative agreement with the nominal composition of the synthesized MBGNs.

The molecular structure of the as-synthesized Ag-Sr MBGNs and the effect of doping on the glass network were studied by FTIR spectroscopy, as shown in Figure 4. The results showed that no major difference occurs upon doping with metallic precursors [30]. The bands around 455 and 1067 cm−1 can be assigned to Si-O-Si rocking and Si-O-Si stretching modes, respectively [31]. The broad band at 1200 to 1000 cm−1 depicts Si-O-Si vibrations [32]. The peak around 800 cm−1 is assigned to the Si-O-Si bridging bonds in the SiO4 tetrahedra [33].

The XRD patterns of the as-synthesized MBGNs, Ag MBGNs, Sr MBGNs, and Ag-Sr MBGNs confirmed the amorphous nature (broad peak at 2θ = 20°-32°) of all types of MBGNs, as shown in Figure 5 [34]. Furthermore, the diffraction pattern of Ag-Sr MBGNs shows no peaks ascribed to silver or strontium, which suggests the incorporation of Ag and Sr into the MBGNs as well as the chemical homogeneity of the Ag-Sr-containing MBGNs. It was concluded that Ag-Sr MBGNs were successfully synthesized using the microemulsion-assisted sol-gel approach presented here, with silver nitrate and strontium nitrate being effective precursors for incorporating Ag and Sr into the silica network of MBGNs.
Zeta Potential
The zeta potential measurements of MBGNs, Ag MBGNs, Sr MBGNs, and Ag-Sr MBGNs were performed in ethanol, and the results are given in Table 1. It was deduced that silver and strontium ions changed the surface charge of MBGNs. Strontium substitution resulted in an increase in positive surface charge, while Ag substitution led to a decrease in surface charge. The variation in zeta potential by the incorporation of Sr and Ag in MBGNs may be associated with the pH change (zeta potential is a function of pH). It is also reported that the addition of Sr in bioactive glass (Sr substitution with Ca) may lead to the pH change and eventually increase the zeta potential compared to the MBGNs [35]. Positive zeta potential (of Sr-MBGNs) increases the solubility of the nanoparticles and may lead to aggregation. However, it also promotes the adsorption of negatively charged proteins on the surface and improves the efficacy of imaging, gene transfer, and drug delivery [36]. Ag ion reduces the surface charge of MBGNs due to its relatively high electronegativity (1.93) compared to calcium (1.0), which facilitates the deposition of Ca 2+ ions on the surface and enhances the bioactivity [37].
Ion-Release Profile
The synthesized MBGNs were tracked in an ion-release study in order to understand the effect of ion release on the biological properties, for example, antibacterial activity, in vitro bioactivity, and cell biology. Figure 6A presents the release of Si and Ca ions from MBGNs. It was observed that Si showed a rapid release in all samples in the first 7 days, followed by a relatively slow release up to 21 days. Ca2+ ions were released at a rapid rate from all types of MBGNs; however, the absolute release of Ca ions decreases with the increase in incubation time. The release of Ca ions is beneficial for the osteoconductive properties of the bioactive glasses.

A burst release of both ions (Ag, Sr) was observed from Ag-Sr MBGNs, which might be due to the concentration gradient between the particles and the physiological solution. Ag ions released into the physiological medium play an important role in the antibacterial activity. The antibacterial properties (discussed in Section 2.5) of the synthesized MBGNs were in good agreement with the ion-release data. Ag ions released from the Ag-Sr MBGNs samples were within the concentration range of 2-48 ppm, which has been proven to induce significant antibacterial properties against Gram-positive and Gram-negative bacterial strains [38]. The sustained release of Sr ions will be beneficial for in vitro bioactivity [39]. Furthermore, the initial burst release of silver will be useful in preventing the formation of biofilm, and the sustained release of Ag will be effective in providing a long-term antibacterial effect. It was observed that after 21 days of incubation, the silver release was in the range of the minimum inhibitory concentration level [10]. In the future, it would be interesting to analyze the release of P ions, because the consumption of phosphate ions from SBF confirms hydroxyapatite (HA) formation.

Figure 6B shows the release profile of Ag and Sr ions from the co-substituted MBGNs under dynamic conditions in SBF solution at 37 °C over a period of 21 days. A burst release of Ag ions was observed in the first 24 h in the Ag-Sr MBGNs and Ag MBGNs samples, followed by a steady-state release, indicating long-term sustained release, which will be beneficial for a long-term antibacterial effect. Figure 6C shows the release profile of Si, Ca, and Ag ions from the Ag MBGNs. We observed a burst release of Ag ions during the first week; afterwards, a sustained release of Ag ions was observed. The ion release profile of Si, Ca, and Ag from Ag MBGNs was similar to that of the Ag-Sr MBGNs. Furthermore, the release of Sr, Si, and Ca from Sr MBGNs was similar to that of Ag-Sr MBGNs. Thus, it was concluded that the co-substitution of Ag and Sr did not affect the release of Si and Ca ions, which will be helpful in obtaining bioactive properties while keeping the antibacterial effect associated with the release of Ag ions.
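The burst-plus-sustained profile described above is qualitatively captured by a simple two-phase model. This is an assumption for illustration, not the fit used in this study, and the burst fraction and rate constant below are invented:

```python
# Illustrative two-phase ion-release model: an initial burst fraction followed
# by first-order sustained release, qualitatively matching the Ag/Sr profiles
# described in the text. Parameters are hypothetical, not fitted values.
import math

def cumulative_release(t_days, burst_fraction, rate_per_day):
    """Fraction of the total loadable ion released by day t (0..1)."""
    sustained = (1 - burst_fraction) * (1 - math.exp(-rate_per_day * t_days))
    return burst_fraction + sustained

for day in (1, 7, 14, 21):
    frac = cumulative_release(day, burst_fraction=0.4, rate_per_day=0.1)
    print(f"day {day:2d}: {frac:.0%} released")
```

The model reproduces the two qualitative features reported here: a jump at t = 0 (the concentration-gradient-driven burst) and a monotonically flattening tail (the sustained release).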
Antibacterial Study (Turbidity Test)
To investigate the antimicrobial effect of the synthesized nanoparticles of different compositions, a turbidity test was performed. The change in OD600 after 1, 2, 3, 4, 6, and 24 h of incubation is presented in Table 2. It was observed that the measured OD600 values for the Ag-Sr MBGNs and Ag MBGNs showed a strong decrease after 6 h of incubation compared to the control samples (MBGNs and Sr MBGNs), since a substantial amount of Ag ions was released after 6 h of incubation, which resists the growth of E. coli and S. carnosus. After 24 h of incubation, the cumulative release of Ag ions from Ag MBGNs and Ag-Sr MBGNs was sufficient to completely hinder the growth of E. coli and S. carnosus. Moreover, it was observed that the control samples allowed the growth of E. coli and S. carnosus after 24 h of incubation. Thus, it can be concluded that the Ag-Sr MBGNs and Ag MBGNs strongly retarded the growth of E. coli cells [40].
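For readers reproducing such turbidity data, growth inhibition relative to the untreated control can be quantified from blank-corrected OD600 readings. The numbers below are hypothetical, not the values in Table 2:

```python
# Simple sketch: percent growth inhibition from OD600 turbidity readings
# relative to an untreated control, after subtracting the medium blank.

def percent_inhibition(od_sample, od_control, od_blank=0.0):
    """Growth inhibition (%) relative to control, blank-corrected."""
    growth_s = od_sample - od_blank
    growth_c = od_control - od_blank
    return 100.0 * (1 - growth_s / growth_c)

# Hypothetical 24 h readings: control grows to OD 1.20, Ag-Sr MBGNs stay at 0.08
print(f"{percent_inhibition(0.08, 1.20, od_blank=0.02):.1f}% inhibition")
```

Reporting inhibition this way makes the Ag-containing and control samples directly comparable across time points, complementing the raw OD values in the table.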
Disc Diffusion Test (Inhibition Halo Method)
The antibacterial properties of the Ag-Sr MBGNs and MBGNs were also investigated by the disc diffusion method (Figure 7) to further validate the antibacterial results. The antibacterial effect was tracked against Gram-negative (E. coli) and Gram-positive (S. carnosus) bacteria. The growth of E. coli and S. carnosus was prominent on the reference and pure MBGNs samples after 24 h of incubation, whereas the growth of both types of bacteria was strongly inhibited by the Ag-Sr MBGNs samples. Figure 7 shows the zone of inhibition that developed around the Ag-Sr MBGNs sample against S. carnosus and E. coli. The strong antibacterial effect associated with the Ag-Sr MBGNs was due to the release of Ag ions (as shown in Figure 6). Ag ions interact with nucleic acids, preferentially with the bases in DNA, thus inhibiting DNA replication activity and eventually leading to the death of the bacteria. Furthermore, Ag in ionic form is highly reactive (generation of reactive oxygen species) and can rupture the walls of bacteria, leading to the death of bacterial cells [1,11]. In the current study, Ag was successfully doped into the network of MBGNs (Figure 6), and Ag was released in ionic form rather than particulate form. Ag in the form of particles is toxic to osteoblast cells; however, the controlled release of silver ions at <100 ppm (as is the case in the present study, see Figure 6) provided a potent antibacterial effect against a wide spectrum of bacteria. The release of silver ions was <100 ppm, which is below the cytotoxic limit of Ag [10,35].

The cytotoxicity of Ag-doped MBGNs depends on the concentration of Ag in MBGNs and the release profile of Ag [33,41]. However, the cytotoxic effect associated with the release of Ag ions can be counteracted by co-doping with Sr ions. In our previous study, we showed that the toxic effect of Ag can be minimized by the co-substitution of Sr and Mn along with the Ag [1,6,21]. Therefore, this study presents a new frontier in the field of biomedical materials through the use of co-substituted Ag and Sr ions. The co-substitution of Ag and Sr is a challenging task because Ag tends to oxidize readily and form AgO. However, in this study, we developed MBGNs doped with Ag in its pure form (the XRD results indicate no crystalline peak of silver oxide), due to which it was possible to release silver in ionic form rather than particulate form [10].
In Vitro Bioactivity Analysis
Bioactivity is one of the most desired attributes for bone tissue engineering (BTE). The ability of the coating to form a bond with the bone is crucial for an implant [42]. Figure 8 presents the EDX analysis of the Ag-Sr MBGNs after immersion in SBF (simulated body fluid, per Kokubo et al. [42]). The decrease in the intensity of the Si peak over the immersion time in SBF may indicate the degradation of the Ag-Sr MBGNs or the formation of a thick layer of hydroxyapatite (HA) [5]. Moreover, it was observed that the intensity of the calcium and phosphate peaks increased over the incubation time, which indicates the formation of HA crystals on the surface of the Ag-Sr MBGNs [43]. The in vitro bioactivity of pure MBGNs is illustrated in our previous studies [26], as is the toxic effect of Ag MBGNs on the bioactivity [1,6].

Figure 9 shows the SEM images of the synthesized Ag-Sr MBGNs after immersion in SBF and depicts the change in the morphology of the nanoparticles. After 7 days of immersion in SBF, nanostructured and porous HA crystals formed on the surface of the particles. It was further observed that plate-like HA crystals form on the surface of the incubated particles; the plate-like structure indicates calcium-enriched apatite crystals [12]. The FTIR analysis of Ag-Sr MBGNs after immersion in SBF was not investigated in the current study. However, in our recent study, we presented the FTIR analysis of Ag-Sr MBGNs incorporated in a chitosan/gelatin matrix after immersion in SBF, where carbonate- and phosphate-related peaks appeared after immersion in SBF [1].

Studies show that bone reformation is pH-sensitive. During bone remodeling, the pH around the border of the osteoclast is 4.0, while the pH of the surrounding body fluid is 7.4 [44,45]. Moreover, it is known that the physiological environment of the initial fracture hematoma is acidic and becomes alkaline during healing, which aids bone differentiation [46]. To study the pH changes, Ag-Sr MBGNs samples were immersed in SBF and then incubated. The SBF solution was changed every 3 h. Initially, the pH of the SBF was set at 7.40 ± 0.02, and the pH was subsequently checked on the 3rd, 7th, 14th, and 21st days. Figure 10 shows that the pH became slightly basic as the immersion time increased, which aids bone differentiation, as mentioned earlier. The overall curve progression is stable except for a few midterm fluctuations. On the basis of the in vitro bioactivity and antibacterial studies, it was inferred that the Ag-Sr MBGNs provided a potent antibacterial effect while maintaining the typical bioactivity associated with MBGNs. According to Reference [41], Ag may affect the in vitro bioactivity of the MBGNs; thus, the addition of Sr improved the in vitro bioactivity, while the Ag doping provided the antibacterial effect.
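A common quick screen for apatite formation from EDX data is comparing the Ca/P atomic ratio to the stoichiometric hydroxyapatite value of ~1.67 (Ca10(PO4)6(OH)2). The sketch below uses hypothetical atomic percentages, not the measurements in Figure 8:

```python
# Hedged sketch: checking whether an EDX-derived Ca/P atomic ratio approaches
# the stoichiometric hydroxyapatite value, a common screen for apatite
# formation after SBF immersion. All numeric inputs are hypothetical.

HA_CA_P = 10 / 6   # ≈ 1.67 for stoichiometric hydroxyapatite Ca10(PO4)6(OH)2

def is_apatite_like(ca_atomic_pct, p_atomic_pct, tolerance=0.15):
    """Return the Ca/P ratio and whether it is within tolerance of HA."""
    ratio = ca_atomic_pct / p_atomic_pct
    return ratio, abs(ratio - HA_CA_P) <= tolerance

ratio, ok = is_apatite_like(16.5, 10.0)   # hypothetical EDX atomic percents
print(f"Ca/P = {ratio:.2f}, apatite-like: {ok}")
```

Ratios well below 1.67 typically indicate calcium-deficient apatite or amorphous calcium phosphate rather than mature HA, so this check complements, but does not replace, the XRD and FTIR evidence discussed above.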
Conclusions
In this study, we synthesized Ag-doped, Sr-doped, and Ag-Sr-doped MBGNs via a modified Stöber method and sol-gel process. SEM images confirmed the spherical morphology of all the synthesized particles, and BET results confirmed their mesoporous nature. It was deduced that the addition of the metallic ions did not affect the morphology of the MBGNs. Furthermore, XRD results confirmed the doping of Ag and Sr into the silica network of the MBGNs, and the XRD patterns confirmed the amorphous nature of the synthesized MBGNs at all the different concentrations. The release of Ag and Sr ions was tracked by ICP studies. The results confirmed that during the first day of incubation, Ag and Sr showed a burst release; however, with increasing incubation time, Ag and Sr were released in a sustained manner, thus providing a long-term therapeutic effect. The controlled release of Ag provided a potent antibacterial effect, while the release of Sr ions improved the in vitro bioactivity. The peculiar morphological features of the synthesized Ag-Sr MBGNs and the feasibility of functionalizing these MBGNs with active ions or biomolecules suggest that the SiO2-CaO-based MBGNs synthesized in this study are a promising material for biomedical applications, including bone regeneration and wound healing.
Synthesis of Ag-Sr-Containing MBGNs (Stöber Process)
Ag-Sr MBGNs were prepared by a modified Stöber process [33,47]. First, 0.56 g CTAB was dissolved in 26 mL of distilled water under continuous stirring for 30 min at 40 °C. Then, 8 mL of ethyl acetate was added dropwise into the solution. Third, 26 mL of a diluted solution of ammonium hydroxide (32 vol.%) was added to maintain the pH at 9.5, and 6 mL of TEOS was added dropwise into the above solution under continuous stirring. Then, 2.24 g calcium nitrate, 0.0834 g silver nitrate, and 0.42 g strontium nitrate were added, depending upon the required composition, followed by magnetic stirring for 30 min. Afterwards, the solution was left for the reaction between the reactants to proceed. Subsequently, the suspension was centrifuged at 7830 rpm for 10 min to separate the particles from the parent solution, followed by washing of the sedimented particles with ethanol; this step was repeated three times. Finally, the precipitates were dried in an oven at 75 °C for 12 h, followed by calcination at 700 °C for 5 h. Figure 11 illustrates the synthesis of the Ag-Sr-doped MBGNs.
In this study, MBGNs with three different compositions were synthesized, i.e., MBGNs doped with 5 mol.% Sr (5Sr-MBGNs), 1 mol.% Ag (1Ag-MBGNs), and 5 mol.% Sr plus 1 mol.% Ag (5Sr-1Ag MBGNs). Table 3 illustrates the nominal composition of the synthesized MBGNs.
The surface morphology of the as-prepared nanoparticles, as well as of those obtained after the bioactivity tests, was investigated using scanning electron microscopy (SEM; LEO 435VP, Carl Zeiss™ AG, Jena, Germany).
First, to make the samples conductive and reduce the effect of charging, the samples were coated with a thin layer (around 10 nm) of gold via the sputtering technique (Q150/S, Quorum Technologies™, Lewes, UK). The SEM images were taken at different magnifications.
To further investigate the morphology of MBGNs, the BET (Brunauer−Emmett−Teller) analysis was carried out. Nitrogen adsorption/desorption was used to measure the pore volume (porosity).
Compositional Analysis
Energy-dispersive X-ray spectroscopy (EDX; X-MaxN, Oxford Instruments, Abingdon, UK) was used to determine the composition of the as-synthesized particles, and the ratio of the different elements in the hybrid nanoparticles was evaluated. For the EDX analysis, the synthesized Ag-Sr MBGNs (10 mg) were pressed into pellets, and the analysis was conducted using a working distance of 6 mm and an accelerating voltage of 25 kV. For the EDX analysis after immersion in SBF, the samples were coated with a thin layer (around 10 nm) of gold via the sputtering technique (Q150/S, Quorum Technologies™, Lewes, UK).
Fourier transform infrared spectroscopy (FTIR) measurements were carried out on pellets of Ag-Sr-, Ag-, and Sr-doped MBGNs using the potassium bromide (KBr) disk method on a Shimadzu IRAffinity-1S (Shimadzu Corp., Kyoto, Japan) equipped with LabSolutions IR software and a Quest ATR GS10801-B single-bounce diamond accessory (Specac Ltd., London, UK) at room temperature. To prepare samples for the FTIR studies, the MBGNs were ground to a fine powder and mixed with KBr powder at an MBGNs:KBr ratio of 1:100. The mixture was ground further to achieve a homogeneous mixture and then pressed at a hydraulic pressure of 5 tons/cm² to form the disk samples. The IR transmission spectra were recorded immediately after preparing the disks. For optimal results, the device was cleaned with ethanol before the sample was applied, and a background scan was conducted with 128 runs. Every sample was measured with 128 transmittance scans at a resolution of 4 cm−1 with Happ-Genzel apodization over the wavenumber range from 400 to 4000 cm−1. To reduce signal noise, the spectra were smoothed by 15 points.
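The 15-point smoothing mentioned above can be sketched as a simple boxcar (moving-average) filter. This is an illustrative reconstruction, not the actual LabSolutions IR routine; the `smooth` function name and its edge-padding behavior are assumptions.

```python
# Illustrative 15-point boxcar smoothing, approximating a "smoothed by
# 15 points" step; edge samples are padded by repeating the end values.
def smooth(spectrum, window=15):
    half = window // 2
    padded = [spectrum[0]] * half + list(spectrum) + [spectrum[-1]] * half
    return [sum(padded[i:i + window]) / window for i in range(len(spectrum))]
```

A flat baseline is left unchanged by this filter, while high-frequency noise is averaged toward the local mean.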
In addition to FTIR, the samples of Ag-Sr-doped MBGNs were characterized by X-ray diffraction (XRD; MiniFlex 600, Rigaku Corporation, Tokyo, Japan) alongside pure MBGNs. The diffraction pattern was recorded using Ni-filtered Cu Kα radiation (λ = 1.54 Å) operated at 40 kV and 40 mA over the 2θ angular range of 20-80° (with a 0.02° step and a speed of 2° per minute).
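Peak positions from such a scan convert to interplanar spacings via Bragg's law, using the Cu Kα wavelength given above. The sketch below is illustrative; the function name is ours, not from any instrument software.

```python
import math

CU_KALPHA = 1.54  # angstroms, as used for the Ni-filtered Cu K-alpha source

def d_spacing(two_theta_deg, wavelength=CU_KALPHA, n=1):
    """Interplanar spacing d (angstroms) from a 2-theta peak,
    via Bragg's law n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))
```

Lower-angle peaks correspond to larger d-spacings, so d decreases monotonically across the 20-80° scan range.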
Zeta Potential
To measure the zeta potential, a Zetasizer Nano ZSP (Malvern Panalytical, London, UK) was used. The analyzed suspensions (powder samples in absolute ethanol) were diluted to a particle concentration of 0.1 g·L−1. Three measurements per suspension were taken at standard pH, each with a maximum of 100 runs, and averaged. After each measurement, the cell was flushed out with ethanol again.
Ion-Release Profile
To investigate the Ag and Sr ion release from the prepared Ag-Sr MBGNs, 75 mg of powder sample was dispersed in SBF solution (50 mL) for different time intervals (days 1, 3, 7, 14, and 21) in an orbital shaking incubator at 37 °C. The Ag and Sr ions released into the medium under dynamic conditions were measured using ICP-OES (inductively coupled plasma optical emission spectrometry; IRIS Advantage, Thermo Jarrell Ash, Waltham, MA, United States). Approximately 1 g of the sample was dissolved in a 5% HNO3 solution and heated gently to ensure complete dissolution. The solution was made up to 50 mL volumetrically and analyzed by ICP-OES against a calibration traceable under ISO 17025 guidance.
Antibacterial Studies
To test the antimicrobial properties of the synthesized particles, an antibacterial (turbidity) test was conducted [48,49]. To grow E. coli and S. carnosus bacteria in test tubes, a sterile wooden tip was used to scratch bacteria from a frozen sample, which were then dropped into lysogeny broth (LB medium). After incubation for 24 h, the medium had a high concentration of bacteria and was ready to use. MBGNs, Ag MBGNs, Sr MBGNs, and Ag-Sr MBGNs were added to the test tubes and incubated for 24 h. Then, an optical density (OD) measurement was performed with absorbance at 600 nm (OD600). For each measurement, the OD600 of the medium was taken as a reference.
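OD600 readings from such a turbidity test are commonly converted into a growth-inhibition percentage by subtracting the medium blank and comparing against an untreated bacterial control. The sketch below is a hypothetical post-processing step; the function and the example readings are illustrative, not measured values from this study.

```python
# Hypothetical turbidity-test evaluation: blank-corrected OD600 growth of a
# treated culture relative to an untreated control, as percent inhibition.
def percent_inhibition(od_sample, od_control, od_blank):
    growth_sample = od_sample - od_blank
    growth_control = od_control - od_blank
    return 100.0 * (1.0 - growth_sample / growth_control)
```

A sample whose corrected growth matches the control gives 0% inhibition; one that stays at the blank level gives 100%.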
Disc Diffusion Test
Petri dishes were spread homogeneously with heated agar inoculated with bacteria (E. coli and S. carnosus). Afterwards, the prepared pellets with different concentrations of MBGNs were placed on the agar and incubated at 37 °C for 24 h. After 24 h of incubation, the petri dishes were taken out and digital images were taken to track the zone of inhibition.
In Vitro Bioactivity Test
In vitro bioactivity of the synthesized Ag-Sr MBGNs was investigated following Kokubo et al. [50]. The composition of the SBF was adopted from Reference [33], and the pH was set at 7.4. The synthesized Ag-Sr MBGNs were pressed into pellets using an electrohydraulic pressing device (Mauthe Maschinenbau, Salem, Germany). The prepared pellets were immersed in SBF at a sample-to-SBF ratio of 1 mg/mL, and the SBF solution was changed every three days to simulate a refreshing system. Different sets of samples were taken after 1, 7, and 30 days. The samples were gently washed with deionized (DI) water to prevent salt crystals on the surface and dried in an oven at 60 °C. After drying, the samples were weighed for the degradation studies and characterized using SEM and EDX analyses.
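The degradation step above (weighing pellets before and after SBF immersion) amounts to a percent-weight-loss calculation, and the 1 mg/mL ratio fixes the SBF volume per pellet. The helpers below are an illustrative sketch with made-up masses, not data from the study.

```python
# Hypothetical degradation metric: percent weight loss of a pellet after SBF
# immersion, plus the SBF volume implied by the stated 1 mg/mL ratio.
def weight_loss_percent(w_initial_mg, w_final_mg):
    return 100.0 * (w_initial_mg - w_final_mg) / w_initial_mg

def sbf_volume_ml(pellet_mass_mg, ratio_mg_per_ml=1.0):
    return pellet_mass_mg / ratio_mg_per_ml
```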
Gallic Acid Alleviates Neuropathic Pain Behaviors in Rats by Inhibiting P2X7 Receptor-Mediated NF-κB/STAT3 Signaling Pathway
Neuropathic pain is a complex disease with a high incidence. Adenosine triphosphate (ATP) and its activated P2X7 receptor are involved in the signal transmission of neuropathic pain. Gallic acid (3,4,5-trihydroxybenzoic acid) is a traditional Chinese medicine obtained from natural plants that exhibits anti-inflammatory, analgesic, and antitumor effects. However, the underlying mechanism of gallic acid in analgesia remains unknown. This study aims to reveal how gallic acid alleviates neuropathic pain behaviors in a rat model of chronic constriction injury (CCI). Real-time PCR, western blotting, double-label immunofluorescence, molecular docking, and whole-cell patch-clamp techniques were used to explore the therapeutic action of gallic acid on neuropathic pain. The results showed that after CCI rats were treated with gallic acid for 1 week, the mechanical withdrawal threshold and thermal withdrawal latency were increased, accompanied by inhibition of the upregulated expression of P2X7 and TNF-α at both the mRNA and protein levels, and reduced NF-κB and phosphorylated STAT3 in the dorsal root ganglia. At the same time, gallic acid significantly decreased the coexpression of P2X7 and glial fibrillary acidic protein in the dorsal root ganglia. In addition, gallic acid suppressed the ATP-activated current in human embryonic kidney 293 (HEK293) cells transfected with a plasmid expressing P2X7, but had no effect on the ATP-activated current of cells transfected with the P2X7-mutant plasmid (carrying a point mutation at the key site where gallic acid binds the P2X7 receptor). Therefore, our work suggests that gallic acid may alleviate neuropathic pain in CCI rats by inhibiting the P2X7 receptor and the subsequent activation of the TNF-α/STAT3 signaling pathway.
INTRODUCTION
The latest definition of pain by the International Association for the Study of Pain (IASP) describes it as an unpleasant sensory and emotional experience associated with actual or potential tissue damage (Raja et al., 2020). Neuropathic pain is attributed to pathological changes or injuries of the somatosensory nervous system and is common and frequently disabling (Calvo et al., 2019). Neuropathic pain can cause the activation of satellite glial cells in dorsal root ganglia (DRG) and promote signal transduction between neuronal synapses and the release of cytokines, chemokines, and various inflammatory factors, eventually leading to an increase in the abnormal discharge of neurons and resulting in hyperalgesia or allodynia. Neuropathic pain is a complex heterogeneous syndrome, which makes treatment very difficult.
Tumor necrosis factor-α (TNF-α) is a pleiotropic inflammatory factor (Kalliolias and Ivashkiv, 2016). When the extracellular adenosine triphosphate (ATP) concentration increases, the P2X7 receptor is activated and acts on TNF-α-converting enzyme (TACE); cleaved membrane-bound TNF-α becomes soluble free TNF-α, which may induce inflammation and neuropathic pain (Gogoi et al., 2020). Signal transducer and activator of transcription 3 (STAT3) can be activated by different cytokines, and there is evidence that the TNF-α/STAT3 signaling pathway is activated in neuropathic pain (Ding et al., 2019). In a rat model of neuropathic pain established by chronic constriction injury (CCI), TNF-α can activate nuclear factor kappa-B (NF-κB), and activation of the NF-κB/STAT3 signaling pathway may participate in pain regulation (Chu et al., 2020).
Purinergic receptors include the P1 and P2 subfamilies. The P2 subfamily contains the P2X (P2X1-7) and P2Y (P2Y1, 2, 4, 6, and 11-14) receptors (Burnstock, 2017). When organs or tissues are damaged, ATP can be released into the inflammatory microenvironment. ATP acts as a signaling molecule to activate the P2X7 receptor through paracrine or autocrine signaling, which affects the homeostasis of the internal environment and the development of the inflammatory response (Di Virgilio et al., 2018). Thus, targeting the P2X7 receptor may provide a new direction for anti-inflammatory therapy. Upregulation of glial fibrillary acidic protein (GFAP) expression indicates the activation of satellite glial cells (SGCs); when SGCs are activated, large amounts of ATP and inflammatory cytokines can be released to activate the P2X7 receptor (Hu et al., 2020). Gallic acid (3,4,5-trihydroxybenzoic acid), a traditional Chinese medicine, can be obtained from gallnuts, sumac, and many other natural plants, and has analgesic, anti-inflammatory, hypoglycemic, and lipid-lowering effects (Kong et al., 2018; Sohrabi et al., 2021). However, the underlying analgesic mechanism of gallic acid remains unknown. Therefore, the purpose of this study was to explore whether gallic acid could alleviate neuropathic pain behaviors in rats with CCI by inhibiting the P2X7 receptor-mediated NF-κB/STAT3 signaling pathway.
Animal and CCI Model Establishment
Male Sprague-Dawley rats weighing 200-220 g were provided by the medical animal center of Nanchang University. All experiments were conducted in accordance with the animal ethics committee of Nanchang University and followed the IASP guidelines on animal pain research. The CCI model was established according to a previously published method (Li et al., 2017). After anesthesia with pentobarbital sodium (40 mg/kg), the biceps femoris was bluntly dissected to expose the sciatic nerve, and the proximal end of the nerve was ligated four times with 4-0 catgut suture, with a distance of 1 mm between adjacent ligatures. The surgical treatment of the sham group was the same as that of the CCI group, except that the freed nerves were not ligated. The timeline of CCI model establishment and drug administration is shown in Figure 1A.
To examine the effects of gallic acid on neuropathic pain, rats were randomly numbered and divided into five groups: a sham operation group (Sham group), a sham operation + gallic acid group (Sham + Gallic acid group), a CCI model group (CCI group), a CCI model + normal saline group (CCI + NS group), and a CCI model + gallic acid group (CCI + Gallic acid group). Gallic acid (Shanghai Macklin Biological Co., Shanghai, China) was dissolved in normal saline. From the first day after CCI, the Sham + Gallic acid and CCI + Gallic acid groups were injected intraperitoneally with gallic acid (100 mg/kg) for 1 week (Mirshekari Jahangiri et al., 2020), while the CCI + NS group was given the same volume of saline for 1 week.
Molecular Docking
The protein sequences of the P2X3, P2X4, and P2X7 receptors were downloaded from http://www.UniProt.org/, and the P2X .pdb files were obtained by homology modeling with https://swissmodel.expasy.org/. The gallic acid .sdf file was downloaded from https://pubchem.ncbi.nlm.nih.gov/. The P2X receptors were pretreated with PyMOL software to remove small-molecule ligands, dehydrate, and hydrogenate. Then, the gallic acid .sdf file was converted to .pdb format. Finally, AutoDock Tools software was employed for molecular docking.
Mechanical Withdrawal Threshold and Thermal Withdrawal Latency
Mechanical and thermal hyperalgesia were measured at 2, 4, 6, 8, 10, 12, and 14 days after CCI. The von Frey monofilament test was used to determine the mechanical withdrawal threshold (MWT) (Jia et al., 2017). A BME-410c fully automatic thermal radiation stimulator was used to measure the thermal withdrawal latency (TWL). Each rat was measured six times with a five-minute interval between measurements. All experiments were conducted by experienced researchers in a blinded manner.
Western Blotting
The procedure for protein extraction was the same as in a previous study (Yi et al., 2018). Protein samples of 20-30 μg were subjected to electrophoresis on SDS-polyacrylamide gels. After transferring the proteins to membranes, 5% skimmed milk was used for blocking at room temperature for 2 h, followed by incubation at 4°C overnight with the primary antibodies: anti-P2X7 (Alomone Labs, Jerusalem, Israel), anti-TACE (Novus Biologicals Co., Littleton, United States), anti-TNF-α (Boster Biological Technology, Wuhan, China), anti-NF-κB (Affinity Biosciences, Ohio, United States), anti-STAT3 (Cell Signaling Technology, Beverly, MA, United States), anti-phosphorylated (p)-STAT3 (Cell Signaling Technology, Beverly, MA, United States), or anti-β-actin (ZSGB-BIO, Beijing, China). After washing with TBST for 3 × 10 min, the membranes were incubated at room temperature for 2 h with the secondary antibodies: goat anti-rabbit IgG (Proteintech, Rosemont, United States) or goat anti-mouse IgG (Proteintech, Rosemont, United States). After three further 10-min washes with TBST, the membranes were exposed and developed in a gel imaging system. Image-Pro Plus 6.0 was used to analyze the results.
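Densitometric analysis of such blots normalizes each target band to the β-actin loading control and then expresses groups as fold change versus sham. The sketch below illustrates that arithmetic; all band-intensity values are made-up illustrative numbers, not measurements from this study.

```python
# Hypothetical densitometry step for the western blots: target-band intensity
# normalized to the beta-actin loading control, then fold change versus sham.
def relative_expression(band_intensity, actin_intensity):
    return band_intensity / actin_intensity

sham_p2x7 = relative_expression(1200.0, 2400.0)  # illustrative intensities
cci_p2x7 = relative_expression(2100.0, 2100.0)
fold_change = cci_p2x7 / sham_p2x7
```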
Enzyme-Linked Immunosorbent Assay
The contents of TACE, TNF-α, and NF-κB in the serum of rats were determined by enzyme-linked immunosorbent assay. The prepared samples and standards were added to the wells of the plate and reacted at 37°C for 30 min. The plate was then washed five times, HRP-conjugated reagent was added, and the reaction was carried out for 30 min at 37°C. The plate was washed five times again, chromogenic reagents A and B were added, and the reaction proceeded at 37°C for 10 min. Finally, the stop solution was added, and the OD value was read within 15 min.
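ELISA OD readings are typically converted to concentrations through a standard curve fitted to the standards run on the same plate. The sketch below assumes a linear curve and uses illustrative standard values, not data from this study; the function names are ours.

```python
# Hypothetical ELISA quantification: fit a linear standard curve (OD versus
# concentration) by least squares, then invert it to read sample concentrations.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def od_to_conc(od, slope, intercept):
    return (od - intercept) / slope

standards_conc = [0.0, 50.0, 100.0, 200.0]     # illustrative, e.g. pg/mL
standards_od = [0.05, 0.30, 0.55, 1.05]        # perfectly linear: 0.005*c + 0.05
slope, intercept = fit_line(standards_conc, standards_od)
```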
Human Embryonic Kidney 293 Cell Culture and Transfection
The culture and transfection of HEK293 cells were conducted as described previously. FuGENE 6 (Shanghai Promega Biotech Co., Shanghai, China) was added to Opti-MEM medium, and the mixture was incubated for 5 min to make the transfection reagent. The pcDNA3.0-EGFP-hP2X7 (P2X7-WT) recombinant plasmid (2.5 μg), labeled with green fluorescent protein (GFP), was mixed with the transfection reagent and incubated for 15 min. Finally, the plasmid-containing medium was added to the culture dish with cells, which were cultured in an incubator for 24-48 h. The transfection efficiency was observed by fluorescence microscopy, and suitable cells were used for the whole-cell patch-clamp experiments.
Whole-Cell Patch-Clamp Test
The whole-cell patch-clamp experiments were performed as described previously. HEK293 cells transfected with the pcDNA3.0-EGFP-hP2X7 wild-type (P2X7-WT) or P2X7-pEGFP-C1-MUT mutant (P2X7-Mutant) recombinant plasmid were placed under a microscope, and the perfusion delivery system was positioned at low magnification to get as close to the cell surface as possible. A glass electrode was pulled, filled to about one-third with intracellular solution, and placed on the electrode holder. Positive pressure was applied as the electrode entered the bath solution; after a seal was formed, the membrane was ruptured to establish the whole-cell configuration. At this point, the test drug was administered through a multi-channel perfusion delivery system and the current was recorded. The concentrations of ATP and gallic acid were 100 μM (Ivetic et al., 2019) and 10 μM (Du et al., 2020), respectively. The current was recorded using Clampex 10.3 software.
Statistical Analysis
SPSS 21.0 software (SPSS, Chicago, IL, United States) and GraphPad Prism 7 (GraphPad Software, Inc., La Jolla, United States) were used. Two-way ANOVA and Tukey's honestly significant difference test were used to analyze the MWT and TWL. Pearson coefficient analysis was used for the results of the double-label immunofluorescence. The other experimental results were analyzed by one-way ANOVA, and the differences among groups were compared by the least significant difference (LSD) test. All results are shown as the mean ± SEM, and p < 0.05 indicated that the difference was statistically significant.
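The Pearson coefficient used here for the immunofluorescence colocalization analysis can be computed directly from paired intensity values. A minimal sketch (the intensity vectors below are illustrative, not study data):

```python
import math

# Pearson correlation coefficient r between two paired variables, e.g. paired
# P2X7 and GFAP pixel intensities; r ranges from -1 to 1.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Perfectly co-varying signals give r = 1, opposed signals give r = -1, and unrelated signals give r near 0, matching the interpretation given in the Results.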
Molecular Docking of Gallic Acid and P2X Receptors
The molecular docking results showed that the binding affinities of gallic acid for P2X3, P2X4, and P2X7 were −5.6 kcal/mol (Table 1), −5.5 kcal/mol (Table 2), and −6.4 kcal/mol (Table 3), respectively. Taking an absolute binding affinity greater than 6 kcal/mol as the criterion, gallic acid bound the P2X7 receptor best; we therefore chose the P2X7 receptor as the target of gallic acid for subsequent studies.
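The target-selection rule above can be expressed as a small filter over the predicted affinities; the affinity values come from Tables 1-3, while the variable names and the selection code itself are our illustration, not part of the docking software.

```python
# Target selection mirroring Tables 1-3: keep a receptor as a candidate when the
# absolute predicted binding affinity exceeds 6 kcal/mol (more negative = stronger).
affinities = {"P2X3": -5.6, "P2X4": -5.5, "P2X7": -6.4}  # kcal/mol

selected = [r for r, a in affinities.items() if abs(a) > 6.0]
best = min(affinities, key=affinities.get)  # most negative predicted binding
```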
Effect of Gallic Acid on Pain Behaviors in CCI Rats
A smaller mechanical withdrawal threshold or thermal withdrawal latency indicates greater pain sensitivity in rats. The results in Figures 1B,C show that, 1 week after CCI, both mechanical and thermal hyperalgesia in the CCI group were significantly greater than in the sham group (p < 0.001). After treatment with gallic acid, the mechanical and thermal hyperalgesia in CCI rats were significantly reduced (p < 0.001). However, no significant difference was observed between the negative control group and the CCI group (p > 0.05). These results revealed that gallic acid could significantly alleviate mechanical and thermal hyperalgesia in CCI rats.
Effect of Gallic Acid on Expression of P2X7 Receptor
Quantitative real-time PCR used β-actin and GAPDH as the housekeeping genes. Quantitative real-time PCR and western blotting results showed that, compared with the sham group, the mRNA and protein levels of the P2X7 receptor in the CCI group were significantly increased (p < 0.001). The mRNA and protein levels of P2X7 in the CCI plus gallic acid group were significantly lower than those in the untreated group (p < 0.001). However, no significant difference was observed between the CCI plus NS group and the CCI group (p > 0.05) (Figures 2A-D).
These results suggested that gallic acid could significantly inhibit the expression of the P2X7 receptor at both the mRNA and protein levels in CCI rats. Upregulation of GFAP expression indicates the activation of satellite glial cells (SGCs), which promotes the release of ATP and inflammatory cytokines and activates the P2X7 receptor. The results of double-label immunofluorescence showed that P2X7 was expressed on SGCs in the DRG. Compared with the sham group, the coexpression of P2X7 and GFAP in the DRG was significantly increased in the CCI group, and this coexpression was decreased in the CCI plus gallic acid group compared with the untreated group; there was no significant difference between the CCI group and the negative control group. Pearson's correlation coefficient measures the degree of correlation between two variables: r takes values between 1 and −1, where 1 means the variables are completely positively correlated, 0 means uncorrelated, and −1 means completely negatively correlated. Pearson coefficient analysis was used to detect the correlation between P2X7 and GFAP, and the colocalization scatter plots of P2X7 and GFAP were synthesized with Image-Pro Plus 6.0 (Figures 2E,F). The fluorescence intensity of the coexpression of P2X7 and GFAP was analyzed with Image-Pro Plus 6.0 software (Figure 2G). Thus, gallic acid could inhibit the coexpression of P2X7 and GFAP in the DRG of CCI rats. (Note to Tables 1-3: the predicted binding affinity is given in kcal/mol; the two RMSD variants, rmsd/lb and rmsd/ub, represent the lower and upper bounds, respectively, of the structural deviation between docked poses.)
Effect of Gallic Acid on Expression of TACE and TNF-α
The results of the enzyme-linked immunosorbent assay showed that the contents of TACE and TNF-α in serum were significantly increased in the CCI group compared with the sham group (p < 0.001), and significantly decreased in the gallic acid treatment group compared with the untreated group (p < 0.001). There was no significant difference between the CCI group and the CCI plus NS group (p > 0.05) (Figures 3A,H). The TACE protein content of the DRG in the CCI group was significantly upregulated compared with the sham group (p < 0.001), while it was significantly lower in the CCI plus gallic acid group than in the untreated CCI group (p < 0.001); there was no significant difference between the CCI group and the CCI plus NS group (p > 0.05) (Figures 3B,C). In addition, compared with the sham group, TNF-α mRNA and protein levels in the DRG of the CCI group were significantly higher (p < 0.001); these enhanced levels were significantly diminished after CCI rats were treated with gallic acid (p < 0.001). No significant difference was observed between the CCI group and the CCI plus NS group (p > 0.05) (Figures 3D-H). These results indicated that gallic acid could decrease the expression of TACE and TNF-α in CCI rats. (Figure legend: each group consisted of eight rats; one-way ANOVA was used to detect the expression of NF-κB and STAT3; data are presented as mean ± SEM; **p < 0.01 and ***p < 0.001 versus the sham group, ##p < 0.01 and ###p < 0.001 versus the CCI group.)
Effect of Gallic Acid on Expression of NF-κB and STAT3
The results of the enzyme-linked immunosorbent assay showed that the content of NF-κB in the serum of rats in the CCI group was significantly higher than that in the sham group (p < 0.001), while it was significantly decreased in the CCI plus gallic acid group compared with the untreated CCI group (p < 0.001) (Figure 4A). Compared with the sham group, the protein levels of NF-κB and p-STAT3 in the CCI group were significantly increased (Figures 4B-E), and compared with the untreated group, they were significantly decreased in the CCI plus gallic acid group. No significant difference was seen between the CCI group and the negative control group (p > 0.05) (Figures 4A-E). There was no significant difference in the expression of total STAT3 between the CCI group and the sham group (p > 0.05) (Figure 4F). These results suggested that gallic acid could inhibit the expression of NF-κB and p-STAT3 in CCI rats.
Effect of Gallic Acid on ATP-Activated Current in HEK293 Cells Expressing P2X7
Molecular docking results showed that gallic acid binds in a pocket formed by the B and C chains of the P2X7 receptor through hydrogen bonding, thus interacting with P2X7. As shown in Figure 5, green represents chain A, purple represents chain B, and blue represents chain C, and panels A-D show the binding patterns of gallic acid and the P2X7 receptor in different fields of view (Figure 5A). PyMOL and AutoDock Tools were used in combination to predict the effects of mutating the binding sites of gallic acid and P2X7 on the binding affinity of both, as well as on the binding affinity of ATP and P2X7. Based on the molecular docking results, we finally selected the mutation of Leu97 to Gly97 and constructed the P2X7 mutant for subsequent experiments (Figures 5E,F). ATP-activated currents were recorded by the whole-cell patch-clamp technique. The results showed that gallic acid (10 μM) could significantly inhibit the ATP-activated current in HEK293 cells expressing the wild-type P2X7 receptor but had little inhibitory effect in HEK293 cells expressing the mutant P2X7 receptor. After washing with extracellular fluid, the recorded current returned to its state before gallic acid administration. In addition, the concentration-effect curve for inhibition of the ATP-activated current in HEK293 cells expressing the wild-type P2X7 receptor yielded an IC50 of 4.261 μM (Figures 5G-I). These results suggest that gallic acid might alleviate neuropathic pain behaviors in CCI rats by inhibiting the P2X7 receptor.
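A concentration-effect relationship like the one above is commonly described by a Hill equation. In the sketch below, the IC50 of 4.261 μM comes from the text, while the Hill slope of 1 and the function itself are assumptions for illustration, not the curve actually fitted in the study.

```python
# Hypothetical Hill-equation model of gallic acid inhibition of the
# ATP-activated P2X7 current; IC50 from the text, Hill slope assumed = 1.
def fraction_inhibited(conc_um, ic50=4.261, hill=1.0):
    """Fractional inhibition at a given gallic acid concentration (uM)."""
    return conc_um ** hill / (conc_um ** hill + ic50 ** hill)
```

At the IC50 the model gives exactly 50% inhibition, and inhibition rises monotonically with concentration, consistent with the reported curve shape.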
DISCUSSION
Molecular docking can predict the interaction between drug ligands and receptors. To determine whether there is a direct interaction between gallic acid and the P2X7 receptor, we carried out a molecular docking test. The results showed that, compared with the P2X3 and P2X4 receptors, gallic acid displayed a better affinity for the P2X7 receptor. The docking score (−6.4 kcal/mol) of gallic acid and the P2X7 receptor was within a credible range, revealing an interaction between them. Therefore, functional studies of the effects of gallic acid on P2X7 receptors were conducted in an animal model of neuropathic pain. Neuropathic pain is caused by various central and peripheral injuries. In this study, a classic CCI rat model was used to verify the therapeutic effect of gallic acid on neuropathic pain. Our results show that the MWT and TWL in CCI rats were decreased and their sensitivity to noxious stimulation was increased, which is consistent with previous observations. Neuropathic pain is closely related to high levels of proinflammatory cytokines, and gallic acid has analgesic and anti-inflammatory effects (Cao et al., 2019). Indeed, the MWT and TWL in CCI rats were significantly increased after treatment with gallic acid, suggesting that gallic acid relieved pain behaviors in CCI rats.
We then explored the underlying molecular mechanism by which gallic acid alleviates neuropathic behaviors in CCI rats. P2X receptors are involved in neuropathic pain; in particular, P2X3, P2X4, and P2X7 play crucial roles in the treatment of pain (Jacobson et al., 2020). P2X7 is widely expressed in the SGCs of the DRG (Neves et al., 2020). Upon activation of SGCs after nerve injury, the ATP and various cytokines released from SGCs may act on the P2X7 receptor to affect the pathophysiological processes of neuropathic pain. In this study, mRNA and protein levels of P2X7 were significantly higher in CCI rats, whereas gallic acid treatment effectively downregulated this enhanced P2X7 expression. In addition, double-label immunofluorescence showed increased coexpression of P2X7 and GFAP in the DRG of CCI rats, an effect that was inhibited after gallic acid treatment. Moreover, Pearson coefficient analysis showed that P2X7 expression was well correlated with that of GFAP. Therefore, gallic acid probably improved the MWT and TWL in CCI rats by inhibiting the expression of P2X7 in the activated SGCs of the DRG.
TNF-α is an important cytokine that contributes to the pathogenesis of neuropathic pain. Activated mature TACE can cleave membrane-bound TNF-α and convert it into free, soluble smaller molecules, which participate in various inflammatory responses and cell signal transduction (Lambertsen et al., 2019). P2X7 promotes the release of mature TACE through exosomes, thus inducing the release of TNF-α (Barbera-Cremades et al., 2017). In this study, the levels of TACE protein and of TNF-α mRNA and protein were upregulated in CCI rats, and the contents of TACE and TNF-α in serum were significantly increased. Significantly, these alterations could be reversed by gallic acid treatment. Thus, gallic acid might inhibit the activation of TACE by interfering with the function of the P2X7 receptor, leading to decreased release of TNF-α and thereby alleviating neuropathic pain behaviors in CCI rats.
NF-κB is an important transcription regulator that exists in almost all mammalian cells, and activated NF-κB can participate in inflammatory responses (Jimi et al., 2019; Peng et al., 2019; Wu H. et al., 2020). TNF receptor-associated factors (TRAFs) are intracellular adaptor proteins that include seven family members (TRAF1–7). TNF-α can activate NF-κB after binding to TRAFs, thereby regulating gene transcription and participating in neuropathic pain (Dou et al., 2018). NF-κB can regulate the transcription of STAT3 and synergistically affect the progression of inflammation (Callejas et al., 2019). Inhibition of the NF-κB/STAT3 signaling pathway can also inhibit acute skin inflammation (Wu J.-Y. et al., 2020). Additionally, the P2X7 receptor can regulate the NF-κB signaling pathway (Cai et al., 2016). In our study, the expression of NF-κB and p-STAT3 was increased in CCI rats, and the content of NF-κB in the serum of CCI rats was significantly higher than that in the Sham group, indicating activation of the NF-κB/STAT3 signaling pathway. In contrast, gallic acid treatment counteracted the upregulated expression of NF-κB and p-STAT3 in CCI rats. Hence, reversing the activation of the NF-κB and STAT3 signaling pathways subsequent to inhibiting the expression of the P2X7 receptor and the release of TNF-α would contribute to the beneficial effects of gallic acid on alleviating mechanical and thermal hyperalgesia in CCI rats. Electrophysiological experiments can probe receptor function. To this end, whole-cell patch-clamp experiments were carried out to analyze the effect of gallic acid on the P2X7 receptor. Moreover, the 3D structures of gallic acid and P2X7 were obtained, and molecular docking results showed that gallic acid binds to a pocket composed of six amino acid residues of the P2X7 receptor.
The P2X7 receptor mutants were simulated in PyMOL, and molecular docking was conducted with gallic acid and ATP, respectively. Sites that reduced the binding affinity of gallic acid without affecting that of ATP were selected for the construction of the P2X7 receptor site-directed mutant plasmid. The results showed that gallic acid had an inhibitory effect on the ATP-activated current of HEK293 cells transfected with the P2X7-WT plasmid but had no effect on the ATP-activated current of cells transfected with the P2X7-mutant plasmid, indicating a loss of gallic acid activity at the mutated P2X7 receptor. These data further demonstrated that gallic acid acts on the P2X7 receptor to downregulate its function, thereby inhibiting the TNF-α/NF-κB/STAT3 signaling pathway in CCI rats.
In conclusion, gallic acid is able to inhibit the activation of SGCs in DRG and alleviate mechanical and thermal hyperalgesia in CCI rats. The underlying molecular mechanisms involve the downregulation of P2X7 receptor expression, reduction of mature TACE release, inhibition of TNF-α expression, and suppression of the NF-κB/STAT3 signaling pathway.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
All experiments were conducted in accordance with the animal ethics committee of Nanchang University and followed the IASP guidelines on animal pain research.
The rapid technological evolution characterizing all the disciplines involved in the wide concept of smart cities is becoming a key factor in triggering true user-driven innovation. In this context, 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The goal of this paper is to introduce the methodology and implementations of the i-SCOPE (interoperable Smart City services through an Open Platform for urban Ecosystems) project, together with its key technologies and open standards. Based on interoperable 3D CityGML UIMs, i-SCOPE aims to deliver an open platform on top of which various "smart city" services can be developed within different domains. Moreover, i-SCOPE tackles issues transcending the merely technological domain, including aspects dealing with social and environmental concerns. Indeed, several tasks, including citizen awareness, crowdsourced and voluntary data collection, as well as privacy issues concerning the people involved, must be considered.
INTRODUCTION

1.1 Motivation
An increasing number of people are living in cities and, by 2030, this number will be close to 5 billion (United Nations 2008). Therefore, it is essential to develop efficient techniques to assist the management of modern cities. The European Commission, within the so-called Digital Agenda, is paying significant attention to smart cities, since the technologies associated with them can lead to an improved knowledge-based economy, to better social inclusion and, in more general terms, to a more liveable environment. The rapid technological evolution characterizing all the disciplines involved in the wide concept of smart cities is becoming a key factor in triggering true user-driven innovation. In this context, 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). Similar to 2D cartographic maps, 3D city models will be used to integrate various data from different sources for publicly accessible visualization and many other applications. The latest generation of 3D Urban Information Models (UIMs), created from accurate urban-scale geospatial information, can be used as a basis to create smart web services based on geometric, semantic, morphological and structural information at the urban scale. CityGML (OpenGIS 2008) represents a very attractive solution that combines 3D information and semantic information in a single data model. The goal of this paper is to describe the challenges and results of the i-SCOPE (interoperable Smart City services through an Open Platform for urban Ecosystems) project, together with its key technologies and open standards. The aim of i-SCOPE is to deliver an open platform, based on interoperable 3D CityGML UIMs, on top of which it is possible to deploy various "smart city" services.
The main challenge of the work is to develop, within the i-SCOPE framework, an effective way to exploit the potential of CityGML to provide smart city services. While in 2D the capability to use maps and images as a basis for web applications is well consolidated, the provision of 3D geographic information via the web is still oriented toward simple visualization, with limited possibility to interact with and exploit the semantic content of the dataset. In this way, the 3D city model, suitably enriched with the information needed to support specific services, becomes the tool to visualize and better understand such phenomena at the city level. However, a city itself is complex, and it is impossible for a standard to specify every detail of it. For these reasons, i-SCOPE aims at providing a significant contribution to the domain of smart city services through the extension and wider adoption of CityGML as the key enabling open standard for 3D smart city services. To this purpose, for each service, specific application fields, implemented using Application Domain Extensions (ADEs), will be defined and proposed. The proposed smart services address the following three scenarios:
• Improved inclusion and personal mobility of aging people and diversely able citizens.
Location data is now commonly regarded as the fourth driver in the decision-making process. Location provides more intelligent data analysis thanks to improved analytical and visualisation capabilities.
The geographical dimension (space) of smart communities varies; it can extend from a city district up to a multimillion metropolis, but it generally does not exceed the dimension of the city. Considering this dimension, georeferenced 3D models represent an increasingly accepted solution for storing and displaying information at the urban scale (Döllner et al. 2006). As the 3D models become more accurate, smart cities can become a human interface for new broadband network services (Ishida 2002). The main feature of these models is the capability to store all the information of a city in a single data model, facilitating its use and interoperability by different agents. For this reason, this paper investigates the usage of a 3D city model to represent, handle and manage urban data in a smart city services platform. CityGML (OpenGIS 2008) represents a very attractive solution that combines 3D information and semantic information in a single data model.
CityGML as base services
The 3D visualization provides operators, administrators and citizens with a reproduction of the city environment that is as realistic as possible, in order to improve information retrieval capabilities and to enhance the effectiveness of the smart services. In this way, the 3D city model becomes the framework to aggregate freely available sources of information that are collected for some transactional purpose, such as road tolling or energy and water consumption billing. To achieve this result, the concept of the 3D city model differs from the most recent and well-known 3D map systems (McCarra, Darren 2012). These platforms are excellent in terms of visualization and performance, but they are not suitable for the deployment of smart city services, since their capability to also consider the semantic aspect of the modelled objects is quite limited. Apple 3D maps represent cities as a very dense textured mesh, while Google 3D buildings, available through the Google Maps and Google Earth platforms, are geometrically modelled using KML, which is not fully suitable for these purposes.
CityGML is a common information model for the representation of 3D urban objects.It is realised as an open data model and XML-based format for the storage and exchange of virtual 3D city models.As an OGC standard, CityGML plays a leading role in the modularisation of urban geospatial information.
Figure 1: CityGML LoD2 automatically generated by the novaFACTORY platform starting from building footprints and a DSM

In the foreseeable future, 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). Similar to 2D cartographic maps, 3D city models will be used to integrate various data from different sources for publicly accessible visualisation and many other applications. The main objective of the i-SCOPE project is to use Urban Information Models (UIMs) encoded in the CityGML format as the reference system for a series of smart services deployed in a so-called "smart city" context. A main advantage of CityGML is its hierarchical geometrical and semantic structure, which makes it possible to define each single component of a city object (i.e., roof, wall, window) and associate it with its "parent object" (i.e., the building). However, despite its great advantages, CityGML is a fairly new data model and is not yet widespread at the local level. The most common situation, considering the i-SCOPE pilots, was that the municipalities could provide the project with the following kinds of information: 1) Digital Terrain Model (DTM), 2) Digital Surface Model (DSM) and 3) building footprints (2D).
The first service developed in the i-SCOPE project is a web service allowing the generation of 3D CityGML building objects at different LODs, starting from the abovementioned data (Fig. 1). The footprint data identify the area in which the DSM and DTM are analysed. A routine identifies whether clusters of points form a planar surface. These surfaces, identified within the area belonging to the footprint of a building, are compared with the planes composing a set of standard roof models. Good detection of the roof composition is achieved with a minimum of 6 points per roof plane; therefore, to recognize a gabled roof a minimum of 12 points is necessary, while for a hipped roof a minimum of 24 points is needed.
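The point-count criterion above can be summarized in a small sketch. The plane counts per roof type (flat = 1, gabled = 2, hipped = 4) are an assumption consistent with the stated totals of 6, 12 and 24 points.

```python
# Minimum DSM points needed per planar roof face, as stated above.
POINTS_PER_PLANE = 6
ROOF_PLANES = {"flat": 1, "gabled": 2, "hipped": 4}  # plane counts assumed

def detectable_roof_types(n_points):
    """Roof models that could plausibly be matched given the number
    of DSM points falling inside a building footprint."""
    return [kind for kind, planes in ROOF_PLANES.items()
            if n_points >= planes * POINTS_PER_PLANE]

print(detectable_roof_types(15))  # → ['flat', 'gabled']
```

In practice the matching also depends on the geometric fit of the detected planes against the standard roof models, not only on point counts; this sketch captures just the feasibility test.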
The service is asynchronous and realized with the novaFACTORY software solution; once generation is finished, the user can validate the quality of the model both by downloading and visualizing the final result and through a summary report. After validation, the CityGML data are stored in 3DCityDB (2011), a free and open source 3D geo-database to store, represent, and manage virtual 3D city models on top of Oracle 10g R2 Spatial (or 11g), developed by the Institute for Geodesy and Geoinformation Science of the Berlin University of Technology. The building model generated by the i-SCOPE services is the basis for several smart services focused on specific domains (energy, mobility, environment). Two main aspects have to be taken into account: 1) how to make the 3D model an interoperable container for the service information, and 2) how to provide the 3D model to the final service users in an effective way.
The first item will be managed through Application Domain Extensions (ADEs): for each service domain, a specific ADE will be defined, conceptually modelled and implemented. The web provisioning and visualization of 3D city models remains a bottleneck for the wide diffusion of these technologies.
An overview of how i-SCOPE deals with the abovementioned issues is given in the next paragraphs.
CityGML extensions and ADEs
One of the key services provided by i-SCOPE will be inclusive routing. In the CityGML standard, the transportation model provides the information needed to describe transportation objects; however, the specification suggests using other encodings for routing purposes (Fig. 2). The i-SCOPE project investigated the possibility to extend the core part of CityGML to include routing capabilities. However, considering the already existing frameworks and standards for routing and the modularity of CityGML, the final solution foresees extending the standard by developing an ADE rather than changing the core objects. The routing functionalities will be ensured using existing platforms (i.e., pgRouting), considering the barrier elements defined in the disabled routing ADE.
System Architecture and CityGML visualizations
Visualisation is a complex and important issue in 3D city model applications. Efficient visualisation of 3D city models at different levels of detail (LODs) is one of the pivotal technologies to support these applications, and it is fundamental to visualise the urban environment at different scales, e.g. from an overview scale such as a region down to a detailed scale such as a building or even a room. Furthermore, the Internet has become a basic information infrastructure all over the world, including for the deployment of new smart city technologies. Therefore, it is necessary to develop methods to visualise 3D city models through the Internet.
Currently, many Internet browser plugins have been developed to display 3D scenes, such as Adobe Flash, Microsoft Silverlight and Java3D. However, 3D content sometimes cannot be opened because the user has not installed the right plugins (Fassi and Parri, 2012).
Challenges of 3D city model visualisation include how to create the 3D scenes for multiple platforms through the Internet and how to automatically generate multiple representations for different scales (Fig. 1).
Streaming three-dimensional geographic data is a relatively new topic and no standard has yet taken hold. The OGC has proposed three different services able to stream three-dimensional data to a client. Each can be considered an extension of the previous one, differing in the amount of computational power required by the client and the server.
The i-SCOPE project implements services to stream the data to the client with two goals: a) obtaining optimum performance and very short response times by avoiding the complex queries that can be made with a WFS; b) transmitting to the client the geometries plus the semantic information in a single stream.
The way to obtain this kind of result is to stream data directly in the CityGML format. The method consists of a download service which provides the client with CityGML data following a classic tile-based approach. To achieve this, first a tiled CityGML dataset is generated in a structured tree; second, the model is progressively visualized on the client, which dynamically asks the server for new tiles or levels of detail when the user changes the viewpoint or zoom level in the 3D viewer. A common tile grid is defined and well known on both the server and client side. All the needed CityGML data are stored on the file system of the server, avoiding access to a database that could slow down the entire operation. The data are stored in tiles according to the selected grid, and all the files are compressed to use less bandwidth in the transmission phase. Due to the verbose format of CityGML, a remarkable compression rate can be achieved, reducing the size of the dataset by up to 90% if no textures are included. The stored compressed files must follow the predefined name schema corresponding to the tile grid name schema. This approach can be classified as Thick Client/Thin Server as described in Fig. 4, but it avoids the issues related to the complex server-side operations typical of a WFS (Fig. 5). Within the i-SCOPE project, the client has been developed on top of the NASA World Wind Java SDK. Thanks to this approach, many useful features can be implemented in parallel: different services like WMS, WFS and the proposed approach can run concurrently, allowing great flexibility of the entire system.
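The compression step can be illustrated with a minimal sketch using a hypothetical, highly repetitive CityGML-like tile. The exact saving depends on the content; the 90% figure quoted above refers to real untextured tiles.

```python
import gzip

# A hypothetical, highly repetitive CityGML-like tile: verbose XML
# markup dominates, which is exactly what gzip exploits.
tile = "".join(
    '<bldg:Building gml:id="B{0}"><bldg:measuredHeight uom="m">'
    '12.5</bldg:measuredHeight></bldg:Building>\n'.format(i)
    for i in range(500)
).encode("utf-8")

compressed = gzip.compress(tile)
saving = 1 - len(compressed) / len(tile)
print(f"{len(tile)} -> {len(compressed)} bytes ({saving:.0%} saved)")
```

On this toy tile the saving easily exceeds half the original size, which is why compressing per-tile files before transmission is worthwhile even without textures.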
SMART CITIES SERVICES
This section presents an overview of the smart services provided by the i-SCOPE platform and based on the 3D CityGML model. From the i-SCOPE perspective, CityGML is the base container on which a series of smart city services is provided. The platform foresees three services:
• A service for accurate assessment of solar energy potential at the building level.
• Improved inclusion and personal mobility of aging and diversely able citizens through an accurate city-level disabled-friendly personal routing service which accounts for detailed urban layout, features and barriers.
• Environmental monitoring through a real-time environmental noise mapping service, leveraging the involvement of citizens who act as distributed sensors city-wide, measuring noise levels through their mobile phones.
These services will be piloted and validated within a number of EU cities, which will be actively engaged throughout the project lifecycle. Covering different aspects of the urban environment, they will demonstrate that a smart-city platform can be more effective if the services are provided on a common base infrastructure.
Solar potential assessment
Today, different methodologies for the estimation of PV potential at the urban scale have been proposed. Accurate 3D modelling is very important for accurate photovoltaic system simulation; however, advanced features such as the mutual shading of buildings in urban settings are still quite weak (Alam, 2010). From a computational point of view, i-SCOPE provides a service for sun irradiation calculation and estimation of photovoltaic potential. To achieve this, the r.sun module of GRASS is customized and integrated into the platform's functionality by creating a WPS interface. The CityGML building models are rasterized through an algorithm based on ray casting and used as input for the solar irradiation calculation. The irradiation values are inserted into the ADE for each corresponding roof element.
The user accesses the i-SCOPE web services platform and, by setting parameters such as panel area (percentage of the entire roof), efficiency and orientation (in the flat-roof case), can estimate the solar potential of each roof over a given period.
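The final estimate from these user parameters can be sketched with a standard simplified PV yield formula (irradiation × panel-covered area × module efficiency). This is an illustration, not the exact i-SCOPE implementation, and it ignores inverter and temperature losses.

```python
def pv_yield_kwh(annual_irradiation_kwh_m2, roof_area_m2,
                 panel_fraction=0.8, efficiency=0.15):
    """Rough annual PV yield for one roof: irradiation on the roof
    surface times the panel-covered area times module efficiency.
    (Simplified; ignores inverter and temperature losses.)"""
    return annual_irradiation_kwh_m2 * roof_area_m2 * panel_fraction * efficiency

# e.g. 1200 kWh/m2/year on a 100 m2 roof, 80% covered, 15% efficient panels
print(round(pv_yield_kwh(1200, 100)))  # → 14400
```

The per-roof irradiation term is what the r.sun-based service computes, taking slope, orientation and shading from the 3D model into account; the remaining factors are the user-supplied parameters described above.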
Crowd Source noise services
Currently over 50% of the world's population lives in urban areas.
Being continuously surrounded by traffic jams, construction sites and urban events, city dwellers are typically exposed to a considerable amount of noise. In order to assess the distribution of this noise pollution (in time and space), the i-SCOPE project proposes the integration of participatory monitoring techniques by means of NoiseTube. NoiseTube has been designed to facilitate sound measurement at any place and any time through a mobile app that exploits basic smartphone functionalities, namely the microphone, wireless connectivity and localisation through GPS. Through these three components, NoiseTube transforms already ubiquitous smartphones into highly portable, accessible sound measurement devices, enabling all citizens to measure ambient sound levels whenever and wherever they please.
Figure 6: NoiseTube data acquired, visualized in the i-SCOPE web portal
Next to the mobile app, a second pillar of our participatory approach is the NoiseTube website, which collects all user measurements and visualises them on the i-SCOPE platform (Fig. 6). The successful collection of information by masses of volunteering individuals, enabled by Web technology (otherwise referred to as Web 2.0), does not halt before the realm of geographic information. Even more remarkable in the given context, several projects concentrating solely on the collection of geographic information have formed. Goodchild (2007) gives an overview of these global collaborations and calls the phenomenon Volunteered Geographic Information (VGI). Given enough measurements for a particular area, we can construct noise maps of comparable quality to those produced by governments today, which are of a very different kind. Indeed, pollution maps are typically created through computer simulations based on general statistics, such as the average number of cars in the city. They are backed up by only limited amounts of sound measurements, because current measuring methods are expensive and thus not very scalable. The resulting maps give an average, but not at all a complete, view of the situation, entirely missing local variations due to street works, neighbour noise, etc. The collected information is made available within the i-SCOPE platform, so the combined visualization of noise data and the 3D city model becomes a powerful instrument to understand the "noise feeling" of the city.
CONCLUSION
This paper reports the results of the first year's work of the EU project i-SCOPE which, based on interoperable 3D CityGML UIMs, aims to deliver an open platform on top of which various "smart city" services are developed within different domains. In terms of products and services, i-SCOPE delivers an open source toolkit for 3D smart city services that will be deployed and made available through a 3D smart city services portal. Furthermore, i-SCOPE aims at providing a significant contribution to standards in the domain of smart city services, through the extension and wider adoption of CityGML as the key enabling open standard for 3D smart city services. Many challenges in terms of encoding standards, efficient visualization of 3D geometries, data and service generation and provision, and the privacy model will be dealt with during this phase. Concerning standardization, one specific ADE per service is being defined using UML specifications. The information collected by the pilot partners is used to automatically generate the 3D city model of buildings at LoD1 and LoD2. Furthermore, information for disabled routing that is not already available is collected using the OSM model and enriched according to the specific ADEs using the available tools. In the same way, campaigns to acquire large-scale noise measurements using the NoiseTube application have started. The way to exchange 3D information between the i-SCOPE server and the web client has been defined considering a pre-calculated tiling approach in order to improve the performance of the system. The security and privacy model will be trialled and optimised in both the i-SCOPE and i-Tour projects and in a number of ITS trials from early 2013, pending its full-scale launch in 2016 (planned) for C-ITS and slightly earlier in the pilots at the heart of i-SCOPE. Many issues have to be solved to obtain an effective platform to provide smart city services in a 3D city model environment; however, the good results obtained demonstrate that, by combining technologies for 3D visualization, open standards and VGI, it is possible to deploy a smart-city spatial data infrastructure.
CityGML makes data concerning the city available according to an open data model. The CityGML information model includes:
• Digital Terrain Models as a combination of triangulated irregular networks (TINs), regular rasters, break and skeleton lines, and mass points;
• Sites (buildings, bridges, tunnels, …);
• Vegetation (areas, volumes, and solitary objects with vegetation classification);
• Water bodies (volumes and surfaces);
• Transportation facilities (both graph structures and 3D surface data);
• City furniture;
• Generic city objects and attributes.
Objects which are not yet explicitly modelled in the current version of CityGML can be represented using the concept of generic objects and attributes. Furthermore, the possibility of creating extensions to the standard model exists. This allows the description of objects and properties not included in the standard and characterized by specific features. Extensions to the CityGML data model applying to specific application fields can be implemented using Application Domain Extensions (ADEs). Generic objects/attributes allow extensions at runtime, but they may cause arbitrariness and name conflicts, because different user-defined objects may have the same name. i-SCOPE aims at providing a significant contribution to standards in the domain of smart city services, through the extension and wider adoption of CityGML as a key enabling open standard for 3D smart city services.
Figure 2: UML diagram of the i-SCOPE inclusive routing ADE
Figure 3: LoD1 model of Trento visualized in the i-SCOPE 3D environment
Figure 5: System architecture of the proposed approach

Several methodological and technological matters are related to the setup of the services:
• Definition of the ADEs according to the services' functionality requirements;
• Development of the processes and interfaces between the different components in order to provide the services;
• Design of a web user interface allowing citizens to use the services very easily.
25-Hydroxyvitamin D and Its Relationship with Autonomic Dysfunction Using Time- and Frequency-Domain Parameters of Heart Rate Variability in Korean Populations: A Cross-Sectional Study
Previous studies have demonstrated that reduced heart rate variability (HRV) and hypovitaminosis D are associated with cardiovascular disease (CVD). However, few reports have investigated the effects of vitamin D on HRV. This cross-sectional study analyzed serum 25-hydroxyvitamin D (25(OH)D) and HRV indices using 5-min R-R interval recordings with an automatic three-channel electrocardiograph in healthy subjects (103 males and 73 females). Standard deviation of N-N intervals (SDNN), square root of the mean squared differences of successive N-N intervals (RMSSD), total power (TP), very low frequency (VLF), low frequency (LF), and high frequency (HF) were reported. The mean age of subjects was 55.3 ± 11.3 years and the mean 25(OH)D level was 21.2 ± 9.9 ng/mL. In a multiple linear regression model, 25(OH)D was positively correlated with SDNN (β = 0.240, p < 0.002) and LF (β = 0.144, p = 0.044). Vitamin D deficiency (25(OH)D < 15 ng/mL) was associated with decreased SDNN (<30 ms) (OR, 3.07; 95% confidence interval (CI), 1.32–7.14; p = 0.014) after adjusting for covariates. We found that lower 25(OH)D levels were associated with lower HRV, suggesting a possible explanation for the higher risk of CVD in populations with hypovitaminosis D.
Introduction
In addition to its role in bone and calcium metabolism, vitamin D has important prohormone functions in a wide range of clinical processes, including antiproliferative, prodifferentiative and immunomodulatory actions [1]. Vitamin D deficiency has become the most widespread nutritional disorder in the modern world because of decreased sunlight exposure, increasing obesity, and changes in dietary habits [2]. Numerous studies have demonstrated that hypovitaminosis D is related to increased risks of cardiovascular diseases (CVD), metabolic dysfunctions, and high all-cause mortality in the general population [3]. Specifically, vitamin D metabolites and its metabolism gene CYP24A1 are associated with coronary atherosclerosis and calcification [4]. These observations support the hypothesis that low vitamin D levels influence the activity of the cardiovascular system and result in a dysfunctional cardiac autonomic nervous system (ANS). Chan et al. reported that patients with chronic kidney disease (impaired vitamin D synthesis) showed poor cardiosympathovagal activity characterized by a withdrawal of inhibitory vagal activity [5]. However, few studies exploring the association between vitamin D and cardiac autonomic function in healthy people have been reported.
Heart rate interval changes are the result of the ANS dynamically regulating the body's response to internal and external stimuli. The balance of ANS activity reflects physiological, hormonal and psychological stability [6]. Heart rate variability (HRV) analysis is based on the measurement of the variability between R waves (R-R intervals) and on qualitative and quantitative assessments that represent the balance of the cardiovascular system under ANS control [7]. As an established tool in cardiology studies, HRV is currently used for a wide range of clinical conditions, from psychiatric illnesses to internal organ pathologies. Increased HRV reflects a healthy ANS able to react appropriately to changing environmental circumstances [8], whereas decreased HRV is a sign of autonomic inflexibility and heart disease that may precede systemic problems (e.g., inflammation-mediated atherosclerosis and ventricular fibrillation) [9]. Recent research has shown that decreased HRV is associated with risk factors for CVD, heart failure, and sudden cardiac death (SCD) [10].
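The two time-domain indices reported in this study, SDNN and RMSSD, are computed directly from a series of R-R (N-N) intervals. A minimal sketch in Python (the interval values below are hypothetical; a real 5-min recording would contain several hundred beats):

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of N-N intervals (ms): overall variability."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive N-N differences (ms):
    short-term, mostly parasympathetically driven variability."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical R-R intervals in milliseconds
rr = np.array([812.0, 790.0, 830.0, 805.0, 818.0, 795.0, 824.0])
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # 14.8 26.6
```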
To understand the effects of vitamin D deficiency on the heart, the association between vitamin D deficiency and HRV indices must be examined in terms of clinical importance. To date, few studies have explored the effects of vitamin D on HRV in healthy individuals. Therefore, in the present study we examined the relationship between serum vitamin D levels and HRV and hypothesized that lower serum vitamin D levels are associated with lower HRV parameters.
Study Population
We conducted a cross-sectional study based on data extracted from our hospital medical records. Data on healthy subjects over 20 years of age who underwent a comprehensive medical examination, including HRV and serum vitamin D measurement, from July 2012 through February 2014 (n = 176) were collected. We selected participants who underwent both the HRV recording and the blood test, including the serum vitamin D level, on the same day. Criteria for exclusion were as follows: missing data on vitamin D level or HRV; chronic diseases that can influence the ANS, including diabetes mellitus (DM), hypertension (HTN), arrhythmia, heart failure, coronary heart disease, depression, and panic disorder; use of medications such as angiotensin-converting enzyme inhibitors, β-receptor agonists or antagonists, calcium channel blockers, or anticholinergics, which can influence the ANS; a mean heart rate of more than 100 or less than 50 beats per minute; presence of other health conditions that can affect the vitamin D level, such as cancer, parathyroid gland disease, liver disease, epilepsy, inflammatory bowel disease, malabsorption, celiac disease, gastric bypass, or bowel surgery; and regular administration of vitamin D supplements within the previous 3 months. The study protocol was approved by the Institutional Review Board of Pusan National University Hospital (IRB No. E-2014064).
Data Collection
Subjects were interviewed by a physician regarding their medical history, smoking status, alcohol consumption and exercise habits. The subjects were classified as nonsmokers or current smokers. The frequency of drinking per week, beverage type, and amount consumed were recorded. An alcohol drinker was defined as a subject consuming >20 g of alcohol per day [11]. Regular exercise was defined as exercising more than once per week at moderate or greater intensity [12]. A trained examiner measured the height and body weight of the subjects, wearing a light gown without shoes, to the nearest 0.1 cm and 0.1 kg, respectively, using an HM-300 (Fanics Co. Ltd., Busan, Korea). Body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters (kg/m²). Waist circumference (WC) was measured at the smallest distance between the lower margin of the rib cage and the iliac crest, at the end of normal expiration and to the nearest 0.1 cm. Blood pressure (BP) was assessed twice while the subjects were seated, using an automated BP measurement device (BP-203RV II, Colin Corp., Aichi, Japan) with a 5-min rest in between, and the two results were averaged. The blood sample was drawn from the antecubital vein between 8 and 9 AM after a 12-h overnight fast, and was subsequently analyzed at a certified laboratory using an automatic blood analyzer (Hitachi 7600-110 chemical analyzer, Hitachi Co. Ltd., Tokyo, Japan). Fasting plasma glucose (FPG) was evaluated using the glucose oxidase method with a Synchron LX 20 (Beckman Coulter, Fullerton, CA, USA). Total cholesterol (TC) was measured with the enzymatic colorimetric method using a Toshiba TBA200FR autoanalyzer (Toshiba Co. Ltd., Tokyo, Japan). Serum creatinine (sCr) was analyzed by a kinetic colorimetric assay based on a modified Jaffe method using a commercial enzymatic kit (Modular-DP, Roche, Basel, Switzerland).
The glomerular filtration rate estimated (eGFR) from sCr was reported for determination of kidney function because the serum 25(OH)D level depends on kidney function. eGFR was calculated using Equation (1) from the Modification of Diet in Renal Disease (MDRD) Study [13].
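Since Equation (1) is not reproduced here, the following sketch assumes the standard 4-variable MDRD formula (175 × sCr^−1.154 × age^−0.203, times 0.742 if female and 1.212 if Black); the input values are purely illustrative:

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    """Estimated GFR (mL/min/1.73 m^2) from the 4-variable MDRD equation.
    scr_mg_dl: serum creatinine in mg/dL; age_years: age in years."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742   # sex coefficient
    if black:
        egfr *= 1.212   # race coefficient in the original equation
    return egfr

# Illustrative subject: a 55-year-old woman with sCr of 0.9 mg/dL
print(round(egfr_mdrd(0.9, 55, female=True), 1))
```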
Measurement of Serum 25-Hydroxyvitamin D
Vitamin D status is commonly assessed by the serum 25-hydroxyvitamin D (25(OH)D) level because it can reflect vitamin D derived from both dietary intake and dermal production [14]. To examine the serum 25(OH)D level, blood samples were drawn from the antecubital vein after a 12-h overnight fast during a routine health examination. The serum 25(OH)D level was assessed as total 25(OH)D (vitamin D2 + vitamin D3) with a chemiluminescence immunoassay using the LIAISON ® 25 OH Vitamin D TOTAL Assay (DiaSorin Inc., Stillwater, MN, USA) at the Eone Reference Laboratory (Seoul, Korea), which guaranteed intra-assay and inter-assay coefficients of variation less than 10%. According to recent clinical guidelines, vitamin D deficiency was defined as a serum 25(OH)D level < 15 ng/mL. This threshold is based on a study showing serum 25(OH)D < 15 ng/mL is correlated with an increased risk of incident CVD [15].
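The cut-offs used in this study (deficiency < 15 ng/mL; sufficiency ≥ 30 ng/mL, mentioned in the Results) can be encoded as a small helper; the label for the intermediate range is our own wording, not the paper's:

```python
def vitd_status(ohd_ng_ml):
    """Classify serum 25(OH)D (ng/mL) using the study's cut-offs."""
    if ohd_ng_ml < 15.0:
        return "deficient"       # < 15 ng/mL
    if ohd_ng_ml >= 30.0:
        return "sufficient"      # >= 30 ng/mL
    return "intermediate"        # our label for the 15-30 ng/mL range

print(vitd_status(21.2))  # the study's mean level
```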
Heart Rate Variability Measurements
To measure the HRV parameters, the subjects did not consume caffeine (i.e., tea or coffee) and rested for 30 min before the study. Subsequently, a three-channel (both wrists and left ankle) electrocardiographic recording was conducted for 5 min with the subjects sitting in a quiet room, and the HRV indices were automatically computed using an SA-6000P (Medicore Inc., Seoul, Korea). This device meets the assessment standards, physiological interpretation, and bio-signal processing algorithms created by the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology [16]. During the test, subjects were instructed to breathe naturally without any conscious respiratory manipulation for a more accurate analysis. As time-domain indices, the mean heart rate (MHR), standard deviation of the N-N intervals (SDNN), and square root of the mean squared differences of successive N-N intervals (RMSSD) were examined. SDNN reflects the overall cyclic components of HRV during the recording period, and RMSSD reflects an estimate of parasympathetic regulation of the heart [16,17]. As frequency-domain indices, total power (TP; total power for 5 min, including VLF, LF and HF), very low frequency (VLF; frequency strength of 0-0.04 Hz), low frequency (LF; frequency strength of 0.04-0.15 Hz), high frequency (HF; frequency strength of 0.15-0.4 Hz), and the low-frequency/high-frequency ratio (LF/HF ratio) were reported. TP mainly reflects the level of autonomic nervous activity, and the VLF band is an additional indicator of sympathetic function. The LF component reflects the complex interaction between sympathetic and parasympathetic control of heart rate and baroreceptor activity. The HF parameter assesses parasympathetic activity, and the LF/HF ratio is an estimate of the overall relative balance of autonomic activity [16,17].
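The frequency-domain indices are band integrals of the power spectrum of the (evenly resampled) N-N series. The device's proprietary algorithm is not available, so this sketch uses a plain one-sided periodogram with the standard band edges (VLF < 0.04 Hz, LF 0.04-0.15 Hz, HF 0.15-0.4 Hz); the 0.1 Hz test signal is synthetic:

```python
import numpy as np

BANDS = {"VLF": (0.0, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.40)}

def band_powers(x, fs):
    """Integrate a one-sided periodogram of the detrended series x
    (sampled at fs Hz) over the standard HRV frequency bands."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(np.fft.rfft(x)) ** 2) / (fs * n)
    psd[1:] *= 2.0                      # fold in negative frequencies
    df = freqs[1] - freqs[0]
    return {name: float(np.sum(psd[(freqs >= lo) & (freqs < hi)]) * df)
            for name, (lo, hi) in BANDS.items()}

# Synthetic 5-min tachogram: a pure 0.1 Hz oscillation of amplitude 20 ms
fs = 4.0
t = np.arange(0, 300.0, 1.0 / fs)
x = 800.0 + 20.0 * np.sin(2 * np.pi * 0.1 * t)
p = band_powers(x, fs)   # the oscillation's power should land in LF
```

A 0.1 Hz component sits squarely in the baroreflex (LF) range, so nearly all of the signal variance (20²/2 = 200 ms²) should appear in the LF band.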
Statistical Analysis
SPSS version 18.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analyses. Unless stated otherwise, continuous variables were expressed as means ± standard deviations. Categorical variables were presented as frequencies and proportions. Because TP, HF, LF and the LF/HF ratio were right-skewed, they were log-transformed to obtain a normal distribution. The subjects were divided into deficiency and non-deficiency groups based on serum 25(OH)D levels. To compare variables between the 25(OH)D non-deficiency group and the 25(OH)D deficiency group, the chi-square test was used for categorical variables and the independent t-test for continuous variables. Seasonal variations in 25(OH)D, SDNN, and RMSSD values were assessed using analysis of variance (ANOVA). In addition, the Mann-Whitney U-test was used for comparison of SDNN and RMSSD values according to 25(OH)D status in each season. A linear regression analysis was conducted to assess relations between HRV parameters and 25(OH)D levels. In the regression analyses, p-values were corrected by the Bonferroni method for multiple comparisons; therefore, a p-value < 0.00625 (0.05/8) was considered to indicate statistical significance. Age, sex, and season of 25(OH)D measurement were included as covariates in a multivariate analysis. In addition, an SDNN cutoff value of 30 ms, separating better and worse outcomes, was used [18]. The odds ratio (OR) of SDNN < 30 ms was calculated using a multiple logistic regression model among the 25(OH)D status groups after adjusting for confounders.
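The Bonferroni threshold used above, and an unadjusted (Model 1 style) odds ratio with a Wald 95% CI as it would be computed from a 2x2 table, can be sketched as follows; the table counts are invented for illustration, not taken from the study:

```python
import math

ALPHA, N_TESTS = 0.05, 8
threshold = ALPHA / N_TESTS          # 0.00625, as in the text

def odds_ratio_2x2(a, b, c, d):
    """Unadjusted OR with Wald 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Illustrative counts only
or_, lo, hi = odds_ratio_2x2(20, 30, 15, 60)
```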
Patients' Characteristics
A total of 176 subjects (103 males and 73 females) 20-80 years of age (average age 55.3 ± 11.3 years) participated in this study. The mean 25(OH)D value was 21.2 ± 9.9 ng/mL. Twenty-eight percent of all subjects were deficient in vitamin D (25(OH)D < 15 ng/mL); only 14.7% of subjects were vitamin D sufficient (25(OH)D ≥ 30 ng/mL). Table 1 shows the subjects' clinical characteristics according to 25(OH)D status. The 25(OH)D-deficient group had a higher proportion of females (62% vs. 33.3%, p = 0.001) and a lower SDNN value (25.3 ± 8.4 ms vs. 30.2 ± 16.2 ms, p = 0.044) than the 25(OH)D non-deficient group. The RMSSD, TP, VLF, LF, and LF/HF ratio were also slightly lower in the 25(OH)D-deficient group compared to the 25(OH)D non-deficient group. However, these differences were not statistically significant (p > 0.05). When evaluating seasonal variation, the 25(OH)D level was highest in the fall and lowest in the spring (23.26 ± 11.9 vs. 18.55 ± 6.7 ng/mL). Similarly, SDNN and RMSSD values were lowest in the spring, although not significantly so (Figure 1). No evidence of significant seasonal variation in the SDNN and RMSSD values according to 25(OH)D status was detected.
Abbreviations (Table 1): BMI = body mass index, DBP = diastolic blood pressure, FPG = fasting plasma glucose, HF = high frequency, HRV = heart rate variability, LF = low frequency, LF/HF ratio = low frequency/high frequency ratio, RMSSD = square root of the mean of the sum of squared differences between adjacent N-N intervals, SBP = systolic blood pressure, SDNN = standard deviation of normal-to-normal intervals, TC = total cholesterol, VLF = very low frequency, WC = waist circumference, 25(OH)D = 25-hydroxyvitamin D. Values are expressed as frequencies (%) or means ± standard deviation (SD) unless otherwise indicated. * Calculated by chi-square test or t-test.
Table 3 shows a linear regression analysis between 25(OH)D levels and HRV frequency-domain indices.
In a univariate analysis, 25(OH)D levels showed a significantly positive relation with LF (β = 0.234, p = 0.002). However, this association became non-significant after adjusting for covariates (p > 0.00625). In contrast, 25(OH)D levels did not show a significant association with TP, HF, or the LF/HF ratio. After adjusting for age, gender, season, WC, BMI, alcohol consumption, smoking status, regular exercise, BP, TC, FPG, and eGFR, vitamin D deficiency (25(OH)D < 15 ng/mL) was independently associated with low SDNN (<30 ms), with an OR of 3.07 (95% confidence interval (CI), 1.32-7.14). However, vitamin D deficiency was not associated with low RMSSD (<10 ms), with an OR of 1.86 (95% CI, 0.70-4.96) (Table 4). Table 4. Association between vitamin D status and low heart rate variability.
25(OH)D Status | SDNN (<30 ms): OR (95% Confidence Interval) | RMSSD (<10 ms): OR (95% Confidence Interval)
Abbreviations: RMSSD = square root of the mean of the sum of squared differences between adjacent N-N intervals, SDNN = standard deviation of normal-to-normal intervals, 25(OH)D = 25-hydroxyvitamin D. Model 1, unadjusted; Model 2, adjusted for gender and age; Model 3, adjusted for gender, age, seasons, alcohol use, current smoking status, regular exercise, waist circumference, body mass index, systolic blood pressure, diastolic blood pressure, total cholesterol, fasting plasma glucose and estimated glomerular filtration rate.
Discussion
The present study examined the relationship between serum vitamin D levels and HRV in healthy individuals. SDNN was low in subjects with 25(OH)D deficiency, and 25(OH)D levels were positively associated with SDNN and LF. This suggests that low serum 25(OH)D levels are associated with cardiac autonomic dysfunction, which may trigger a pathophysiological mechanism that increases CVD risk in healthy populations with vitamin D deficiency.
Vitamin D has direct effects on numerous cell types via actions on the vitamin D receptor (VDR) [1]. Although the heart is not considered a traditional target organ, growing evidence suggests that vitamin D plays crucial roles in cardiac structure and function. In animal studies, 1,25-dihydroxyvitamin D (1,25(OH)2D) affected cardiac autonomic activity [19,20]. These studies demonstrated that 1,25(OH)2D deficiency resulted in accelerated rates of cardiac contraction and relaxation. In addition, VDR ablation led to cardiac fibrosis, hypertrophy and dysregulation of the renin-angiotensin system (RAS). The direct applicability of these findings in animals to humans is unclear, but VDR has been found in human cardiac tissue as a 55-kDa protein [21,22]. Patients with chronic kidney disease have a reduced capacity for converting 25(OH)D to 1,25(OH)2D due to decreased 1-α hydroxylase activity. These patients showed chronic RAS upregulation [23] and altered cardiac autonomic activity defined mainly by extreme vagal insufficiency [5]. Moreover, Adriana J et al. reported that lower 25(OH)D levels were cross-sectionally associated with higher B-type natriuretic peptide (BNP) in subjects with eGFR < 60 mL/min/1.73 m², suggesting that low 25(OH)D may be associated with cardiac cell growth and hypertrophy and may therefore stimulate BNP secretion [24]. Studies of healthy populations have also shown that lower 25(OH)D levels were independently associated with a higher risk of SCD or CVD [3,15,25,26], suggesting that low vitamin D levels may also be an important and potentially treatable risk factor in populations without established pathologies. However, the molecular mechanisms responsible for the cardiac morbidity and mortality associated with vitamin D deficiency have not been fully elucidated. Our finding of a positive association between 25(OH)D levels and HRV suggests a possible mechanism for this phenomenon.
HRV depends on the sympathetic and parasympathetic effects on the sinus node and reflects changes in ANS activity and function. RMSSD and HF predominantly respond to variations in parasympathetic tone. By contrast, SDNN and LF are influenced by both adrenergic and cholinergic activities and other physiological inputs. SDNN depends on changes in all HRV parameters, and its decrease is associated with reduced function of the left ventricle [27]. TP, like SDNN, reflects overall autonomic control and is generally decreased in individuals under chronic stress or with disease. Nolan et al. found prospectively that SDNN was a strong independent prognostic factor in CHF patients [28]. LF is an indicator of sympathetic regulation of the sinus node. Recent research has suggested that the LF component is reduced in patients with CHF; this decrease is related to a higher risk of sudden death, advanced disease, and progression of heart failure [29]. In our study, 25(OH)D levels were positively related with SDNN and LF, but not HF, indicating diminished sympathetic tone in vitamin D-deficient subjects without pre-existing risk factors for CVD. The sympathetic nervous system has an important role in the regulation of energy homeostasis in humans [30]. Therefore, differences in sympathetic nervous system activity can cause variations in 24-h energy expenditure among individuals. Reduced sympathetic tone associated with 25(OH)D deficiency may contribute to changes in cardiomyocyte energy expenditure. As mentioned previously, low vitamin D status may result in elevated RAS activity, causing myocardial hypertrophy and arterial hypertension. In addition, vitamin D directly affects cardiomyocytes, including modulation of contractility, regulation of extracellular matrix turnover, and anti-hypertrophic actions [31]. This may explain the higher risk of CVD in patients with vitamin D deficiency.
By contrast, we found no evidence of a significant association between 25(OH)D levels and RMSSD or HF. Parasympathetic effects are exerted through rapid dynamic control by acetylcholine acting on muscarinic receptors and are thereby reflected in the HF component of HRV. In cardiac disease, parasympathetic activation and its physiological effects decrease through mechanisms such as attenuated vagal ganglionic transmission, altered muscarinic receptor composition and density, and reduced acetylcholinesterase activity [32]. Because both the RMSSD and HF component represent cardiac vagal nerve activity in the sinus node and electrical stability, a decrease in parasympathetic nerve activity in the heart results in a decrease in the RMSSD and HF component [16]. According to previous studies, decreased parasympathetic tone becomes a significant factor at more advanced stages of heart dysfunction [33][34][35]. Because we excluded patients with established risk factors for CVD, our findings suggest that 25(OH)D levels influence the early stage of pathophysiological changes in the heart. There is considerable evidence that vitamin D may be important in the early stages of atherosclerotic disease. Wang et al. and Giovannucci et al. reported an increased risk of incident CVD among subjects with vitamin D deficiency in large prospective studies involving populations without pre-existing CVD [15,26]. In contrast, prospective studies conducted in patients with stable coronary disease or advanced type 2 diabetes reported that baseline vitamin D levels did not predict cardiovascular events [36][37][38].
Limited studies of the link between vitamin D and HRV have been published. Only one study examined a relationship between vitamin D metabolites and modulation of the cardiac ANS in a healthy population [39]. Their findings of a significant association between low 25(OH)D levels and decreased baseline cardiac autonomic activity, low 1,25(OH)2D levels and unfavorable cardiosympathovagal changes during acute angiotensin II challenge are consistent with our results. Unfortunately, the findings could not be generalized to other studies due to the small sample size (n = 34). In addition, Metin Cetin et al. examined the relationship between vitamin D deficiency and autonomic imbalance in patients who had ischemic and non-ischemic dilated cardiomyopathy [40]. Surprisingly, they reported a stronger association between 25(OH)D levels and HRV, which reflects the activity of the ANS, in patients with non-ischemic rather than ischemic dilated cardiomyopathy. This finding suggests that vitamin D may play an important role in cardiomyocyte pathophysiology and that its deficiency may be more closely associated with the pathogenesis of non-ischemic rather than ischemic myocardial disease.
Our results also suggest that the positive association between 25(OH)D levels and ANS activity may be involved in SCD pathogenesis. Interestingly, the association between vitamin D deficiency and risk of SCD was stronger in the population without than with CVD, as reported by Pilz et al. [41]. Altered myocardial calcium flux may increase the risk of SCD related to vitamin D deficiency, suggesting a link to cardiac arrhythmia [42]. This hypothesis is supported by the association between 25(OH)D levels and the corrected QT interval (QTc) in non-ischemic dilated cardiomyopathy patients [41]. Kim et al. also reported that calcitriol treatment decreases prolonged QTc dispersion [42].
The present study had several limitations. First, our results may not represent the general population because we enrolled healthy individuals who visited a single local university hospital. Second, determining causal relationships between vitamin D and HRV parameters is difficult due to the cross-sectional nature of the study. Additionally, we were unable to assess the serum PTH level, which is an important determinant of vitamin D status. Hyperparathyroidism is linked to hypertrophy of cardiomyocytes and arterial stiffness, and vitamin D deficiency may predispose to increased BP via elevated PTH and disturbed calcium homeostasis [43][44][45]. Moreover, the analysis of the VLF component could not be used to evaluate clinical implications because we examined HRV parameters only in the short term (5 min); in such a short-term analysis, VLF does not provide adequate data, as this band often reflects meaningless noise signals. HRV has been examined using electrocardiographic signals evaluated during short (2-5 min) and long (24-h) recording periods. We used a short recording period because long-term electrocardiographic recordings mix HRV obtained during various activities such as exercise, sleep, and deep breathing, which hampers comparison of HRV parameters. Finally, we could not apply standardized forms of autonomic load, such as the head-up tilt test, orthoclinostatic or orthostatic tests, and deep breathing, in the examination of the HRV components.
Although the interest in vitamin D and its relationship to CVD risk has increased recently, evaluation of the risk of cardiac events in a healthy population with hypovitaminosis D but not established CVD risk factors, such as HTN, DM and dyslipidemia, is easily overlooked. Currently, many commercial devices that automate HRV measurements for research and clinical studies are available. These devices are simple and important tools for assessment of autonomic heart control and autonomic dysfunction.
Conclusions
In this cross-sectional study, vitamin D deficiency was independently associated with a risk of low HRV in a healthy population. This association remained after adjusting for age, gender, and season of 25(OH)D measurement. In addition, LF was lower in the 25(OH)D-deficient group than in the non-deficient group. These observations suggest that sympathetic activity is attenuated in vitamin D deficiency. Although the study included a small population at a single center, it improves our understanding of the etiology and pathophysiology of the heart in patients with hypovitaminosis D. Therefore, low 25(OH)D levels may contribute to autonomic dysfunction through a pathophysiological mechanism that may increase the risk of adverse cardiac events in healthy populations with vitamin D deficiency. Maintaining a sufficient 25(OH)D level may reduce the risk of CVD through favorable changes in cardiac autonomic function in populations with hypovitaminosis D. Further experimental studies are needed to identify the effect of vitamin D supplementation on HRV in healthy populations.
Bayesian inference and superstatistics to describe long memory processes of financial time series
One of the standardized features of financial data is that log-returns are uncorrelated, but absolute log-returns or their squares, namely the fluctuating volatility, are correlated; moreover, the distribution is heavy-tailed in the sense that some moment of the absolute log-returns is infinite, and typically non-Gaussian [20]. This last characteristic changes according to the timescale. We propose to model this long-memory phenomenon with superstatistical dynamics and provide a Bayesian inference methodology, drawing on Metropolis-Hastings random-walk sampling, to determine which superstatistics among the inverse-Gamma and log-Normal best describes the complexity of log-returns on different timescales, from high to low frequency. We show that on smaller timescales (minutes), even though the inverse-Gamma superstatistics works best, the log-Normal model remains very reliable and suitable for fitting the probability density of the absolute log-returns, with a strong capacity for describing heavy tails and power-law decay. On larger timescales (daily), we show in terms of the Bayes factor that the inverse-Gamma superstatistics is preferred to the log-Normal model. We also show evidence of a transition of statistics from power-law decay on small timescales to exponential decay on large scales, with less heavy tails, meaning that on larger timescales the fluctuating volatility tends to be memoryless; consequently, superstatistics becomes less relevant.
Introduction
In many fields, such as hydrodynamics, fluid mechanics, meteorology, traffic flows, the progression of cancer cells, quantum turbulence, etc. [15], there is strong evidence of a phenomenon over both time and space named long memory. Since the late 1990s, long memory models have played an important role in empirical work in finance, emphasizing substantial evidence that long memory describes well such characteristics of financial data as long-term dependence and the effect of shocks. However, if this phenomenon is well known by academicians, a question raised by Grabchak and Samorodnitsky remains: do financial returns follow a Gaussian law with finite variance or an infinite-variance stable law [20,23]? To answer this question, two types of time-independent descriptive models have been studied. The first has finite variance, such as the normal distribution proposed by Osborne (1959) and Bachelier [1], the Student-t distribution proposed by Blattberg and Gonedes [6], the compound normal model proposed by Kon [19], and the mixed diffusion jump model by Merton [22]. The second has infinite-variance symmetric and asymmetric stable Paretian distributions, such as the Lévy-stable distribution proposed by Mandelbrot [2,3,4,5,11] and Fama [11].
1 Geoffrey Ducournau, PhDs, Institute of Economics, University of Montpellier; G.ducournau.voisin@gmail.com
However, since the end of the twentieth century, alternative solutions have emerged to answer this question through the application of econophysics concepts, especially statistical mechanics theory applied to financial problems. In particular, numerous analogies between price dynamics and dynamics in fluid turbulence have aroused many physicists and mathematicians [25,26]. S.M. Duarte Queiros and C.
Tsallis [9,29] were the precursors in proposing an alternative way to describe the complex behavior of stock returns by applying superstatistics theory, a branch of statistical mechanics developed to describe the statistics of complex systems far from equilibrium characterized by large fluctuations of intensive quantities, such as fluctuating financial volatility. They were followed by Cohen, Beck and Straeten [12,13,14,15], who demonstrated that different complex systems in physics and economics exhibit the same spatiotemporally inhomogeneous dynamics, which can be described by a superposition of several statistics on different time scales, leading to the emergence of the term superstatistics in finance for the first time and having the advantage of relying on fundamental physical theory.
The main principle behind superstatistics relies on the strong assumption that the complex dynamics of the studied system is a superposition of two distinguishable dynamics separated by different time scales [12]: a slow dynamic related to the fluctuation of an intensive quantity and a fast dynamic related to the velocity of the system. For instance, in thermodynamics, the slow dynamic can be characterized as a slow change in the environment of the system due to a slowly fluctuating temperature 1/β (β representing the inverse temperature) [13,7], and the fast dynamic is determined by the change in velocity of the studied system (such as a Brownian particle). In finance, the slow dynamic is the fluctuating volatility [12], parameterized by the inverse variance that we will call β, and the fast dynamic is the change in velocity of the logarithmic price. When the slow dynamics of the fluctuating parameter is so slow that the velocity dynamics of the studied system (logarithmic returns) has time to relax to a Gaussian distribution, then, after a long time, the stationary velocity distribution of the nonequilibrium statistical mechanics system (logarithmic returns) becomes a superposition of infinitely many local statistical mechanics equilibria, each characterized by its own parameter β indexed to a different time scale. If different statistical mechanics statistics can describe the variations of β for a given system, the dynamics of β will be governed by a stable law of probability with a probability density distribution called f(β).
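The superposition principle can be demonstrated numerically: draw the slow parameter β from a candidate f(β) (here log-normal, one of the two superstatistics considered in this paper), draw the fast velocity from the local Gaussian N(0, 1/β), and check that the marginal distribution is heavier-tailed than any single Gaussian. All numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Slow dynamic: log-normally fluctuating inverse variance beta ~ f(beta)
beta = rng.lognormal(mean=0.0, sigma=1.0, size=n)
# Fast dynamic: locally Gaussian velocities u | beta ~ N(0, 1/beta)
u = rng.standard_normal(n) / np.sqrt(beta)

def excess_kurtosis(x):
    x = x - np.mean(x)
    return float(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)

k_mix = excess_kurtosis(u)                       # heavy-tailed marginal
k_ref = excess_kurtosis(rng.standard_normal(n))  # ~0 for a plain Gaussian
```

For this f(β) the theoretical excess kurtosis of the mixture is 3e − 3 ≈ 5.2, so the sample estimate should be well above the Gaussian reference value of zero.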
Indeed, it has been shown that both the log-normal superstatistics and the χ² superstatistics are a good way to approximate the distribution of log-returns [27,28,29]. In their article published in 2008, Biro and Rosenfeld [30] showed that the Tsallis distribution, also called the q-statistics and known to be equivalent to the χ² superstatistics, is suitable for modeling the dynamics of the daily closing prices of American indexes (Dow Jones and S&P 500). Finally, Beck, Cohen and Swinney [8] demonstrated that the inverse χ² superstatistics was another superstatistics more suitable for describing daily time scales of price changes.
Theoretically, Tao Ma and R.A. Serota [31] and Gadjiev [16] show that the superstatistical distribution function can be obtained from the generalized Fokker-Planck equation, and proved that the Generalized Inverse Gamma distribution best describes stock volatility dynamics and that the Student's t-distribution provides one of the better fits to the returns of the S&P.
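The stated equivalence between χ² (Gamma) superstatistics and the Student-t / q-statistics can be checked numerically: if the inverse variance β is Gamma(ν/2) distributed with rate ν/2, the marginal of the locally Gaussian velocities is Student-t with ν degrees of freedom, whose variance is ν/(ν−2). A sketch with synthetic numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
nu, n = 6.0, 500_000

# chi^2 superstatistics: Gamma-distributed inverse variance with E[beta] = 1
beta = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
u = rng.standard_normal(n) / np.sqrt(beta)   # marginal: Student-t(nu)

sample_var = float(np.var(u))                # theory: nu/(nu - 2) = 1.5
```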
In this article, we propose a Bayesian approach to determine which superstatistics among the inverse-Gamma (IGa) and log-Normal (logN) models maximizes the probability of best describing the long memory phenomenon of financial volatility. The paper is organized as follows: in the second section, we demonstrate by Bayesian inference that the posterior probability of the fluctuating volatility is most likely to be proportional to an inverse-Gamma superstatistics. In the third section, we consider and compare the two models (IGa and logN) on different time scales (minutes, hours, 4 hours and daily): we first describe the empirical data used and perform time-series analysis; we next describe the methodology and the step-by-step procedure to estimate the superstatistical fluctuating parameters for both superstatistics on every time scale, drawing on the Metropolis-Hastings algorithm; we then use the Bayes factor method to compare both models and determine which one is most suitable for describing fluctuating volatility on a given time scale. Finally, the last section presents our conclusions on the previous results.
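A minimal version of the random-walk Metropolis-Hastings step used for such posterior sampling: here the target is the posterior of σ² under an Inverse-Gamma prior and Gaussian likelihood (chosen because the answer is known in closed form and can serve as a check); the data are synthetic, and the sampler walks on log σ² so the proposal is unconstrained:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, np.sqrt(2.0), size=500)   # synthetic "log-returns"

def log_post(log_s2, x, a=2.0, b=2.0):
    """Unnormalized log-posterior of log(sigma^2): Gaussian likelihood
    (known mean 0) + Inverse-Gamma(a, b) prior + log-transform Jacobian."""
    s2 = np.exp(log_s2)
    loglik = -0.5 * len(x) * np.log(s2) - 0.5 * np.sum(x**2) / s2
    logprior = -(a + 1.0) * np.log(s2) - b / s2
    return loglik + logprior + log_s2            # Jacobian term

def rw_metropolis(x, n_iter=20_000, step=0.15):
    chain = np.empty(n_iter)
    cur, cur_lp = 0.0, log_post(0.0, x)
    for i in range(n_iter):
        prop = cur + step * rng.standard_normal()  # random-walk proposal
        prop_lp = log_post(prop, x)
        if np.log(rng.uniform()) < prop_lp - cur_lp:  # accept/reject
            cur, cur_lp = prop, prop_lp
        chain[i] = cur
    return np.exp(chain[n_iter // 2:])            # drop burn-in

post_s2 = rw_metropolis(data)   # posterior draws of sigma^2 (truth: 2.0)
```

Because the prior is conjugate here, the chain's mean can be compared against the closed-form posterior mean b_post/(a_post − 1) ≈ 2 as a sanity check of the sampler.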
Conjugate distribution estimate with π(σ²) considered as the prior probability distribution: Bayesian inference
If we consider the previous work of Christian Beck and Eric Van der Straeten [12], it is well mentioned in their paper that in equilibrium statistical mechanics, as for a conservative system, we know perfectly well how to obtain the most likely probability distribution, i.e. the distribution that maximizes our credence as the one that best describes the long-term behavior of a system; this distribution is given by an ensemble of statistics (a canonical ensemble) governed by the famous Gibbs principle of maximum entropy [17,21]. However, when considering the behavior of financial assets as the studied system, we are no longer dealing with a conservative but with a dissipative system. From a thermodynamics point of view, such systems are nonequilibrium systems, in the same way as superstatistical systems, and although their behavior is described by a mixture of ensemble distributions, it remains very complex to obtain the mixing distribution of the fluctuating parameter σ². However, as mentioned in the introduction, according to Beck and Straeten, if we consider that the dynamics of the velocity of a financial asset's log-price is slow enough to relax to a Gaussian distribution, then in the long run we can consider the stationary velocity distribution of the log-price as a superposition of Gaussian distributions. Consequently, if we call u the observable velocity of the studied system, such that for every given time t, u_t represents the change in log-price over Δt, and we call u_N = {u_1, …, u_N} the observed data with i = 1, …, N, we are able to apply Bayes' theorem to determine the posterior probability of the fluctuating parameter σ² conditional on the data u_N:

p(σ² | u_N) = p(u_N | σ²) π(σ²) / p(u_N)  (1)

where p(u_N | σ²) is the likelihood function and

p(u_N) = ∫ p(u_N | σ²) π(σ²) dσ²  (2)

is the normalizing constant, which is also called the evidence.
Thus, if we share the same belief as Beck and Straeten regarding the fact that the data u_N relax to a Gaussian distribution in the long run, and considering that the only free parameter we have is the variance σ² (the mean μ being known), we can write the likelihood function entering Bayes' theorem as:

p(u_N | σ²) = ∏_{i=1}^{N} (2πσ²)^{-1/2} exp(−(u_i − μ)² / (2σ²))  (3)

From equation (3) we see that the probability distribution function of the likelihood belongs to the exponential family of distributions, where the parameter space for σ² is the non-negative real line ℝ⁺, and we can simplify the above likelihood as

p(u_N | σ²) ∝ (σ²)^{−N/2} exp(−Σ_{i=1}^{N} (u_i − μ)² / (2σ²))  (4)

In Bayesian probability, when the likelihood function is a continuous Gaussian distribution with known mean μ, the conjugate prior π(σ²) of this likelihood, in the most convenient parametrization, is the Inverse Gamma distribution. Given that σ² has an inverse Gamma distribution with parameters α and β, i.e. 1/σ² ~ Γ(α, β), the density takes the form

π(σ²) = (β^α / Γ(α)) (σ²)^{−α−1} exp(−β/σ²)  (5)

and with this prior, the posterior distribution of σ² is given by

p(σ² | u_N) = IGa(α + N/2, β + ½ Σ_{i=1}^{N} (u_i − μ)²)  (6)

which is also an inverse gamma distribution.
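As a numerical check of this conjugate update, the following sketch (with illustrative hyperparameters α₀, β₀ and simulated Gaussian data, both our own assumptions rather than the paper's) computes the inverse-gamma posterior parameters and the resulting posterior mean of σ²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known mean, unknown variance sigma^2 with an IGa(alpha0, beta0) prior.
mu, true_sigma2 = 0.0, 2.0
u = rng.normal(mu, np.sqrt(true_sigma2), size=500)

alpha0, beta0 = 2.0, 2.0  # illustrative prior hyperparameters

# Conjugate update: IGa(alpha0 + N/2, beta0 + sum((u - mu)^2) / 2)
alpha_post = alpha0 + len(u) / 2
beta_post = beta0 + np.sum((u - mu) ** 2) / 2

# Posterior mean of sigma^2 for an inverse gamma: beta / (alpha - 1)
post_mean = beta_post / (alpha_post - 1)
print(round(post_mean, 2))
```

With 500 observations the posterior mean is dominated by the data and lands close to the true variance 2.0, illustrating how the conjugate update concentrates around the sample variance.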
Moreover, the conjugate prior could also be alternatively parameterized in terms of the scaled inverse χ² distribution with parameters ν₀, σ₀², which has density of the form

π(σ²) = ((ν₀/2)^{ν₀/2} / Γ(ν₀/2)) σ₀^{ν₀} (σ²)^{−ν₀/2−1} exp(−ν₀σ₀² / (2σ²))  (7)

Under this conjugate prior, the posterior probability distribution takes the form

p(σ² | u_N) = Scaled-Inv-χ²(ν₀ + N, (ν₀σ₀² + Σ_{i=1}^{N} (u_i − μ)²) / (ν₀ + N))  (8)

Thus, from Bayesian inference, and drawing on the assumption that the marginal likelihood of the model given the data u_N is a Gaussian probability distribution with known mean and with the variance as free parameter, we have shown that the most convenient parametrization maximizing the posterior probability is to choose a conjugate prior π(σ²) governed by an inverse Gamma distribution with parameters α and β. And by demonstrating that the related Scaled-Inv-χ² distribution could also be chosen as conjugate prior from a Bayesian approach, we conclude, in a similar way to Beck, Xu, Straeten, Cohen, Swinney, Queiros and Tsallis, that the inverse χ² superstatistics is also suitable to describe the posterior probability distribution of the fluctuating parameter σ².
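The two parametrizations coincide under the standard identification ν₀ = 2α and σ₀² = β/α; a quick numerical check of this identity (with illustrative values of α and β chosen by us):

```python
import math

def inv_gamma_pdf(x, alpha, beta):
    # Inverse Gamma density: beta^alpha / Gamma(alpha) * x^(-alpha-1) * exp(-beta/x)
    return beta**alpha / math.gamma(alpha) * x ** (-alpha - 1) * math.exp(-beta / x)

def scaled_inv_chi2_pdf(x, nu, tau2):
    # Scaled inverse chi-squared density with nu degrees of freedom and scale tau2
    c = (tau2 * nu / 2) ** (nu / 2) / math.gamma(nu / 2)
    return c * x ** (-(1 + nu / 2)) * math.exp(-nu * tau2 / (2 * x))

alpha, beta = 3.0, 1.5
nu, tau2 = 2 * alpha, beta / alpha  # the re-parametrization
for x in (0.2, 0.5, 1.0, 3.0):
    assert abs(inv_gamma_pdf(x, alpha, beta) - scaled_inv_chi2_pdf(x, nu, tau2)) < 1e-12
print("densities agree")
```

This makes explicit why the two conjugate-prior choices lead to the same family of posteriors: they are the same distribution written in different coordinates.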
However, even though from a Bayesian approach the inverse Gamma probability density appears to be the one that maximizes our credence regarding the posterior probability of the fluctuating parameter σ², it is not impossible that, for specific time scales, distributions related to the inverse Gamma could be just as relevant in terms of maximizing the posterior probability. Therefore, we propose in part 3 to compare different models on different time scales with Bayesian statistics, using Bayes factor testing, in order to conclude which model is the most relevant for which time scales.
Model comparison and choice via Bayes factor
Empirical data
We propose in this article to use as empirical data the American index future S&P 500, drawing on the work of Lanford in 1973 [24], who argued that the relevance of using statistical mechanics to explain the behavior of a system such as a gas relies on the number of particles that constitute it. Indeed, the spirit of his approach relies on the fact that the probability of errors in determining statistically the complex behavior of a system at a macroscopic level is inversely proportional to the number of degrees of freedom that characterize the system. The more microscopic particles or local equilibrium dynamics with their respective statistical features we have, the more the nonequilibrium system at the macroscopic scale tends to converge to a behavior relatively close to an average of all the local equilibrium dynamics, described by a superposition of statistics. For the same reason we consider index future markets as a complex and dynamic system consisting of a large number of heterogeneous agents with different profit opportunities, due to the large number of inhomogeneous strategies, making decisions on different time scales and with different volumes.
We will use one year of log-return data from 2020, corresponding to 52 weeks, and propose to look at the dynamics on different time scales: minutes, hours, 4-hours and daily. We define the log-returns as the studied system, the change in log-returns as the system dynamics, and the volatility as the system fluctuation, with π(σ²) defining its probability distribution for every given time scale. Moreover, as mentioned in part 2, we consider the historical system dynamics data u_N as Normally distributed, with the marginal likelihood defined by equation (4). The autocorrelation function of the volatility decays more slowly than an exponential, and the more we increase the time scale, the faster the decay. It seems that on larger time scales, such as daily or even 4-hours, the long-term dependence recedes. Figures 4 and 5 give a good outlook on the existence of volatility concentration, from the length of the tail of the distribution of the fluctuating volatility, which obeys a power law. But we also see that the more we increase the time scale, the less the distribution of the fluctuating volatility converges to a power law decay, transiting instead towards an exponential decay with a less obvious concentration of volatility. This observation is even more pronounced on the daily time scale, where the volatility seems to be more uniformly distributed.
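As an illustration of how the different time scales are obtained from a single log-price series, the following sketch (synthetic Gaussian data standing in for the 2020 S&P 500 future series, which we do not reproduce here) computes the change in log-price at a given sampling step and the lag-1 autocorrelation of the absolute returns:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic minute-level log-prices (placeholder for the actual futures data)
n = 5000
log_price = np.cumsum(rng.normal(0, 1e-4, n))

def log_returns(lp, step):
    """Change in log-price at a given time-scale step (e.g. 1 min, 60 min)."""
    return lp[step::step] - lp[:-step:step]

def autocorr(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[: -lag], x[lag:]) / np.dot(x, x))

for step, label in [(1, "minutes"), (60, "hours")]:
    r = log_returns(log_price, step)
    print(label, len(r), round(autocorr(np.abs(r), 1), 3))
```

On real data, the lag-1 autocorrelation of absolute minute returns would be markedly positive (volatility clustering); the synthetic i.i.d. series here yields values near zero, which is precisely the contrast the text describes.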
From these preliminary analyses, we first conclude that financial volatility exhibits long-term dependence as a function of the time scale, with a power-law-decaying probability distribution function at small time scales and strong volatility concentration. The latter supports the Mandelbrot multifractal model (1963) [2], which simulates volatility fluctuations using Lévy stable distributions with finite expected mean but infinite expected variance. Second, we also conclude that on larger time scales such as 4-hours, the volatility probability density distribution and the autocorrelation function decay exponentially, with remaining but weaker volatility concentration. And on very large time scales such as daily, the notion of volatility concentration tends to be absent.
The next part focuses on the methodology chosen to define which model, between the inverse Gamma and the log-normal superstatistics, is the most relevant in terms of Bayesian criteria for each time scale.
Methodology & results on parameter estimation: Metropolis-Hastings sampler
Before determining which superstatistics model is the most appropriate, we must first estimate, on every time scale, the parameters of each model that best fit the observed data. By "fit", we mean the parameters that enable us to maximize the posterior probability of the fluctuating parameter σ² conditional on the data u_N, as described by equation (1). To this end we introduce the Metropolis-Hastings random walk algorithm, which is a specific Markov Chain Monte Carlo (MCMC) method.
We consider equation (1), p(θ | u_N) ∝ p(u_N | θ) · π(θ), as defining our target distribution, with p(u_N | θ) the marginal likelihood and π(θ) the prior, the target being proportional to the posterior. The Markov chain is constructed on a state space on which it is ergodic and stationary, meaning that if θ_t follows the stationary distribution then so does θ_{t+1}; consequently, by constructing a Markov chain on θ, the chain has a unique stationary distribution. The chain can then be considered as a sample from the target distribution, and the purpose is to iterate samples of the parameter θ; the acceptance criterion on θ must ensure that the stationary distribution of the chain, built from the prior conjugate to the marginal likelihood, yields the posterior distribution of interest. In other words, the acceptance criterion must guarantee that the chain's stationary distribution remains proportional to the posterior.
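A minimal random-walk Metropolis-Hastings sketch for this setting (Gaussian likelihood with known mean, inverse-gamma prior on σ²; the simulated data, hyperparameters and proposal scale are illustrative choices of ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Data assumed Gaussian with known mean mu and unknown variance sigma^2
mu, true_sigma2 = 0.0, 1.5
u = rng.normal(mu, np.sqrt(true_sigma2), size=400)

def log_posterior(s2, alpha=2.0, beta=2.0):
    """log of Gaussian likelihood times inverse-gamma prior, up to a constant."""
    if s2 <= 0:
        return -np.inf
    log_lik = -0.5 * len(u) * np.log(s2) - np.sum((u - mu) ** 2) / (2 * s2)
    log_prior = (-alpha - 1) * np.log(s2) - beta / s2
    return log_lik + log_prior

# Random-walk Metropolis-Hastings: symmetric Gaussian proposal on sigma^2
samples, s2 = [], 1.0
for _ in range(5000):
    prop = s2 + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(s2):
        s2 = prop  # accept the proposed move
    samples.append(s2)

post_mean = float(np.mean(samples[1000:]))  # discard burn-in
print(round(post_mean, 2))
```

Because the proposal is symmetric, the acceptance ratio only needs the unnormalized posterior, which is exactly why knowing p(θ|u_N) up to the proportionality in equation (1) is sufficient.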
In this paper we compare two models {M₁, M₂}:
• we call M₁ = IGa an inverse Gamma distribution with density

g₁(σ²; α, β) = (β^α / Γ(α)) (σ²)^{−α−1} exp(−β/σ²)

with σ² > 0 the financial volatility, α the shape parameter, β the scale parameter, and Γ the gamma function; we define our prior distribution π on θ conditional on M₁ as π(θ|M₁) ~ g₁(θ; α, β);
• we call M₂ = logN a log-Normal distribution, with the prior π(θ|M₂) ~ g₂(θ) defined analogously.
Figure5 shows the Metropolis-Hastings random walk iterations for optimizing the parameter θ conditional on the observed data u_N for both superstatistics, IGa and logN. We observe a quick convergence, after 25 iterations, to the target parameters. While for both superstatistics the parameter θ gets larger on smaller time scales, we also observe a larger dispersion of θ across time scales for the log-Normal superstatistics. Indeed, whereas for the IGa superstatistics θ ranges from 1.83 to 1.98 from the daily to the minute time scale, for the log-Normal superstatistics it ranges from 0.99 to 1.26. Even though the difference is slight, it seems that the log-Normal distribution provides more information about the difference in volatility concentration between time scales than the IGa superstatistics.
Figures 6 and 7 provide another view of the previous observation, showing on every time scale the dispersion of the posterior, prior and marginal likelihood probability density distributions for both superstatistics (M₁ for IGa and M₂ for log-Normal). Once again, the difference in volatility concentration between the two models is slight, but remains smaller for model M₂. However, the transition of statistics from power law decay on small time scales to exponential decay on large scales, with less heavy tails, is clearly visible in both figures.
Fig8. Probability density distribution comparison between observed and estimated volatility on different time scales (red: observed absolute log-returns, blue: estimated absolute log-returns with IGa superstatistics, green: estimated absolute log-returns with log-N superstatistics).
Figure8 compares the probability density distribution of the observed fluctuating volatility, taken as the absolute log-returns, with the probability density distributions integrated from both the IGa and logN superstatistics with respective parameters {θ₁, θ₂} on different time scales. If from Figure7 it was hardly possible to see a difference between the two superstatistics in explaining long-term dependence in volatility, from Figure8 we observe clearly that the IGa superstatistics fits the probability distribution of the empirical data much better than the logN superstatistics, describing with accuracy the long dependence of volatility on very small time scales with strong power law decay. We also observe that the larger the time scale, the less reliable both superstatistics become in describing the empirical data.
The next part will consist in quantifying and comparing the accuracy with which both the IGa and the logN models fit and describe the empirical data, by determining the Bayes factor on every time scale through Bayesian inference.
Methodology & results on model comparison and model choice: Bayes factor
Once the parameters {θ₁, θ₂} have been estimated for both superstatistics, we now propose to compare the two models on different time scales by determining the Bayes factor between M₁ and M₂.
We no longer consider the absolute value of the log-returns as previously; instead we call the series of data u_N = {u_1, …, u_N} the changes in log-price of the S&P 500 index future, with the size N depending on the time scale length. We also consider the two parametric models described previously, M₁ being the IGa superstatistics with prior probability π(θ₁|M₁) under model M₁, and M₂ being the log-N superstatistics with prior probability π(θ₂|M₂) under model M₂. According to Bayes' theorem, the posterior probabilities of M₁ and M₂ conditional on the data u_N are:

p(M₁ | u_N) = p(M₁) ℒ₁(u_N) / p(u_N)  (14)

p(M₂ | u_N) = p(M₂) ℒ₂(u_N) / p(u_N)  (15)

where ℒ_k(u_N) = ∫ p(u_N | θ_k, M_k) π(θ_k | M_k) dθ_k, and {ℒ₁, ℒ₂} are the marginal likelihood functions for models M₁ and M₂. Hence the Bayes factor between both models is obtained by dividing equation (14) by (15), which, for equal prior model probabilities, gives:

ℬℱ₁₂ = ℒ₁(u_N) / ℒ₂(u_N)  (16)

The Bayes factor is used here as a Bayesian alternative to classical hypothesis testing in the choice of models. Therefore, according to equation (16), if for a given time scale we have ℬℱ₁₂ > 1, then model M₁ must be preferred to M₂; the reciprocal is also true. Thus, we propose to draw 1000 random series of (θ₁|M₁) and (θ₂|M₂) in order to obtain 1000 Bayes factors on every time scale. The purpose here is to estimate the marginal probability of reliability of each model. The importance of working with many Bayes factors relies on the fact that, as emphasized by Figures 6 and 7, on very small time scales the difference in posterior probability density of the fluctuating volatility between the two superstatistics is not obvious. Therefore, determining which model gives the highest marginal probability of reliability in the long run seems more reasonable than making a model decision based on a single Bayes factor.
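A simple way to approximate equation (16) numerically is to estimate each marginal likelihood ℒ_k by Monte Carlo averaging of the likelihood over prior draws; the sketch below does this for an inverse-gamma and a log-normal prior on σ² (all hyperparameters and the stand-in data are illustrative assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(3)

mu = 0.0
u = rng.normal(mu, 1.0, size=300)  # stand-in for log-returns at one time scale

def log_lik(s2):
    """Gaussian log-likelihood of the data for a given variance s2."""
    return -0.5 * len(u) * np.log(2 * np.pi * s2) - np.sum((u - mu) ** 2) / (2 * s2)

def log_marginal_likelihood(prior_draws):
    """Monte Carlo estimate of log L(u_N) = log E_prior[ p(u_N | sigma^2) ]."""
    logs = np.array([log_lik(s2) for s2 in prior_draws])
    m = logs.max()  # log-sum-exp trick for numerical stability
    return m + np.log(np.mean(np.exp(logs - m)))

# M1: inverse-gamma prior on sigma^2; M2: log-normal prior (hypothetical hyperparameters)
draws_m1 = 1.0 / rng.gamma(2.0, 1.0 / 2.0, size=20000)  # 1/sigma^2 ~ Gamma(2, rate 2)
draws_m2 = rng.lognormal(mean=0.0, sigma=1.0, size=20000)

log_bf = log_marginal_likelihood(draws_m1) - log_marginal_likelihood(draws_m2)
print("log Bayes factor M1 vs M2:", round(log_bf, 2))
```

A positive log Bayes factor favors M₁ and a negative one favors M₂; with priors of comparable density near the true variance, the value stays close to zero, mirroring the near-unity Bayes factors the paper reports on small time scales.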
Figure9 gives the series of 1000 Bayes factors on different time scales, computed according to equation (16) with the estimated parameters {θ₁, θ₂} for models M₁ and M₂. We observe that for Δt = minute, roughly 75 % of the time we would prefer model M₁ to M₂, with an average Bayes factor equal to 1.00013; for Δt = hour we would prefer model M₁ 98 % of the time, with an average Bayes factor equal to 1.0048; for Δt = 4 hours we would choose M₁ 87 % of the time, with an average Bayes factor equal to 1.0049; and finally for Δt = day, 100 % of the time we would prefer model M₁, with a Bayes factor equal to 1.31.
It seems clear that on every time scale the inverse Gamma superstatistics is preferable to the log-Normal; however, we also observe that the smaller the time scale, the more relevant the log-Normal superstatistics becomes. While we do not observe a clear transition of statistics from large to small time scales, as Xu and Beck [10] mentioned in their article, we must note that on small time scales both the IGa and log-N superstatistics are relatively similar, with a Bayes factor slightly above one to within 10⁻⁴.
Fig10. Probability density distribution comparison between observed and estimated log-returns data on minutes and hours time scales (red: observed log-returns, blue: estimated log-returns with IGa superstatistics, green: estimated log-returns with log-N superstatistics).
Fig11. Probability density distribution comparison between observed and estimated log-returns data on 4-hours and daily time scales (red: observed log-returns, blue: estimated log-returns with IGa superstatistics, green: estimated log-returns with log-N superstatistics).
Figures 10 and 11 seem to confirm our previous conclusion, obtained from the series of Bayes factors, by showing that the probability density distribution of log-returns integrated from an inverse Gamma superstatistics fits the probability density of the observed log-returns better than the log-Normal superstatistics on every time scale. It describes the power law decay on shorter time scales better and captures well the decrease in volatility concentration at longer time scales. However, we do not observe an obvious transition of superstatistics from IGa to logN. While both converge to the same distributions and can describe the same distribution fairly well on shorter time scales, the inverse Gamma superstatistics remains the preferred choice on every time scale. Finally, we also observe that, compared to the minute time scale, on the 4-hours and daily time scales both superstatistics lose accuracy in describing the probability density distribution of log-returns. The reason is that the fluctuating volatility parameter θ is more correlated on shorter time scales than on longer ones, as shown in Figure3, and consequently the choice of infinite variance and volatility concentration makes much less sense on the daily than on the minute time scale. Therefore, it would be more reasonable to describe the absolute log-returns on larger time scales with finite variance.
Conclusion
We have achieved satisfactory results in estimating the parameters of asymmetric fluctuating volatility probability density distributions by applying Bayesian inference methods based on our conditional superstatistics framework.
We first showed that on small time scales, particularly on the minute basis, both the power-law decay of the probability density of log-returns and the power-law decline of the autocorrelations in volatilities demonstrate the presence of long-range dependence and a strong concentration of volatility. This led us to provide evidence, from Bayesian model comparison, that the inverse Gamma superstatistics best describes the fluctuating dynamics of financial volatility and must be preferred to the log-Normal model.
We thus conclude that superstatistics with an infinite-variance superstatistical volatility parameter are suitable to describe this volatility concentration and dependence.
Next, we provided evidence that on larger time scales a transition of statistics takes place, with a shift from power law decay to exponential decay of the volatility probability density distribution, due to the fast decline of the autocorrelation function, characteristic of a memoryless phenomenon.
Genetic history of the Koryaks and Evens of the Magadan region based on Y chromosome polymorphism data
In order to clarify the history of gene pool formation of the indigenous populations of the Northern Priokhotye (the northern coast of the Sea of Okhotsk), Y-chromosome polymorphisms were studied in the Koryaks and Evens living in the Magadan region. The results of the study showed that the male gene pool of the Koryaks is represented by haplogroups C-B90-B91, N-B202, and Q-B143, which are also widespread in other peoples of Northeastern Siberia, mainly of Paleo-Asiatic origin. High frequency of haplogroup C-B80, typical of other Tungus-Manchurian peoples, is characteristic of the Evens of the Magadan region. The shared components of the gene pools of the Koryaks and Evens are haplogroups R-M17 and I-P37.2 inherited as a result of admixture with Eastern Europeans (mainly Russians). The high frequency of such Y chromosome haplogroups in the Koryaks (16.7 %) and Evens (37.8 %) is indicative of close interethnic contacts during the last centuries, and most probably especially during the Soviet period. The genetic contribution of the European males’ Y chromosome significantly prevails over that of maternally inherited mitochondrial DNA. The study of the Y chromosome haplogroup diversity has shown that only relatively young phylogenetic branches have been preserved in the Koryak gene pool. The age of the oldest component of the Koryak gene pool (haplogroup C-B90-B91) is estimated to be about 3.8 thousand years, the age of the younger haplogroups Q-B143 and N-B202 is about 2.8 and 2.4 thousand years, respectively. Haplogroups C-B90-B91 and N-B202 are Siberian in origin, and haplogroup Q-B143 was apparently inherited by the ancestors of the Koryaks and other Paleo-Asiatic peoples from the Paleo-Eskimos as a result of their migrations to Northeast Asia from the Americas. 
The analysis of microsatellite loci for haplogroup Q-B143 in the Eskimos of Greenland, Canada and Alaska as well as in the indigenous peoples of Northeastern Siberia showed a decrease in genetic diversity from east to west, pointing to the direction of distribution of the Paleo-Eskimo genetic component in the circumpolar region of America and Asia. At the same time, the Evens appeared in the Northern Priokhotye much later (in the XVII century) as a result of the expansion of the Tungusic tribes, which is confirmed by the results of the analysis of haplogroup C-B80 polymorphisms.
Introduction
The extreme Northeast of Siberia is inhabited by the Chukotka-Kamchatkan peoples (the Chukchis, Koryaks, Itelmens) and the Eskimos, which are characterized by genetic peculiarities and occupy a distinct position among the ethnogeographical groups of Northern Eurasia (Rasmussen et al., 2010; Fedorova et al., 2013; Cardona et al., 2014; Pagani et al., 2016; Pugach et al., 2016; Gorin et al., 2022). According to paleogenomic data, the genetic specificity of these peoples is due to their ancient Paleo-Siberian genetic substrate, inherited in part by the Native Americans (Sikora et al., 2019). Meanwhile, the results of the analysis of autosomal loci polymorphism in the indigenous Siberian populations have shown that in the east of Siberia the appearance of alleles of European origin is estimated to be relatively recent (about 3-6 generations ago), which is associated with the Russian discovery of Siberia, starting mainly from the XVII century and especially intensive during the Soviet period (Cardona et al., 2014). Moreover, various studies demonstrate that the flow of European genes into the gene pools of the indigenous populations of Northeastern Siberia was carried out predominantly by men (Balanovska et al., 2020a, b; Agdzhoyan et al., 2021; Solovyev et al., 2023). In this regard, the contribution of European Y chromosome variants to the gene pools of indigenous peoples of Northeastern Siberia and other Arctic regions usually exceeds that of European maternally inherited mitochondrial DNA (mtDNA) variants (Bosch et al., 2003; Rubicz et al., 2010; Dulik et al., 2012; Olofsson et al., 2015).
The results of genetic studies of the indigenous populations of the northern coast of the Sea of Okhotsk (the Koryaks and Evens of the Magadan region) have shown that they have a very low frequency of European mtDNA variants (only in the Evens does it reach 4 %) (Derenko et al., 2023), and according to the results of genome-wide analysis, the frequency of the European genetic component in the Northeastern Siberian populations has significantly increased only in the last ~100 years (Cardona et al., 2014). Most likely, this may be related to the increased European contribution by males, and therefore the aim of this paper is to analyze Y chromosome polymorphism in the indigenous populations of the Magadan region.
Materials and methods
Unrelated males from the indigenous populations of the Magadan region (the Koryaks and Evens) were studied (Supplementary Materials 1 and 2)¹. Based on survey data, the Koryaks (N = 36) and Evens (N = 61) studied had identified themselves as belonging to the above ethnic groups for at least 2-3 generations. According to the results of mtDNA analysis, all individuals studied are characterized by haplotypes of Northeast Asian origin.
DNA was extracted and purified from whole blood as previously described (Derenko, Malyarchuk, 2010). Samples were genotyped for 12 microsatellite (STR) loci (DYS19, DYS385a, DYS385b, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS437, DYS438, DYS439) using the PowerPlex Y System (Promega Corporation, Madison, WI, USA). Alleles were detected by capillary electrophoresis on an ABI 3500xL Genetic Analyzer (Applied Biosystems, USA). The results were analyzed using the programs Genscan v. 3.7 and Genotyper v. 3.7 (Applied Biosystems). Data for the DYS385 loci were not considered in the statistical analysis because the order of the DYS385a and DYS385b loci on the Y chromosome is unknown. The number of repeats at the DYS389II locus was determined by subtracting the length of the smaller repeat (DYS389I) from the length of the larger repeat (DYS389II).
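The DYS389II correction described above amounts to a simple subtraction, since the DYS389II amplicon contains the DYS389I repeat stretch; a trivial illustration (the allele values are hypothetical):

```python
def dys389ii_repeats(dys389i: int, dys389ii_total: int) -> int:
    """DYS389II spans DYS389I, so subtract the smaller fragment's repeat count."""
    return dys389ii_total - dys389i

# Hypothetical genotyped values for one individual
print(dys389ii_repeats(13, 30))  # -> 17
```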
Y chromosome haplogroups were determined by direct DNA sequencing or restriction fragment length polymorphism analysis of haplogroup markers as described previously (Malyarchuk et al., 2013). Data on the variability of the B77, B79, B80, B81, B90, B91, B92, B94, B143, B186, B202, B203, B204, and B471 loci were obtained earlier in studies of whole Y chromosome variability in different ethnic groups, including some Koryak and Even individuals from the Magadan region (Karmin et al., 2015).
The Vp statistic, the average variance of the number of repeats across STR loci, was used to estimate intrapopulation genetic diversity (Kayser et al., 2001). The evolutionary age of the Y chromosome haplogroups was calculated based on the analysis of the average number of repeats in loci and their variance (Zhivotovsky et al., 2004). The mutation rate value used in the calculations, 2.79·10⁻³ substitutions per locus per generation, was obtained by averaging mutation rates for the 10 Y chromosome loci analyzed, according to Ballantyne et al. (2010). The program Network 10.2 (www.fluxus-engineering.com) was used to construct median networks of the Y chromosome STR haplotypes.
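A toy sketch of the two quantities used here: the Vp statistic as the mean across loci of the per-locus variance in repeat number, and a simplified Zhivotovsky-style age estimate as the average squared deviation from an assumed founder haplotype (taken here, for illustration only, as the per-locus median) divided by the mutation rate quoted in the text; the haplotype matrix is invented toy data, not the paper's:

```python
import numpy as np

# Rows: individuals; columns: repeat counts at four STR loci (toy data)
haplotypes = np.array([
    [13, 30, 24, 10],
    [13, 31, 24, 11],
    [14, 30, 25, 10],
    [13, 30, 24, 10],
])

# Vp: average across loci of the per-locus variance in repeat number
vp = float(np.mean(np.var(haplotypes, axis=0)))

# Simplified age estimate (in generations): mean squared deviation from the
# assumed founder haplotype divided by the mutation rate used in the paper,
# 2.79e-3 mutations per locus per generation.
mu = 2.79e-3
founder = np.median(haplotypes, axis=0)
asd = float(np.mean((haplotypes - founder) ** 2))
age_generations = asd / mu
print(round(vp, 4), round(age_generations, 1))
```

The actual Zhivotovsky et al. (2004) procedure infers the founder haplotype from the median network rather than taking a simple per-locus median, so this sketch only conveys the shape of the calculation.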
Results and discussion
The results of the study of Y chromosome polymorphism showed that the male gene pool of the Koryaks living in the Magadan region is represented mainly by haplogroups C, N, and Q (Table 1). European lineages in the Koryaks were found at a frequency of 16.7 % for haplogroups R-M17, I-M253, and I-P37.2. The frequency of European haplogroups is even higher among the Evens, at 37.8 %. They are represented by haplogroups R-M17, R-M269, I-P37.2, as well as N-B186, which is characteristic of the peoples of Northeastern Europe (Karmin et al., 2015). The East Asian component of the Even gene pool consists of various subgroups of haplogroup C (55.7 % in total). In addition, haplotypes belonging to haplogroup Q-M3, which is widespread among the Native Americans and Eskimos, have been found in the Evens.
Haplogroup C variants in the Koryak and Even populations differ significantly. The Koryaks are characterized by the B90 and B91 specific markers, while the Evens fall into the B80-defined subgroup. According to the results of whole-genome studies, the B90 marker is specific for the Y chromosomes of the indigenous populations of Northeastern Siberia (the Koryaks, Evenks, and Ulchi) (Karmin et al., 2015; Balanovska et al., 2018), and the B91-defined subgroup is present only in the Koryaks (Karmin et al., 2015). Its frequency in the Koryaks of the Magadan region is 27.8 % (see Table 1).
According to the results of molecular dating based on the analysis of single nucleotide polymorphisms (SNPs) in whole Y chromosomes, the age of the B91 subgroup is estimated at 3.8 (3.0-4.7) thousand years (Karmin et al., 2015). The age of the upstream C-B90 subgroup is approximately 5.0 (4.2-5.7) thousand years. Based on the similarity of STR profiles, B90 haplotypes appear to be predominantly distributed in Northeastern Siberia, since, in addition to the Koryaks, Evenks and Ulchi, homologous STR haplotypes are observed in the Yakuts, Yukaghirs, Itelmens, and Evenks². In our study, a single homologous B90 haplotype (similar to that of the Koryaks) was also found in the Evens.
In the Evens, the C subgroup marked by a substitution at locus B80 is mainly distributed (see Table 1). It is known that B80 haplotypes, in addition to the Evens, are also characteristic of other Tungus-Manchurian peoples (the Orochens, Evenks, and Manchurians) (Yu et al., 2023). The evolutionary age of this subgroup, according to the SNP data, is 1.7 (1.2-2.2) thousand years (Karmin et al., 2015). The results of the analysis by H.-X. Yu et al. (2023) have shown that the age of the B80 subgroup is estimated to be about 2 thousand years, while the B81 and B471 haplotypes specific to the Evens originated in the Amur region and spread to Northeastern Siberia as a result of the migrations of the Tungus ancestors in the last approximately 1.5 thousand years.
The N haplogroup in the Koryaks of the Magadan region is represented exclusively by the N-B202 branch (25 %). The same subgroup predominates in the gene pool of the Chukchi (Karmin et al., 2015; Ilumäe et al., 2016; Agdzhoyan et al., 2021), and is also found in the neighboring peoples, the Itelmens and Eskimos (Agdzhoyan et al., 2021). The age of the N-B202 branch is approximately 2.4 (1.8-3.1) thousand years (Ilumäe et al., 2016). This haplogroup consists of two subgroups, the older N-B204 (estimated to be about 1.4 thousand years old based on STR haplotype diversity) and the younger N-B203 (about 600 years old) (Agdzhoyan et al., 2021). In the Chukchi, both subgroups are present to a nearly equal extent, while the older subgroup N-B204 predominates in the Koryaks (see Supplementary Material 1). In the Evens of the Magadan region, haplogroup N was found at a relatively low frequency (6.6 %) and is represented by different haplotypes. In this respect, the Magadan Evens are similar to the Kamchatkan Evens, but differ from the Okhotsk Evens, who are characterized by the "Amur region" subgroup N-B479 at a frequency of 10 % (Agdzhoyan et al., 2019).
Haplogroup Q represents the oldest component of the gene pools of the indigenous populations of Siberia and America. Haplogroup Q-F903 was found in an Upper Paleolithic inhabitant of Eastern Siberia (the Afontova Gora archaeological site, approximately 17 thousand years old) (Raghavan et al., 2014), and haplogroup Q-B143 was revealed in Northeastern Siberia (the Duvanniy Yar site, about 10 thousand years old) (Sikora et al., 2019). The same haplogroup was reported in a representative of the Paleo-Eskimo Saqqaq culture who lived in Greenland about 4 thousand years ago (Rasmussen et al., 2010). Currently, haplogroup Q-B143 is distributed only among the indigenous populations of the American Far North, Greenland and Siberia (Malyarchuk et al., 2011; Karmin et al., 2015; Grugni et al., 2019; Luis et al., 2023). In the Koryaks of the Magadan region, this Q haplogroup was detected with a frequency of 16.7 % (see Table 1). According to indirect data (based on the frequencies of haplogroups Q(xM346) and Q-NWT01, as well as on the similarity of STR haplotypes), haplogroup Q-B143 is present in the Koryaks of Kamchatka (with frequency varying from 6 to 18 %)³ (Karafet et al., 2018), in the Chukchi (13 %)³, and in the Yukaghirs (30.8 %) (Pakendorf et al., 2006), and it has also been found at high frequencies (up to 50 %) in the Eskimos of Alaska, Canada and Greenland (Dulik et al., 2012; Olofsson et al., 2015; Luis et al., 2023).
The presence of haplogroup Q-B143 in the Northeast of Siberia about 10 thousand years ago and at present suggests that Q-B143 is the most ancient Siberian component that has been a part of the gene pools of the Paleo-Asiatic peoples and their ancestors. Archaeological data, as well as the results of the study of haplogroup Q polymorphism, showed that about 5 thousand years ago the carriers of haplogroup Q-B143 (as well as of the unsuccessful lineages Q-L713 and Q-preM120) migrated from Siberia to America and then to Greenland and became the founders of the Paleo-Eskimo culture (Grugni et al., 2019). However, the results of the Q-B143 dating showed that the age of this haplogroup in modern Koryaks is only about 2.8 thousand years, which indicates the possibility of a back migration of the carriers of these haplotypes (most likely, the Paleo-Eskimos) from North America to Northeast Asia (Grugni et al., 2019). Similarly, the results of studies of STR variability within haplogroup Q-B143 in Greenlandic and North American Eskimos showed that the diversity and evolutionary age of haplotypes in Greenlandic Eskimos are higher than in Canadian and Alaskan Eskimos (Olofsson et al., 2015; Luis et al., 2023). In this connection, these authors suggested that haplogroup Q-B143 was spread by the Paleo-Eskimos from the east to the west of America and, moreover, became one of the main components of the gene pool of the Neo-Eskimos, which most likely formed in the north of America about 700 years ago.
Since Luis et al. (2023) did not investigate Q-B143 haplotypes in the indigenous populations of Northeast Asia, we analyzed STR haplotype diversity in samples of Greenlandic, Canadian, and Alaskan Eskimos (based on data from Dulik et al. (2012), Olofsson et al. (2015), and Luis et al. (2023)), and in the Koryaks, Yukaghirs, and Chukotkan Eskimos (according to Pakendorf et al. (2006), Luis et al. (2023), and the present study). The results of our study showed that, indeed, the Northeast Asian sample has the lowest diversity of Q-B143 haplotypes compared to the Greenlandic and North American ones, indicating that these haplotypes appeared in Northeast Asia later than in North America and Greenland (Table 2).
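The diversity summaries behind this comparison (sample size N, number of distinct haplotypes n, and the repeat-number variance Vp reported in Table 2) can be illustrated with a minimal sketch. The function name, the toy repeat counts, and the averaging-over-loci convention for Vp below are illustrative assumptions, not the exact estimator or data used in the paper.

```python
import numpy as np

def str_diversity(haplotypes):
    """Summarise STR haplotype diversity for one population sample.

    `haplotypes` is an (N, L) table: N sampled Y chromosomes typed at
    L STR loci, entries are repeat counts. Returns (N, n, Vp), where
    n is the number of distinct haplotypes and Vp is the variance in
    repeat number averaged over loci (one common convention).
    """
    h = np.asarray(haplotypes)
    n_samples = h.shape[0]
    n_distinct = len({tuple(row) for row in h})
    vp = h.var(axis=0, ddof=0).mean()  # per-locus variance, averaged
    return n_samples, n_distinct, vp

# toy example: four chromosomes typed at three STR loci
summary = str_diversity([[13, 24, 10],
                         [13, 24, 10],
                         [14, 24, 11],
                         [13, 25, 10]])
```

Lower `n` and `Vp` at comparable `N` indicate a less diverse (typically younger, or drift-reduced) sample, which is the logic of the Table 2 comparison.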
It is necessary to note the discrepancy between the dates obtained using STR markers and whole-genome SNP data, because the evolutionary age of haplogroup Q-B143 in the Koryaks according to SNP data (2.8 ± 0.9 thousand years as per Grugni et al. (2019)) exceeds that obtained using STR markers for the indigenous population of Northeastern Siberia (0.7 ± 0.4 thousand years) (see Table 2). This is most likely due to the very large mismatch in the number of variable positions for the compared genetic systems, the high probability of recurrent (forward and reverse) mutations for rapidly evolving STR loci, and the dependence of such mutational events on the age of haplogroups. Therefore, it is likely that STR dates close to the whole-genome ones can be obtained only for young branches (Agdzhoyan et al., 2021). Thus, if we rely on the whole-genome SNP dating (as more accurate), we can assume that the appearance of haplogroup Q-B143 in Northeast Asia occurred long before the appearance of the Neo-Eskimos and is thus associated with the migrations of the Paleo-Eskimos. The possibility of such events is evidenced by archaeological data, according to which the Paleo-Eskimo cultural tradition was established in Chukotka about 3.0-3.5 thousand years ago (the Chertov Ovrag site on Wrangel Island and the Unenen settlement), as well as on the northern coast of the Sea of Okhotsk by representatives of the Tokarev culture (probable ancestors of the Koryaks) about 2.8 thousand years ago (Grebenyuk et al., 2019). The low level of diversity of Northeastern Siberian STR haplotypes and their peripheral position in the median network among the huge number of Q-B143 haplotypes of Arctic peoples indicate a very small number of successful (in terms of reproduction) migrations of the Paleo-Eskimos to the Asian coast (see the Figure). In fact, a single haplotype (ht20 in the Figure) is the most likely ancestor of the other haplotypes identified in the Koryaks and Yukaghirs.
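The kind of STR-based age estimate discussed above can be sketched with the average-squared-distance (ASD) idea: the mean squared deviation of sampled haplotypes from an assumed founder haplotype grows roughly linearly with time, so dividing it by a per-locus mutation rate gives an age in generations. The function name, the toy haplotypes, and the rate and generation-time values below are illustrative assumptions, not the estimator or parameters used in the paper.

```python
def asd_age_estimate(haplotypes, founder, mu=0.002, gen_years=30.0):
    """Rough STR-based age of a haplogroup via the ASD method.

    Average squared distance (in repeat units, per locus) of sampled
    haplotypes from an assumed founder, divided by the per-locus
    mutation rate mu (mutations per generation), gives an age in
    generations; gen_years converts it to years.
    """
    n_loci = len(founder)
    asd = sum(
        sum((h[j] - founder[j]) ** 2 for j in range(n_loci)) / n_loci
        for h in haplotypes
    ) / len(haplotypes)
    generations = asd / mu
    return generations * gen_years

# toy example: three haplotypes at two loci, founder haplotype (13, 24)
age_years = asd_age_estimate([(13, 24), (14, 24), (13, 25)], (13, 24))
```

Because recurrent forward-and-back STR mutations erase variance in old lineages, this kind of estimate systematically understates the age of old branches, which is exactly the STR-versus-SNP discrepancy noted above.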
The low level of heterogeneity of Q-B143 haplotypes in the indigenous populations of Northeastern Siberia also indicates that the most ancient haplotypes, ancestral to the haplotypes of the Paleo-Eskimos of the north of America and Greenland, have not been preserved in their gene pools. This seems quite likely, given the low effective population size of Northeastern Siberians and the increasing role of genetic drift under these conditions, as well as the continuing influence from neighboring Siberian populations. It is known that periods of almost complete population replacement occurred more than once during the 35 thousand years of Siberia's population history (Sikora et al., 2019).
Traces of later contacts between the Neo-Eskimos and Paleo-Asiatic peoples are clearly recognized in genetic data. The Neo-Eskimos were formed on the basis of two genetic components, the Paleo-Eskimo and the Paleo-Indian ones (Flegontov et al., 2019; Sikora et al., 2019). Notably, the Paleo-Indian component of the Neo-Eskimos is well recognized by mtDNA haplogroups (A2a, A2b) and Y chromosome haplogroups (Q-M3). Therefore, by the presence of these haplogroups, it is possible to estimate the genetic contribution of the Neo-Eskimos. Based on mtDNA markers, the frequency of haplogroups A2a and A2b is very high in the Asian Eskimos and Chukchi, while among other Paleo-Asiatic peoples these haplogroups were found only in the Koryaks, at frequencies ranging from 2.7 to 9.1 % (Derenko et al., 2023). On the Y chromosome, the Paleo-Indian contribution, marked by haplogroup Q-M3, in the Chukchi and Kamchatkan Koryaks has been estimated to be 11.0 and 6.1 %, respectively. The Q-M3 haplogroup was not detected in the Koryak population we studied; however, the frequency of this haplogroup in the Evens is 3.3 % (see Table 1). The most probable reason for the appearance of the "American" haplogroup Q-M3 in the Evens of the Magadan region is interethnic contacts, either with the Koryaks or directly with the Eskimos or related tribes, which, according to archaeological, ethnographic and linguistic data, could have lived on the Sea of Okhotsk coast as early as the beginning of the 2nd millennium AD (Burykin, 2001).
The high level of interethnic admixture in Northeastern Siberia, mentioned in a number of studies (Khakhovskaya, 2003; Balanovska et al., 2020a, b), is associated with the economic development of this region, first by Russian explorers and then, in the Soviet period, by numerous migrants, mainly of Eastern European origin. In the present study, we also found a high frequency of Y chromosome haplogroups characteristic of Eastern Europeans (and Russians, in particular): haplogroups R, I and J (Derenko et al., 2006; Balanovsky et al., 2008). In the Koryaks, their frequency was 16.7 %, and 37.8 % in the Evens (see Table 1). Moreover, in the Evens, the diversity of R-M17 haplotypes significantly exceeds that of the C-M217 haplogroup characteristic of the Evens themselves (Vp = 0.225 and 0.1, respectively). Meanwhile, the results of the study of maternally inherited mtDNA variability in the Koryaks and Evens of the Magadan region showed that they have a very low frequency of European mtDNA variants (up to 4 % in the Evens) (Derenko et al., 2023). The obtained results thus testify to a long history of admixture between the indigenous and immigrant populations in the territory of the Magadan region, as well as to the fact that immigrant men were predominantly involved in interethnic marriages and most of the children of such marriages were likely to be registered as indigenous, which is also typical of other areas of Northeastern Siberia according to demographic data (Khakhovskaya, 2003; Balanovska et al., 2020b).
Conclusion
The results of the study have shown that the male gene pools of the indigenous populations of the Magadan region, the Koryaks and Evens, differ significantly in their structure. The Koryaks have a specific set of Y chromosome haplogroups similar to those of the indigenous peoples of Northeastern Siberia (C-B90-B91, N-B202, Q-B143), while the Evens are characterized by a high frequency of haplogroup C-B80, common among the Tungus-Manchurian peoples. The haplogroups shared by the Koryaks and Evens (such as R-M17 and I-P37.2) were obtained from Eastern European migrants as a result of interethnic admixture. The high frequency of these Y chromosome haplogroups in the indigenous peoples of the Magadan region testifies to rather intensive interethnic contacts, mainly involving Eastern European males. The analysis of the evolutionary age of aboriginal Y chromosome haplogroups has shown that the gene pools of the Koryaks and Evens are represented by relatively young phylogenetic branches. In the Koryaks, the age of the oldest component of the gene pool (haplogroup C-B91) is estimated to be about 3.8 thousand years; later, haplogroups Q-B143 (about 2.8 thousand years ago) and N-B202 (about 2.4 thousand years ago) appeared in the Koryak gene pool. The Q-B143 haplogroup was most likely inherited by the ancestors of the Koryaks (as well as other Paleo-Asiatic peoples) from the Paleo-Eskimos as a result of their migrations along the Sea of Okhotsk coast. The Evens appeared in the Northern Priokhotye much later (in the XVII century) as a result of the expansion of Tungusic-speaking populations, which is confirmed by the results of the analysis of haplogroup C-B80 polymorphism.
Вавиловский журнал генетики и селекции / Vavilov Journal of Genetics and Breeding • 2024 • 28 • 1 Genetic history of the Koryaks and Evens of the Magadan region based on Y chromosome polymorphism data
Table 1.
Frequency (in %) of Y chromosome haplogroups in the Koryaks and Evens of the Magadan region
Table 2.
Diversity and evolutionary age of the Q-B143 STR haplotypes in the Eskimo and Paleo-Asiatic peoples. N is the sample size, n is the number of STR haplotypes, Vp is the variance of the number of repeats in STR loci.
We offer a unified treatment of distinct measures of well-posedness for homogeneous conic systems. To that end, we introduce a distance to infeasibility based entirely on geometric considerations of the elements defining the conic system. Our approach sheds new light into and connects several well-known condition measures for conic systems, including Renegar’s distance to infeasibility, the Grassmannian condition measure, a measure of the most interior solution, as well as the sigma and symmetry measures. AMS Subject Classification: 65K10, 65F22, 90C25
Introduction
The focus of this work is the geometric interpretation and coherent unified treatment of measures of well-posedness for homogeneous conic problems. We relate these different measures via a new geometric notion of a distance to infeasibility.
The development of condition measures in optimization was pioneered by Renegar [22,24,25] and has been further advanced by a number of scholars. Condition measures provide a fundamental tool to study various aspects of problems such as the behavior of solutions, robustness and sensitivity analysis [7,18,20,23], and performance of algorithms [5,14,15,19,21,25]. Renegar's condition number for conic programming is defined in the spirit of the classical matrix condition number of linear algebra, and is explicitly expressed in terms of the distance to infeasibility, that is, the smallest perturbation of the data defining a problem instance that renders the problem infeasible [24,25]. By construction, Renegar's condition number is inherently data-dependent. A number of alternative approaches for condition measures are defined in terms of the intrinsic geometry of the problem and independently of its data representation. Condition measures of this kind include the symmetry measure studied by Belloni and Freund [3], the sigma measure used by Ye [27], and the Grassmannian measure introduced by Amelunxen and Bürgisser [1]. In addition, other condition measures such as the ones used by Goffin [16], Cheung and Cucker [9], Cheung et al. [11], and by Peña and Soheili [21] are defined in terms of most interior solutions. The perspective presented in this paper highlights common ideas and differences underlying most of the above condition measures, reveals some extensions, and establishes new relationships among them.
Condition measures are typically stated for feasibility problems in linear conic form. Feasibility problems of this form are pervasive in optimization. The constraints of linear, semidefinite, and more general conic programming problems are written explicitly as the intersection of a (structured) convex cone with a linear (or, more generally, affine) subspace. The fundamental signal recovery property in compressed sensing can be stated precisely as the infeasibility of a homogeneous conic system for a suitable choice of a cone and linear subspace as explained in [2,8].
We focus on the feasibility problems that can be represented as the intersection of a closed convex cone with a linear subspace. Our data-independent distance to infeasibility is a measure of proximity between the orthogonal complement of this linear subspace and the dual cone. This distance depends only on the norm, cone, and linear subspace. Specific choices of norms lead to interpretations of this distance as the Grassmannian measure [1] as well as a measure of the most interior solution [11]. Our approach also yields neat two-way bounds between the sigma measure [27] and symmetry measure [3,4] in terms of this geometric distance. Our work is inspired by [1], and is similar in spirit to an abstract setting of convex processes [6,Section 5.4] (also see [12]). For a more general take on condition numbers for unstructured optimization problems and for an overview of recent developments we refer the reader to [28].
The main sections of the paper are organized as follows. We begin by defining our data-independent distance to infeasibility in Section 2, where we also show that it coincides with the Grassmannian distance of [1] for the Euclidean norm. In Section 3 we discuss Renegar's distance to infeasibility and show in Theorem 1 that the ratio of the geometric distance to infeasibility and Renegar's distance is sandwiched between the reciprocal of the norm of the matrix and the norm of its set-valued inverse, hence extending [1, Theorem 1.4] to general norms. In Section 4 we show that the cone induced norm leads to the interpretation of the distance to infeasibility in terms of the most interior solution (Proposition 3). We also provide further interpretation as eigenvalue estimates for the cone of positive semidefinite matrices and for the nonnegative orthant.
In Section 5 we propose an extension of the sigma measure of Ye and establish bounds relating the sigma measure and the distance to infeasibility (Proposition 5). Section 6 relates our distance infeasibility and the sigma measure to the symmetry measure used by Belloni and Freund via neat symmetric bounds in Theorem 2 and Corollary 1. Finally, Section 7 describes extensions of our main developments via a more flexible choice of norms.
Data-independent distance to infeasibility
Let E be a finite-dimensional real vector space with an inner product ⟨·,·⟩, endowed with a (possibly non-Euclidean) norm ‖·‖. Recall that the dual norm ‖·‖* is defined for u ∈ E as ‖u‖* := max{⟨u, x⟩ : x ∈ E, ‖x‖ ≤ 1}. Notice that by construction, the following Hölder inequality holds for all u, x ∈ E: ⟨u, x⟩ ≤ ‖u‖*·‖x‖. Let K ⊆ E be a closed convex cone. Given a linear subspace L ⊆ E, consider the feasibility problem

find x ∈ L ∩ K, x ≠ 0,   (2)

and its alternative

find u ∈ L⊥ ∩ K*, u ≠ 0.   (3)

Here K* denotes the dual cone of K, that is, K* := {u ∈ E : ⟨u, x⟩ ≥ 0 for all x ∈ K}, and L⊥ is the orthogonal complement of the linear subspace L, L⊥ := {u ∈ E : ⟨u, x⟩ = 0 for all x ∈ L}. In what follows we assume that K ⊆ E is a closed convex cone that is also regular, that is, int(K) ≠ ∅ and K contains no lines. In our analysis the cone K is fixed, and the linear subspace L is treated as the problem instance. This is a standard approach that stems from real-world models, where the cone is a fixed object with well-known structure that encodes the model's structure (for instance, the nonnegative orthant, the cone of positive semidefinite matrices, a copositive or hyperbolicity cone), and the problem instance is encoded via the coefficients of a linear system that in our case corresponds to the linear subspace.
Observe that (2) and (3) are alternative systems: one of them has a strictly feasible solution if and only if the other one is infeasible. When neither problem is strictly feasible, they both are ill-posed: each problem becomes infeasible for arbitrarily small perturbations of the linear subspace.
The main object of this paper is the following data-independent distance to infeasibility of (2):

ν(L) := min{‖u − y‖* : u ∈ K*, ‖u‖* = 1, y ∈ L⊥}.   (4)

When the norm ‖·‖ is Euclidean, that is, ‖v‖ = ‖v‖* = ‖v‖₂ = √⟨v, v⟩, the distance to infeasibility (4) coincides with the Grassmann distance to ill-posedness defined by Amelunxen and Bürgisser [1]. To see this, first observe that the Euclidean norm is naturally related to angles. Given a linear subspace L ⊆ E and a closed convex cone C ⊆ E, let ∠(L, C) denote the minimal angle between nonzero elements of L and C. Proposition 1 and [1, Proposition 1.6] imply that when ‖·‖ = ‖·‖₂ the distance to infeasibility ν(L) matches the Grassmann distance to ill-posedness of [1]. The flexibility in the choice of norm in E is an interesting feature in our construction of ν(L), as some norms are naturally more compatible with the cone. A suitable choice of norm generally yields sharper results in various kinds of analyses. In particular, in condition-based complexity estimates an appropriately selected norm typically leads to tighter bounds.
The articles [10,21] touch upon this subject; consistently, in [7] a sup-norm is deemed a convenient choice for the perturbation analysis of linear programming problems. We discuss this matter in some depth via induced norms in Section 4.
We conclude this section with a useful characterization of ν(L).
Proposition 2.
If L is a linear subspace of E and L ∩ int(K) ≠ ∅, then the distance to infeasibility (4) can be equivalently characterized as

ν(L) = min_{u ∈ K*, ‖u‖* = 1} max_{x ∈ L, ‖x‖ ≤ 1} ⟨u, x⟩.

Proof. By properties of norms and convex duality, for all u ∈ E we have min_{y ∈ L⊥} ‖u − y‖* = max_{x ∈ L, ‖x‖ ≤ 1} ⟨u, x⟩. Taking the minimum over u ∈ K* with ‖u‖* = 1 yields the result.
Renegar's distance to infeasibility
We next relate the condition measure ν(·) to the classical Renegar distance to infeasibility. A key conceptual difference between Renegar's approach and the approach used above is that Renegar [24,25] considers conic feasibility problems where the linear subspaces L and L⊥ are explicitly defined as the image and the kernel of the adjoint of some linear mapping. For a linear mapping A : F → E between two normed real vector spaces F and E, consider the conic systems (2) and (3) defined by taking L = Im(A). These two conic systems can respectively be written as

find w ∈ F such that Aw ∈ K, Aw ≠ 0,   (5)

and

find y ∈ K*, y ≠ 0, such that A*y = 0.   (6)

Here A* : E → F denotes the adjoint operator of A, that is, the linear mapping satisfying ⟨y, Aw⟩ = ⟨A*y, w⟩ for all y ∈ E, w ∈ F. Let L(F, E) denote the set of linear mappings from F to E, endowed with the operator norm ‖A‖ := max{‖Aw‖ : |w| ≤ 1}, where |·| denotes the norm in F. Let I ⊆ L(F, E) denote the set of mappings à for which the corresponding system (5) is infeasible. The distance to infeasibility of (5) is defined as dist(A, I) := inf{‖A − Ã‖ : à ∈ I}. Observe that (5) is strictly feasible if and only if dist(A, I) > 0.
Induced norm and induced eigenvalue mappings
In addition to our assumption that K ⊆ E is a regular closed convex cone, throughout the sequel we assume that e ∈ int(K) is fixed. We next describe a norm ‖·‖_e in E and a mapping λ_e : E → R induced by the pair (K, e), namely ‖x‖_e := min{t ≥ 0 : te + x ∈ K and te − x ∈ K}. This norm and mapping yield a natural alternative interpretation of ν(L) as a measure of the most interior solution to the feasibility problem x ∈ L ∩ int(K) when this problem is feasible.
For the special case of the nonnegative orthant R^n_+ this norm has a natural interpretation: it is easy to check that for e = (1, …, 1)ᵀ we obtain ‖·‖_e = ‖·‖∞. The geometric interpretation is shown in Figure 2. Define the eigenvalue mapping λ_e : E → R induced by (K, e) as λ_e(x) := max{t ∈ R : x − te ∈ K}. Observe that x ∈ K ⇔ λ_e(x) ≥ 0 and x ∈ int(K) ⇔ λ_e(x) > 0. Furthermore, observe that when x ∈ K, λ_e(x) = max{r ≥ 0 : ‖v‖_e ≤ r ⇒ x + v ∈ K}.
Thus for x ∈ K, λ e (x) is a measure of how interior x is in the cone K.
It is easy to see that ‖u‖*_e = ⟨u, e⟩ for u ∈ K*. In analogy to the standard simplex, let Δ(K*) := {u ∈ K* : ⟨u, e⟩ = 1}. It is also easy to see that the eigenvalue mapping λ_e has the following alternative expression: λ_e(x) = min{⟨u, x⟩ : u ∈ Δ(K*)}. The next result readily follows from Proposition 2 and convex duality.
Proposition 3. If ‖·‖ = ‖·‖_e, then for any linear subspace L ⊆ E

ν(L) = min_{u ∈ Δ(K*)} max_{x ∈ L, ‖x‖_e ≤ 1} ⟨u, x⟩ = max_{x ∈ L, ‖x‖_e ≤ 1} λ_e(x).

In particular, when L ∩ int(K) ≠ ∅ the quantity ν(L) can be seen as a measure of the most interior point in L ∩ int(K).
We next illustrate Proposition 3 in two important cases. The first case is E = R^n with the usual dot inner product, K = R^n_+ and e = (1, …, 1)ᵀ ∈ R^n_+. In this case ‖·‖_e = ‖·‖∞, ‖·‖*_e = ‖·‖₁, (R^n_+)* = R^n_+ and Δ(R^n_+) is the standard simplex Δ_{n−1} := {x ∈ R^n_+ : Σ_{i=1}^n x_i = 1}. Thus λ_e(x) = min_{i=1,…,n} x_i and, for ‖·‖ = ‖·‖_e,

ν(L) = max_{x ∈ L, ‖x‖∞ ≤ 1} min_{i=1,…,n} x_i.   (8)

The second special case is E = S^n with the trace inner product, K = S^n_+ and e = I ∈ S^n_+. In this case ‖·‖_e and ‖·‖*_e are respectively the operator norm and the nuclear norm in S^n. More precisely, ‖X‖_e = max_{i=1,…,n} |λ_i(X)| and ‖X‖*_e = Σ_{i=1}^n |λ_i(X)|, where λ_i(X), i = 1, …, n, are the usual eigenvalues of X. Furthermore, (S^n_+)* = S^n_+ and Δ(S^n_+) is the spectraplex {X ∈ S^n_+ : Σ_{i=1}^n λ_i(X) = 1}. Thus λ_e(X) = min_{j=1,…,n} λ_j(X). In addition, in a nice analogy to (8), for ‖·‖ = ‖·‖_e we have

ν(L) = max_{X ∈ L, ‖X‖_e ≤ 1} min_{j=1,…,n} λ_j(X).
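For the nonnegative-orthant case, the "most interior point" characterisation ν(L) = max over x ∈ L with ‖x‖∞ ≤ 1 of min_i x_i is a linear program, so ν(L) can be computed numerically. The sketch below uses hypothetical names (`nu_orthant`, a basis matrix `B` whose columns span L) and assumes K = R^n_+ with the sup-norm; it is an illustration, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def nu_orthant(B):
    """nu(L) for K = R^n_+ with the sup-norm, L = range(B).

    Solves  max_{z, t} t  subject to  (Bz)_i >= t  for all i
    and  -1 <= (Bz)_i <= 1, which is the LP form of
    nu(L) = max_{x in L, ||x||_inf <= 1} min_i x_i.
    """
    B = np.asarray(B, dtype=float)
    n, k = B.shape
    # decision vector (z_1..z_k, t); linprog minimises, so use -t
    c = np.zeros(k + 1)
    c[-1] = -1.0
    # t - (Bz)_i <= 0      (t is below every coordinate of x = Bz)
    A1 = np.hstack([-B, np.ones((n, 1))])
    # (Bz)_i <= 1 and -(Bz)_i <= 1      (||x||_inf <= 1)
    A2 = np.hstack([B, np.zeros((n, 1))])
    A3 = np.hstack([-B, np.zeros((n, 1))])
    A_ub = np.vstack([A1, A2, A3])
    b_ub = np.concatenate([np.zeros(n), np.ones(2 * n)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (k + 1))
    return -res.fun

# L spanned by (1, 1): the most interior unit-ball point is (1, 1)
val = nu_orthant(np.array([[1.0], [1.0]]))
```

For a subspace that only touches the boundary of the orthant, e.g. the span of (1, −1), the same LP returns 0, consistent with L ∩ int(K) = ∅.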
Sigma measure
The induced eigenvalue function discussed in Section 4 can be defined more broadly. Given v ∈ K \ {0}, define λ_v : E → R as λ_v(x) := max{t ∈ R : x − tv ∈ K}. Define the sigma condition measure of a linear subspace L ⊆ E as follows:

σ(L) := min_{v ∈ K, ‖v‖ = 1} max_{x ∈ L, ‖x‖ ≤ 1} λ_v(x).   (10)

The quantity σ(L) can be interpreted as a measure of the depth of L ∩ K within K along all directions v ∈ K. Proposition 3 and Proposition 5(c) below show that σ(L) coincides with the measure ν(L) of the most interior point in L ∩ K when ‖·‖ = ‖·‖_e. The construction (10) of σ(L) can be seen as a generalization of the sigma measure introduced by Ye [27]. Observe that L ∩ int(K) ≠ ∅ if and only if σ(L) > 0. Furthermore, in this case Proposition 5 below shows that the quantities σ(L) and ν(L) are closely related. To that end, we rely on the following analogue of Proposition 2.
Proof. Assume v ∈ K is fixed. The construction of λ v implies that where on the second line we used the von Neumann minimax theorem [26] (also see [17,Theorem 11.1.]), and the last step follows from the identity max (a) For any norm · in E the following holds where In particular, if K * ⊆ K then ν(L) = σ(L).
(b) The first inequality follows from part (a). For the second inequality observe that since cos(·) is decreasing in [0, π] cos(Θ(K * , K)) = min The second inequality then follows from part (a) as well.
Symmetry measure
Next, we will consider the symmetry measure of L, which has been used as a measure of conditioning [3,4]. This measure is defined as follows. Given a set S in a vector space such that 0 ∈ S, define Sym(0, S) := max{t ≥ 0 : w ∈ S ⇒ −tw ∈ S}.
Observe that Sym(0, S) ∈ [0, 1], with Sym(0, S) = 1 precisely when S is perfectly symmetric around 0. Let A : E → F be a linear mapping such that L = ker(A). Define the symmetry measure of L relative to K as

Sym(L) := Sym(0, A({x ∈ K : ‖x‖ ≤ 1})).   (13)
It is easy to see that Sym(L) depends only on L and K, and not on the choice of A. More precisely, Sym(0, A({x ∈ K : ‖x‖ ≤ 1})) = Sym(0, A′({x ∈ K : ‖x‖ ≤ 1})) if ker(A) = ker(A′) = L. Indeed, the quantity Sym(L) can be alternatively defined directly in terms of L and K, with no reference to any linear mapping A, as the next proposition states.

Proposition 6. Let L ⊆ E be a linear subspace. Then

Sym(L) = max{t ≥ 0 : for each v ∈ K with ‖v‖ ≤ 1 there exists z ∈ K, ‖z‖ ≤ 1, such that z + tv ∈ L}.   (14)

Proof. Let A : E → F be such that L = ker(A), and let S := A({x ∈ K : ‖x‖ ≤ 1}). For w = Av with v ∈ K, ‖v‖ ≤ 1, we have −tw ∈ S if and only if there exists z ∈ K, ‖z‖ ≤ 1, with Az = −tAv, that is, z + tv ∈ ker(A) = L. The claim now follows from (13).

Observe that L ∩ int(K) ≠ ∅ if and only if Sym(L) > 0. It is also easy to see that Sym(L) ∈ [0, 1] for any linear subspace L. The following result relating the symmetry and sigma measures is a general version of [13, Proposition 22].

Theorem 2. Let L ⊆ E be a linear subspace such that L ∩ int K ≠ ∅. Then

Sym(L)/(1 + Sym(L)) ≤ σ(L) ≤ Sym(L)/(1 − Sym(L)),

with the convention that the right-most expression above is +∞ if Sym(L) = 1. If there exists e ∈ int(K*) such that ‖z‖ = ⟨e, z⟩ for all z ∈ K, then

Sym(L)/(1 + Sym(L)) = σ(L).
Proof. To ease notation, let s := Sym(L) and σ := σ(L). First we show that σ ≥ s/(1+s). To that end, suppose v ∈ K, ‖v‖ = 1 is fixed. By Proposition 6 there exists z ∈ K, ‖z‖ ≤ 1 such that z + sv ∈ L. Observe that z + sv ≠ 0 because z, v ∈ K are non-zero and s ≥ 0. Thus x := (z + sv)/‖z + sv‖ ∈ L, ‖x‖ = 1, and x − (s/‖z + sv‖)v = z/‖z + sv‖ ∈ K, so λ_v(x) ≥ s/‖z + sv‖ ≥ s/(1 + s). Since this holds for any v ∈ K, ‖v‖ = 1, it follows that σ ≥ s/(1+s).
Next we show that σ ≤ s/(1−s). Assume s < 1, as otherwise there is nothing to show. Let v ∈ K, ‖v‖ = 1 be such that

max{t ≥ 0 : z + tv ∈ L for some z ∈ K, ‖z‖ ≤ 1} = s.   (15)

At least one such v exists because s = Sym(L) < 1. It follows from the construction of σ(L) that there exists x ∈ L, ‖x‖ = 1 such that λ_v(x) ≥ σ > 0. In particular, x − σv ∈ K. Furthermore, x − σv ≠ 0, as otherwise v = (1/σ)x ∈ L and x + v ∈ L, which would contradict (15). Thus z := (x − σv)/‖x − σv‖ ∈ K, ‖z‖ = 1 and z + (σ/‖x − σv‖)v = x/‖x − σv‖ ∈ L. Since this holds for any v ∈ K, ‖v‖ = 1 satisfying (15), it follows that s ≥ σ/(1 + σ), or equivalently σ ≤ s/(1 − s).

Next consider the special case when there exists e ∈ int(K*) such that ‖z‖ = ⟨e, z⟩ for all z ∈ K. In this case ‖x − σv‖ = ⟨e, x − σv⟩ = ⟨e, x⟩ − σ⟨e, v⟩ = ‖x‖ − σ‖v‖ = 1 − σ in the previous paragraph, and so the second inequality can be sharpened to s ≥ σ/(1 − σ), or equivalently σ ≤ s/(1 + s).
We also have the following relationship between the distance to infeasibility and the symmetry measure.
Extended versions of ν(L) and σ(L)
The construction of the distance to infeasibility ν(L) can be extended by de-coupling the normalizing constraint on u ∈ K* from the norm defining its distance to L⊥. More precisely, suppose |||·||| is an additional norm in the space E and consider the corresponding extension V(L) of ν(L). Proceeding as in Proposition 2, it is easy to see that

V(L) = min_{u ∈ K*, ‖u‖* = 1} max_{x ∈ L, |||x||| ≤ 1} ⟨u, x⟩.

Thus only the restriction of |||·||| to L matters for V(L). We next consider a special case when this additional flexibility is particularly interesting. Suppose L = Im(A) for some linear map A : F → E and define the norm |||·||| in L as follows:

|||x||| := min{|w| : w ∈ F, Aw = x},   (16)

where |·| denotes the norm in F. The proof of Theorem 1 readily shows that in this case V(L) = dist(A, I)/‖A‖. In other words, V(L) coincides with Renegar's relative distance to infeasibility when the norm |||·||| in L is defined as in (16).
The additional flexibility of V(L) readily yields the following extension of Proposition 3: if ‖·‖ = ‖·‖_e for some e ∈ int(K), then for any linear subspace L ⊆ E and any additional norm |||·||| in L,

V(L) = max_{x ∈ L, |||x||| ≤ 1} λ_e(x).

The construction of σ(L) can be extended in a similar fashion by de-coupling the normalizing constraints on v ∈ K and x ∈ L. More precisely, let |||·||| be an additional norm in L and consider the following extension of σ(L):

Σ(L) := min_{v ∈ K, ‖v‖ = 1} max_{x ∈ L, |||x||| ≤ 1} λ_v(x).

The additional flexibility of Σ(L) readily yields the extension of Proposition 5 to the more general case where ν(L) and σ(L) are replaced with V(L) and Σ(L), respectively, for any additional norm |||·||| in L.
Next, consider the following variant of ν(L) that places the normalizing constraint on y ∈ L⊥ instead of u ∈ K*:

ν̄(L) := min{‖y − u‖* : y ∈ L⊥, ‖y‖* = 1, u ∈ K*}.

It is easy to see that ν̄(L) = ν(L) = sin ∠(L⊥, K*) when ‖·‖ = ‖·‖₂. However, ν̄(L) and ν(L) are not necessarily the same for other norms. Like ν(L), its variant ν̄(L) is closely related to Renegar's distance to infeasibility, as stated in Proposition 7 below, which is a natural counterpart of Theorem 1. Suppose A : E → F is a linear mapping and consider the conic systems (2) and (3) defined by taking L = ker(A), that is,

find x ∈ K, x ≠ 0, such that Ax = 0,

and

find w ∈ F, w ≠ 0, such that A*w ∈ K*.

In analogy to dist(A, I), let d̄ist(A, I) denote the smallest operator norm of a perturbation of A that renders the first of these two systems infeasible. A straightforward modification of the proof of Theorem 1 yields Proposition 7. We note that this proposition requires that A be surjective. This is necessary because d̄ist(A, I) = 0 whenever A is not surjective, whereas ν̄(L) may be positive.

Proposition 7. Suppose A : E → F is a surjective linear mapping and L = ker(A). Then d̄ist(A, I)/‖A‖ ≤ ν̄(L) ≤ ‖A⁻¹‖·d̄ist(A, I).

Proof. First, we prove d̄ist(A, I) ≤ ν̄(L)·‖A‖. To that end, let ȳ ∈ L⊥ and ū ∈ K* be such that ‖ȳ‖* = 1 and ν̄(L) = ‖ȳ − ū‖*. Since ȳ ∈ L⊥ = Im(A*) and ‖ȳ‖* = 1, it follows that ȳ = A*v̄ for some v̄ ∈ F with |v̄|* ≥ 1/‖A‖. Let z̄ ∈ F be such that |z̄| = 1 and ⟨v̄, z̄⟩ = |v̄|*. Now construct ΔA : E → F as follows: ΔA(x) := (⟨ū − ȳ, x⟩/|v̄|*)·z̄.
Next, we prove ν̄(L) ≤ ‖A⁻¹‖·d̄ist(A, I). To that end, suppose à ∈ L(E, F) is such that Ã*w̄ ∈ K* for some w̄ ∈ F \ {0}. Since A is surjective, A* is one-to-one and thus A*w̄ ≠ 0. Without loss of generality we may assume that ‖A*w̄‖* = 1, and so |w̄|* ≤ ‖A⁻¹‖. It thus follows that ν̄(L) ≤ ‖A*w̄ − Ã*w̄‖* ≤ ‖A − Ã‖·|w̄|* ≤ ‖A⁻¹‖·‖A − Ã‖. Since this holds for all à ∈ L(E, F) such that Ã*w̄ ∈ K* for some w̄ ∈ F \ {0}, it follows that ν̄(L) ≤ ‖A⁻¹‖·d̄ist(A, I).
Finally, consider the extension of ν̄(L) obtained by de-coupling the normalizing constraint on y ∈ L⊥ from the norm defining its distance to K*. Suppose |||·||| is an additional norm in the space L⊥ and let V̄(L) denote the corresponding extension of ν̄(L). To illustrate the additional flexibility of V̄(L), consider the special case when L = ker(A) for some surjective linear mapping A : E → F and define the norm |||·||| in L⊥ as follows: |||x||| := |Ax|, where |·| denotes the norm in F. The proof of Proposition 7 shows that V̄(L) = d̄ist(A, I)/‖A‖ for this choice of norm.
Objective: The aim of this study was to evaluate the effect of Premenstrual Syndrome (PMS) treatment with selective serotonin reuptake inhibitor (SSRI) on treatment response of refractory hypertension of the patients. Method: This was a triple-blind randomized clinical trial conducted on female patients suffering from refractory hypertension and PMS at the same time. We obtained informed consent from 40 patients who had inclusion criteria and selected 20 patients for the intervention (sertraline 50 mg daily) and 20 for the control groups. The study period was five weeks. The mean of systolic and diastolic blood pressure before and after intervention was measured separately for each individual in each group and the mean of blood pressure of the members of the two groups were compared with each other. Results: The mean age of the participants was 43.60 ± 4.57. In this study, systolic and diastolic blood pressure of both groups reduced after intervention. The mean of systolic blood pressure was reduced by 40.86 mmHg in the intervention group and this reduction was 16 mm Hg in control group after intervention (P<0.001). Comparing this reduction between the two groups, we found that reduction rate in systolic blood pressure of the two groups did not have a significant statistical difference before and after the intervention (P = 0.11). Mean of diastolic blood pressure also showed reduction of 9.17 mm Hg and that of control group showed 6.7-mmHg reduction. Reduction rate of diastolic blood pressure in the intervention group had a statistically significant difference with that of the control group (P<0.017). Conclusion: Administration of sertraline is more effective in controlling diastolic blood pressure in women suffering from refractory hypertension and comorbid PMS.
The premenstrual phase has long been a focus of attention (1). Premenstrual syndrome (PMS) and its more severe and specific form, premenstrual dysphoric disorder (PMDD), have been classified under Not Otherwise Specified (NOS) disorders. While PMS may affect 80% of women of childbearing age, the incidence of PMDD is about 5% (2). PMDD is considered when symptoms are intense enough to disturb daily life or interpersonal relations (3). PMDD has research criteria in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR), and its main symptoms are low mood, stress, emotional instability, and reduced interest in activities (4). Randomized controlled trials in women with PMDD have shown that selective serotonin reuptake inhibitors (SSRIs) have suitable effects with minimal side effects (5)(6)(7)(8)(9). A systematic review also confirmed that SSRIs and clomipramine are effective in reducing premenstrual symptoms. Although PMS is neither a proven cause of hypertension nor a factor reducing the impact of antihypertensive medications (11), hypertension is more common in patients with PMS (12), and it has been suggested that PMS be considered in premenopausal women whose blood pressure is difficult to control (11). Studies have shown that in women with PMS, sympathetic nervous system activity increases in the final stages of the luteal phase while parasympathetic activity decreases (13,14). In the premenstrual phase, there is a tendency toward sodium and fluid retention (2), and in women with severe PMS, parasympathetic activity is reduced during sleep (15). The primary objective of treating hypertension is to minimise cardiovascular complications, which is achievable through reduction of blood pressure and reversible risk factors (16).
No comprehensive study has been conducted on the effect of controlling PMS symptoms with SSRIs. Since the prevalence of PMS is high among women of childbearing age, and uncontrolled refractory hypertension in women can lead to significant consequences, such a study is of prime importance. The aim of this study was to evaluate the effect of PMS treatment with sertraline on the control of refractory hypertension in women with premenstrual syndrome. (Iranian J Psychiatry 11:4, October 2016, ijps.tums.ac.ir)
Materials and Method
This was a triple-blinded randomized controlled study performed from June 2010 to June 2012. One hundred female patients aged 15 to 49 who suffered from refractory hypertension (patients in whom, despite lifestyle modification and three antihypertensive drugs, one of which was a diuretic, systolic blood pressure below 140 mm Hg or diastolic below 90 mm Hg could not be maintained) and who were referred to the cardiology clinic of Tabriz University of Medical Sciences, the main referral centre in northwest Iran, were selected. Among these 100 patients, 40 who also suffered from premenstrual syndrome based on the Daily Record of Severity of Problems (DRSP) were selected after providing written informed consent. The DRSP is a method used for diagnosis and evaluation of DSM-IV premenstrual dysphoric disorder (17). Using Randlist software (Version 1.2, DatInf GmbH, Tubingen, Germany), patients were randomly divided into two groups: one group received 50 mg sertraline daily and the other received placebo. Inclusion criteria were being female, having refractory hypertension, meeting the criteria for premenstrual dysphoric disorder, age between 15 and 49, and providing written informed consent. Exclusion criteria were history of allergy or adverse reaction to sertraline, bipolar mood disorders, depressive and anxiety disorders, psychosis, mental retardation, breastfeeding and pregnancy, acute coronary syndrome, and heart failure. Patients with psychiatric disorders were excluded using a structured routine psychiatric interview. The physicians, patients, and statisticians were unaware of the type of drug consumed (medication or placebo); thus the study was triple-blinded. Patients continued their antihypertensive medications as usual after joining the study, and no change was made in their antihypertensive drug regimen.
Patients in the intervention group received 50 mg sertraline daily (Sobhan Pharmaceutical Company) for five weeks, and patients in the control group received placebo for the same duration. The two groups were followed for five weeks; between weeks 2 and 3, patients in both groups were checked for medication side effects (using a dedicated checklist) in person or by phone, and the checklist of SSRI side effects was completed for all patients. In addition, both groups were assessed by history taking for symptoms of sudden increase or decrease in blood pressure. All participants signed a written consent form, and the Ethics Committee of Tabriz University of Medical Sciences (TUMS) approved the study protocol, which complied with the Declaration of Helsinki. The registration code of this study on the Iranian Registry of Clinical Trials (IRCT) web site is IRCT138904092181N4. At the beginning of the study and at the end of the fifth week, blood pressure of both groups was recorded by a 24-hour TONOPORT device (PAR Medizintechnik GmbH & Co., Berlin, Germany), a portable monitor for ambulatory blood pressure measurement. The mean, minimum, maximum, and nighttime blood pressures were obtained and compared with pre-study blood pressure. Data are presented as mean ± standard deviation (Mean ± SD), and as frequencies and percentages. SPSS Version 15 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. The independent t-test and chi-square test were used to analyze the data. A P value less than 0.05 was considered statistically significant.
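The group comparison described above rests on an independent t-test on the two groups' blood-pressure values. As a minimal sketch of that computation (the sample values in the usage are synthetic placeholders, not the trial data; the paper itself used SPSS), a Welch-style t statistic can be computed with the standard library alone:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent-samples t statistic and its
    Welch-Satterthwaite degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

The resulting statistic and degrees of freedom would then be looked up against the t distribution to obtain the P value.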
Results
Forty patients participated in this study; the mean age of the intervention group was 43.60 ± 4.57 years. The minimum age was 32 years in the intervention group and 33 years in the control group; the maximum age in both groups was 49 years. No significant difference was found between the two groups in terms of age (Log Rank test χ2, 1 degree of freedom = 6.81, P = 0.738). During the study period, three patients from the intervention group were excluded due to medication side effects, and three from the control group withdrew consent. At baseline, mean systolic and diastolic blood pressures did not differ significantly between the two groups on the t-test (P = 0.742 for peripheral systolic and P = 0.796 for peripheral diastolic blood pressure). At the end of the study, systolic and diastolic blood pressures of both groups had decreased. Mean systolic blood pressure fell by 40.86 mm Hg in the intervention group and by 16 mm Hg in the control group. Mean diastolic blood pressure fell by 17.9 mm Hg in the intervention group and by 6.7 mm Hg in the control group (Log Rank test χ2, 1 degree of freedom = 8.15, P < 0.001). Blood pressure of both groups at the start and end of the study is shown in Table 1. The reduction in systolic and diastolic blood pressure after prescribing the medication was statistically significant in the intervention group (P < 0.001 for both).
Table 1. Blood Pressure of the Intervention and Control Groups at the Start and End of the Refractory Hypertension Study
Mean and standard deviation of diastolic blood pressure
Figure 1. Mean Diastolic Blood Pressure in Both Groups at the Beginning and End of the Refractory Hypertension Study
In addition, the reduction in systolic and diastolic blood pressure at the end of the study was statistically significant in the control group (Log Rank test χ2, 1 degree of freedom = 10.52, P < 0.001). The reduction in systolic blood pressure did not differ significantly between the two groups before and after the intervention (Log Rank test χ2, 1 degree of freedom = 11.2, P = 0.11). The reduction in diastolic blood pressure, however, differed significantly between the two groups, and was greater in the intervention group (Log Rank test χ2, 1 degree of freedom = 4.88, P = 0.017) (Figure 1). Effect sizes for the reduction in diastolic BP were relative risk 1.22 (95% CI, 0.94 to 1.50) in the intervention group and relative risk 2.48 (95% CI, 1.85 to 3.11) in the control group. Placebo had a smaller effect than sertraline on the patients' diastolic blood pressure (Log Rank test χ2, 1 degree of freedom = 6.24, P = 0.017). The side effects of five weeks of sertraline at 50 mg daily that were clinically important for patients were nausea (45%), stomach ache (25%), drowsiness (20%), increase or decrease of appetite (20%), vomiting (15%), and feelings of inner tension (15%). No side effects were reported in the placebo group. It should be noted that a major part of these side effects is clinically negligible and can also be seen with placebo.
Discussion
Since the blood pressure of both groups was measured with a sphygmomanometer at a cardiology clinic, the stress and anxiety caused by the measurement may affect systolic blood pressure more than diastolic. After the intervention, systolic blood pressure also fell in the control group, which might have masked a possible effect of the medication on systolic blood pressure.
In addition, in this study, baseline blood pressure of both groups was measured with an ordinary sphygmomanometer, whereas post-intervention blood pressure was registered and controlled by 24-hour monitoring; checking blood pressure at clinics can cause momentary hypertension in some people (18). In a case report, two women with recurrent refractory hypertension were diagnosed with PMS; after the PMS was treated, with 25 mg amitriptyline and 1.5 mg bromazepam daily during the luteal phase of each cycle in one case, and with 50 mg sertraline daily plus 1.5 mg bromazepam twice a day in the other, alongside the antihypertensive drugs, blood pressure was well controlled. The authors suggested that in women with refractory hypertension who are not yet menopausal, PMS and its treatment should be considered alongside antihypertensive treatment (11). In a study conducted on women with PMS, diastolic blood pressure (DBP), heart rate (HR), and systolic blood pressure (SBP) were higher than in healthy subjects, and the difference was statistically significant (19). Considering that the reduction in diastolic blood pressure in the group receiving PMS treatment (intervention group) was significantly different from that of the control group, the co-occurrence of PMS and HTN may have a greater effect on DBP than on SBP; this should be investigated in further studies. In one study, plasma aldosterone level was higher in women with PMS during the luteal phase than in the control group; water and salt retention during the luteal phase takes place in women with PMS (20), and this retention may be one reason for hypertension in these patients. Some studies have shown that norepinephrine concentration in the luteal phase is considerably higher than in the follicular phase (21,22).
An increase in norepinephrine concentration could be one reason for hypertension. Some studies have shown that sympathetic nervous system (SNS) activity is greater during the luteal phase than during the follicular phase (23)(24)(25)(26). Increased SNS activity and decreased parasympathetic activity during the luteal phase could explain hypertension in people with PMS (27).
According to the findings of this study, it is suggested that in women with refractory hypertension who are not yet menopausal, PMS be considered as a comorbid disease and be diagnosed and treated, for better control of diastolic hypertension.
In terms of the side effects of sertraline during the five weeks of prescription, some side effects were more frequent in this study than in others; for instance, nausea was reported by 45% here versus 27% in another study (25), and drowsiness by 20% here versus 14% in another study (28). On the other hand, some reported side effects were less frequent than in other studies; for example, no significant sexual side effects were reported here. It should be mentioned that, due to the small sample size, short treatment period, and type of medication consumed, generalising the medication side effects of this study or comparing them with the results of other studies is not possible.
Limitations
In this study, improvement or lack of improvement in patients' PMS symptoms was not assessed, and the only criterion was prescription of the medication. It is recommended that future studies consider the control or lack of control of PMS symptoms and its effect on controlling patients' blood pressure.
Conclusion
According to the findings of this study, prescribing 50 mg sertraline daily to women with refractory hypertension and comorbid PMS is more effective than placebo at controlling DBP; this effect is not attributable to an effect on anxiety, since people with anxiety disorders were excluded from this study. For systolic blood pressure, the effect did not differ significantly from that of placebo. Thus, further studies should be designed with larger populations or different subtypes of patients, and with other common treatments of PMS. Finally, since sertraline was started at a relatively high dose in the trial group, this may explain the higher rate of side effects reported here compared with other studies. | 2018-04-03T04:53:11.782Z | 2016-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "f52d55dca5e9336756847396f150517b4e2f351b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f52d55dca5e9336756847396f150517b4e2f351b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219316983 | pes2o/s2orc | v3-fos-license | Prosumers Matching and Least-Cost Energy Path Optimisation for Peer-to-Peer Energy Trading
Potential benefits of peer-to-peer energy trading and sharing (P2P-ETS) include the opportunity for prosumers to exchange flexible energy for additional income, whilst reducing the carbon footprint. Establishing an optimal energy routing path and matching energy demand to supply with capacity constraints are some of the challenges affecting the full realisation of P2P-ETS. In this paper, we proposed a slime-mould inspired optimisation method for addressing the path cost problem for energy routing and the capacity constraint of the distribution lines for congestion control. Numerical examples demonstrate the practicality and flexibility of the proposed method for a large number of peers (15 – 2000) over existing optimised path methods. The result shows up to 15% cost savings as compared to a non-optimised path. The proposed method can be used to control congestion on distribution links, provide alternate paths in cases of disruption on the optimal path, and match prosumers in the local energy market.
NOMENCLATURE
A_{i,j}  The weight along the link (i, j), representing the link cost.
c_{i,j}  The capacity of the link (i, j).
D_{i,j}  The conductivity/total traffic of the tube/link.
E  Set of network links (i, j) connecting the prosumers.
G  The strongly connected network graph.
G*  A subgraph of G.
I_o  A constant flux/energy demand flowing from n_i to n_j.
L  The length of the tube.
N  Total number of actors.
n_i  Source prosumer.
n_j  Destination prosumer.
P  The path traversed by traffic flow x_{i,j} from producer n_i to consumer n_j.
p_i, p_j  The pressure at n_i and n_j.
Q_{i,j}  Flux/energy demand on the tube/link connecting n_i and n_j.
R  Set of nodes of size |M|.
t  Iteration time.
V  Interconnected nodes representing the set of actors.
X, Y  Bipartite sets of graph G.
x_{i,j}  The traffic/demand flow on the link (i, j).
I. INTRODUCTION
In recent years, the smart grid (SG) has emerged with intelligent monitoring, control, and management of the traditional power grid, offering increased automation and bi-directional communication to improve the efficiency of the grid [1], [2]. Peer-to-peer energy trading and sharing (P2P-ETS) is an SG application for transacting energy among a community of connected peers to realise better grid performance, including scalability, robustness, and reduction in carbon emissions [3]-[5]. One challenge of P2P-ETS is the large influx of distributed energy resources (DER) actively being connected to the grid, which makes grid management, power flow, and control challenging. To alleviate these challenges, distributed algorithms have been proposed for energy coordination and control [4]-[10]. The authors in [4] proposed pair-matching strategies for the prosumer energy trading market. In [7], an energy exchange problem among several microgrids is addressed to minimise global operation costs. Study [10] presented a framework for P2P energy trading under the assumption that the underlying communication link is perfect. The impact of imperfect communication links is assessed in [5], while [6] addressed an optimal routing algorithm to facilitate communication among microgrids. The aforementioned literature has paid too little attention to the underlying energy path/route connecting these prosumers. Energy path optimisation is a fundamental distribution-grid challenge whose primary aim is to minimise the weight/cost of the path connecting generators to loads. Energy loss in long-distance transmission, and other associated costs, correlate directly with the distance the energy travels. For instance, Ofgem reported that 66.48% of an electricity bill is a service charge, with approximately 24% related to network characteristics including the distance charge [11].
One solution to minimising path cost is the creation of multiple, redundant links between the generators and the loads, at the price of increased network cost and complexity. As numerous DERs are integrated, energy path complexity increases; thus, an optimised path is actively needed so that the energy demand/supply of a prosumer (a producer and consumer of energy) can be routed to its energy trading target based on least-cost paths and capacity constraints. This will in turn reduce the energy routing cost, improve control schemes, and reduce energy congestion on the distribution lines.
While several techniques have been proposed for path optimisation problems in cyber-physical networks, including the Dijkstra algorithm [12] and the Bellman-Ford algorithm [13], the computation time of these algorithms becomes excessive as the network scale grows large [14]. To address this computational complexity, nature-based algorithms, including bio-inspired techniques such as genetic algorithms, particle swarm optimisation, and ant colony optimisation, have emerged [14]. Recently, the dynamics of Physarum polycephalum, also known as slime mould, have been shown to solve many graph network problems efficiently [14]-[17].
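For reference, the Dijkstra baseline mentioned above admits a compact implementation; the sketch below (in Python, over a small hypothetical adjacency list rather than any of the cited test networks) returns single-source least-cost distances:

```python
import heapq

def dijkstra(adj, src):
    """Least-cost distances from src on a graph with non-negative edge
    weights; adj maps node -> list of (neighbour, weight) pairs."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale priority-queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With a binary heap this runs in O(|E| log |V|) per query, which is the cost the bio-inspired methods aim to amortise when many source-sink pairs must be served in parallel.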
Interestingly, slime mould optimisation is particularly suited to the SG network, as it can model both the shortest-path cost problem for energy routing and the capacity constraint of the distribution lines for congestion control. This combination is a major strength of the proposed scheme compared to traditional algorithms that model only shortest-path problems. Besides, for P2P-ETS, where prosumers are distributed and diverse, a shortest-path-finding algorithm that establishes optimised paths among connected peers in parallel is of utmost importance. Parallel execution reduces network computation time and improves efficiency. This would not only result in low cost but would also reduce the search time significantly, especially in a network as large as P2P-ETS.
The main contributions of this work can be summarised as follows:
• we present a slime mould-inspired approach for SG to determine the least-cost optimal path between consumers and producers whilst implementing different scenarios for energy network representation, and routing energy demand between the prosumers;
• a specific case of maximum flow capacity in the distribution network is considered to realise an optimal path for energy flow in a capacitated network for congestion control and to provide an alternate path in scenarios of disruption along the optimal energy distribution path;
• finally, we extend the optimised path algorithm for perfect matching of the energy prosumers, ensuring all consumers are matched with producers to collectively reduce the network costs.
The remaining sections are organised as follows: relevant literature is reviewed in Section II. Section III introduces the problem formulation, the slime mould based optimised path algorithm, and an extension of the algorithm for energy matching between producer and consumer. Evaluation of the developed solution is discussed in Section IV, including its application to the optimal path for energy flow as well as numerical examples and results. Section V summarises the paper and identifies future work.
II. LITERATURE REVIEW
The theory of complex networks, involving graph modelling of real-world networks, and evolutionary computation algorithms have increasingly been used in SG applications for path optimisation, resource discovery, and power routing. SG has been modelled as a complex network to analyse and adapt the distribution of power flow [18]. Optimisation objectives in SG, including least-cost flow and shortest-path-finding problems, can be solved using techniques from graph theory. Shortest-path algorithms find the minimum-weight or most efficient path in the network: they identify a route between two vertices such that the sum of the weights of its constituent edges is minimised. This is equivalent to finding the optimal, and also the alternate, paths for the flow of power from a generating station to consumer ends [19]. By integrating complex network theory and evolutionary algorithm concepts, a multi-objective minimisation problem can be formulated [20]. This objective function combines cost elements, related to the number of electric cables (graph links), and several metrics that quantify properties beneficial for SG, including energy exchange at the local scale (considering high robustness and resilience). A method to manage active power in distribution systems using an application of graph theory, specifically the successive shortest path algorithm, is introduced in [21] for optimal power generation, dispatch, and power flow. The algorithm is implemented in a distributed way, with simulations proving its efficiency for optimal operation, congestion management, and power generation cost. Furthermore, to minimise time-out delays in power system networks during outages, Hemalatha et al. [19] investigated the transmission of power through the optimal path for quick reconfiguration of power system components using the Bellman-Ford algorithm. (VOLUME 8, 2020)
The solution is modelled for a given generation-load pair through the optimal path, considering the capacity of the transmission line, voltage stability, the shortest path (minimum losses), the priority of loads, and the power balance between generation and demand. The algorithm was applied to a practical 230 kV network to demonstrate its effectiveness. To maintain the stability of a microgrid (MG) system through load shedding, study [17] proposed a Physarum-based hybrid optimisation algorithm with ant colony optimisation (PM-ACO). The model improved the selection probability of important items and introduced a positive feedback process to generate optimal solutions. Experimental results demonstrate that the proposed PM-ACO algorithm has stronger robustness and a higher convergence rate. The authors in [22] extended the Dijkstra algorithm to a multi-objective shortest path algorithm to design a spanning graph of a communication infrastructure connecting the Phasor Measurement Units to the control centre.
From the preceding discussion, the Dijkstra, Bellman-Ford, and Kruskal algorithms are examples of single-objective shortest-path algorithms used in the SG applications discussed. These algorithms track the shortest paths from a single source to one determined node in the graph and find the shortest paths independently [22]. However, given the structure of power networks, and specifically in large P2P-ETS networks, it is sometimes necessary to find shortest paths from one central bus and/or multiple points to multiple buses (e.g., phasor measurement units, households) simultaneously, checking overlapping paths in the routing problem. This requires a multi-objective shortest-path algorithm, such as slime mould optimisation, to find the best routes for connecting SG components and ensure least-cost power flow.
This motivated the present study to harness the potential of slime mould-inspired path optimisation, first, to match prosumers in the microgrid and facilitate P2P-ETS over the least-cost optimal path between them; and second, to route energy demand between producers and consumers while reducing the chances of overloading the distribution lines.
III. OPTIMAL PATH PROBLEM FORMULATION
Consider an energy network model that focuses on least-cost path optimisation and energy-demand routing among prosumers at the tertiary level of control, without explicitly touching the physicality of the underlying distribution network (e.g., power flow analysis). Here, least-cost path optimisation refers to the cost of routing energy demands over a distribution link in relation to the weight or cost of the links used. Interactions among these prosumers are illustrated in Fig. 1, which shows the power and communication connections between prosumers, consumers, and the grid [23].
From Fig. 1, for power flow from the grid to the consumer, the optimal path would be through the connection to the prosumer, then to the consumer. The same analogy applies to other actors (prosumers, consumers, and producers) illustrated in Fig. 1. Thus, each actor (i = 1, · · · , N ) has its computed energy needs (demands or supplies) to be satisfied in the network. For instance, actor i is a producer (source) that desires to sell energy to consumer j (sink). The energy network is described as a connected graph G = (V , E), where V = {1, · · · , N } denotes the set of actors.
G is a strongly connected directed graph, where every node/actor in the network is reachable from every other node, i.e., there exists a directed link e_{ij} ∈ E, denoted (i, j), from node n_i to node n_j. E(t) ⊆ V × V is a set of links that changes over time according to the state of each link at time t. Each directed link is characterised by its capacity c_{i,j} (the maximum power that can flow through the link) and the power flow x_{i,j} from the producer to the consumer. A denotes the set of link weights, which represent the link costs. Representing |V| as n, A can be expressed as the symmetric n × n adjacency matrix

A = [A_{i,j}]_{n×n},  (1)

where A_{i,j} denotes the weight on the link (i, j). Given a source node i ∈ V, a sink node j ∈ V, and a cost function φ_{i,j}(x_{i,j}) representing the cost of the path P traversed by energy flow x_{i,j} from n_i to n_j, the optimal cost path problem can be formulated as

min_{P ∈ P(i,j)} Σ_{(i,j)∈P} φ_{i,j}(x_{i,j})  (2a)
s.t. Σ_{j:(i,j)∈E} x_{i,j} − Σ_{j:(j,i)∈E} x_{j,i} = 0 for every node i that is neither the source nor the sink,  (2b)
x_{i,j} ≤ c_{i,j} for all (i, j) ∈ E,  (2c)
x_{i,j} ≥ 0 for all (i, j) ∈ E,  (2d)

where P(i, j) represents the set of all paths from producer n_i to consumer n_j. Equation (2b) is the energy conservation constraint: all energy entering a node (other than the source or the sink) leaves that node. Equation (2c) is the capacity constraint, requiring the energy flow x_{i,j} to be no greater than the capacity c_{i,j} of the link, while (2d) is the non-negativity constraint, requiring a non-negative flow from n_i to n_j.
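The conservation, capacity, and non-negativity constraints above can be checked mechanically for any candidate flow. The helper below is an illustrative sketch (a hypothetical utility, not part of the proposed algorithm): it verifies balance at intermediate nodes, the capacity bound, non-negativity, and that the source actually injects the demanded amount:

```python
def feasible_flow(x, c, demand, source, sink, nodes):
    """Check a candidate flow against the conservation, capacity and
    non-negativity constraints of the optimal-path formulation.
    x and c map each directed link (i, j) to its flow / capacity."""
    # non-negativity and capacity on every link
    if any(f < 0 or f > c[e] for e, f in x.items()):
        return False
    # conservation at every intermediate node
    for u in nodes:
        if u in (source, sink):
            continue
        inflow = sum(f for (i, j), f in x.items() if j == u)
        outflow = sum(f for (i, j), f in x.items() if i == u)
        if abs(inflow - outflow) > 1e-9:
            return False
    # the source must inject exactly the demanded flow
    net = sum(f for (i, j), f in x.items() if i == source) \
        - sum(f for (i, j), f in x.items() if j == source)
    return abs(net - demand) < 1e-9
```

Such a check is useful for validating the output of any routing heuristic before committing energy to the distribution links.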
A. PHYSARUM POLYCEPHALUM ALGORITHM
The slime mould based optimised path algorithm is briefly introduced in this section. Physarum polycephalum forms a dynamic tubular network connecting peers, in which the diameters of tubes carrying large fluxes grow to expand their capacities, while tubes that are not used decline and disappear entirely. The segments of the slime mould's tubes may be modelled as the edges of a graph network, with intersection points representing the nodes. The parameter Q_{i,j} represents the flux on the tube connecting node n_i to node n_j. According to Kirchhoff's law and the law of conservation of flow, the flux input at a source is equal to the total flux output over all node sets [24], [25], and at any other node the sum of flux flowing in equals the sum of flux flowing out [25]. Flow conservation may be expressed as

Σ_i Q_{i,j} = −I_o if n_j is the source, +I_o if n_j is the sink, and 0 otherwise,  (3)

where I_o is constant and represents the flux flowing from the source node or into the sink node. Assuming the flow along the tubes is approximated by Poiseuille flow, the flux Q_{i,j} can be calculated as

Q_{i,j} = (D_{i,j} / L_{i,j}) (p_i − p_j),  (4)

where D_{i,j} is the conductivity of the tube, L_{i,j} is the length of the link, and p_i is the pressure at node i. To calculate the pressure at each node, equation (4) is substituted into (3) to give

Σ_i (D_{i,j} / L_{i,j}) (p_i − p_j) = −I_o, +I_o, or 0, as in (3).  (5)

By setting the sink to the basic pressure level p_j = 0, all p_i and Q_{i,j} can be determined. To model the adaptive behaviour of the slime mould, the conductivity D_{i,j} is assumed to change over time as the flux increases or decreases, evolving as

d D_{i,j} / dt = f(|Q_{i,j}|) − γ D_{i,j},  (6)

where γ is a constant representing the decay rate of the tube, which ensures convergence of the algorithm when set to 1. Equation (6) is called the adaptation equation, depicting the relationship between conductivity and flux on the link: the conductivity diminishes when the flux on a link is zero and increases with the amount of flux on the link. f(·) is assumed to be a monotonically increasing continuous function satisfying f(0) = 0.
Assuming f(|Q|) = |Q| and γ = 1, Physarum always converges to the shortest path [25]. For an iterative process, adopting the functional form f(Q) = |Q|, the discrete update (7) is used instead of (6) to calculate the conductivity of the link:

D^{t+1}_{i,j} = f(|Q^t_{i,j}|) + (1 − γ) D^t_{i,j},  (7)

where D^t_{i,j} is the value of D_{i,j} at the t-th iteration. Thus, based on the feedback from the iteration, critical links are preserved and the others are deleted, forming a Physarum spanning tree. Whereas other algorithms progress step by step in a single direction, the slime mould algorithm samples a variety of directions in parallel and, based on the samples, discards less optimal directions as it progresses toward the solution. After adaptation, the algorithm selects another set of source and sink nodes and repeats the calculations.
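The iteration described above can be sketched end to end. The code below is a minimal illustration, not the paper's Algorithm 1: the five-node graph in the usage is hypothetical, the demand is fixed at one unit, and the conductivity update is a damped variant (averaging |Q| with the previous D) chosen here for numerical stability:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def physarum_flux(n, edges, source, sink, iters=100):
    """Physarum dynamics on an undirected graph: solve the pressure
    system, compute Poiseuille fluxes, adapt the conductivities.
    edges maps (i, j) with i < j to the link length L_ij; returns the
    absolute flux on every link after `iters` updates."""
    D = {e: 1.0 for e in edges}          # initial conductivities
    I_o = 1.0                            # unit source-to-sink demand
    Q = {e: 0.0 for e in edges}
    for _ in range(iters):
        g = {e: D[e] / L for e, L in edges.items()}   # D_ij / L_ij
        # pressure equation: Laplacian system with p[sink] grounded at 0
        free = [u for u in range(n) if u != sink]
        pos = {u: k for k, u in enumerate(free)}
        A = [[0.0] * len(free) for _ in free]
        b = [0.0] * len(free)
        for (i, j), gij in g.items():
            for u, v in ((i, j), (j, i)):
                if u == sink:
                    continue
                A[pos[u]][pos[u]] += gij
                if v != sink:
                    A[pos[u]][pos[v]] -= gij
        b[pos[source]] = I_o
        p_red = gauss_solve(A, b)
        p = [0.0] * n
        for u in free:
            p[u] = p_red[pos[u]]
        # Poiseuille flux, then damped conductivity adaptation
        Q = {(i, j): g[(i, j)] * (p[i] - p[j]) for (i, j) in edges}
        D = {e: 0.5 * (abs(Q[e]) + D[e]) for e in edges}
    return {e: abs(q) for e, q in Q.items()}
```

On a hypothetical five-node graph, `physarum_flux(5, {(0, 1): 4.0, (0, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0, (1, 4): 2.0}, 0, 4)` drives the flux on the least-cost path 0-2-3-4 toward the unit demand while the flux on the longer path 0-1-4 decays geometrically.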
Example 1: Consider the network shown in Fig. 2, in which the shortest path between nodes 1 and 5 is to be determined. The number along each edge represents its weight. Each source node creates a unit flow of demand, which is consumed at destination nodes called sinks. To start the implementation, the conductivity is first initialised; as the flux moves along the edges, the conductivity is recorded, as shown in Fig. 3. It can be observed that the flux along edges (1, 3) and (3, 5) converges to 1, signifying the shortest path; the same result is obtained with traditional algorithms, including the Dijkstra algorithm, but with a longer convergence time.
Here, convergence is achieved when the strongest flow moves from the source to the sink along the least-cost shortest path and remains constant (the flux on each arc no longer changes) until the end of the simulation.
Further comparisons of the convergence time against traditional algorithms are given in Section IV-E. Interestingly, if there is a unique shortest path from source to sink and the dynamics stabilise, the diameter of the edges on the shortest path converges to a value > 0, and to 0 otherwise; that is, the link dynamics drive unused links to zero, removing them from the design. This suggests that all paths except the optimised one can be successfully removed from the network design, saving cost by eliminating redundant links in the network.
B. OPTIMAL PATH AMONG PROSUMERS
The Physarum algorithm is modified for energy network optimisation as follows. Nodes are defined as energy producers, consumers, and prosumers; links are the energy distribution lines connecting producers to consumers. In a P2P-ETS, energy producers can assume the role of consumers or prosumers. To be more representative of the distribution network, an additional capacity-constraint parameter c_{i,j} is included to ensure congestion control of the power flow on the distribution lines. The single-source, single-sink Physarum model of (5) is also extended to multiple sources and multiple sinks for distributed implementation:

Σ_i (D_{i,j} / A_{i,j}) (p_i − p_j) = −I_o^{i,j} if j is a producer, +I_o^{i,j} if j is a consumer, and 0 otherwise, subject to |Q_{i,j}| ≤ c_{i,j},   (8)

where I_o^{i,j} represents the energy flow between producer i and consumer j, A_{i,j} is the cost on distribution link (i, j), and c_{i,j} is the capacity of the link. This is a practical means of ensuring that the distribution line is not congested, for control purposes. In the Physarum model, the flow on each link is continuous during iterations, and the cost on each link is updated with the flow based on peer activities at time t. Traditional shortest-path algorithms, including the Dijkstra algorithm, reflect only one link attribute, the length, whereas energy routing has two attributes: the energy flow and the link cost (the edge weight as a function of the flow). As a result, most existing approaches use Dijkstra as the path-searching algorithm and then introduce a second algorithm to optimise cost, as in [21], [26]. The developed algorithm is presented in Algorithm 1.
Algorithm 1 Proposed Optimal Path Algorithm for Prosumers
1: Input: the graph G = (V, E), the link cost A_{i,j}, the total flow traffic D_{i,j}, p_j = 0, the demand flow Q_{i,j}, the capacity c_{i,j}, γ, and the step-size α, for all i ∈ N
2: for t > 0 do
3:   Calculate p_i according to (8)
4:   Calculate Q_{i,j} according to (4)
5:   Update D_{i,j} using (7), ∀ i, j ∈ N
6:   Go to the next time slot until the maximum time-step is reached
7: end
C. ENERGY DEMAND PERFECT MATCHING
This section presents an extension of the algorithm that realises perfect demand matching among the prosumers in the network. Consider a network of 5 consumers and 5 producers, where each consumer has a demand of 10kWh of energy. The demand of Consumer A can be satisfied by Producers B, C, and D; however, among these, the producer reachable at minimum cost should be the one to satisfy the demand. Thus, combining the proposed slime mould algorithm with the Hungarian matching algorithm (also called the Kuhn-Munkres algorithm), the proposed optimised path algorithm is extended to match prosumers in the network perfectly. This ensures that all consumers are matched to least-cost producers while meeting their demand requirements. The Hungarian algorithm solves the maximum-weight (perfect) matching problem in a complete bipartite graph represented by the adjacency matrix described in (1). The proposed algorithm is discussed in the next section.
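The Hungarian (Kuhn-Munkres) step can be exercised on its own. The sketch below uses SciPy's implementation on an assumed 3 × 3 cost matrix (the actual 5-producer/5-consumer costs are not given here); rows play the role of consumers, columns of producers, and entries the path costs produced by the routing stage.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = consumers, columns = producers; entries = path costs from the routing
# stage. The 3 x 3 matrix is an assumed toy instance, not data from the paper.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

row_ind, col_ind = linear_sum_assignment(cost)  # minimum-cost perfect matching
total = int(cost[row_ind, col_ind].sum())
```

Consumer 0 is matched to producer 1, consumer 1 to producer 0, and consumer 2 to producer 2, for a total cost of 5; every other perfect matching on this matrix costs at least 6.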
D. PROSUMERS MATCHING ALGORITHM
Let M̄ be a subset of E of the graph G. If any two links of M̄ are disjoint in G, M̄ is a matching of G, and the two end nodes of a link of M̄ are matched in G. Here, the energy demands of n consumers are to be satisfied by m producers in the network, where a consumer's demand can be met by one or more producers. The aim is to ensure that each consumer is matched with a producer along the least-cost path. Consider the bipartite subgraph G*_{N_i, N_j}, where N_i = {n_{i,1}, n_{i,2}, ..., n_{i,n}} and N_j = {n_{j,1}, n_{j,2}, ..., n_{j,m}} denote the sets of consumers and producers in the network, respectively. Note that a bipartite graph is a graph whose nodes can be divided into two independent, non-empty sets.
To accommodate the matching, Algorithm 1 becomes Algorithm 2. After the initial path optimisation, Step 6 of Algorithm 2 searches for a maximum matching in the subgraph G*_{N_i, N_j}, with G = (X ∪ Y, E) for bipartite sets X, Y. If one is found, the algorithm stops; otherwise, it proceeds to Step 9. The algorithm then initialises an empty matching (Steps 11 and 12) and searches for augmenting paths in the subgraph, flipping the matched and unmatched links along the search path (Steps 14 and 15).
Algorithm 2 Proposed Optimal Path and Prosumers Matching Algorithm
1: Input: the graph G = (V, E), the link cost A_{i,j}, the total flow traffic D_{i,j}, p_j = 0, the demand flow Q_{i,j}, the capacity c_{i,j}, γ, and the step-size α, for all i ∈ N
2: for t > 0 do
3:   Calculate p_i according to (8)
4:   Calculate Q_{i,j} according to (4)
5:   Update D_{i,j} using (7)
IV. NUMERICAL SIMULATION AND RESULT ANALYSIS
To test the effectiveness of the proposed optimised path solution, results for a network of 10 prosumers are presented in Fig. 4. Simulations are performed in MATLAB using random graphs, which are presumed to represent real-world distributed systems [26]. Arbitrary costs are assigned to the connecting links, with α = 1 (α serves as a multiplier in calculating D_{i,j} from the demand flow Q_{i,j}) and γ = 1 (γ ensures faster convergence of the algorithm; a sensitivity analysis of α and γ is given in Section IV-G). In the following, simulation results demonstrating the performance of the system under different network scenarios are presented.
A. SINGLE PRODUCER AND SINGLE CONSUMER
Convergence of the system is achieved when the flow runs from the source to the destination on the shortest path and remains there until the simulation ends. Since this is the single producer-consumer pair problem, the calculation in step 4 of Algorithm 1 follows (5). The total demand convergence is illustrated in Fig. 5, while Fig. 4 shows the energy network with the weights on the links. The proposed solution solves the network problem by first locating the consumer via the least-cost path and then transmitting a demand of 10kWh from Producer 1 to Consumer 10. As evident in Fig. 5, the total flow on edges 1 −→ 2 −→ 3 −→ 4 −→ 10 converged to 10kWh, indicating the shortest path from Producer 1 to Consumer 10, while the conductivity on the other edges converged to 0.
B. A SINGLE PRODUCER AND n CONSUMERS
For the case of connecting a single producer to multiple consumers, the calculation in step 4 of Algorithm 1 generalises (5) by placing a sink term at every consumer node. The resulting flow plot for transmitting a demand of 10kWh from Producer 1 to Consumers 6 and 10 in the network layout of Fig. 4 is illustrated in Fig. 6. The plot shows the total flow on each link, with higher flows on the optimal paths to the destinations. For instance, the flows on links 1 −→ 2 −→ 3 −→ 4 −→ 10 and 1 −→ 2 −→ 5 −→ 6 are the highest, while 7 −→ 8 −→ 9 −→ 10 shows an alternate (less optimal) path to the destination consumers. The other links converged to 0, since they are not involved in the optimised path to the consumers.
C. n PRODUCERS AND A SINGLE CONSUMER
In this setting, demands flow from multiple producers in the network to a single consumer.
With this configuration, the calculation in step 4 of Algorithm 1 generalises (5) by placing a source term at every producer node. Similarly, from the total flow convergence in Fig. 7, it can be observed that the routes 1 −→ 2 −→ 3 −→ 4 −→ 10 and 7 −→ 8 −→ 9 −→ 10 converge to higher flow values than the links 2 −→ 5 −→ 6, while the other unused links converged to 0.
D. n PRODUCERS AND n CONSUMERS
Further, some prosumers are set as producers and others as consumers, and a demand of 10kWh is transmitted among them, in this case to two consumers. The calculation in step 4 of Algorithm 1 follows (8). The convergence result in Fig. 8 demonstrates that the proposed optimised path algorithm solves the network problem by transmitting a demand of 10kWh each from Producers 1 and 7 to Consumers 4 and 10 using the least-cost paths.
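Relative to the single-source, single-sink system (5), the multi-prosumer case changes only the right-hand side of the pressure system: an injection at every producer and a withdrawal at every consumer. The 4-node graph, demands, and grounded node below are assumptions for illustration; capacity handling is omitted from this sketch.

```python
import numpy as np

def physarum_multi(n, edges, injections, ground, iters=200, floor=1e-12):
    """`injections[k]` > 0 for producers, < 0 for consumers (must sum to zero).
    One node is grounded to fix the overall pressure level (p_ground = 0)."""
    D = {e: 1.0 for e in edges}
    Q = {}
    for _ in range(iters):
        A = np.zeros((n, n))
        b = np.array(injections, dtype=float)
        for (i, j), L in edges.items():
            g = D[(i, j)] / L
            A[i, i] += g
            A[j, j] += g
            A[i, j] -= g
            A[j, i] -= g
        A[ground, :] = 0.0
        A[ground, ground] = 1.0
        b[ground] = 0.0
        p = np.linalg.solve(A, b)
        Q = {(i, j): D[(i, j)] / L * (p[i] - p[j]) for (i, j), L in edges.items()}
        D = {e: max(abs(q), floor) for e, q in Q.items()}
    return Q

# Producers at nodes 0 and 3 (one unit each); consumers at nodes 1 and 2.
edges = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 2): 3}
flux = physarum_multi(4, edges, injections=[1, -1, -1, 1], ground=2)
```

Each producer ends up feeding its nearest consumer (0 → 1 and 3 → 2), and the costly chord (0, 2) withers away.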
E. COMPARISON WITH OTHER SHORTEST-PATH ALGORITHMS
Whilst this work is envisioned for energy networks, the solution can also be applied to other cyber-physical networks requiring path-finding algorithms. In this section, the proposed shortest path algorithm is compared with traditional path-finding algorithms, including ACO, Dijkstra [12], and IPPA (improved Physarum polycephalum algorithm) [14], using the datasets shown in Table 1. The efficiency of the algorithm is tested over networks with random and varying topologies, with network sizes ranging from 15 to 2000 nodes [14] and link costs ranging from 1 to 100, and is analysed based on algorithm execution time. The reported probability is the probability of establishing a connection between the nodes.
1) COMPARISON BASED ON EXECUTION TIME
The performance of an algorithm is largely determined by its execution time and its accuracy in solving the problem at hand. The accuracy of the algorithm has been confirmed in the previously presented cases, which located the destination prosumer via the least-cost path and routed the demand successfully. Thus, the execution time of its shortest-path search is compared against the Dijkstra, ACO, and IPPA algorithms, as illustrated in Fig. 9. In IPPA, the authors combine the original slime mould algorithm with a parameter called Energy, which quantifies the energy provided and consumed by the tube. In this paper, by contrast, capacity constraints are added to the original slime mould algorithm to model the distribution network problem, setting a limit on the amount of power flow, thereby controlling congestion and reducing the execution time. Algorithm response time affects network application performance, especially in a large network such as a P2P-ETS. It can be observed that with fewer nodes, Dijkstra is faster than the developed algorithm; however, as the number of nodes increases beyond 100, the two have similar execution times, both faster than IPPA and ACO. Moreover, in the Dijkstra algorithm each link is associated with only one criterion, the length; there is no attribute like the 'flow' of the Physarum model reacting to changes in the link cost. As a result, many classical algorithms for cyber-physical network problems require two separate processes: path finding and flow optimisation. In contrast, the presented Physarum-based algorithm solves both problems simultaneously; once the link cost A_{i,j} is updated, the flow is, with the help of (4), redistributed and reallocated dynamically in the next iteration.
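For reference, the Dijkstra baseline used in such comparisons can be sketched with a binary heap; note that it tracks a single link attribute (the length), as discussed above. The small adjacency list is assumed for illustration.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest-path costs; adj[u] is a list of (v, weight)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter route
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {0: [(1, 2), (2, 1), (3, 4)], 1: [(4, 2)], 2: [(4, 1)], 3: [(4, 1)], 4: []}
dist = dijkstra(adj, 0)
```

On this graph the least-cost route from node 0 to node 4 goes through node 2, at total cost 2.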
The Physarum algorithm is well suited to network optimisation problems in dynamic environments because it can reuse the intermediate results of previous iterations and respond to changes by adjusting the flow. The scalability of the algorithm, with up to 2000 prosumers, can likewise be deduced from Fig. 9.
F. CONGESTION CONTROL IN DISTRIBUTION NETWORK
While the previous sections are motivated by the problem of the optimal least-cost path between producers and consumers, the solution proffered can be extended to realise an optimal path for power flow in a capacitated network. The power flow optimisation is thus defined as a minimum-cost flow problem, combining the shortest optimised path with the maximum-flow capacitated problem [21]. This section focuses on an optimal path for energy transfer in a capacitated network for congestion control. As defined in Section III, each link is characterised by two non-negative attributes: the capacity c_{i,j} and the link cost A_{i,j}. Given a source node i ∈ V and a sink node j ∈ V, the objective function of the optimisation problem is defined in (2a). The capacity c_{i,j} of link (i, j) ensures the distribution line is not congested and maintains a maximum power flow for control purposes. The traditional solution of the minimum-cost flow problem is the successive shortest path algorithm, which, as discussed, has a higher computational time. Using the proposed slime mould solution and setting the capacity of each distribution line, the maximum-flow problem is remodelled to cope with congestion on the lines. For instance, Fig. 10 illustrates the same test graph, now comprising the link cost as well as the capacity of each link.
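The role of the capacity attribute can be illustrated with a simple diagnostic: run the uncapacitated dynamics and flag every link whose converged flux exceeds its installed capacity. This is a sketch only (in the paper the capacity enters the modified model itself); the 3-node graph, 10kWh demand, and capacities are assumptions.

```python
import numpy as np

def physarum_flux(n, edges, source, sink, demand, iters=200, floor=1e-12):
    """Uncapacitated Physarum flux with f(|Q|) = |Q| and gamma = 1."""
    D = {e: 1.0 for e in edges}
    Q = {}
    for _ in range(iters):
        A = np.zeros((n, n))
        b = np.zeros(n)
        for (i, j), L in edges.items():
            g = D[(i, j)] / L
            A[i, i] += g
            A[j, j] += g
            A[i, j] -= g
            A[j, i] -= g
        b[source], b[sink] = demand, -demand
        A[sink, :] = 0.0
        A[sink, sink] = 1.0
        b[sink] = 0.0
        p = np.linalg.solve(A, b)
        Q = {(i, j): D[(i, j)] / L * (p[i] - p[j]) for (i, j), L in edges.items()}
        D = {e: max(abs(q), floor) for e, q in Q.items()}
    return Q

def congested(Q, cap):
    """Links whose flux magnitude exceeds the installed capacity."""
    return [e for e, q in Q.items() if abs(q) > cap[e] + 1e-9]

edges = {(0, 1): 1, (0, 2): 1, (2, 1): 1}      # direct route vs detour via 2
cap = {(0, 1): 5.0, (0, 2): 10.0, (2, 1): 10.0}
Q = physarum_flux(3, edges, source=0, sink=1, demand=10.0)
```

All 10kWh collapse onto the cheaper direct line, exceeding its 5kWh capacity; this is exactly the situation the capacity-constrained formulation prevents by diverting the excess onto the detour.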
In Fig. 11, the convergence result of the optimised path from Producer 1 to Consumer 10 is shown, which is consistent with Fig. 5. However, while all links not on the optimal path converged to zero in Fig. 5, Fig. 11 retains alternate paths for use in case of a fault on the main optimised path. (FIGURE 10: P2P-ETS network with capacity constraints and link costs.) In addition, when the capacity installed on link 1 −→ 2 is limited from 10kWh to 5kWh, the power flow on the link is suppressed to 5kWh, which differs from Fig. 11 due to the restriction on the line, as shown in Fig. 12. This property is useful in coping with congestion on the distribution lines for control purposes.
G. SENSITIVITY ANALYSIS
To further analyse the performance of the algorithm, a sensitivity analysis is performed by varying the variables α and γ; Fig. 13 shows the corresponding result. It can be observed that the time to convergence decreases when γ = 1 compared with γ = 0.01 and γ = 0.1, irrespective of the corresponding α value. The selection of α and γ for the test cases is based on this property.
H. ROUTING POWER COST COMPARISON USING IEEE 39 BUS
For further analysis of the proposed solution, this subsection compares it with the study in [26] on the IEEE 39-bus system, an approximation of an electrical power system with 39 buses, comprising 46 lines, 10 power sources, and 19 loads. Representing the power sources as producers and the loads as consumers, Fig. 14 shows the IEEE 39-bus test system, including the line capacities and link costs. The total power generated by all producers in the network equals 570kW, and the 19 consumers demand a total of 570kW. To compare the effectiveness of the proposed slime mould solution, the line cost here represents the total power loss, as modelled in the greedy smallest-cost-rate path first (GRASP) scheme [26]. Fig. 15 shows the results of GRASP, the optimal solution, and the proposed solution. The proposed slime mould approach yields the least total cost in routing power from the producers to the consumers in the network.
I. NUMERICAL SIMULATION AND RESULT OF THE PERFECT MATCHING ALGORITHM
The effectiveness of the proposed perfect matching algorithm, i.e. Algorithm 2, is presented as follows. A situation involving an odd number of actors is first considered, with a focus on one-to-many or many-to-one matching, i.e. one producer to many consumers, as the case arises. This reflects a real-world scenario where the demands of a consumer are satisfied by one or two producers, and vice versa. Using the average separation distances between houses in the UK [27], ranging from a minimum of 1m for a one-bed dwelling to 27.5m for a three/four-storey building, the distances between the actors are determined and illustrated in Figs. 16 and 17. Fig. 16 reflects the case of three producers and two consumers, with producers N_j = {1, 3, 5} and consumers N_i = {2, 4}. It can be observed that the algorithm successfully matches peers at least cost so as to minimise the overall network cost: Producers 3 and 5 supply Consumer 4, while Consumer 2's demand is met by Producer 1.
Similarly, Fig. 17 shows the corresponding matching for the case of two producers and three consumers. To further quantify the effectiveness of the proposed optimised path algorithm, Fig. 18 reflects the costs saved in the two cases considered. The non-optimal path is the path the peers would have taken without the algorithm, while the optimal path is the path taken as a result of the proposed algorithm. A total of 15% of the cost was saved in establishing an optimised path between two producers and three consumers, while 8% was saved in the case of three producers and two consumers. These savings reflect the variety of roles of participants in the network: the cost saving is larger when there are more consumers buying energy than when producers outnumber consumers.
J. ONE-TO-ONE MATCHING
The numerical example presented in Section IV-I considers an odd number of actors. Here, an even number of producers and consumers is considered to produce a one-to-one matching. Using the same set of prosumers from Section IV, but assigning 5 producers and 5 consumers, Fig. 19 shows the resulting matching, with producers N_j = {1, 3, 5, 7, 9} and consumers N_i = {2, 4, 6, 8, 10}. It can be observed that the algorithm successfully matches the peers at least network cost: Producer 1 to Consumer 2, Producer 3 to Consumer 4, Producer 5 to Consumer 6, Producer 7 to Consumer 8, and Producer 9 to Consumer 10. When observed closely, and based solely on the link costs, Consumer 8 could have been paired with Producer 9; however, this would have left Consumer 10 unpaired (or paired at a higher cost).
As previously established, energy loss is mostly due to long-distance transmission; thus, producing power locally and matching supply with demand can minimise transportation losses, with economic and environmental benefits. Matching local energy demand also lowers the control effort of the overall power system. Furthermore, distribution networks are vulnerable to a variety of faults; by providing an alternate route for power distribution, disruption to consumers is minimised, making the electric grid more resilient.
V. CONCLUSION
This paper addressed the problem of matching prosumers in MGs to facilitate P2P-ETS. Two main issues were addressed. First, as the cost of energy correlates directly with the distance between energy producer and consumer, a path-optimised system was developed for energy routing, shown to yield up to 15% cost savings compared with a non-optimised path. The execution time of the developed algorithm as the number of peers increases (15 − 2000) is also lower than that of other traditional algorithms, which is highly desirable in a large network such as a P2P-ETS. Second, the proposed solution was applied to a maximum-flow capacity problem in the energy distribution network to reduce congestion on the power distribution lines, supporting the secure operation and control of the grid. Future work will include additional constraints such as the cost of renewable energy generation, different generation capacities of prosumers, energy storage systems, and electric vehicles in the problem formulation.
WEIZHUO WANG received the B.Eng., M.Sc. (Eng.), and Ph.D. degrees. He is currently a Senior Lecturer in mechanical engineering with Manchester Metropolitan University, U.K., specialising in modelling and simulation in engineering mechanics. He has worked on various multi-disciplinary projects, e.g., as work-package coordinator in an EU project, Advanced Dynamic Validations using Integrated Simulation and Experimentation (ADVISE). He has also been working with industry on Knowledge Transfer Partnership (KTP) projects in the areas of data analytics, digital twins, and optimisation. Dr. Wang is a member of the Institute of Physics (MInstP) and a Fellow of the Higher Education Academy (FHEA).
BAMIDELE ADEBISI (Senior Member, IEEE) received the master's degree in advanced mobile communication engineering and the Ph.D. degree in communication systems from Lancaster University, U.K., in 2003 and 2009, respectively. He is currently a Full Professor of intelligent infrastructure systems with Manchester Metropolitan University, Manchester, U.K. He has worked on several commercial and government projects focusing on various aspects of smart infrastructure systems. He is particularly interested in collaborative research and development of technologies for electrical energy monitoring/management, transport, water, home automation, the IoTs, cyber physical systems, and critical infrastructures protection. He is a Chartered Engineer and a member of IET. VOLUME 8, 2020

A SIMPLE EXTENSION OF STOLLMANN'S LEMMA TO CORRELATED POTENTIALS
We propose a fairly simple and natural extension of Stollmann's lemma to correlated random variables. This extension allows one (just as the original Stollmann lemma does) to obtain Wegner-type estimates even in some problems of spectral analysis of random operators where Wegner's lemma is inapplicable (e.g. for multi-particle Hamiltonians).
Introduction
The regularity problem for the limiting distribution of eigenvalues of infinite-dimensional self-adjoint operators appears in many problems of mathematical physics. Specifically, consider a lattice Schrödinger operator (LSO, for short) H : ℓ²(Z^d) → ℓ²(Z^d) given by

(Hψ)(x) = Σ_{y: |y−x|=1} ψ(y) + V(x)ψ(x),

and set

k(E) = lim_{L→∞} |Λ_L|^{−1} card{ e.v. of H_{Λ_L} ≤ E },

where H_{Λ_L} is the restriction of H to the cube Λ_L = [−L, L]^d. If the above limit exists, k(E) is called the limiting distribution function (LDF) of the eigenvalues of H. One can easily construct various examples of the function V : Z^d → R (called the potential of the operator H) for which the LDF does not exist. One can prove the existence of the LDF for periodic potentials V, but even in this relatively simple situation the existence of k(E) is not a trivial fact. However, one can prove the existence of k(E) for a large class of ergodic random potentials. Namely, consider an ergodic dynamical system (Ω, F, P, {T^x, x ∈ Z^d}) with discrete time Z^d and a measurable function (sometimes called a hull) v : Ω → R. Then we can introduce a family of sample potentials V(x; ω) = v(T^x ω) labeled by ω ∈ Ω. Under the assumption of ergodicity of {T^x}, the quantity

k(E, ω) = lim_{L→∞} |Λ_L|^{−1} card{ e.v. of H_{Λ_L}(ω) ≤ E }

is well-defined P-a.s. Moreover, k(E, ω) is P-a.s. independent of ω, so it is natural to take its value for a.e. ω as k(E). In this context, k(E) is usually called the integrated density of states (IDS, for short). It admits an equivalent definition:

k(E) = E[ (f, Π_{(−∞, E]}(H(ω)) f) ],

where f ∈ ℓ²(Z^d) is any vector of unit norm and Π_{(−∞, E]}(H(ω)) is the spectral projection of H(ω) onto (−∞, E]. The reader can find a detailed discussion of the existence problem for the IDS in the excellent monographs by Carmona and Lacroix [4] and by Pastur and Figotin [16]. It is not difficult to see that k(E) can be considered as the distribution function of a normalized, i.e. probability, measure on R. If this measure dK(E), called the measure of states, is absolutely continuous with respect to Lebesgue measure dE, its density (or Radon-Nikodym derivative) dK(E)/dE is called the density of states (DoS).
In the physical literature it is customary to neglect the problem of the existence of such a density, for if dK(E)/dE is not a function, then "it is simply a generalized function". However, the real problem is not terminological. The actual, explicit estimates of probabilities of the form

P{ σ(H_{Λ_L}) ∩ [E − ε, E + ε] ≠ ∅ }

for an LSO H_{Λ_L} in a finite cube Λ_L, for small ε, often depend essentially upon the existence and the regularity properties of the DoS dK(E)/dE. Apparently, the first fairly general result on the existence and boundedness of the DoS is due to Wegner [18]. Traditionally referred to as Wegner's lemma, it certainly deserves to be called a theorem.
Theorem 1 (Wegner). Let the values of the random potential {V(x; ω), x ∈ Z^d} be i.i.d. with bounded density p_V(u) of their common probability distribution: ‖p_V‖_∞ = C < ∞. Then the DoS dK(E)/dE exists and is bounded by the same constant C.
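In finite volume, the Wegner bound is commonly used in the following form (a standard restatement, not a verbatim quote of [18]): the expected number of eigenvalues of H_{Λ_L}(ω) in an interval I is at most C|I||Λ_L|, whence, by Chebyshev's inequality,

```latex
\mathbb{E}\,\mathrm{tr}\,\Pi_{I}\bigl(H_{\Lambda_L}(\omega)\bigr)
  \;\le\; \|p_V\|_\infty \, |I| \, |\Lambda_L|,
\qquad
\mathbb{P}\bigl\{\,\sigma(H_{\Lambda_L}(\omega)) \cap I \neq \varnothing \,\bigr\}
  \;\le\; \|p_V\|_\infty \, |I| \, |\Lambda_L| .
```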
The proof can be found, for example, in the monograph [4]. This estimate and some of its generalizations have been used in the multi-scale analysis (MSA) developed in the works of Fröhlich and Spencer [13], Fröhlich, Spencer, Martinelli and Scoppola [14], von Dreifus and Klein [11], [12], Aizenman and Molchanov [1], and in a number of more recent works where the Anderson localization phenomenon has been established. Namely, it has been proven that all eigenfunctions of random lattice Schrödinger operators decay exponentially at infinity with probability one (for P-a.e. sample of the random potential V(ω)). Von Dreifus and Klein [12] proved an analog of the Wegner estimate and used it in their proof of localization for Gaussian and some other correlated (but non-deterministic) potentials. The author of these lines recently proved, in a joint work with Yu. Suhov [9], an analog of the Wegner estimate for a system of two or more interacting quantum particles on the lattice, under the assumption of analyticity of the probability density p_V(u), using a rigorous path integral formula of Molchanov (see a detailed discussion of this formula in the monograph [4]). In order to relax the analyticity assumption in the multi-particle context, V.C. and Yu. Suhov later used [10] a more general and flexible result guaranteeing the existence and boundedness of the DoS: Stollmann's lemma, which we discuss below.
In the present work, we propose a fairly simple and natural extension of Stollmann's lemma to correlated, but still non-deterministic, random fields generating random potentials. To the best of the author's knowledge, such an extension appears to be original, although very simple; the author will appreciate any reference to published papers or preprints where the same or a similar result has been stated and proved. Our main motivation here is to open a way to interesting applications in localization problems for multi-particle systems.
Stollmann's lemma for product measures
Recall Stollmann's lemma and its proof for independent r.v. Let m ≥ 1 be a positive integer, and J an abstract finite set with |J| (= card J) = m. Consider the Euclidean space R^J ≅ R^m with standard basis (e_1, ..., e_m), and its positive quadrant

R^J_+ = { q ∈ R^J : q_j ≥ 0, j = 1, ..., m }.

For any measure µ on R, we will denote by µ^m the product measure µ × ··· × µ on R^J. Furthermore, for any probability measure µ and for any ε > 0, define the quantity

s(µ, ε) = sup_{a ∈ R} µ([a, a + ε]),

and assume that s(µ, ε) is finite. Furthermore, let µ^{m−1} be the marginal probability distribution induced by µ^m on q' = (q_2, ..., q_m).

Definition 1. A function Φ : R^J → R is called monotone if
(1) Φ(q + r) ≥ Φ(q) for any r ∈ R^m_+ and any q ∈ R^m;
(2) moreover, for e = e_1 + ··· + e_m ∈ R^m, for any q ∈ R^m and for any t > 0,
Φ(q + t·e) ≥ Φ(q) + t.

It is convenient to introduce the notion of J-monotonic operators considered as quadratic forms. In the following definition, we use the same notations as above.
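A concrete instance (not from the original text) may help fix the definitions. Take µ uniform on [0, 1], so s(µ, ε) = min(ε, 1), and Φ(q) = min_j q_j, which is monotone since Φ(q + t·e) = Φ(q) + t. Then, for 0 ≤ a ≤ a + ε ≤ 1,

```latex
\mu^m\{\, q : \min_j q_j \in (a,\, a+\epsilon) \,\}
  \;=\; (1-a)^m - (1-a-\epsilon)^m
  \;\le\; m\,\epsilon \;=\; m\, s(\mu,\epsilon),
```

consistent with the bound m·s(µ, ε) of Stollmann's lemma below.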
Definition 2. Let H be a Hilbert space. A family of self-adjoint operators B(q) : H → H, q ∈ R^J, is called J-monotonic if, for every vector f ∈ H, the function q ↦ (B(q)f, f) is monotone. In other words, the quadratic form Q_{B(q)}(f) := (B(q)f, f), as a function of q ∈ R^J, is non-decreasing in every q_j, j = 1, ..., |J|, and

(B(q + t·e)f, f) ≥ (B(q)f, f) + t ‖f‖²  for all t > 0.
Remark 1. By virtue of the variational principle for self-adjoint operators, if an operator family H(q), q ∈ R^J, is J-monotonic in a Hilbert space H, and H_0 : H → H is an arbitrary self-adjoint operator, then the family H_0 + H(q) is also J-monotonic.
This explains why the notion of monotonicity is relevant to the spectral theory of random operators. Note also that this property can easily be extended to physically interesting examples where H is infinite-dimensional but the H(q) have, e.g., compact resolvent, as in the case of Schrödinger operators in a finite cube with Dirichlet boundary conditions and bounded potential; the respective spectrum is then pure point, and even discrete.
Theorem 2 (Stollmann, [17]). Let J be a finite index set, |J| = m, let µ be a probability measure on R, and let µ^m be the product measure on R^J with marginal measures µ. If the function Φ : R^J → R is monotone, then for any open interval I ⊂ R of length ε > 0,

µ^m{ q : Φ(q) ∈ I } ≤ m · s(µ, ε).

Proof. Let I = (a, b), b − a = ε > 0, and consider the set A = { q : Φ(q) ≤ a }. Furthermore, define recursively the sets A^ε_j, j = 0, ..., m, by setting

A^ε_0 = A,  A^ε_j = A^ε_{j−1} + [0, ε] e_j := { q + t e_j : q ∈ A^ε_{j−1}, t ∈ [0, ε] },  j = 1, ..., m.

Obviously, the sequence of sets A^ε_j is increasing with j. The monotonicity property (2) implies

{ q : Φ(q) ∈ I } ⊂ A^ε_m \ A.

Now, we conclude that

µ^m{ q : Φ(q) ∈ I } ≤ µ^m(A^ε_m \ A) = Σ_{j=1}^m µ^m(A^ε_j \ A^ε_{j−1}).

For q' ∈ R^{m−1}, set I_1(q') = { q_1 ∈ R : (q_1, q') ∈ A^ε_1 \ A }. By the definition of the set A^ε_1, this is an interval of length not bigger than ε. Then we have

µ^m(A^ε_1 \ A) = ∫ µ(I_1(q')) dµ^{m−1}(q') ≤ s(µ, ε).  (3)

Similarly, we obtain for j = 2, ..., m

µ^m(A^ε_j \ A^ε_{j−1}) ≤ s(µ, ε),

which completes the proof. Now, taking into account the above Remark 1, Stollmann's theorem immediately yields the corresponding eigenvalue concentration estimate for J-monotonic operator families.

Extension to multi-particle systems

Results of this section have been obtained by the author and Y. Suhov [9]. Let N > 1 and d ≥ 1 be two positive integers and consider a random LSO H = H(ω) which can be used, in the framework of the tight-binding approximation, as the Hamiltonian of a system of N quantum particles in Z^d with random external potential V and interaction potential U. Specifically, let x_1, ..., x_N ∈ Z^d be the positions of the quantum particles in the lattice Z^d, and x = (x_1, ..., x_N). Let {V(x; ω), x ∈ Z^d} be a random field on Z^d describing the external potential acting on all particles, and U : (x_1, ..., x_N) → R the interaction energy of the particles. In physics, U is usually assumed to be a symmetric function of its N arguments x_1, ..., x_N ∈ Z^d. We assume in this section that the system in question obeys either Fermi or Bose quantum statistics, so it is convenient to take U symmetric. Note, however, that the results of this section can be extended, with natural modifications, to more general interactions U.
Further, in [9] U is assumed to be a finite-range interaction. Such an assumption is required in the proof of Anderson localization for multi-particle systems, but it is irrelevant to the Wegner-Stollmann estimate discussed below.
Now, let H be as follows:

H = H_0 + U + V̄(ω), with H_0 = Σ_{j=1}^N ∆^{(j)} and V̄(x; ω) = V(x_1; ω) + ··· + V(x_N; ω),

where ∆^{(j)} is the lattice Laplacian acting on the j-th particle. The potential V̄(x; ω) is no longer an i.i.d. random field on Z^{Nd}, even if V is i.i.d. Therefore, neither Wegner's nor Stollmann's estimate applies directly. But, in fact, Stollmann's lemma does apply to multi-particle systems, virtually in the same way as to single-particle ones.

Proof. Fix Λ and consider the union of all lattice points in Z^d which belong to the single-particle projections Λ^{(j)}, j = 1, ..., N:

X(Λ) = ∪_{j=1}^N Λ^{(j)}.

Now we can apply Stollmann's lemma to H_Λ by taking the index set J = X(Λ) and the auxiliary probability space R^J. Indeed, the random potential V̄(x; ω) := V(x_1; ω) + ··· + V(x_N; ω) can be re-written as

V̄(x; ω) = Σ_{y ∈ X(Λ)} c(x, y) V(y; ω),  (4)

with non-negative integer coefficients c(x, y). For example, if N = 2, one has either V(x_1; ω) + V(x_2; ω) with x_1 ≠ x_2, or 2V(x_1; ω) when x_1 = x_2. In any case, as (4) shows, the random potential at x ∈ Λ is a linear function of one or more coordinates in the auxiliary space R^J, growing at rate ≥ Nt ≥ t along the principal diagonal {q_1 = q_2 = ··· = q_{|J|} = t ∈ R}. Hence, the operators of multiplication by V̄(x; ω) form a J-monotonic family, and, by virtue of Remark 1, the same holds for H = H_0 + U + V̄(ω), just as in the single-particle case (and even "better", for N > 1!). By Theorem 2, this immediately implies the desired Wegner-Stollmann estimate for H_Λ. It is not difficult to see that the same argument, with obvious notational modifications, applies to Fermi and Bose lattice quantum systems, i.e. to the restrictions of H to the subspaces of symmetric (Bose case) or anti-symmetric (Fermi case) functions of the N arguments x_1, ..., x_N on (Z^d)^N.
Extension to correlated random variables
Now let μ^m be a measure on R^m with marginal distributions μ^{m−1}_j of order m − 1 in the variables (q_1, . . . , q_{j−1}, q_{j+1}, . . . , q_m), j = 1, . . . , m, and conditional distributions μ^1_j(q_j | q′_{≠j}) of q_j given all q_k, k ≠ j. For every ε > 0, define the following quantity: and assume that C_1(μ, ε) is finite: Remark 3 As a simple sufficient condition for the finiteness of C_1(μ, ε), one can use, e.g., uniform continuity (but not necessarily absolute continuity!) of the single-point conditional distributions, or even the existence and uniform boundedness of the density p(q_j | q′_{≠j}) of these conditional distributions: sup
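The displayed definition of C_1(μ, ε) was lost in extraction. Judging from Remark 3 and from the "outer integral in the r.h.s. of (5)" mentioned later for unbounded spins, a plausible form of the constant and of the resulting bound, stated here as an assumption rather than the paper's exact formula, is:

```latex
% Averaged single-site concentration of the conditional measures:
\[
  C_{1}(\mu,\epsilon)
  := \max_{1 \le j \le m}\;
     \int_{\mathbb{R}^{m-1}}
     \sup_{a \in \mathbb{R}}
     \mu^{1}_{j}\big([a, a+\epsilon] \,\big|\, q'_{\neq j}\big)\,
     d\mu^{m-1}_{j}(q'_{\neq j}),
\]
% leading, as in the product case, to a bound linear in m:
\[
  \mu^{m}\{\, q \in \mathbb{R}^{m} : \Phi(q) \in I \,\}
  \ \le\ m\, C_{1}(\mu,\epsilon),
  \qquad |I| = \epsilon .
\]
```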
Remark 4
In applications to localization problems, the aforementioned continuity moduli C_1(μ^m, ε), C_2(μ^m, ε), C_3(μ^m, ε) need to decay not too slowly as ε → 0. A power-law decay of order O(ε^β) with β > 0 is certainly sufficient, but this can be substantially relaxed. For example, it suffices to have an upper bound of the form uniformly for all sufficiently large L > 0, with some (arbitrarily small) β > 0 and with B > 0 which should be sufficiently big, depending on the specific spectral problem.
Using notations of the previous section, one can formulate the following generalization of Stollmann's lemma.
Proof. We proceed as in the proof of Stollmann's lemma and introduce in R^m the sets A = { q : Φ(q) ≤ a } and A^ε_j, j = 0, . . . , m. Here, again, we have For q′_{≠1} ∈ R^{m−1}, we set Furthermore, we arrive at the following upper bound, which generalizes (3): Similarly, we obtain for j = 2, . . . , m
Application to Gaussian random fields
Let V(x, ω), x ∈ Z^d, d ≥ 1, be a regular stationary Gaussian field of zero mean on the lattice Z^d. Regularity implies that the field V(·, ω) is non-deterministic, i.e., the conditional probability distribution of V(0, ·) given {V(y), y ≠ 0} is Gaussian with strictly positive variance. In other terms, the r.v. V(0, ·), considered as a vector in the Hilbert space H_{V,Z^d} generated by linear combinations of all V(x, ·), x ∈ Z^d, with the scalar product does not belong to the subspace H_{V, Z^d \ {0}}.
Furthermore, for any subset Λ ⊆ Z^d \ {0}, Therefore, the conditional variance of V(0, ·) given any non-zero number of values of V outside x = 0 is bounded from below by σ̃_0^2. Respectively, the conditional probability density of V(0, ·) under any such nontrivial conditioning is uniformly bounded by (2π σ̃_0^2)^{−1/2} < ∞. Now a direct application of Lemma 1 leads to the following statement.
Theorem 5 Let Λ ⊂ Z^d be a finite subset of the lattice, and Λ′ ⊂ Z^d \ Λ any subset disjoint from Λ (Λ′ may be empty). Consider a family of LSOs H_Λ(ω) with Gaussian random potential V(ω) in Λ, with Dirichlet b.c. on ∂Λ. Then for any interval I ⊂ R of length ε > 0, we have where the constant C(V) < ∞ whenever the Gaussian field V is non-deterministic.

6 Application to Gibbs fields with continuous spin

Apart from Gaussian fields, there exist several classes of random lattice fields for which the hypothesis of Lemma 1 can easily be verified. For example, conditional distributions of Gibbs fields are given explicitly in terms of their respective interaction potentials. Specifically, consider a lattice Gibbs field s(x, ω) with bounded continuous spin, generated by a short-range, bounded, two-body interaction potential u(·, ·). The spin space is assumed to be equipped with the Lebesgue measure ds. In other words, consider the formal Hamiltonian where h : S → R is the self-energy of a given spin. The interaction potentials u_{|x−y|}(s(x), s(y)) vanish for |x − y| > R and are uniformly bounded: Then for any lattice point x and any configuration s′ = s′_{≠x} of spins outside {x}, the single-site conditional distribution of s(x) given the external configuration s′ admits a bounded density satisfying the upper bound A similar property holds for sufficiently rapidly decaying long-range interaction potentials, for example, under the condition

sup_{s,t ∈ S} |u_{|y|}(s, t)| ≤ Const / |y|^{d+1+δ}, δ > 0, (7)

as well as for more general, but still uniformly summable, many-body interactions. Here is one possible Wegner-Stollmann-type result concerning such random potentials.
In the case of unbounded spins and/or interaction potentials, the uniform boundedness of conditional single-spin distributions does not necessarily hold, since the energy of interaction of a given spin s(0) with the external configuration s ′ may be arbitrarily large (depending on a particular form of interaction) and even infinite, if s ′ (y) → ∞ too fast. In such situations, our general condition (5) may still apply, provided that rapidly growing configurations s ′ have sufficiently small probability, so that the outer integral in the r.h.s. of (5) converges.
Conclusion
The Wegner-Stollmann-type estimate of the density of states in finite volumes is a key ingredient of the MSA of the spectra of random Schrödinger (and some other) operators. The proposed simple extension of Stollmann's lemma shows that a very general assumption on the correlated random fields generating the potential rules out an abnormal accumulation of eigenvalues in finite volumes. This extension also applies to multi-particle systems [10]. | 2007-05-20T15:48:03.000Z | 2007-05-20T00:00:00.000 | {
"year": 2007,
"sha1": "d377f8e2a9fe8406f21c18aa2bac55560812210c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d377f8e2a9fe8406f21c18aa2bac55560812210c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
52814423 | pes2o/s2orc | v3-fos-license | Data on assessment excess lifetime cancer risk and risk of lung cancer from inhalation of Radon 222 in radiotherapy centers in Tehran, Iran
The purpose of the data was to determine the excess lifetime cancer risk (ELCR) and the risk of lung cancer from inhalation of radon among radiotherapy staff at Tehran radiotherapy centers in 2015. The concentration of radon gas was extracted from a study done at Tehran radiotherapy centers, and then the ELCR and risk of lung cancer were calculated for all centers by standard equations. The excess lifetime cancer risk and risk of lung cancer were 1.89 and 8.46 cases per 100,000 people in radiotherapy centers in Tehran City. The data indicate that the excess lifetime cancer risk and the risk of lung cancer in radiotherapy centers are lower than the standard values presented by UNSCEAR 2000.
a b s t r a c t
The purpose of the data was to determine the excess lifetime cancer risk (ELCR) and the risk of lung cancer from inhalation of radon among radiotherapy staff at Tehran radiotherapy centers in 2015. The concentration of radon gas was extracted from a study done at Tehran radiotherapy centers, and then the ELCR and risk of lung cancer were calculated for all centers by standard equations. The excess lifetime cancer risk and risk of lung cancer were 1.89 and 8.46 cases per 100,000 people in radiotherapy centers in Tehran City.
Type of data
Tables, graph.

How data was acquired

The concentration of radon gas was extracted from a study done at Tehran radiotherapy centers [3]; then the excess lifetime cancer risk and risk of lung cancer were calculated for all centers using standard equations [5,6].
Data format
Analyzed.
Experimental factors
The concentrations of radon gas were analyzed according to the standards to calculate the excess lifetime cancer risk and the risk of lung cancer from inhalation of radon-222.

Experimental features

Excess lifetime cancer risk and risk of lung cancer from inhalation of radon-222 were determined.

Data source location

Tehran city, Iran.
Data accessibility
The data are available with this article
Value of the data
Data showed that the excess lifetime cancer risk and the risk of lung cancer in radiotherapy centers are lower than the standard values presented by UNSCEAR 2000. That means the possible hazards from radon concentration are low compared to UNSCEAR 2000.
Data can be used to demonstrate that the risk of lung cancer is greater than the excess lifetime cancer risk in radiotherapy centers in Tehran City; i.e., for the current population, radon should also be considered a potentially significant cause of lung cancer, since people are exposed through contamination of indoor air by radon from surrounding materials.
The data can be used to compare ELCR and the risk of lung cancer with other studies in radiotherapy centers.
Data
The excess lifetime cancer risk and risk of lung cancer were calculated in eight radiotherapy centers in Tehran (Table 1). The ELCR and the risk of lung cancer were compared with the UNSCEAR 2000 range (Diagram 1). According to UNSCEAR 2000, the annual effective dose for radiation workers from Radon-222 and Radon-220 ranges from 0.1 to 1.15 mSv [1,2]. In the present data, the mean annual effective dose is equal to 0.48 mSv.
Experimental design, materials, and methods
The concentration of Radon-222 was extracted from a study, which was carried out at eight radiotherapy centers in Tehran, Iran [3]. Then, the excess lifetime cancer risk and risk of lung cancer were calculated.
Assessing the excess lifetime cancer risk
To calculate the excess lifetime cancer risk due to gamma-ray radiation the following equation was used [4][5][6]:
Calculating the risk of lung cancer
The probability of annual lung cancer cases per million people (CPPP) caused by effective dose received from Radon-222 was assessed by Eq. (2) [9][10][11].
E_Rn = effective dose received from Radon-222.
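Eqs. (1) and (2) themselves did not survive extraction. As an illustration only, a widely used form of the ELCR calculation multiplies the annual effective dose by an average lifetime duration and a nominal risk factor; the parameter values below (70-year duration, the ICRP nominal factor of 0.05 per Sv) are assumptions for the sketch, not necessarily the coefficients used in this paper:

```python
def elcr(aede_msv_per_yr, duration_years=70.0, risk_factor_per_sv=0.05):
    """Excess lifetime cancer risk (dimensionless).

    Assumed form: ELCR = AEDE * DL * RF, with the annual effective dose
    (AEDE) converted from mSv/yr to Sv/yr, DL an average life duration
    (70 yr) and RF the ICRP nominal fatality risk factor (0.05 per Sv).
    These defaults are illustrative, not taken from the paper.
    """
    return (aede_msv_per_yr * 1e-3) * duration_years * risk_factor_per_sv

# With the mean annual effective dose reported here (0.48 mSv):
risk = elcr(0.48)  # = 1.68e-3
```

The per-center values in Table 1 depend on center-specific radon concentrations and occupancy assumptions, so this single-number sketch is not expected to reproduce the published figures.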
Transparency document. Supporting information
Transparency data associated with this article can be found in the online version at https://doi.org/10.1016/j.dib.2018.09.005.

Diagram 1. The comparison between ELCR and risk of lung cancer (× 10^−3) in the current study, with 95% confidence intervals (CI), and the UNSCEAR 2000 value.

Table 1 The excess lifetime cancer risk (ELCR) and risk of lung cancer (× 10^−3). | 2018-10-02T01:19:39.590Z | 2018-09-06T00:00:00.000 | {
"year": 2018,
"sha1": "b9fdd9b5434d17f706c9cbcb0c2714f6806eee0e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2018.09.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9fdd9b5434d17f706c9cbcb0c2714f6806eee0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
182015771 | pes2o/s2orc | v3-fos-license | Geohistorical records of the Anthropocene in Chile
The deep-time dynamics of coupled socio-ecological systems at different spatial scales is viewed as a key framework to understand trends and mechanisms that have led to the Anthropocene. By integrating archeological and paleoenvironmental records, we test the hypothesis that Chilean societies progressively escalated their capacity to shape national biophysical systems as socio-cultural complexity and pressures on natural resources increased over the last three millennia. We demonstrate that Pre-Columbian societies intentionally transformed Chile’s northern and central regions by continuously adjusting socio-cultural practices and/or incorporating technologies that guaranteed resource access and social wealth. The fact that past human activities led to cumulative impacts on diverse biophysical processes, not only contradicts the notion of pristine pre-Industrial Revolution landscapes, but suggests that the Anthropocene derives from long-term processes that have operated uninterruptedly since Pre-Columbian times. Moreover, our synthesis suggests that most of present-day symptoms that describe the Anthropocene are rooted in pre-Columbian processes that scaled up in intensity over the last 3000 years, accelerating after the Spanish colonization and, more intensely, in recent decades. The most striking trend is the observed coevolution between the intensity of metallurgy and heavy-metal anthropogenic emissions. This entails that the Anthropocene cannot be viewed as a universal imprint of human actions that has arisen as an exclusive consequence of modern industrial societies. In the Chilean case, this phenomenon is intrinsically tied to historically and geographically diverse configurations in society-environment feedback relationships. Taken collectively with other case studies, the patterns revealed here could contribute to the discussion about how the Anthropocene is defined globally, in terms of chronology, stratigraphic markers and attributes. 
Furthermore, this deep-time narrative can potentially become a science-based instrument to shape better-informed discourses about the socio-environmental history in Chile. More importantly, however, this research provides crucial “baselines” to delineate safe operating spaces for future socio-ecological systems.
Academic articles have examined the stratigraphic criteria required to recognize the Anthropocene as a new formal chronostratigraphic unit characterized by unprecedented human-induced transformations after the Industrial Revolution and/or the "Great Acceleration" that followed World War II (Swindles et al., 2015). Hence, there is growing interest in validating unambiguous, globally traceable time-stratigraphic markers of the human footprint on the Earth system, including artificial radionuclides, atmospheric CO2 levels, patterns in environmental isotopes, fly ash particles, plastic pollution and/or anthropogenic soils (Certini and Scalenghe, 2011; Dean et al., 2014; Swindles et al., 2015; Waters et al., 2016; Zalasiewicz et al., 2017).
Currently there is overwhelming evidence for the imprint, direction, magnitude and intensity of human transformations of Earth's ecosystems over the last 200 years. Nevertheless, the chronostratigraphic portrayal of the Anthropocene has prompted sharp criticism. Autin (2016), for example, argues that it obstructs the dialogue among academic peers and between socio-political parties. Others criticize its reductionism and determinism as it underestimates the human-induced disturbances on different biophysical processes that could have begun diachronically in different parts of the Earth before the 1800s (Certini and Scalenghe, 2011;Glikson, 2013;Lewis and Maslin, 2015;Lightfoot et al., 2013;McClure, 2013;Ruddiman et al., 2015;Smith and Zeder, 2013). Similarly, other authors point out that this approach emphasizes "symptoms" (i.e., markers for anthropogenic effects) instead of causal mechanisms rooted in social decisions and behaviors that operate at different spatio-temporal scales (Balter, 2013;Braje, 2016;Ellis, 2015;Ellis et al., 2018;Malm and Hornborg, 2014;Sawyer, 2015).
Several authors have stressed that the conceptualization of the Anthropocene must explicitly consider the long-term capacity of humans to modify the Earth's natural systems to improve access to natural resources (Boivin et al., 2016; Crumley, 2015; Ellis, 2015; Ellis et al., 2018; Ruddiman et al., 2015; Smith and Zeder, 2013). The core of this perspective is the cultural niche construction or ecosystem engineering process. This refers to alterations in biophysical conditions induced by humans through culturally learned knowledge (cultural inheritance) to enhance societal well-being (fitness). These alterations are eventually inherited by succeeding generations, affecting in turn, positively or negatively, their adaptive fitness (ecological inheritance) (Ellis, 2015; Laland and O'Brien, 2011; Odling-Smee and Laland, 2011). Within this perspective, the Anthropocene results from the long-term interplay between social upscaling (an increasing trend in socio-cultural complexity), cooperative ecosystem engineering (environmental and cultural transformations brought about by cooperative social interactions), and energy substitution (changes in energy sources) (Ellis, 2015; Ellis et al., 2018). Such reasoning has led to placing the Anthropocene onset at least 8000 years ago, when cooperative-production economies (i.e., based on agriculture and livestock rearing) emerged, amplifying the human capacity for engineering environments and releasing greenhouse gases (CO2, CH4) into the atmosphere due to farming (Ellis et al., 2018; Gowdy and Krall, 2013; Ruddiman et al., 2015; Smith and Zeder, 2013). This process of Neolithisation, however, was not transversal and homogeneous in time or space, having different starting points and mechanisms, and not even occurring in some areas of the world (Larson et al., 2014).
Moreover, the emphasis on this process dismisses the ability of hunter-gatherers to induce radical landscape transformations (Lewis and Maslin, 2015;Sullivan et al., 2017).
Interest in unraveling the origin and nature of the Anthropocene has spread beyond the geohistorical sciences, and has also been approached by anthropologists, politicians, artists, philosophers and educators who embrace manifold connotations about the interaction between environment, society and culture (see Autin, 2016; Descola, 2013; Lewis and Maslin, 2015; Matless, 2016; Toivanen et al., 2017). As a consequence, there is a pressing need to prioritize a cross-disciplinary agenda for answering contingent questions such as: How did we get here? Are there regional expressions of the Anthropocene? How do these regional manifestations add up to a global phenomenon? How have idiosyncratic behaviors contributed? Evolutionary studies on the dynamics of coupled social-ecological systems offer meaningful tools to overcome biased paradigms that hamper a common-ground framework. A longue durée approach could illuminate the feedback mechanisms between social development and environment and, in turn, the interactions that molded the human-dominated Earth state through time, and possibly into the future (Braje, 2016; Crumley et al., 2015; Dearing et al., 2015; Sawyer, 2015).
Starting from the premise that the Anthropocene represents a social-cultural-environmental process that "was not made in a day, nor was it created uniformly" (p. 192), here we review the deep-time evolution of human-environment interactions to understand the patterns, trends and mechanisms that have led to the manifestations of the human-dominated epoch in Chile. In particular, through the integration of archeological and paleoenvironmental records, we test the hypothesis that Chilean societies progressively escalated their capacity to shape national and regional biophysical systems as socio-cultural complexity and pressures on natural resources increased over the past 3000 years. Therefore, we predict that the current state of Chilean ecosystems (i.e., the Anthropocene) appears as a process rooted in long-term human-environment interactions. Chile offers a privileged context to develop an integrative and comparative narrative from contrasting socio-ecological trajectories. Firstly, during the last three millennia, most of the territory was extensively occupied and subject to different socio-economic systems that included hunter-gathering, agriculture, silviculture and industrialization (Armesto et al., 2010; Campbell and Quiroz, 2015; Gayo et al., 2015). This opens up the opportunity to evaluate sequential regime shifts in environmental patterns brought about by different capacities in ecosystem engineering. Secondly, its extraordinary ecophysiographic diversity provides an avenue for comparing these dynamics among societies that have evolved under markedly distinct bioclimates and, therefore, for exploring convergences and/or divergences in the evolution of social and ecological systems at either micro or macro regional scales.
Regional settings
Chile is a ribbon of land that extends from 17°30' to 56°30'S across the western edge of South America, between the eastern Pacific coast and the western Andean mountain range (Figure 1). Stretching from the Neotropic to Cape Horn, this territory of more than 756,096 km 2 entails contrasting bioclimates from the Atacama Desert, passing through the Mediterranean region of Central Chile, to the cold temperate sub-Antarctic region in Patagonia. Such environmental, climatic and topographic diversity determines unique landscapes that have been inhabited continuously at least for the last 14000 years (Dillehay, 2000;Jackson et al., 2007;Latorre et al., 2013;Nuñez et al., 2016;Salazar et al., 2017).
The abundance of natural resources, distributed along a latitudinal gradient, contributes to regionally differentiated economic activities and modern social-environmental interactions. Therefore, based on the strong north-south ecophysiographic gradient and spatial variations in resource-based economies, Chile can be divided into northern (18°-28°S), central (28°-42°S) and austral (42°-56°30'S) regions (Figure 1). Although there is evidence for ancient human-induced transformations (i.e., localized fires set by hunter-gatherers) over the past 3000 years (Holz et al., 2016; Méndez et al., 2016), Austral Chile is considered a near-pristine region up to 1750 AD, when European settlers burned and opened vast areas of densely vegetated Patagonia (Moreno et al., 2018; Simi et al., 2017). Leaving aside the fact that humanized landscapes are a recent occurrence, this region is omitted from our analyses because discriminating between natural and anthropogenic agents causing such transformations is challenging, as typically both factors are involved (Holz et al., 2016; Méndez et al., 2016; Moreno et al., 2018). Thus, this article reviews the evolution of human-environment interactions in northern and central Chile, regions that together contain nearly 98% of the national population (INE, 2017).
The northern region drapes across the Atacama Desert -the world's driest desert- (Figure 1). This area hosts unusual minerals (sodium nitrates, perchlorates) and abundant deposits of valuable ores such as copper, gold, silver, iron, borax and lithium (Clarke, 2006). This wealthy, mining-based region has ranked among the world's top mineral producers since the 19th century. Sustained by the permanent upwelling of nutrient-rich cold waters of the Humboldt Current System, this region was also positioned as a leading anchovy and sardine exporter after 1950 AD, but these fisheries rapidly collapsed due to over-exploitation (Yáñez et al., 2017). Between 18° and 25°S, precipitation is practically nil at the coast and in the inland zone below 2500 masl (Houston, 2006), and vast areas are devoid of macroscopic life along the extreme hyperarid core of the Atacama. At elevations above 2500 masl, precipitation occurs during the austral summer, fed by the South American Summer Monsoon (Garreaud, 2009), promoting less harsh conditions over the high-elevation desert and the Altiplano that sustain montane grasslands and farming-pastoral households up to 4000 masl (Arroyo et al., 1988; Santoro and Núñez, 1987). South of 25°S, precipitation amounts increase progressively, associated with the frontal systems of the southern westerlies (Garreaud, 2009). Flowering Desert phenomena occur along the unvegetated coastal landscape during episodes of unusually high rainfall. Water availability depends heavily on high-elevation rainfall (>3500 masl), as these resources feed ephemeral/perennial stream flows and groundwater that discharges into exoreic and endorheic basins (Houston, 2006) (Figure 1). Main urban centers have emerged along the coastline, where freshwater is piped long distances from inland aquifers or supplied by seawater desalination.
Today, industrial mining activities relying on lixiviation, together with large-scale agriculture and urban uses, exert strong pressure on freshwater availability, causing a significant hydrological deficit (Aitken et al., 2016; Houston, 2007). At the same time, massive mining operations (i.e., Chuquicamata, the biggest open-pit mine on Earth) have brought significant heavy metal and metalloid pollution (Gidhagen et al., 2002; Huneeus et al., 2006; Schwanck et al., 2016; Sträter et al., 2010).
A shift towards mesic conditions at 28°S, due to a more recurrent influence of the southern westerly winds, marks the transition to central Chile (28°-42°S, Figure 1). Over this region, the progressive southward increase in the frequency and intensity of winter precipitation leads to a gradual change from semi-arid to mild-humid temperate conditions (Aceituno, 1988). Therefore, the structure and composition of ecosystems varies significantly across this north-south moisture gradient, encompassing xerophytic-thorny shrubland in the northernmost portion, sclerophyllous woodlands at ∼32°-35°S, and winter-deciduous and evergreen forests in the south (Armesto et al., 2007; Veblen, 2007). All of these ecosystems form the "Chilean winter rainfall-Valdivian forests" biodiversity hotspot, which harbors a richly endemic flora and fauna (Arroyo et al., 2004). Population is densely concentrated around the rivers and lakes that dissect the Longitudinal Valley -a narrow plain between the Andes and the Coastal range- (Figure 1). Important cities have expanded in the coastal zone, thriving on a service economy and industrial fisheries that have dramatically reduced stocks of the Chilean jack mackerel among other pelagic taxa. Large-scale metallic mining operations (i.e., the El Teniente copper mine) have been established particularly in the northern area of central Chile (32°-35°S). Because of fertile fluvioglacial and volcanic soils, together with abundant water supplies derived from Andean snow reserves and winter rainfall (Huygens et al., 2011; Muñoz et al., 2007), the Longitudinal Valley of central Chile has become the heart of agricultural, forestry and livestock production for national and international markets. Indeed, this region is particularly known for the production of world-renowned wines as well as introduced crops (i.e., berries, cherries, plums, kiwis, olives, apples, walnuts).
These activities, coupled with fast urban expansion, industrial development and recurrent anthropogenic fires, have seriously degraded water quality and storage, marine/terrestrial biodiversity, soils, biogeochemical cycles and air quality (Barra et al., 2005; Barraza et al., 2017; Casanova et al., 2013; Donoso et al., 1999; Gallardo et al., 2018; Lara et al., 2009; Molina et al., 2017; Schulz et al., 2010).
Data and Methods
To reconstruct the long-term interaction between pre-Columbian societal behavior and environmental alterations, we delineated two case studies from northern and central Chile regions. Each case describes dynamics at meso and micro spatial scales, but, when taken together, they define trends at a macro-regional scale -i.e. equivalent to a "national" trajectory in what is nowadays the Chilean territory.
We surveyed published paleoenvironmental records spanning the last 3000 years for both micro-regions. Additional data from adjacent territories (i.e. Peru, Bolivia, Patagonia, Antarctica) were also considered to enrich the discussion. Imprints of humanized landscapes in such archives have been critically assessed to make sure that they reflect disturbances/anomalies in any environmental variable that cannot be explained by natural factors, and which were explicitly differentiated from intrinsic variations in the Earth system by the original author. Hence, we provide a synoptic overview of human-driven impacts on regional ecological patterns by indistinctly considering and combining information derived from available pollen, diatom, macrofossil, charcoal and/or tree-ring records (Table S1, Supplemental Material 1). Also, geochemical data from lacustrine, ice and/or peat-bog cores were examined to identify imprints on biophysical processes (e.g. pollution, erosion, deforestation).
Archaeometric data and other archeological evidence for pre-Columbian resource exploitation, settlement patterns and technological production/innovation were considered to grasp the engineering capacity and direction of human-induced changes in ecological and biophysical patterns (Table S1). Because demography is strongly linked to socio-cultural complexity (Henrich, 2004; Powell et al., 2009; Turchin et al., 2018) and energy consumption/production (Freeman et al., 2018a, 2018b), and therefore represents one of the most important factors behind the impacts of human activities on the landscape (Ellis et al., 2013; Malm and Hornborg, 2014), we reconstructed the paleodemographic history at micro-regional scales over the last three millennia. In particular, we estimated summed probability distributions (SPDs) of radiocarbon and thermoluminescence dates from published archaeological sites in the northern (Gayo et al., 2015; Troncoso and Pavlovic, 2013) and central (Campbell and Quiroz, 2015; Falabella et al., 2015, 2007; Sanchez, 2001; Sanhueza et al., 2006, 2003; Troncoso and Pavlovic, 2013) regions (see Text S1 in Supplemental Material 1, Datasets 1-2 in Supplemental Data 1). This method uses "dates as data" and assumes that changes in the accumulation of chronological determinations on archaeological remains reflect variations in the intensity of human activities through time, as a proxy of population levels (Chaput and Gajewski, 2016; Rick, 1987). This approach has gained acceptance over other estimates of past demographic change because it yields a proxy that is unaffected by subsistence strategies, technologies, the record of specific cultural artifacts (i.e., lithic versus ceramic) and/or bioarchaeological remains. Further details of this method are presented in Text S1, Supplemental Material 1.
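The "dates as data" construction can be sketched numerically. This is a minimal illustration, not the authors' pipeline: a real SPD would pass each radiocarbon date through a Southern Hemisphere calibration curve, whereas here a Gaussian density simply stands in for each calibrated date, and the grid spacing and example ages are hypothetical:

```python
import numpy as np

def summed_probability(ages, errors, grid):
    """Summed probability distribution of dated events ('dates as data').

    ages, errors : age point estimates and 1-sigma errors (cal yr BP);
                   a Gaussian stands in for a full calibration here.
    grid         : uniformly spaced calendar axis (cal yr BP).
    """
    grid = np.asarray(grid, dtype=float)
    dx = grid[1] - grid[0]
    spd = np.zeros_like(grid)
    for mu, sigma in zip(ages, errors):
        pdf = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
        spd += pdf / (pdf.sum() * dx)  # each date contributes unit probability mass
    return spd

# Hypothetical example: 30 dates clustered at ~800 cal yr BP vs. 10 at ~2500 cal yr BP
grid = np.arange(0.0, 3501.0, 10.0)
spd = summed_probability([800] * 30 + [2500] * 10, [50] * 40, grid)
```

Peaks in `spd` are then read as periods of more intense human activity, which is exactly the assumption the text attributes to the SPD approach.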
The impact of past anthropogenic changes in the structure of Chilean environments was evaluated statistically by implementing the Rodionov (2004) method for detecting regime shifts in long-term data (Text S2 in Supplemental Material 1, Supplemental Data 1). For the purpose of this paper, regime shifts are substantial and persistent changes caused by human activities in the mean state (e.g. trend) of any biophysical variable (Scheffer et al., 2001). Thus, our regime shift detection analyses were restricted to time-series derived from qualitative and temporally-continuous proxy records that capture long-term environmental modifications unequivocally caused by anthropogenic activities. Few geohistorical records meet these conditions. Such limitation implies that the regime shifts reconstructed here represent a fraction of potential human-induced changes in the dynamics of regional ecosystems, and, in turn, these should be taken at face value for available data at the moment.
In the case of northern Chile, we explored regime shifts in air quality brought about by metallurgical activities (Text S2). Specifically, we examined long-term variations in the emissions of heavy metals in a time-series that concatenates crustal-normalized and background flux-ratios of different metalloids accumulated in paleopollution proxy records over the past 3150 cal yrs BP (Table S4 in Supplemental Material 1, Dataset 3). Because wildfires caused by natural agents are unusual in central Chile (Aravena et al., 2003; Gonzalez et al., 2011), selected microcharcoal and tree-ring records (Table S4) allow us to examine the magnitude of human-induced changes in regional fire activity (Dataset 4). We also evaluated variations in the emissions of black carbon (i.e., spheroidal carbonaceous particles) from industrial activities in the Santiago basin (33°S) throughout the period 1852 AD-2002 AD (Dataset 5). Spearman's correlation coefficients were calculated to evaluate the relationship between a given change in the mean state of metalloid pollution/fire regime and reconstructed demographic levels (Text S2, Supplemental Material 1).
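The Rodionov (2004) sequential t-test idea used for regime shift detection can be sketched as follows. This is a simplified illustration, not the exact STARS implementation applied in the paper: it uses a single variance scale taken from the first window, omits Huber weighting and red-noise prewhitening, and the parameter defaults are hypothetical:

```python
import numpy as np

def detect_regime_shifts(x, l=10, t_crit=2.1):
    """Simplified STARS-style detector (after Rodionov, 2004).

    Flags index i as a regime shift when x[i] departs from the current
    regime mean by more than diff = t_crit * sqrt(2 * var / l) and the
    mean of the next l points confirms the departure.
    """
    x = np.asarray(x, dtype=float)
    var = np.var(x[: 2 * l])               # simplification: one global variance scale
    diff = t_crit * np.sqrt(2.0 * var / l)
    shifts = []
    regime_start, regime_mean = 0, x[:l].mean()
    i = l
    while i < len(x):
        if abs(x[i] - regime_mean) > diff:
            candidate = x[i : i + l]
            if abs(candidate.mean() - regime_mean) > diff:
                shifts.append(i)           # confirmed change in the mean state
                regime_start, regime_mean = i, candidate.mean()
                i += l
                continue
        regime_mean = x[regime_start : i + 1].mean()  # keep updating the running mean
        i += 1
    return shifts
```

Applied to a proxy time-series such as the concatenated metalloid flux-ratios, the returned indices mark persistent changes in the mean state, i.e., the "regime shifts" defined in the text.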
Northern Chile
Since ∼3500 cal yrs BP, population growth accelerated to levels unprecedented in pre-Columbian times (Gayo et al., 2015; Williams et al., 2008). Still, our paleodemographic reconstruction reveals significant fluctuations in population levels at centennial to millennial scales (Figure 2a). Around 3000 cal yrs BP, northern Chilean populations began to expand, rising slowly but steadily up to 1700 cal yrs BP (Figure 2a). This first paleodemographic phase occurred during a period in which increased moisture availability was interrupted by a centennial-scale dry pulse at ∼1950 cal yrs BP (Sáez et al., 2016). Trends in SPDs between 1700 and 600 cal yrs BP delineate a second population event. The intensity of human activities increased gradually between 1700 and 1300 cal yrs BP, but an abrupt, short-lived decline is apparent at 1300-1100 cal yrs BP (Figure 2a). By 1050 cal yrs BP populations had recovered rapidly, peaking between 800 and 600 cal yrs BP, corresponding to the widespread positive hydroclimate anomaly detected throughout the Medieval Climate Anomaly (MCA) (Gayo et al., 2012; Latorre et al., 2006, 2002; Maldonado et al., 2005; Morales et al., 2012; Mujica et al., 2014; Sáez et al., 2016). A third population event is defined by the dramatic decrease in demographic levels since 600 cal yrs BP, which is coeval with the onset of drier conditions during the so-called Little Ice Age (LIA) (Kuentz, 2012; Latorre et al., 2003). Brief wetter interludes are evident during this phase (Christie et al., 2009; Kuentz, 2012; Morales et al., 2012; Mujica et al., 2014), but this population contraction did not reverse, and it extended beyond the European incursion into northern Chile in 1533 AD (de Vivar, 1979). We suspect, however, that this decreasing trend is partially related to research biases in the accumulation of chronometric data (see Text S1, Supplemental Material 1).
Historical demographic data (not shown in Figure 2) indicate that the population decline reversed by the 17th century; indeed, northern Chilean populations experienced positive growth rates by 1650 AD. Even so, rural native populations steadily decreased after 1850 AD, mainly because of the long-term drying trend in the highlands and the socio-economic pressures imposed by the nitrate industry (Lima et al., 2016). As the saltpeter market collapsed by 1940 AD, regional demographic levels fell markedly. However, regional population growth accelerated during the second half of the 20th century (Table S5, CELADE, 2005;McCaa, 1972).
Neolithisation spread over northern Chile shortly after positive but variable hydrological conditions persisted from ∼3500 cal yrs BP (Núñez et al., 2010;Núñez and Santoro, 2011;Sinclaire, 2004). Coastal populations from northern Chile remained practically immune to this process, maintaining a marine foraging subsistence from the late Pleistocene up to the Spanish colonization at ∼1533 AD (Andrade et al., 2014;Pestle et al., 2015;Roberts et al., 2013;Santoro et al., 2015, 2017b). The one exception is the case of populations from the fertile coast of northernmost Chile (18°S-19°S) that complemented fishing and hunter-gathering activities with small-scale agriculture developed at the mouths of perennial rivers that discharge into the Pacific Ocean (Diaz-Zorita et al., 2016;Núñez and Santoro, 2011). Overall, maritime communities from northern Chile concentrated around palustrine areas and/or sheltered bays, and discrete settlement complexes with architecture were founded on marine terraces or coastal plains since 1950 cal yrs BP (Urbina et al., 2011) (Figure 2b). Targeted intertidal and subtidal resources such as gastropods (Concholepas concholepas), mollusks (Mytilidae), echinoderms (Loxechinus albus), fish (Genypterus sp., Trachurus symmetricus) and marine mammals (Otaria flavescens, Arctocephalus australis) were intensively exploited using a diverse array of toolkits and strategies that were continuously improved over time (Flores et al., 2016;Olguín et al., 2015;Santoro et al., 2017b). Prolonged and intense foraging of particular species -i.e. keystone species such as C. concholepas- might have affected the long-term structure of shore ecosystems (Rivadeneira et al., 2010;Santoro et al., 2017b). The recurrent and continuous disposal of marine fauna remains over the last 9000 cal yrs BP has resulted in the accumulation of conspicuous archeological shell-middens along 1300 km of the northern Chilean coastline.
Extending over several hectares and rising more than four meters high (Santoro et al., 2005), these artificial landforms transformed the coastal geomorphology through the creation of new flat, alkaline and nutrient-rich sedimentary fills (i.e. anthropogenic soils) along the rugged and poorly developed coastline (see profile A in Figure 1). Indeed, these anthropogenic surfaces have provided substrates for successive littoral settlements even in the present day (Santoro et al., 2005;Urbina et al., 2011).
Gathering, farming, pastoralism and technological innovations became important strategies that sustained the demographic expansion of inland populations during the first population event (3500-1700 cal yrs BP, Figure 2a-c). This is particularly the case for populations from the northernmost sectors (20°-24°S) that settled with domestic architecture along more productive environments such as wetlands, ravines or high Andean peat bogs since 3500 cal yrs BP (Figure 2b, Adán and Urbina, 2007;Adan et al., 2013;Agüero, 2005;Agüero and Uribe, 2011;Urbina et al., 2012). Aside from incipient high-elevation agropastoral settlements (>2300 masl), population aggregations also occurred in the hyperarid Longitudinal Valley, where amplified hydrological budgets created fertile oases in an area now unpopulated and perceived as hostile to human life due to the scarcity of local resources (Figure 2b). Indeed, some agrarian settlements were founded by 3500 cal yrs BP in former wetlands/ravines that flourished along the Pampa del Tamarugal basin (20°-22°S) (Figure 2b, Adan et al., 2013;Gayo et al., 2012;Urbina et al., 2012).
Pottery and metallurgy were common in most of these settlements, leaving prominent traces in the environment (Núñez et al., 2010;Núñez and Santoro, 2011;Troncoso et al., 2016;Uribe and Vidal, 2012). Ceramic production started from 3200 cal yrs BP onwards (Figure 2c), and vast areas of northern Chile are still covered by abundant pre-Columbian pottery fragments (Uribe, 2006b;Uribe, 2009;Uribe and Ayala, 2004). In practice, this industry introduced a novel non-degradable material that, through chemical interchange with organic domestic waste (i.e. CaCO3, fatty acids, proteins), is capable of producing anthropogenic soils in domestic archeological sites of the Atacama Desert (Muñoz, 2004). Meanwhile, there are considerable records of metallurgic slags, copper artifacts, extraction tools and ore fragments in archeological sites dated after 3125 cal yrs BP (Figueroa et al., 2015;Nuñez et al., 2017). This implies that metalworking has been a significant human activity in northern Chile since shortly after the Neolithisation.
Although the archeological evidence for, and impacts of, smelting-based metallurgy prior to 3300 cal yrs BP in the Andes are debated (see Eichler et al., 2017), a peat-bog record from Patagonia (53°S) suggests that increased copper emissions from early metalworking in this region could have been sporadically transmitted into southern Chile during southward wind anomalies around ∼3500 cal yrs BP (Figure 2e, De Vleeschouwer et al., 2014).
From 2400 cal yrs BP, the first paleodemographic event is characterized by a trend toward increased complexity in both inland settlement patterns and production systems (Figure 2b-c). Sedentary settlements with architecture increased in number, extension and complexity. This applies especially to the Pampa del Tamarugal agricultural villages and some high Andean agropastoral settlements (i.e. Tulor, Tulan), in which architecture involved the delimitation of public, habitational and cultivation areas. Architecture became more sophisticated as stones, massive trunks, adobe and perishable vegetable materials were widely used to build residential and public structures (Adán and Urbina, 2007;Núñez et al., 2010;Núñez and Santoro, 2011;Uribe, 2006a). Such demand for wood as a construction material and fuel probably exerted an important pressure on the few native woody species available across this region (i.e. Prosopis tamarugo, Polylepis spp., Escallonia angustifolia, Schinus molle, Myrica pavonis).
Exploitation of wild camelids and small-scale husbandry of domestic breeds (llamas and alpacas) thrived in the highlands (>3000 masl) after 2400 cal yrs BP (Figure 2c, Núñez and Grosjean, 2003;Núñez et al., 2010;Núñez and Santoro, 2011). The Laguna Seca peat-bog record (18°S, 4000 masl) indicates that such grazing activities resulted in a marked change in the long-term structure of peat-bogs, as non-palatable species (i.e. Poaceae) increased to the detriment of forage herbaceous taxa (Baied and Wheeler, 1993). At the same time, several Andean and Mesoamerican crops were introduced into riparian/wetland ecosystems, including maize, Chenopodium quinoa, Cucurbita spp., Lagenaria sp., Oxalis tuberosa, Canna edulis, Capsicum spp., Phaseolus spp., Solanum spp., Manihot spp., Amaranthus spp. and Ipomoea spp., among others (García et al., 2014;Núñez and Grosjean, 2003;Vidal-Elgueta et al., 2019). The exploitation of wild plants, however, did not cease, and the use of byproducts from native species intensified systematically (Núñez and Santoro, 2011). Exotic crops were cultivated in fields established on extensively worked natural silty-flat terrains that were cleared of clasts and artificially irrigated by complex irrigation networks. For these purposes, deliberate interventions of river courses and spring outcroppings became a recurrent practice, involving the construction of superficial irrigation channels and dams (Núñez and Santoro, 2011). In the Pampa del Tamarugal, this "Green Revolution", which resembles the "Arab agricultural revolution" defined by Watson (1974), implied turning the hyperarid landscape into a productive arable environment (Gayo et al., 2012;Rivera and Dodd, 2013). Conservative estimates of the extension of irrigated crop fields associated with the Pircas-Caserones, Guatacondo and Ramaditas villages indicate that these covered an area of at least 580 ha.
Certainly, the cultivation systems that prospered over much of northern Chile since 2400 cal yrs BP attest to an unprecedented land-use change over vast desert areas covered by hyper-saline and organic-poor soils. Crop production in nutrient-deficient substrates was achieved through mesquite tree agroforestry (Figure 2c) along furrowed cultivation fields, used to fertilize via nitrogen fixation, retain soil moisture, and prevent salinity, erosion and evaporation (Beresford-Jones et al., 2009;McRostie, 2014). A well-documented case of human management of alien tree species shows that agroforestry practices were accompanied by the intentional introduction of Prosopis-Algarrobia species at least by 2000 cal yrs BP (Figure 2c) from subtropical eastern South America (McRostie et al., 2017). Due to their invasive character and multi-purpose economic value, these exotic trees (P. alba, P. flexuosa) rapidly dispersed and naturalized during pre-Columbian times, becoming an important element in a diverse array of modern ecosystems of northern Chile (Martínez, 1998).
Smelting furnaces preserved in the Ramaditas village (21°S) indicate that sophisticated technology for native copper processing began to be developed in the low-elevation Atacama Desert at ∼2000 cal yrs BP. Here, copper-alloy production was achieved by combusting charcoal at temperatures above 1100°C within ancestral wind-sourced furnaces (Graffam et al., 1996). A prolonged rise in copper pollutants recorded in the Illimani ice-core (16°S, Eichler et al., 2017) and the Patagonia peat-bog record (53°S, De Vleeschouwer et al., 2014) suggests that metalworking in this region actively contributed to the anthropogenic air pollution detected in South America at 2650-1750 cal yrs BP (Figure 2d-e). Heavy metal pollution levels remained low and relatively stable (mean = 0.03 ± 0.02) throughout much of the first population event, although slightly higher values are recorded at 2425-2675 cal yrs BP and 2025-2075 cal yrs BP (Figure 3a). Even so, we verify that pollution levels during pre-Columbian times (425-3500 cal yrs BP) correlate positively, but moderately, with population levels (Figure 3b, Spearman's rho = 0.55, p < 0.05).
During the second population event (1700-600 cal yrs BP, Figure 2a) there was a staggering increase in the food demand imposed by the escalating growth in demographic levels, concentrated along even more complex settlements engaged in intensive agricultural production (Figure 2a-c, Castro et al., 2016;Muñoz et al., 2016;Núñez et al., 2010). Overall, wetland/riparian ecosystems continued to be converted into farmlands. Agricultural terracing began to be widely practiced in non-arable steeper areas of the highlands (Núñez et al., 2010;Santoro et al., 2004). These earthworks implied skillful landscape engineering, including soil clearing and deepening, slope infilling, building of stone retention walls, manipulation of fertile sediments, and control of natural fresh-water resources through sophisticated hydraulic systems such as stone-built distribution and transfer channels (Santoro et al., 2004;Uribe, 2006a). Nevertheless, during the interval 1300-1100 cal yrs BP irrigated agriculture ceased briefly in the low-elevation areas, and these populations migrated to higher elevations (>2400 masl) to establish new permanent settlements (Castro et al., 2016;Santoro et al., 2017a;Zori and Brant, 2012). This hyperarid landscape, however, was transformed even more intensely as positive hydrological budgets returned between 1050 and 680 cal yrs BP (Figure 2a). Indeed, the agricultural land area over the Pampa del Tamarugal expanded through terraced and flat maize crops, several kilometers of perched and stone-lined irrigation canals were built, Prosopis agroforestry peaked, and new exotic species were introduced (Garcia and Uribe, 2012;Gayo et al., 2012;McRostie et al., 2017). Morphological and genetic evidence indicates that the crop yield of maize was increased through artificial selection of regional varieties to produce large cobs and kernels (Vidal-Elgueta et al., 2019).
Nitrogen isotope ratios from local human remains suggest that this process was apparently accompanied by the formation of anthropogenic soils through incipient sediment fertilization with camelid manure and/or seabird guano (Figure 2c, Santana-Sagredo et al., 2015).
The production system during the second population event was further enhanced by medium-scale herding of domesticated camelids at elevations above 2400 masl, which sustained the traffic of surplus production and other precious goods (e.g. copper and silver ores) across long-distance trans-Andean routes (Núñez et al., 2010). A peak in the concentration of charcoal particles in the Cosapilla peat-bog record (17°47'S, 4380 masl) by 1500 cal yrs BP, suggests that camelid livestock production in the highlands likely involved the management of grazing pasture by setting localized burnings of the herbaceous cover (Domic et al., 2018). However, the long-term reduction in charcoal accumulation from 1400 cal yrs BP onwards indicates that such practice was rapidly abandoned and replaced by the artificial irrigation of peatlands (Domic et al., 2018).
Starting at 1500 cal yrs BP, the intensity of metallurgic activities experienced a progressive increase and improvement (Figueroa et al., 2015) (Figure 2c). Metallurgy mainly focused on copper and tin-bronze, yet silver and a rare ternary bronze (Cu-As-Ni) started to be produced by 1300 cal yrs BP (Figueroa et al., 2015;Maldonado et al., 2013). Complex wind-driven "huayras" or smelting "perpendicular" furnaces were adopted since 700 cal yrs BP (Figueroa et al., 2018;Zori, 2018). Charcoal records from the Sajama ice-core and the Cosapilla peat-bog indicate a concurrent rise in charcoal accumulation since 1180 cal yrs BP, resulting most likely from the intensive combustion of wood-charcoal during the ore smelting process (Domic et al., 2018;Reese et al., 2013).
Widespread mining industries in inland and coastal areas of northernmost Chile during the interval 1500-700 cal yrs BP led to increased metalloid emissions recorded even in distant Patagonian archives (Figure 2e; De Vleeschouwer et al., 2014). A moderate increase in the mean pollution index since 1375 cal yrs BP (mean = 0.11 ± 0.04, Figure 3a) defines a significant regime shift (RSI = -0.28, p-value < 0.05) in atmospheric pollution. In fact, most existing paleopollution records concur that metal concentrations never returned to natural background levels after this period, but instead fluctuated around these anthropogenic levels (Figure 2d-g). Central Andean lacustrine records (11°-19°30'S) show peaks in copper excess as well as in [Pb], [Ag] and Hg fluxes from ∼900 to 700 cal yrs BP (Figure 2f-g) as silver smelting thrived regionally (Abbott and Wolfe, 2003;Cooke et al., 2007, 2008, 2011;Guedron et al., 2019). The metal record from the Illimani ice-core displays increased enrichment factors for lead and copper during the interval 1500-1000 cal yrs BP (Figure 2d, Eichler et al., 2015, 2017), whereas modest increases in the EFs of Cu and Ag are detected in the Quelccaya ice-core at 1150-500 cal yrs BP (Uglietti et al., 2015).
The third population event coincides with a pluvial multidecadal period (Morales et al., 2012) and corresponds to intensified resource production through the Inca Andean territorial expansion (1450-1520 AD), followed by the Spanish colonization (1533 AD) and postcolonial industrial growth (Figure 2c). Aside from reorganizing the socio-political structure, the Inca regime intensified irrigated agriculture over the region to produce crop surplus, either for paying tribute to the empire or for provisioning armies and workforces involved in mining and construction industries (Núñez et al., 2010;Salazar et al., 2013;Santoro et al., 2010;Troncoso et al., 2016;Uribe and Sanchez, 2016;Vidal-Elgueta et al., 2019;Zori et al., 2017). Since 1450 AD, peoples from the low-elevation Atacama Desert engaged in selecting highly productive maize varieties (e.g. with large kernel sizes), which represent the nearest predecessors of the traditional landraces currently cropped in the Atacama (Vidal-Elgueta et al., 2019). This production system also supplied prized domestic camelids used on the Inca Road, the first pan-South American road network (∼23000 km of roads, bridges and waystations) that connected different subcontinental ecoregions including the Pacific coast, Altiplano, Atacama Desert, central Chile and the upper Amazon basin (Berenguer et al., 2007). Indeed, this socio-economic trade network was maintained through large-scale camelid herding in the highlands, as shown by the sharp increase in the accumulation of organic matter (i.e. animal excretions) in Cosapilla sediments dated at ∼1400 AD (Domic et al., 2018).
By 1533 AD, the Spanish colonization introduced several Old World crops such as alfalfa, wheat, orchard fruits, olives and grape varieties (Figure 2c). Hydraulic innovations (e.g. underground irrigation systems, watermills) were also imported, increasing crop yields nearly fivefold (Núñez et al., 2010). Similarly, domestic livestock (cattle, goats, horses, donkeys, mules) spread and began to substitute native grazing herbivores (i.e. camelids) in certain activities. Data from the Cosapilla records do not show evidence of intensified land-use of peat-bogs during Colonial times, but rather a partial change in composition: this pollen record shows that exotic taxa (Trifolium spp.) successfully established in the peatland since 1550 AD, apparently facilitated by passive dispersal and the overgrazing pressures exerted either by native or by introduced livestock (Domic et al., 2018).
Metallurgy expanded over the last 600 cal yrs BP, spurred by the growing interest in exploiting regional silver, copper and gold reserves as well as other non-metallic ores. The Inca empire improved silver production through lead cupellation and by incorporating sophisticated furnaces to process Cu and Au alloys (Cantarutti, 2013;Figueroa et al., 2015;Salazar et al., 2013;Zori et al., 2013;Zori and Tropper, 2010). Extraction and refining of highly toxic cinnabar ores (HgS) was apparently carried out in the region under Inca rule (Arriaza et al., 2018). Commercial mining developed rapidly after the Spanish conquest by adopting Andean wind-sourced furnaces; both Hg and Pb amalgamations were routinely used to recover silver since 1600 AD (Figure 2b). The impact of pre-industrial mercury extraction is manifested in the bioarcheological record, which evinces high Hg levels in colonial mummies and an increased incidence of pneumoconiosis in male bodies (Munizaga et al., 1975). During Colonial times (1525 AD-1818 AD), ore extraction and processing were optimized by the introduction of explosives and large-scale mechanical equipment, as well as advanced furnaces fed by bellows (Gavira-Marquez, 2005;Núñez et al., 2010). Industrialization of mining activities, however, did not occur until 1880 AD, when sodium nitrate beds (saltpeter) started to be exploited with heavy industrial machinery imported from Great Britain. Entering the 20th century, the saltpeter boom declined and industrialized mining began to focus mainly on Cu production in mine complexes operated since pre-Columbian times (Núñez, 2012). Six centuries of mining activities exacerbated the degradation of regional ecosystems. The long-term demand for biomass fuel endangered endemic plants with high combustion properties, such as the cushion-resinous Azorella compacta and several woody species (e.g. P. tamarugo, Polylepis tarapacana) (Briones, 1985;Núñez and Grosjean, 2003;Rundel and Palma, 2000). Although A. compacta and P. tarapacana have experienced some recovery during the last decades (Rundel and Palma, 2000), natural forests of P. tamarugo were almost eradicated (Núñez and Grosjean, 2003). Moreover, natural and forested Tamarugo populations of the Pampa del Tamarugal basin are still threatened by the sustained decline in phreatic levels imposed by intensive groundwater extraction (Chavez et al., 2016;Decuyper et al., 2016). Mining operations, particularly the saltpeter industry, have left a legacy of profound landscape modification along northernmost Chile, with several ghost towns, abandoned earthworks and industrial machinery, massive tailings, desiccated wetlands and extensive blasted/perforated surfaces (Aldunate, 1985;Lorca, 2016).
Paleopollution records indicate that the intensification of metallurgy during the past 600 years led to a progressive regional rise in the emission of heavy metals since 1500 AD (Figure 2d-g) (Cooke et al., 2007, 2008;De Vleeschouwer et al., 2014;Eichler et al., 2015, 2017;Guedron et al., 2019;Hong et al., 2004;Schwanck et al., 2016;Uglietti et al., 2015). By this period (1575 AD-2005 AD) the correlation between paleodemographic and pollution levels increased considerably compared to pre-Inca times (Figure 3b; Spearman's rho = 0.67, p < 0.05). Our sequential t-test analysis for regime shifts reveals two long-term interludes of increased metalloid pollution throughout the period extending from the Inca expansion to the early 21st century (Figure 3a). After 1375 AD the atmospheric trace metal composition experienced an important transition (mean = 0.28 ± 0.05, RSI = -1.1, p-value < 0.05), characterized by high and variable paleopollution index values (Figure 3a). Meanwhile, the rapid increase in the emissions of most heavy metals from 1925 AD onwards (Figure 2d-g; Cooke et al., 2007, 2008;De Vleeschouwer et al., 2014;Eichler et al., 2015, 2017;Guedron et al., 2019;Hong et al., 2004;Schwanck et al., 2016;Uglietti et al., 2015) defines a major regime shift in air quality (mean = 0.57 ± 0.06; RSI = -3.5, p-value < 0.05; Figure 3a). Records from Antarctica and the Central Andes show that even though several industrial activities in northern Chile were responsible for significant arsenic and lead pollution during much of the 20th century, this was rapidly reversed as environmental regulations were implemented (Cooke et al., 2011;Eichler et al., 2015;Schwanck et al., 2016).
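The regime shifts reported in this section come from a sequential t-test analysis of the pollution index (the RSI values follow Rodionov's STARS approach). A much-simplified stand-in, flagging points where the means of two adjacent windows differ significantly, conveys the core idea; the full algorithm additionally maintains running regime means and a cumulative regime shift index. Window length, alpha and the synthetic series below are assumptions for demonstration:

```python
# Simplified illustration of detecting shifts in the mean state of a
# series, in the spirit of the sequential t-test (STARS) method.
import numpy as np
from scipy.stats import ttest_ind

def mean_shifts(series, window=10, alpha=0.05):
    """Return indices i where the means of the adjacent windows
    [i-window, i) and [i, i+window) differ significantly."""
    hits = []
    for i in range(window, len(series) - window):
        t, p = ttest_ind(series[i - window:i], series[i:i + window])
        if p < alpha:
            hits.append(i)
    return hits

# Synthetic series with a step change in the mean partway through.
rng = np.random.default_rng(1)
x = np.concatenate([np.full(30, 0.1), np.full(30, 0.5)])
x += rng.normal(0, 0.05, x.size)
shifts = mean_shifts(x)
print(shifts)  # indices where a mean shift is detected
```

A genuine shift produces a cluster of significant indices around the true change point; isolated hits elsewhere are the false positives expected at the chosen alpha, which is why STARS also requires the new mean to persist before declaring a regime.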
Central Chile
There is general consensus that the onset of the modern climate in central Chile occurred around 3200 cal yrs BP (Frugone-Alvarez et al., 2017;Jenny et al., 2002b, 2003;Villa-Martínez et al., 2003;Villagrán and Varela, 1990). Under this paleoclimatic scenario, regional pre-Columbian societies experienced significant changes in subsistence strategies, social organization, technologies and occupation patterns. Indeed, these populations underwent continuous demographic growth with a progressive incorporation of agriculture (Figure 4a). The SPD curve reveals four distinct paleodemographic events that track the intensity of human activities over the region (Figure 4a).
The first event (3500-2400 cal yrs BP) is characterized by low population levels. Despite discrepancies among paleoclimate reconstructions for this region, existing records concur that cold but highly variable humid conditions prevailed during this population phase (de Jong et al., 2013;Jenny et al., 2002b;Maldonado and Villagrán, 2002, 2006;Martel-Cea et al., 2016;Villa-Martínez et al., 2003). Since 2350 cal yrs BP human activities increased steadily, reaching a stationary phase between 1600 and 1250 cal yrs BP (Figure 4a). This second population event is concurrent with a period of augmented, but highly variable, storminess (Abarzúa et al., 2010;Jenny et al., 2002a;Maldonado and Villagrán, 2002, 2006;Martel-Cea et al., 2016;Villa-Martínez and Villagrán, 1997;Villagrán and Varela, 1990). Population levels rose rapidly during the third paleodemographic phase (1250-500 cal yrs BP), peaking at 1000-500 cal yrs BP during the MCA (Figure 4a), a period characterized by slightly drier and warmer conditions (de Jong et al., 2013;Garreaud et al., 2017;Jenny et al., 2002a;Maldonado and Villagrán, 2002;Torres et al., 2008;von Gunten et al., 2009b). As colder and wetter conditions returned over much of central Chile during the LIA (Carrevedo et al., 2015;Christie et al., 2011;Garreaud et al., 2017;Jenny et al., 2002b;Le Quesne et al., 2009;Villa-Martínez et al., 2004;von Gunten et al., 2009b), population levels decreased during the fourth paleodemographic event (Figure 4a). This population crash coincides with the expansion of the Inca Empire up to 35°S since 500 cal yrs BP, and the subsequent European colonization of the region by the mid-16th century (Figure 4b, Uribe and Sanchez, 2016). Again, we believe that this pattern in the intensity of human activities arises in part from research biases in the accumulation of chronometric data (see Text S1, Supplemental Material 1).
Ethnohistorical evidence relates either the Inca or the Spanish occupation to high population densities across the region (Bengoa, 2003;Stehberg and Sotomayor, 2012). Still, historical census data indicate slow population growth from the 1800s, with a subsequent acceleration by 1940 AD (Table S5, CELADE, 2005;McCaa, 1972).
Hunter-gatherer groups prevailed over the territory during the first population event (3500-2400 cal yrs BP, Figure 4b), displaying diverse mobility patterns but with an apparent tendency towards semi-sedentary settlements around highly productive and predictable environments (e.g. coast, lakes, rivers) (Adan et al., 2016;Cornejo et al., 2016). Even though paleodemographic estimates indicate low values during this phase (Figure 4a), there are signs of emerging human-induced alterations in regional ecosystems. Archeobotanical data retrieved from the upper Maipo river basin (∼2400 masl, Figure 1) reveal evidence for domesticated C. quinoa by 3500-3000 cal yrs BP (Figure 4b, Planella et al., 2005, 2011;Planella and Tagle, 2004). At 3400 cal yrs BP the first colonization pulse into small offshore islands occurred, such as Isla Mocha at 38°S (Figure 4b), which resulted in the unintended introduction of freshwater and terrestrial invertebrates from the mainland, as well as medium-size mammals such as Myocastor coypus and Pudu puda (Campbell, 2015b;Jackson et al., 2013;Quiroz and Sánchez, 2004). Over the Longitudinal Valley, charcoal records from Laguna Tagua Tagua and L. Aculeo (34°S) show increased fire activity between 3200 and 2500 cal yrs BP (Figure 4c, Heusser, 1990;Villa-Martínez et al., 2003). The same pattern is verified at the coast at 32°S, represented by an increment in charcoal at 3200 cal yrs BP that is coeval with the establishment of locally permanent human settlements (Maldonado and Villagrán, 2002). In the L. Aculeo record, the amplitude of the augmented charcoal accumulation rates is exceptionally high, comparable to the peaks detected in historic times (1630 AD-1950 AD, Figure 4c).
An upward trend in the influx of micro-charcoal particles recorded in Tagua Tagua since 2900 cal yrs BP points to a major regime shift (RSI = -0.53, p-value < 0.05) in fire activity over the Mediterranean region during the past 7000 years (Figure 5a; Heusser, 1990). This implies that, by this time, the transition from a natural to a human-driven fire regime occurred, which was characterized by distinct troughs and peaks in burning incidence (Figures 4c-d, 5b).
Comparisons of reconstructed demographic and paleofire indexes reveal a weak (Spearman's rho = 0.18-0.26) and non-significant correlation over the last 3500 cal yrs, even when this relationship is tested independently for pre-Columbian and historical times (Figure 5c).
A marked spatio-temporal occupational discontinuity of the coast between 37° and 39°S is documented at 3000-2250 cal yrs BP (Campbell and Quiroz, 2015). North and south of this latitudinal range, the coast remained occupied by hunter-gatherers that settled along marine terraces, capes and littoral areas up to 3 km from the shore. These groups exploited coastal woodlands for raw materials and fuelwood, estuarine/riparian plant resources, camelids (Lama guanicoe) and, mostly, marine resources such as mollusks (Mesodesma donacium, C. concholepas, Fissurella spp., Tegula atra, Chiton spp.), sea urchins (Loxechinus albus), fish and cirripede crustaceans (Austromegabalanus psittacus) (Cornejo et al., 2016;Jerardino et al., 1992;Méndez, 2002;Mendez and Jackson, 2004). The recurrent occupation of distinct littoral areas to intensively harvest intertidal resources from 3500 cal yrs BP onwards transformed the coastal physiography through the removal and alignment of massive stones to prepare fireplaces, but more importantly through the accumulation of several cultural shell-middens (Mendez and Jackson, 2004). Because such anthropogenic soils typically formed on or near forested areas, the contact with nutrient-rich marine debris probably altered the geochemistry of vegetated soils by increasing calcium carbonate, phosphorous and nitrogen inputs, as reported in other coastal areas of the world (see Erlandson, 2013). Unfortunately, the potential impact of such anthropogenic soils has never been evaluated in the region.
Although the hunting-gathering strategy or mixed foraging-production economies persisted for another millennium (Figure 4b), agricultural activities increased in importance since 2150 cal yrs BP (Roa et al., 2015). For instance, the first pottery activities appeared almost synchronously by 2200 cal yrs BP along the entire region (Figure 4b, Campbell and Quiroz, 2015;Marsh, 2017). Mesoamerican and Andean-American edible cultigens such as maize, Cucurbitaceae (squash) and Phaseolus spp. complemented C. quinoa horticulture since 1750 cal yrs BP (Falabella et al., 2007;Tykot et al., 2009). Paleoenvironmental records attest to agricultural land-use changes over the region. Charcoal peaks, high phosphorous concentrations and traces of maize pollen-type in the El Valle peat-bog record (38°S) point towards slash-and-burn practices, soil erosion and crop production since 2000 cal yrs BP (Abarzúa et al., 2014). On Mocha Island, farming activities resulted in the introduction of cultivated species and an increased frequency of fires that compromised the long-term regeneration of the native Aextoxiconetum temperate forest (LeQuesne et al., 1999). Indeed, the Huairavos lacustrine record shows a prominent peak in charcoal accumulation, a marked decrease in arboreal species and an increased representation of Amaranthus pollen types starting at ∼1600 cal yrs BP (LeQuesne et al., 1999). Farther north (33°S), the concomitant increase in charcoal influx and reduction in arboreal taxa detected in Laguna Chepical suggest that woodlands of the Longitudinal Valley were deliberately burned between 1750 and 1550 cal yrs BP (Figure 4d, Martel-Cea et al., 2016). At the coast, malacological data from archeological shell-middens (33°S) show signs of intense harvesting of rocky-intertidal and shallow subtidal resources and overexploitation of C. concholepas and Fissurella limbata.
Indeed, mean sizes of both mollusks decreased noticeably between 2500 and 1300 cal yrs BP due to long-term pervasive gathering (Jerardino et al., 1992).
By the second demographic event there was an overall heavier reliance on the productive subsistence strategy and, in turn, increased landscape transformation (Figure 4b).
Since 1200 cal yrs BP, almost all of the population was sedentary, settling around wetlands and riverine systems to sustain farming activities (mostly maize) throughout the coast and the Longitudinal Valley (Alfonso-Durruty et al., 2017; Dillehay and Saavedra, 2003). Several archeological sites exhibit evidence for low-scale copper manufacture (Campbell and Latorre, 2003; Mera et al., 2015), but data on the extent of mining and metalworking processes are scarce. These agrarian populations maintained small-scale herds of camelids, which were kept alongside cultivated areas and foddered mostly on maize and other agricultural byproducts (Falabella et al., 2008; López et al., 2015). Meanwhile, the introduction of domesticated chickens from Polynesia around 600-510 cal yrs BP led to incipient poultry farming of the native domestic Araucana breed in the southern portion of central Chile (Figure 4b; Storey et al., 2013, 2007). Starting at 860 cal yrs BP, southern-central varieties of maize and C. quinoa were extensively cultivated on raised-canalized fields set along floodplain wetlands (Abarzúa et al., 2014; Dillehay et al., 2007). These raised cultivation fields were associated with permanent villages and prominent public architecture complexes, whose accumulation led to layered anthropogenic soils consisting of local and extra-local sediments, pottery, charcoal and faunal remains (Dillehay et al., 2007; Dillehay and Saavedra, 2003). In Mocha Island, such cultural landforms appear since 960 cal yrs BP, and were formed through deliberate transport and accumulation of large amounts of sediments from nearby Miocene-Pliocene sedimentary sequences (Campbell and Pfeiffer, 2017).
Intense farming activities by 1200-500 cal yrs BP significantly altered the soil properties of the coastal floodplain wetlands that sustained raised-canalized fields, including alkalization and high contents of nitrogen, phosphorus, manganese and calcium (Dillehay et al., 2007). A sustained increase in phosphorus concentration in the El Valle record since 740 cal yrs BP suggests that the transformation of native forest into farmlands led to important soil erosion (Abarzúa et al., 2014). In the Laguna Espejo record (40°S), this process is marked by high deposition of both Zr and Rb terrigenous elements at 900 cal yrs BP (Jana, 2014). Amplified forest degradation is evident in offshore islands since 1150 cal yrs BP. Peat-bog sediments from Santa María Island (38°S) reveal marked reductions in native shrub/tree taxa due to either widespread forest burning or the introduction of crops (maize, Solanum sp., Chenopodium sp.) (Massone et al., 2012). These transformations were accompanied by intentional translocations of camelids (L. guanacoide), small felids (Leopardus sp.) and carnivores (Lycalopex sp., Galictis sp.) into both Mocha and Santa María Islands (Campbell, 2015b).
The arrival of the Inca Empire by 1450 AD gave an important impulse to the productivity of the subsistence economy, particularly in the northern area of central Chile. Zooarcheological evidence from the El Mauro Valley (32°S) indicates medium-scale camelid livestock production within farmlands, including draught llamas and increased exploitation of byproducts derived from either domesticated or wild individuals (López et al., 2015). Because δ15N and δ13C values in these domesticated camelids are comparatively higher than in previous periods (López et al., 2015), it seems likely that animal fertilizers (e.g. camelid manure) started to be incorporated to enhance crop production. The introduction of innovative farming technology (i.e. terrace cultivation, irrigation networks), the consolidation of the southernmost branch of the Inca Road and the emergence of small urban centers (Pavlovic et al., 2004; Sánchez, 2001) could have exacerbated the pressure on watersheds and/or woodlands. A tree-ring reconstruction of fire occurrences at the Cachapoal basin (34°S) is consistent with this growing pressure on inland ecosystems, evincing an upward trend in human-induced fires from 1450 AD (Bustos-Schindler et al., 2010). In fact, this increase in anthropogenic burning by 1450 AD corresponds to a distinct regime shift detected over the Mediterranean region during the past 3500 years (Figure 5b). Nevertheless, this transition, which extended until the industrialized era (1950 AD), is not statistically significant (p-value > 0.05). In terms of ore exploitation, metallurgy production became heterogeneous, focused mostly on copper and tin bronze as well as, to a lesser extent, silver and gold (Campbell, 2015a; Latorre and Lopez, 2011; Plaza and Martinón-Torres, 2015). Both leach smelting and direct metal sculpting processes were involved, but the extent of the associated impacts in terms of anthropogenic trace metal emissions has not yet been explored.
The Spanish colonial economy imposed a new pattern of impacts through the introduction of Old World crops (e.g. wheat), exotic plants (e.g. Rumex sp.) and livestock (cattle, horses) that rapidly caused the extinction of endemic cereals (Bromus mango) and of domesticated/wild camelids (Torrejón and Cisternas, 2002; Vargas et al., 2017). During the ensuing centuries, the expansion of colonial and republican urban centers along the coastline, navigable rivers, gold/silver/copper mining reserves and/or fertile areas for livestock and crop production led to substantial landscape transformations (Torrejón and Cisternas, 2002; Torrejón et al., 2004). Unprecedented frequencies and intensities of fires are evident from 1650 AD onwards (Figure 4c-d; Abarzúa et al., 2014; Aravena et al., 2003; Gonzalez, 2005; Martel-Cea et al., 2016; Massone et al., 2012; Villa-Martínez et al., 2003). The paleofire time-series for the Mediterranean region (33°-34°S) suggests a declining trend in anthropogenic fires between 1750 AD and 1850 AD, which quickly reversed, however, reaching exceptional incidence levels by 1950 AD (Figure 5b).
Manufacturing and transport innovations derived from the European Industrial Revolution were rapidly introduced in central Chile, and by 1840 AD a large-scale coal mining industry developed in the Arauco basin (37°S) to supply the incipient national industrialization. Concurrently, generalized deforestation, soil erosion and the presence of exotic taxa (e.g. Pinus radiata, Rumex acetosella) are recorded after 1850 AD between 32° and 40°S (Carrevedo et al., 2015; Frugone-Alvarez et al., 2017; Jana, 2014; Jenny et al., 2002b; Martel-Cea et al., 2016; Moreno and Videla, 2016; Torres et al., 2008; Urrutia et al., 2010; Vargas et al., 2017; Villa-Martínez, 2002). Although signs of eutrophication are present since 1890 AD in some lacustrine basins (Jana, 2014; Martel-Cea et al., 2016; Urrutia et al., 2010, 2000), this phenomenon became important at a regional scale during the late 20th century (Carrevedo et al., 2015; Frugone-Alvarez et al., 2017; von Gunten et al., 2009a). The same pattern is observed for the evolution of air pollution derived from fossil fuel combustion and copper extraction. Geochemistry data from lakes located around the El Teniente mining operation and Santiago city (Chile's capital and largest city) show increased deposition of spheroidal carbonaceous particles and Cu excess beginning at 1900 AD (Figure 6), shortly after industrial and mining development required fossil fuel combustion and large-scale smelting and refining processes (von Gunten et al., 2009a). Nevertheless, significant transitions in the emissions of black carbon are evident after the nationalization of the Chilean copper industry (1962 AD, RSI = -1.2, p-value < 0.05) and then by 1992 AD (RSI = -2.6, p-value < 0.05) (Figure 6).
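The regime shift index (RSI) values reported above come from a sequential regime-shift analysis of the black-carbon time series; the exact algorithm is not given in the text, so the following is only a simplified illustration of the underlying idea (flagging points where the mean differs significantly between adjacent windows, similar in spirit to Rodionov-style sequential detection, but not the actual RSI computation). The data, window length, and function name are hypothetical.

```python
import numpy as np
from scipy import stats

def shift_candidates(series, window=20, alpha=0.05):
    """Scan a time series with two adjacent windows and flag indices where
    the 'before' and 'after' means differ significantly (Welch's t test).
    A simplified stand-in for sequential regime-shift detection, not the
    RSI algorithm used in the paper."""
    hits = []
    for i in range(window, len(series) - window):
        before = series[i - window:i]
        after = series[i:i + window]
        t, p = stats.ttest_ind(before, after, equal_var=False)
        if p < alpha:
            hits.append((i, t, p))
    return hits

# Synthetic example: a step increase mid-series, loosely analogous to an
# abrupt rise in black-carbon deposition.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(1.0, 0.2, 60), rng.normal(2.0, 0.2, 60)])
hits = shift_candidates(series)
```

On this synthetic series the candidate indices cluster around the true step at index 60; a full RSI analysis would additionally quantify the magnitude and persistence of each shift.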
Discussion
Our synthesis of geohistorical data suggests that pre-Columbian societies played an active role in shaping natural environments in northern and central Chile over the last three millennia. In support of our working hypothesis, we found that past inhabitants progressively escalated their capacity to transform ecosystems as socio-cultural complexity and energy consumption/production increased through time (Figures 2-6). The evidence distilled here shows the cumulative impact of past human activities on the evolution of national ecosystems, and supports the emerging notion that the Anthropocene derives from long-term processes that have operated continuously since prehistoric times (Boivin et al., 2016; Braje and Erlandson, 2013a; Catlin, 2016; Kennett and Beach, 2013; Piperno et al., 2015; Rick et al., 2013; Rosen et al., 2015; Verstraeten, 2014). By this, we are not claiming that the onset of the Anthropocene occurred at some point in the pre-Columbian era or that past societies were capable of driving the dynamics of biophysical patterns. The archeological and paleoenvironmental records reviewed here, however, suggest that anthropogenic landscapes have been created continuously in Chile over the last 3000 years, although at a much slower rate than today.
We are aware that our review is subject to some limitations that could be overcome through future research. A substantial portion of the available data is qualitative, thus providing information about relative rather than absolute changes. Several geohistorical records are fragmented and disparate at spatial or temporal scales, and biased towards addressing natural environmental variability or the vulnerability of ancient societies to climate fluctuations. For instance, although biological invasions/extirpations and eutrophication have become emblematic of human footprints on modern Chilean ecosystems (Armesto et al., 2010; Casanova et al., 2013; Lara et al., 2009), their deep-time trajectories have received little attention. We would therefore encourage a national interdisciplinary research agenda oriented at obtaining broad-spectrum, spatially and temporally resolved records that portray the long-term coevolution between sociocultural and environmental agents. Researchers are then challenged to improve pre-existing and novel reconstructions by considering new methodological approaches to yield crucial data. We concur with Verstraeten (2014) that, in order to gain insights into the dynamics of human-environmental interactions, strong efforts should be made to generate quantitative data.
Paleodemographic estimates reveal an overall increase in the intensity of human activities on the landscape and in energy consumption/production during the last 3000 years, with a marked acceleration at ∼1900-600 cal yrs BP (Figures 2-6). This pattern occurred under relatively adverse hydroclimate variations in central and northern Chile. Pre-Columbian societies were able to buffer this paleoclimate scenario by adjusting and/or incorporating adaptive strategies, technologies or cultural practices that guaranteed resource access and social wealth.
Certainly, such progressive changes in socio-cultural complexity over time enhanced the capacity to engineer different components of biophysical systems at macro- or meso-regional scales. Exceptions are the decoupled relationships between paleodemographic patterns and fire activity (Figure 5c) or the formation of anthropogenic soils in coastal areas, which have also been observed for hunter-gatherers from different regions of the world (Erlandson, 2013; Glikson, 2013; Lightfoot and Cuthrell, 2015; Pinter et al., 2011; Ramsey et al., 2015; Rick et al., 2013; Williams et al., 2015). Even though hunter-gatherer groups maintained low population levels, they were able to establish an anthropogenic fire regime by 2900 cal yrs BP in Mediterranean Chile (Figure 5a) and caused disproportionate impacts on the littoral morphology of the northern and central regions (Figures 2a and 4a). Conversely, the coupled feedback between the progressive scaling up of socio-cultural complexity and ecosystem engineering is best represented by the positive and significant correlation obtained between demographic and pollution levels during both pre-Columbian and historical times (Figure 3b). In fact, metalloid air pollution increased through time as the result of the interplay between the intensity of metallurgical activities and social complexity, expressed as technological development, increased spatial aggregation and specialization, food-production capacities, and population levels (Figures 2 and 3). This relationship relies on the fact that the prosperity of the metallurgy industry during pre-Columbian times involved sophisticated processes (e.g. smelting, cupellation) in delimited production areas that required highly specialized labor and food surpluses (Lechtman, 2013). During historical times, on the other hand, this process is explained by socio-economic pressures imposed by centralized (colonial, republican) governments that imported technologies (i.e. explosives, mechanical equipment) to optimize the extraction and refining of metalloids.
Because pre-Columbian societies confronted different bioclimatic settings, challenges and socio-cultural backgrounds among and between regions, there is no single, generalized expression of human-environment interaction in Chile. Besides some convergent processes over the territory (i.e. the recurrent long-term pressure on predictable and productive littoral and freshwater systems), regional idiosyncratic patterns may also have operated. The deliberate clearing and burning of native forests, either on offshore islands or on the mainland, became a key land-management practice in central Chile during the past 3000 years. Except for the transitory use of burning practices to manage livestock forage production recorded in the high-Andean Cosapilla peatland (Domic et al., 2018), anthropogenic fire regimes and large-scale land clearing were absent from the northern region, most likely due to the limited vegetation cover. Meanwhile, Atacama Desert populations exploited ecosystem services through the spatial management of water availability from watersheds or aquifers, and the implementation of agroforestry to green and fertilize this inhospitable landscape. Intensive exploitation of target shellfish and other marine resources was a key strategy for coastal populations, which resulted in overfishing and a profound transformation of the littoral morphology that probably altered nutrient cycling in nearshore habitats. The northern and coastal Chile cases indicate that both positive and negative effects emerging from the cultural-environmental interaction are usually intertwined.
We posit that human behavior patterns modified ecosystems over the past 3000 years in northern and central Chile, precluding the existence of pristine environments well before the Industrial Revolution. A logical corollary of this is that cultural niche construction has been a core process underpinning the Anthropocene, and that trends after 1850 AD represent an unprecedented shift in human-environment interaction resulting from the coupled positive feedback loop between cultural and ecological inheritances (Balter, 2013; Braje, 2016; Ellis, 2015; Ellis et al., 2018; Sawyer, 2015; Smith and Zeder, 2013). Our synthesis suggests that most present-day symptoms of the human-dominated state of Chilean environments have roots in pre-Columbian processes whose intensity scaled up over the last 3000 years, accelerating after the Spanish colonization and, more intensely, in recent decades. This pattern is consistent with the reconstruction previously conducted by Armesto et al. (2010) for long-term changes in land use in central and southern Chile. Perhaps the most striking trend is the observed coevolution between metallurgy intensity in northern Chile and anthropogenic heavy metal emissions, which is starting to appear in different South American and even Antarctic geochemical records. Two long-term metalloid pollution events (i.e. regime shifts I and II in Figure 3a) are detected ∼1000 years before industrialization and massive mining-smelting operations were established in Chile. Nonetheless, pollution increased at a devastating pace after 1975 AD (regime shift III, Figure 3a), and throughout the last two decades (1985 AD-2005 AD) has maintained levels not seen in pre-Industrial times.
Eutrophication of lacustrine systems and black carbon emissions from the burning of fossil fuels appear to be good candidates for modern impacts of the incipient industrialization after the late 19th century, but further investigations of their particular historical trajectories are required to confirm this pattern.
This study shows that the integration of geohistorical records of past societal behavior and human-driven landscape transformations into this perspective provides the means to contextualize inherited, recurrent or exceptional properties of the recent socio-environmental history of Chile. We indeed verify that the long-term dynamics of socioecological systems have produced multidimensional and idiosyncratic features of the Anthropocene at national and regional scales. Different transitions in the human-environment interaction brought about by changes in cultural and ecological inheritances (i.e. regime shifts in cultural niche construction, sensu Ellis 2015) are evident at these spatial scales over the last three millennia, including the Neolithisation, the Inca expansion and the Spanish colonization. All of these enhanced the human domination of Chilean ecosystems. This entails that the Anthropocene cannot be viewed as a universal imprint of human actions that arose as an exclusive consequence of modern industrial societies. In the case of Chile, this phenomenon is intrinsically tied to historically and geographically diverse configurations of society-environment feedback relationships.
That past human impacts on biodiversity, soil/air quality, hydrological patterns, nutrient cycling, and land cover were not negligible, and that these escalated in intensity as production economies increased in relevance through time, is hardly new. Although cultural and ecological inheritances were heterogeneous in time and space, this trend appears as a convergent evolutionary pattern in several regions around the world (Aikens and Lee, 2013; Braje and Erlandson, 2013a; Brewington et al., 2015; Laparidou and Rosen, 2015; McClure, 2013; Rick et al., 2013; Rosen et al., 2015; Streeter et al., 2015; Veena et al., 2014; Wagreich and Draganits, 2018). In the rest of the Americas, for instance, the acceleration of anthropogenic impacts on terrestrial and coastal ecosystems after the European colonization in the 16th century is consistently recognized in North America (Dotterweich et al., 2014; Jones, 2015; Lightfoot et al., 2013; Stinchcomb et al., 2014), Amazonia (Arroyo-Kalin, 2012; Piperno et al., 2015; Roosevelt, 2013) and the Caribbean region (Rivera-Collazo, 2015). Thus, the coupled socio-environmental evolutionary approach adopted here complements previous efforts to visualize the Anthropocene in deep time (Armesto et al., 2010; Braje and Erlandson, 2013b; Crumley et al., 2015; Dearing et al., 2015; Verstraeten, 2014). Taken collectively with these other case studies, our work could contribute to the discussion about how the Anthropocene is defined globally, in terms of chronology, stratigraphic markers and attributes. We feel that this deep-time narrative has the potential to become a science-based instrument for shaping better-informed public and political discourses about the long-term socio-environmental history of Chile.
But more importantly, it offers crucial "baselines" to delineate safe operating spaces (sensu Rockstrom et al., 2009;Steffen et al., 2015) for future generations as well as principles for the conservation and sustainable management of Chilean ecosystems.
Supplemental files
The supplemental files for this article can be found as follows:
C/EBP-δ regulates VEGF-C autocrine signaling in lymphangiogenesis and metastasis of lung cancer through HIF-1α
CCAAT/enhancer-binding protein delta (C/EBP-δ), a transcription factor, is elevated in carcinoma compared to normal tissue. This study reports a novel function of C/EBP-δ in lymphangiogenesis and tumor metastasis. Genetic deletion of C/EBP-δ in mice resulted in a significant reduction of lymphangiogenesis and pulmonary metastases, with a dramatic reduction of VEGF-C and its cognate receptor VEGFR3 in lymphatic endothelial cells (LECs). In contrast, no difference in VEGF-C expression in tumor tissues or bone marrow was observed between null and wild type mice. Consistently, forced expression of C/EBP-δ increased VEGF-C and VEGFR3 expression in cultured LECs. These findings suggest a specific and important role of C/EBP-δ in regulating VEGFR3 signaling in LECs. Furthermore, expression of C/EBP-δ in cultured LECs significantly increased cell motility, and knockdown of C/EBP-δ inhibited cell motility and lymphatic vascular network formation in vitro. Forced expression of VEGF-C, but not recombinant VEGF-C, rescued the apoptosis induced by C/EBP-δ knockdown, indicative of an autonomous VEGF-C autocrine signal essential for LEC survival. Moreover, hypoxia induces C/EBP-δ expression, and C/EBP-δ regulates HIF-1α expression. Blocking HIF-1α activity completely blocked C/EBP-δ-induced VEGF-C and VEGFR3 expression in LECs. Together, these findings reveal a new function of C/EBP-δ in lymphangiogenesis via regulation of VEGFR3 signaling in LECs.
Introduction
The metastatic spread of tumor cells is the most lethal aspect of cancer. Similar to angiogenesis, lymphangiogenesis is the formation of lymphatic vessels, during which lymphatic endothelial cells sprout to form new vessels from preexisting lymphatic vessels. Lymphangiogenesis plays important roles in tissue homeostasis, metabolism and immunity. Lymphatic vessel formation also contributes to pathological conditions such as tumor invasion of lymph nodes and metastasis (Alitalo et al 2005). The role of the lymphatic network in human diseases has received renewed interest, largely due to the identification of specific signaling pathways that regulate the formation of lymphatic systems, chiefly vascular endothelial growth factor C (VEGF-C) and its cognate receptor, VEGF receptor 3 (VEGFR-3). VEGF-C stimulates tumor lymphangiogenesis and metastasis by interacting with VEGFR-3 (Alitalo et al 2005).
VEGF was long known to regulate angiogenesis in a paracrine fashion, in which VEGF produced by non-endothelial cells in surrounding tissues binds to its cognate receptors on endothelial cells to activate angiogenic signaling. However, it was recently found that abrogation of endogenous VEGF production in endothelial cells leads to apoptosis and thrombosis and, interestingly, that restoring endogenous, but not adding exogenous, VEGF rescued the phenotype (Helotera and Alitalo 2007, Lee et al 2007). As endogenous and exogenous VEGF act through the same receptor, VEGFR2, on endothelial cells, these findings imply that autocrine and paracrine VEGF signaling events in endothelial cells possess distinct, non-overlapping functions. Currently, it is unclear whether VEGF-C possesses a similar cell-autonomous signaling role in LECs.
Hypoxia is commonly associated with many tumors. The transcription factor hypoxia-inducible factor-1 (HIF-1) is a major regulator of the tissue response to hypoxia (Semenza 1998). HIF-1 regulates tumor progression by up-regulating its target genes, including genes associated with angiogenesis and lymphangiogenesis (Katsuta et al 2005). Hypoxia promotes lymphangiogenesis in human breast cancer and lung carcinoma (Schoppmann et al 2006, Simiantonaki et al 2008). In various types of cancer, HIF-1α expression is positively correlated with lymphatic metastasis (Kurokawa et al 2003, Kuwai et al 2003). C/EBP-δ is a transcription factor that belongs to the C/EBP family. It is strongly induced by inflammatory cytokines, and plays a role in inflammation (Poli 1998, Rabek et al 1998, Takata et al 2002). The level of C/EBP-δ in carcinoma is elevated compared to surrounding normal tissue (Kim and Fischer 1998, Milde-Langosch et al 2003), indicating a positive role of the gene in tumorigenesis. However, the underlying molecular mechanism remains unclear. One major function of C/EBP-δ is regulating gene expression, such as pro-apoptotic gene expression during mammary gland involution (Thangaraju et al 2005), and platelet-derived growth factor receptor expression that affects smooth muscle cell proliferation (Kitami et al 1999, Yang et al 2001). In addition, C/EBP-δ expression is elevated under hypoxic conditions in both neonatal and adult brains (Tang et al 2006).
In this study, we demonstrate a previously unknown function of C/EBP-δ in promoting lymphangiogenesis and lung metastasis via regulating VEGFR3 signaling in LECs. We show that C/EBP-δ is expressed in LECs and regulates lymphatic angiogenic gene expression through HIF-1α. Genetic deletion of C/EBP-δ impairs lymphangiogenesis and metastasis. Thus this study links hypoxia to VEGF-C signaling in lymphangiogenesis and metastasis through C/EBP-δ.
Materials and Methods
Mice and cell lines

C57BL/6J mice from Jackson Labs and C/EBP-δ null mice in the C57BL/6 background from the NCI (Sterneck et al 1998) were housed in a pathogen-free unit at the Vanderbilt University School of Medicine in compliance with Institutional Animal Care and Use Committee (IACUC) regulations. Six-week-old female mice were injected with 1 × 10⁵ 3LL cells in 100 μL PBS via tail vein. Fourteen days after inoculation, lungs were excised, the number of pulmonary tumor colonies was counted, and lungs were weighed. Human lung lymphatic endothelial cells (HMVEC-LLy) were purchased from Lonza (Walkersville Inc) and cultured according to the manufacturer's protocol. All experiments were performed on cells between passages 3-7. Lewis lung adenocarcinoma cells (3LL) were maintained in DMEM supplemented with 10% serum.
Immunohistochemistry
Tumors were harvested and processed, and paraffin tissue sections were stained with an antibody against LYVE-1 (MBL) and counterstained with hematoxylin. The number of LYVE-1+ lymphatic vessels was counted in 10 randomly selected 200X fields under microscopy.
Isolation of pulmonary lymphatic endothelial cells
Age- and sex-matched wild type and C/EBP-δ null mice were sacrificed. Single-cell suspensions were prepared from lungs as described (Kamiyama et al 2006). The cell suspension was incubated with a LYVE-1-PE antibody from MBL. Positive cells were sorted on a FACStarPlus® flow cytometer (Becton Dickinson, Franklin Lakes, NJ). Cell purity was confirmed by immunostaining with a LYVE-1 antibody. Cells used in each study were greater than 95% pure.
Expression and knockdown of C/EBP-δ
HMVEC-LLy cells were transfected using the Nucleofector Kit (Lonza, VPB-1002). Twenty-four hours after transfection, the cells were cultured under normoxic or hypoxic conditions for 48 hours followed by analysis of gene expression. For overexpression, cells were transfected with the pcDNA 3.1-C/EBP-δ plasmid or empty vector as control. For knockdown, shRNA plasmid DNA specific for C/EBP-δ and non-specific shRNA plasmid obtained from Sigma-Aldrich were used. Puromycin at 0.5 μg/ml was used for enrichment of shRNA-transfected cells prior to each experiment. Geldanamycin (5 μM), purchased from Marligen Biosciences Inc., was used to inhibit HIF function.
In vitro lymphangiogenic assays
HMVEC-LLy cell migration was performed in Transwells with recombinant VEGF-C at 50 ng/ml. Migrated cells were counted in 10 randomly selected high power fields after 5 hours of incubation. Lymphatic vascular tubule formation was done in 3-D culture on top of growth factor reduced Matrigel (Becton Dickinson, Bedford, MA). Tubule structure was photographed 24 hours after cell plating under microscopy. Vascular cross points were counted in 10 randomly selected fields under microscopy.
Flow cytometric analysis of apoptotic populations with staining of annexin V-FITC and propidium iodide
The frequencies of apoptotic cells were determined using Annexin V-FITC and propidium iodide (PI) staining. Flow cytometry was performed on a FACScalibur flow cytometer, and the results were analyzed with Cell Quest Pro software (Becton Dickinson, Bedford, MA).
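The quadrant-gating logic behind Annexin V/PI apoptosis frequencies can be sketched as follows. The actual analysis in this study was performed in Cell Quest Pro; the cutoffs, example data, and function name below are hypothetical, and real gates would be set from unstained and single-stained controls.

```python
import numpy as np

def classify_events(annexin, pi, annexin_cutoff, pi_cutoff):
    """Quadrant gating as commonly applied to Annexin V-FITC / PI data:
    live (A-/PI-), early apoptotic (A+/PI-), late apoptotic (A+/PI+),
    and necrotic/debris (A-/PI+). Returns per-quadrant counts and
    frequencies."""
    a_pos = annexin >= annexin_cutoff
    p_pos = pi >= pi_cutoff
    counts = {
        "live": int(np.sum(~a_pos & ~p_pos)),
        "early_apoptotic": int(np.sum(a_pos & ~p_pos)),
        "late_apoptotic": int(np.sum(a_pos & p_pos)),
        "necrotic": int(np.sum(~a_pos & p_pos)),
    }
    total = len(annexin)
    freqs = {k: v / total for k, v in counts.items()}
    return counts, freqs

# Hypothetical example: four events, one per quadrant.
annexin = np.array([0.1, 0.9, 0.9, 0.1])
pi = np.array([0.1, 0.1, 0.9, 0.9])
counts, freqs = classify_events(annexin, pi, annexin_cutoff=0.5, pi_cutoff=0.5)
```

The reported "frequency of apoptotic cells" then corresponds to the early plus late apoptotic fractions.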
RT-PCR and Real-Time RT-PCR analysis
Total RNA was isolated using RNeasy Quick spin columns (QIAGEN, CA). cDNA fragments of VEGF-C, VEGFR3, C/EBP-δ, HIF-1α and β-actin were amplified using Taq DNA polymerase.
Statistical Analysis
The results are presented as means ± SE for each sample. The statistical significance of differences was determined by Student's two-tailed t test for two groups, and by one-way ANOVA for multiple groups and two-factor factorial ANOVA where appropriate. All data were analyzed with the Statview 5.0 (Abacus Concepts, Berkeley, CA) statistical software package run on a Windows computer. Differences were considered statistically significant when p-value < 0.05.
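The tests described above can be sketched with SciPy's implementations of Student's t test and one-way ANOVA. The vessel counts below are hypothetical stand-ins for quantities such as LYVE-1+ vessels per field, not the paper's data.

```python
from scipy import stats

# Hypothetical vessel counts per field (n = 10 fields per group).
wild_type = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
cebpd_null = [7, 9, 6, 8, 7, 10, 8, 6, 9, 7]

# Two groups: Student's two-tailed t test (ttest_ind is two-tailed by default).
t_stat, p_two_groups = stats.ttest_ind(wild_type, cebpd_null)

# Three or more groups: one-way ANOVA.
third_group = [10, 12, 9, 11, 10, 13, 11, 9, 12, 10]
f_stat, p_anova = stats.f_oneway(wild_type, cebpd_null, third_group)

# The paper's significance threshold.
significant = p_two_groups < 0.05
```

For the two-factor factorial ANOVA mentioned in the text, a formula-based package (e.g. statsmodels' `anova_lm`) would typically be used instead.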
Genetic deletion of C/EBP-δ in mice resulted in a reduction of lymphangiogenesis and pulmonary metastasis of lung cancer
C/EBP-δ is strongly induced by inflammatory cytokines (Rabek et al 1998, Takata et al 2002), and the levels of C/EBP-δ in carcinoma are significantly higher than in surrounding normal tissues (Kim and Fischer 1998, Milde-Langosch et al 2003), suggesting a positive role of the gene in tumorigenesis. To test this hypothesis, we used C/EBP-δ null mice. Mice without the C/EBP-δ gene are viable, grossly normal and fertile except for subtle defects in adipocyte differentiation, mammary gland involution, and specific types of learning and memory (Johnson 2005, Sterneck et al 1998, Tanaka et al 1997). We injected 3LL tumor cells into C/EBP-δ null mice and wild type mice through the tail vein, followed by assaying lung colonization of tumor cells, a commonly used model for vascular metastasis. Interestingly, there was a significant reduction of tumor metastasis as measured by counting lung surface metastases and lung weight (Figure 1A-1C). As the lymphatic networks are critical for metastasis, we therefore examined the effects of inactivation of C/EBP-δ on lymphangiogenesis. Immunohistological staining of pulmonary metastasis sections revealed fewer Lyve-1-positive lymphatic vessels in tumors from the C/EBP-δ null mice than from the wild type mice (Figure 1D and 1E). These findings support a positive role of C/EBP-δ in tumor malignancy via regulation of lymphangiogenesis, and suggest that the defective tumor lymphangiogenesis associated with C/EBP-δ inactivation contributes to reduced tumor metastasis.
C/EBP-δ specifically regulates VEGF-C and VEGFR3 expression in lymphatic endothelial cells
Since C/EBP-δ is known to regulate gene expression, we examined the potential role of C/EBP-δ in lymphangiogenic gene expression by comparing tissues isolated from wild type and null mice. We did not see any differences in VEGF-C expression in tumors between the two groups (Figure 2A), nor did we see any difference in VEGF-C expression in bone marrow between the null and wild type mice (Figure 2B). Surprisingly, an analysis of VEGF-C expression in freshly isolated murine pulmonary LECs (pooled from 5 mice in each group) revealed a dramatic reduction of VEGF-C in C/EBP-δ null cells compared to wild type cells (Figure 2C and 2D). Similarly, there is a clear reduction of VEGFR3 in the C/EBP-δ null LECs (Figure 2C and 2D). Moreover, forced expression of C/EBP-δ in primary human LECs promoted VEGF-C and VEGFR3 expression in these cells (Figure 2E). These findings point to a specific role of C/EBP-δ in regulating lymphangiogenic gene expression in lymphatic endothelial cells. As VEGF-C/VEGFR3 signaling plays a major role in lymphangiogenesis, the findings illustrate a novel function of this transcription factor in lymphangiogenesis via regulation of VEGF-C and VEGFR3 expression in LECs.
C/EBP-δ regulates lymphangiogenesis and VEGF-C autocrine signaling in lymphatic endothelium
To confirm the role of C/EBP-δ in lymphangiogenesis, we employed in vitro lymphangiogenic assays measuring cell migration and vascular network formation. Forced expression of C/EBP-δ in human LECs (Figure 3A) resulted in a slight but significant increase of cell migration in response to VEGF-C stimulation compared to the vector control group (Figure 3C). Conversely, knockdown of endogenous C/EBP-δ with a C/EBP-δ-targeted shRNA construct (Figure 3B) led to a significant inhibition of cell migration compared to controls (Figure 3D). Although we did not observe a change in lymphatic vascular network formation in a 3-D Matrigel assay with forced expression of C/EBP-δ in human LECs (Figure 3E and 3F), knockdown of endogenous C/EBP-δ in LECs caused a significant inhibition of lymphatic vascular network formation (Figure 3G and 3H). Together, the data reveal a direct role of C/EBP-δ in lymphangiogenesis through an effect on lymphatic endothelial cell motility and vascular assembly.
In addition, VEGFR3 signaling is known to provide a survival signal for lymphatic endothelial cells (Makinen et al 2001), and more apoptotic cells appeared to be present in the vascular network formation assay with C/EBP-δ knockdown LECs (Figure 3G). Therefore, we examined the effects of C/EBP-δ on LEC survival using flow cytometry. We found that knockdown of C/EBP-δ in human LECs using specific shRNA constructs significantly increased apoptosis, measured by PI and Annexin V staining, when compared to control vector transfected cells (Figure 4A and 4C), confirming a function of C/EBP-δ in lymphatic endothelial survival. As VEGF-C confers a survival signal in lymphatic endothelium (Makinen et al 2001) and C/EBP-δ regulates VEGF-C expression in LECs, we attempted to rescue the phenotype by adding recombinant VEGF-C protein. Surprisingly, the recombinant protein only slightly increased cell survival, and the increase did not reach statistical significance when compared to C/EBP-δ knockdown cells (Figure 4B and 4C).
Deletion of endogenous VEGF production in endothelial cells leads to apoptosis, which cannot be compensated for by exogenous VEGF, even though endogenous and exogenous VEGF both function through the same receptor, VEGFR2 (Helotera and Alitalo 2007, Lee et al 2007). These findings imply that VEGF autocrine signaling does not fully overlap with paracrine signaling. Thus, we reasoned that a similar autocrine mechanism might operate for VEGF-C in lymphatic endothelial cell survival, since deletion of C/EBP-δ resulted in a lymphatic endothelium-specific reduction of VEGF-C (Figure 2C and 2D). To test the hypothesis, we transfected C/EBP-δ knockdown LECs with a VEGF-C expression vector, followed by analysis of cell survival. Interestingly, forced expression of VEGF-C in LECs totally rescued the phenotype (Figure 4B and 4C). Taken together, these findings indicate that C/EBP-δ regulates cell-autonomous VEGF-C autocrine signaling in lymphatic endothelium, which cannot be compensated for by paracrine VEGF-C signaling.
C/EBP-δ regulates VEGF-C and VEGFR3 expression in lymphatic endothelial cells through HIF-1α
To further dissect the gene regulation mechanism, we first determined the potential effects of C/EBP-δ on the HIF-1 transcription factor, as HIF-1 functions as a key regulator of angiogenic gene expression. We found that deletion of C/EBP-δ in murine pulmonary LECs resulted in a significant reduction of HIF-1α levels when compared to cells isolated from wild type mice (Figure 5A). Interestingly, exposure of cultured human LECs to hypoxic conditions induced expression of C/EBP-δ (Figure 5B). Thus, the results point to a positive feedback regulation of HIF-1α under hypoxia through C/EBP-δ in lymphatic endothelium.
Consistently, forced expression of C/EBP-δ in LECs increased the production of VEGF-C and VEGFR3 under both normoxic and hypoxic conditions, with more pronounced gene induction under hypoxia (Figure 5C). Conversely, knockdown of C/EBP-δ inhibited the production of VEGF-C and VEGFR3 under normoxic conditions. More importantly, it totally blocked hypoxia-induced production of VEGF-C and VEGFR3 in lymphatic endothelial cells (Figure 5D), illustrating a key role of C/EBP-δ in lymphangiogenic gene expression under hypoxia. As HIF-1 is a master regulator of angiogenic gene expression in hypoxia, we next examined whether HIF-1 is responsible for C/EBP-δ-mediated gene expression in LECs by using a HIF-1α-specific inhibitor. We found that neutralization of HIF-1α activity significantly blocked C/EBP-δ-induced VEGF-C and VEGFR3 production (Figure 5E). It also reduced the basal levels of C/EBP-δ in these cells, consistent with the observation that hypoxia upregulates the production of C/EBP-δ (Figure 5E). Collectively, these data suggest that C/EBP-δ regulates VEGF-C and VEGFR3 expression through the HIF-1 transcription factor.
Discussion
Cancer metastasis is a hallmark of malignancy, contributing to about 90% of human cancer deaths. The lymphatic system is important for the metastatic spread of cancer; therefore, understanding the underlying molecular mechanisms of lymphangiogenesis is highly important. In this study, we report a specific and significant function of C/EBP-δ in regulating VEGF-C/VEGFR3 signaling in lymphatic endothelium. C/EBP-δ is expressed in lymphatic endothelial cells, where it regulates the production of VEGF-C and VEGFR3 in lymphangiogenesis and tumor metastasis. Interestingly, forced expression of VEGF-C, but not recombinant VEGF-C protein, rescued the cell apoptosis induced by C/EBP-δ inactivation, indicating cell-autonomous VEGF-C signaling in lymphatic endothelial survival. Further analysis revealed that hypoxia induces C/EBP-δ expression and that deletion of C/EBP-δ inhibited HIF-1α transcription, suggesting a positive feedback regulation of the hypoxia response in lymphatic endothelial cells (Figure 6). Interestingly, blocking HIF-1α activity totally blocked C/EBP-δ-induced VEGF-C and VEGFR3 expression in lymphatic endothelial cells. These findings link hypoxia to lymphangiogenesis through C/EBP-δ and VEGF-C autocrine signaling (Figure 6).

C/EBP-δ belongs to the C/EBP transcription factor family and is present in a variety of cells at very low levels under normal conditions. However, it is rapidly induced by a variety of stimuli, such as inflammatory cytokines (Ramji and Foka 2002). As inflammation is directly linked to tumor initiation and progression, it is no surprise that C/EBP-δ levels are significantly higher in carcinomas than in surrounding normal tissues (Kim and Fischer 1998, Milde-Langosch et al 2003), indicating a positive role of C/EBP-δ in tumor development. In the present study, we found that inactivation of C/EBP-δ in mice led to a significant reduction of VEGF-C/VEGFR3 signaling in lymphatic endothelium and of pulmonary metastasis. These findings provide a molecular mechanism supporting a positive role of C/EBP-δ in tumor progression through regulation of lymphangiogenic gene expression and lymphangiogenesis.
Clinical and preclinical findings have long suggested that tumor-associated lymphatics are a key component of tumor metastasis (Stacker et al 2002). Lymphangiogenesis has traditionally been overshadowed by angiogenesis due to a lack of identified lymphangiogenic factors, as well as of suitable markers to distinguish blood from lymphatic vascular endothelium. However, the field has advanced rapidly since the identification of VEGF-C and VEGFR3 signaling in lymphangiogenesis (Alitalo and Carmeliet 2002). VEGF-C−/− embryos lack lymphatic vessels and die prenatally because of severe tissue edema (Karkkainen et al 2004), whereas VEGFR3−/− mice die from defective vascular remodeling before the establishment of lymphatic vessels (Dumont et al 1998). The degree of tumor lymphangiogenesis and the levels of VEGF-C and VEGFR3 are highly correlated with the extent of lung metastasis (Akagi et al 2000, Alitalo and Carmeliet 2002, Yonemura et al 1999). Overexpression of VEGF-C increases tumor metastasis (Skobe et al 2001), and conversely, blocking VEGFR3 function neutralizes metastasis (He et al 2002). Despite the importance of VEGFR3 signaling in lymphangiogenesis, the transcriptional machinery regulating lymphangiogenic gene expression remains unclear. The current study identifies C/EBP-δ as a transcription factor important for the production of VEGF-C and VEGFR3 in lymphatic endothelium. We show that C/EBP-δ is expressed in lymphatic endothelial cells, and that deletion or knockdown of C/EBP-δ impaired, and conversely forced expression of the gene increased, lymphangiogenic gene expression in lymphatic endothelial cells. C/EBP-δ in lymphatic endothelial cells has a direct role in cell motility, vascular network formation and cell survival, consistent with the function of VEGFR3 signaling in lymphatic endothelium.
VEGF was long thought to function in a paracrine fashion to regulate angiogenesis. Surprisingly, genetic deletion of endogenous VEGF production in vascular endothelial cells leads to apoptosis, even though total levels of VEGF are not detectably altered (Helotera and Alitalo 2007, Lee et al 2007). This indicates a non-overlapping, cell-autonomous and essential function of VEGF autocrine signaling in vascular endothelial survival and homeostasis. Interestingly, we found that loss of C/EBP-δ dramatically reduced the levels of VEGF-C in lymphatic endothelial cells without detectable changes of VEGF-C levels in tumor tissues and bone marrow cells, implying a specific role of C/EBP-δ in the regulation of endogenous VEGF-C expression in lymphatic endothelial cells. Importantly, recombinant VEGF-C failed to rescue the phenotype associated with C/EBP-δ inactivation in lymphatic endothelial cells, whereas forced expression of VEGF-C totally rescued the defective phenotype. Thus our data reveal that VEGF-C possesses a similar cell-autonomous autocrine signaling mechanism in lymphatic endothelial cell survival as VEGF does in vascular endothelial homeostasis. C/EBP-δ regulates this activity through regulation of VEGF-C expression specifically in lymphatic endothelial cells. Current efforts are aimed at understanding the cell type-specific gene regulation mechanism manifested by C/EBP-δ.
Hypoxia is a common feature of tumors, and it is a potent regulator of angiogenesis through activation of HIF transcription factors. Several studies report a similar function of hypoxia in modulating lymphangiogenesis (Irigoyen et al 2007, Ota et al 2007). HIF-1α promotes lymphatic metastasis via regulation of VEGF-C in human cancer (Katsuta et al 2005). Here, we show that C/EBP-δ is upregulated under hypoxia, and that inactivation of C/EBP-δ reduced HIF-1α levels in lymphatic endothelial cells. In addition, neutralization of HIF-1α activity totally blocked C/EBP-δ-mediated expression of VEGF-C and VEGFR3 in lymphatic endothelial cells. This observation corresponds with the finding that hypoxia-driven VEGF autocrine signaling is perturbed by a deficiency of endothelial cell HIF-1α (Tang et al 2004). In addition, we examined the expression of HIF-2, as this transcription factor also regulates VEGF expression and angiogenesis. We found that overexpression of C/EBP-δ in lymphatic endothelial cells had no effect on HIF-2α expression in normoxia or hypoxia (data not shown), suggesting that HIF-2 is unlikely to play a role in this gene regulation. Based on these findings, we suggest that hypoxia stabilizes basal levels of HIF-1α, which activates C/EBP-δ transcription. In return, C/EBP-δ induces HIF-1α transcription, which regulates lymphangiogenic gene transcription, thereby forming a positive feedback loop.
In summary, this study shows that C/EBP-δ, a transcription factor, regulates VEGF-C autocrine signaling in lymphatic endothelial cells. Consistent with the hypothesis that autocrine signaling is triggered by stress, we show that hypoxia upregulates expression of C/EBP-δ, thereby forming a positive feedback regulation mechanism that propagates angiogenic signals in lymphatic endothelium. These data provide molecular evidence linking C/EBP-δ to lymphangiogenesis and tumor metastasis. Thus, C/EBP-δ is an appealing target for tumor therapy, providing a means to control VEGFR3 signaling in lymphangiogenesis.

Figure legends

Figure 1. Inactivation of C/EBP-δ in mice inhibits tumor lymphangiogenesis and pulmonary metastasis of lung cancer. 1×10⁵ 3LL cells were injected via tail vein into 6-week old female wild type and C/EBP-δ null mice. Fourteen days later, lungs were harvested and imaged (Panel A). Representative images are shown. Arrows point to tumor nodules. Tumor metastasis was quantified by counting the number of lung surface metastases (Panel B) and measuring the mass of the lungs (Panel C). Data are expressed as mean ± SD. n = 10 mice per group, *p<0.05. Tumor nodules in lungs were subjected to immunohistochemical analysis.

Figure 2. C/EBP-δ regulates VEGF-C and VEGFR3 expression in lymphatic endothelial cells. Total RNA was isolated from tumor tissues (Panel A) and bone marrow (Panel B) of wild type and C/EBP-δ null mice, and subjected to semi-quantitative RT-PCR for VEGF-C. Pulmonary lymphatic microvascular endothelial cells were isolated from age- and sex-matched wild type and C/EBP-δ null mice (pooled from 5 mice per group) and purified by flow cytometry cell sorting using anti-Lyve-1-PE antibodies. Total RNA isolated from the murine lymphatic endothelial cells was subjected to semi-quantitative RT-PCR (Panel C) and real-time PCR (Panel D) for VEGF-C and VEGFR3. *p<0.05, **p<0.01. HMVEC-LLy cells were transfected with either empty vector or C/EBP-δ expression vector for 48 hours. Total RNA was isolated and subjected to semi-quantitative RT-PCR for the genes indicated (Panel E). Each experiment was repeated at least three times. Representative images are shown.

Figure 3. C/EBP-δ regulates lymphangiogenesis in vitro. HMVEC-LLy cells were transfected with a C/EBP-δ expression vector (C/EBP-δ, Panel A) or knocked down using a specific shRNA construct for C/EBP-δ (K'C/EBP-δ, Panel B). Empty vector and non-specific shRNA were used as controls. Expression of C/EBP-δ in lymphatic endothelial cells was determined by semi-quantitative RT-PCR. Lymphatic endothelial cell migration was measured in forced-expression and knockdown cells using Transwell assays in response to 50 ng/ml of VEGF-C stimulation. Migrated cells were counted in 10 randomly selected 200X high power fields under microscopy after a 5-hour incubation (Panels C and D). **p<0.01. Lymphatic vascular network formation was assessed using the Matrigel assay. Vascular network formation in Matrigel was imaged under microscopy 24 hrs after cell plating in C/EBP-δ forced-expression cells (Panel E) and C/EBP-δ knockdown cells (Panel G). Vascular cross points were counted in 10 randomly selected high power fields under microscopy (Panels F and H). The data were collected from three independent experiments performed in triplicate and are expressed as mean ± SD. **p<0.01.

Figure 4. C/EBP-δ regulates VEGF-C autocrine signaling in lymphatic endothelial cell survival. HMVEC-LLy cells were transfected with either shRNA vector for C/EBP-δ (K'C/EBP-δ) or control vector for 48 hours, followed by incubation with PI and antibody against Annexin V. Cell apoptosis was assessed by flow cytometry (Panel A). C/EBP-δ knockdown HMVEC-LLy cells were either incubated with 50 ng/ml of recombinant VEGF-C or co-transfected with a VEGF-C expression vector. Cell apoptosis was assessed by PI and Annexin V staining and flow cytometry (Panel B). Representative images are shown. Live cells were quantitated in each group (Panel C). Each experiment was done in duplicate and repeated three times. Data are expressed as mean ± SE. *p<0.01 vs control, **p<0.01 vs K'C/EBP-δ.

Figure 5. C/EBP-δ regulates VEGF-C and VEGFR3 production in lymphatic endothelial cells through HIF-1α. Purified murine pulmonary LECs from wild type and C/EBP-δ null mice were subjected to semi-quantitative RT-PCR for HIF-1α and C/EBP-δ expression (Panel A). HMVEC-LLy cells were cultured under normoxia (20% O2) or hypoxia (1% O2) for 24 hours, and semi-quantitative RT-PCR was performed to measure C/EBP-δ (Panel B). HMVEC-LLy cells were transfected with either empty vector or C/EBP-δ expression vector for 24 hours and then cultured under normoxia (20% O2) or hypoxia (1% O2) for another 48 hours; transcript levels of VEGF-C, VEGFR3, HIF-1α and C/EBP-δ were measured by semi-quantitative RT-PCR (Panel C). HMVEC-LLy cells were transfected with either control shRNA or shRNA for C/EBP-δ for 24 hours and then cultured under normoxia (20% O2) or hypoxia (1% O2) for another 48 hours; transcript levels of VEGF-C, VEGFR3, HIF-1α and C/EBP-δ were measured by semi-quantitative RT-PCR (Panel D). HMVEC-LLy cells were transfected with either empty vector or C/EBP-δ expression vector for 24 hours, then treated with or without HIF-1α inhibitor (5 μM of geldanamycin) for another 48 hours under hypoxic conditions. Transcript levels of
Validation of the Fibromyalgia Survey Questionnaire within a Cross-Sectional Survey
The Fibromyalgia Survey Questionnaire (FSQ) assesses the key symptoms of fibromyalgia syndrome (FMS). The FSQ can be administered in survey research and in settings where the use of interviews to evaluate the number of pain sites and the extent of somatic symptom intensity, and of tender point examination, would be difficult. We validated the FSQ in a cross-sectional survey with FMS patients. In this survey, participants with a physician diagnosis of FMS were recruited by FMS self-help organisations and nine clinical institutions at different levels of care. Participants answered the FSQ (composed of the Widespread Pain Index [WPI] and the Symptom Severity Score [SSS]), assessing the Fibromyalgia Survey Diagnostic Criteria (FSDC), and the Patient Health Questionnaire-4 (PHQ-4). American College of Rheumatology (ACR) 1990 classification criteria were assessed in a subgroup of participants. 1,651 persons diagnosed with FMS were included in the analysis. The acceptance of the FSQ items ranged from 78.9% to 98.1% completed items. The internal consistency of the items of the SSS ranged from 0.75 to 0.82. 85.5% of the study participants met the FSDC. The concordance rate of the FSDC and ACR 1990 criteria was 72.7% in a subsample of 128 patients. The Pearson correlation of the SSS with the PHQ-4 depression score was 0.52 (p<0.0001) and with the PHQ-4 anxiety score was 0.51 (p<0.0001) (convergent validity). 64/202 (31.7%) of the participants not meeting the FSDC and 152/1283 (11.8%) of the participants meeting the FSDC reported an improvement (slightly to very much better) in their health status since FMS diagnosis (Chi² = 55, p<0.0001) (discriminant validity). The study demonstrated the feasibility of the FSQ in a cross-sectional survey with FMS patients. The reliability, convergent and discriminant validity of the FSQ were good. Further validation studies of the FSQ in clinical and general population settings are necessary.
Introduction
The publication of the American College of Rheumatology (ACR) preliminary diagnostic criteria for fibromyalgia syndrome (FDC) [1] eliminated the tender point examination required for the clinical diagnosis of FMS by the ACR 1990 classification criteria [2]. Because most of the ACR 2010 items can be obtained by self-administration, the FDC were slightly modified so that complete self-administration would be possible by the Fibromyalgia Survey Diagnostic Criteria (FSDC). The FSDC were developed in a longitudinal study of patients of the National Data Bank for Rheumatic Diseases by substituting a count of three symptoms for the physician's (0-3) evaluation of the extent of somatic symptom intensity, so that the number of pain sites and the somatic symptom severity are assessed by questionnaire. Patients who satisfy the FSDC meet the following 3 conditions: 1) Widespread Pain Index (WPI) ≥7/19 pain sites and Symptom Severity Score (SSS) ≥5/12, or WPI between 3-6/19 and SSS ≥9/12; 2) symptoms have been present at a similar level for at least 3 months; 3) the patient does not have another disorder that would otherwise sufficiently explain the pain [3]. Conditions 1 and 2 can be assessed by the Fibromyalgia Survey Questionnaire (FSQ), which includes the WPI and the SSS. The sum of the WPI and the SSS constitutes the Fibromyalgianess Scale (FS), or polysymptomatic distress scale, a measure of physical and psychological symptom intensity (distress) that can be applied to every disease. The FS can be used to track disease status. The assessment of the key symptoms of FMS by the FSQ allows administration in survey research and in settings where the use of interviews to evaluate the number of pain sites and the extent of somatic symptom intensity would be difficult.
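Condition 1 of the FSDC reduces to a simple threshold rule on the two scores. The following sketch encodes that rule together with the Fibromyalgianess Scale; the function names are illustrative, not part of the criteria publication:

```python
def meets_fsdc_condition1(wpi: int, sss: int) -> bool:
    """FSDC condition 1: WPI >= 7 and SSS >= 5,
    or WPI between 3 and 6 with SSS >= 9.
    WPI counts 0-19 pain sites; SSS ranges 0-12."""
    if not (0 <= wpi <= 19 and 0 <= sss <= 12):
        raise ValueError("scores out of range")
    return (wpi >= 7 and sss >= 5) or (3 <= wpi <= 6 and sss >= 9)


def fibromyalgianess(wpi: int, sss: int) -> int:
    """Fibromyalgianess Scale (FS): sum of WPI and SSS, range 0-31."""
    return wpi + sss
```

Note the asymmetry of the rule: a moderate pain count (WPI 3-6) only satisfies the criteria when symptom severity is near its maximum (SSS ≥9), whereas widespread pain (WPI ≥7) requires only moderate severity (SSS ≥5).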
In this study we provided the first translation of the FSQ into German and validated the FSQ for the first time in a cross-sectional survey with FMS patients in Germany.
Clinical institutions
Participants of the study were recruited by the two largest German FMS self-help organisations and by nine clinical institutions. The specialties of the clinical institutions were pain medicine and psychotherapy (N = 3), rheumatology (N = 2), complementary and alternative medicine (N = 2), physical therapy (N = 1) and pain therapy (N = 1). The settings were outpatient (N = 6), inpatient (N = 2) and day clinic (N = 1). The levels of care were secondary care (N = 6), tertiary care (N = 1) and rehabilitation (N = 1).
From November 1, 2010 to April 30, 2011, all consecutive patients with an established or first diagnosis of FMS at the participating study centres were asked by the physicians of these centres to take part in the study. All participating physicians had more than 10 years of experience in the management of FMS patients. The questionnaires were handed out by the physicians of the centres with a standardized letter explaining the focus of the study. The questionnaires were returned by the patients in a closed and anonymous envelope and kept away from the charts. In 4 centres a tender point examination was performed according to a standardised protocol [4].
Self-help organisations
The package of questionnaires was sent by the central office of the German League for people with Arthritis and Rheumatism to their regional offices with the request that the leaders of the local self-help groups distribute the FSQ during the meetings to the group members (FMS patients). Group members were asked to fill out the questionnaires separately, outside the group meetings, and not to discuss them with other group members.
The German Fibromyalgia Association included the package in the issue 4/2010 of its member journal ''Optimist'' dispatched by post to all members.
The questionnaires were returned by the patients by post to the central office. Moreover, the questionnaires were available on the homepages of both self-help organisations. After downloading and completing they could be sent by mail, fax or email to the central offices. Employees of both central offices removed the personal identifying information and sent the questionnaires to the coordinating study centre.
Inclusion-and exclusion criteria
Members of the self-help organisations had to report that the diagnosis of FMS had been established by a physician. Participants without a (reported) physician diagnosis of FMS were excluded.
The patients of the study centres had to have been previously or currently diagnosed with FMS according to the ACR 1990 classification criteria [2] or the criteria of the Association of the Medical Scientific Societies in Germany (AWMF) [5]. In four study centres the ACR 1990 criteria [2] were reevaluated during the study examination. A diagnostic work-up including a complete physical examination and defined laboratory tests according to the German guideline on the management of FMS had been performed in every patient of the study centres, either in the past or during the study [6]. Patients with somatic diseases sufficiently explaining the pain sites of the WPI (e.g. highly active inflammatory rheumatic disease) and patients who were not able to read German were excluded.
Questionnaires
Demographic data (age, sex, family status, educational level, current professional status, membership of an FMS self-help organisation) and medical data (years since onset of chronic widespread pain and since FMS diagnosis) were assessed by a questionnaire used in a previous multicenter FMS study [5]. Patients were asked how, in their opinion, their health status had changed over the years since the diagnosis of FMS (1 = very much worse, 2 = much worse, 3 = slightly worse, 4 = no change, 5 = slightly better, 6 = much better, 7 = very much better).
The FSQ included the Symptom Severity Score (SSS), with 3 major symptoms (fatigue, trouble thinking or remembering, waking up tired [unrefreshed]), each coded 0-3 (0 = not present to 3 = extreme), and three additional symptoms (pain or cramps in the lower abdomen, depression, headache), each coded as present (1) or not present (0) (total subscore 0-3). These three items are surrogates for the somatic symptom burden item of the ACR 2010 criteria. The SSS ranges from 0 to 12. The Widespread Pain Index (WPI) includes 19 non-articular pain sites [2] (see table 1). The English version of the SSS had been forward- and back-translated by four German physicians, two of whom had worked for several years in the USA. We used the validated German version of the WPI [7].
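The SSS coding described above can be made concrete with a small scoring helper; this is an illustrative sketch (parameter names are ours), assuming the 0-3 and 0/1 item codings given in the text:

```python
def symptom_severity_score(fatigue, cognitive, waking_unrefreshed,
                           lower_abdominal_pain, depression, headache):
    """Symptom Severity Score (SSS), range 0-12.
    Three major symptoms are rated 0-3 (0 = not present,
    3 = extreme); three additional symptoms are coded
    present (1) / not present (0)."""
    for major in (fatigue, cognitive, waking_unrefreshed):
        if major not in (0, 1, 2, 3):
            raise ValueError("major symptoms must be rated 0-3")
    for binary in (lower_abdominal_pain, depression, headache):
        if binary not in (0, 1):
            raise ValueError("additional symptoms are coded 0 or 1")
    return (fatigue + cognitive + waking_unrefreshed
            + lower_abdominal_pain + depression + headache)
```

The maximum of 12 is reached when all three major symptoms are extreme (3 × 3) and all three additional symptoms are present (3 × 1).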
The 4-item Patient Health Questionnaire-4 (PHQ-4) is an ultra-brief self-report questionnaire that consists of a 2-item depression scale (PHQ-2) and a 2-item anxiety scale (GAD-2). A score of 3 or greater on the depression subscale represents a reasonable cut-point for identifying potential cases of major depression or other depressive disorders; a score of 3 or greater on the anxiety subscale represents a reasonable cut-point for generalized anxiety, panic, social anxiety, and posttraumatic stress disorders. The PHQ-4 total score can serve as a measure of psychological distress [8]. We used the validated German version of the PHQ-4 [9].
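The PHQ-4 cut-points translate into a small scoring routine; the function and its return keys are hypothetical names for illustration, assuming each of the four items is scored 0-3:

```python
def phq4_flags(phq2_items, gad2_items, cutoff=3):
    """Score the PHQ-4 from its two 2-item subscales (items 0-3 each).
    A subscale score >= cutoff (default 3) flags possible depression
    (PHQ-2) or anxiety (GAD-2); the total (0-12) indexes distress."""
    dep = sum(phq2_items)
    anx = sum(gad2_items)
    return {
        "depression_score": dep,
        "anxiety_score": anx,
        "total": dep + anx,
        "depression_flag": dep >= cutoff,
        "anxiety_flag": anx >= cutoff,
    }
```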
Validation methods and hypotheses
The methods used to validate the FSQ were as follows.

Patient acceptability (acceptance) of the FSQ was assessed by the proportion of missing or invalid items. The proportion of missing or invalid items was expected to be approximately equal to that in surveys of German patients with chronic liver diseases [10] and celiac disease [11].

The reliability of the SSS was assessed by internal consistency (Cronbach's α coefficient), which measures the overall correlation between items within a scale. A level of 0.7 and higher is considered desirable [12].

Face (content) validity was assessed by the think-aloud technique [13]: five physicians (pain medicine, psychosomatic medicine, rheumatology) and five FMS patients of local self-help groups not participating in the study verbalized their thought processes while filling out the FSQ.

Convergent validity of the SSS and FS was determined by the Pearson correlation with the total sum score of the PHQ-4. Convergent validity is fulfilled when the scale scores for related concepts show moderate to high correlation (correlation coefficient 0.4 to 0.8) [12]. Convergent validity of the FSDC was determined by comparing the concordance rates of self-reported physician diagnosis of FMS (members of self-help organisations) with the FSDC and of physician-established diagnosis of FMS (participants of clinical centres) with the FSDC. Based on previous studies of the concordance rates of different FMS diagnostic criteria [5,14], we expected concordance rates of 70-80%.

Discriminant validity was tested by the following hypothesis: longitudinal studies demonstrated that persons diagnosed with FMS can switch between criteria-positive and criteria-negative states [15]. Therefore we assumed that patients who did not meet the FSDC at the time of evaluation would report more frequently that their health status had improved since the diagnosis of FMS.
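Internal consistency was quantified with Cronbach's α. A minimal stdlib-only sketch of the standard formula follows (illustrative, not the study's actual analysis code): α = k/(k−1) · (1 − Σ item variances / variance of total score).

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of respondents' item-score lists.
    item_scores: [[item1, item2, ...], ...], one inner list per
    respondent. Uses sample variances throughout."""
    k = len(item_scores[0])  # number of items in the scale
    # variance of each item across respondents
    item_vars = [statistics.variance([resp[i] for resp in item_scores])
                 for i in range(k)]
    # variance of the respondents' total scores
    total_var = statistics.variance([sum(resp) for resp in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When items are perfectly correlated, the total-score variance dominates the item variances and α approaches 1; uncorrelated items push α toward 0.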
Statistical analysis
The data were entered by four pairs of study assistants into a pre-constructed Excel data sheet. Data entry was checked by two authors at random and for plausibility during descriptive data analysis. Missing items of the SSS, WPI and PHQ-4 were coded as zero. Patients were excluded from analysis if all items of the SSS and/or WPI and/or PHQ-4 were unanswered.
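The missing-data rules (code individual missing items as zero; exclude a respondent whose scale is entirely missing) can be sketched as follows; the helper name and the use of None for a missing item are illustrative:

```python
def prepare_scores(wpi_items, sss_items):
    """Apply the study's missing-data rules: if every item of a scale
    is missing (None), the respondent is excluded (returns None);
    otherwise missing items are coded as zero before summing."""
    def clean(items):
        if all(v is None for v in items):
            return None  # fully missing scale -> exclude respondent
        return [0 if v is None else v for v in items]

    wpi, sss = clean(wpi_items), clean(sss_items)
    if wpi is None or sss is None:
        return None
    return sum(wpi), sum(sss)
```

Coding missing items as zero is conservative with respect to meeting the criteria, since it can only lower the WPI and SSS totals.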
Support
The participants of the study did not receive any reimbursement. Material costs were covered by the participating institutions.
Ethics
The requirements of data protection and medical professional secrecy were respected by all study investigators. All participants gave their informed written consent to the study. The study had been specifically approved by the ethical committee of the Ludwig Maximilian Universität München and by the review boards of all study centers.
Study participants
There were no data available concerning how many patients contacted by the self-help organisations did not meet the inclusion criteria or refused to take part in the study. The German League for people with Arthritis and Rheumatism estimated that approximately 10 000 of their members were FMS patients. The German Fibromyalgia Association is reported to have approximately 4000 members with FMS.
123 patients of the clinical samples did not meet the primary inclusion criteria, and 40 of the contacted patients refused to take part in the study. 1694 persons returned the questionnaires, of whom 1143 (69.2%) had been contacted via self-help organisations. 43 of these 1694 persons were excluded because all items of the WPI (N = 40) or SSS (N = 3) were missing. The questionnaires of at least 10 persons who were excluded due to missing WPI items did not include the WPI because of an organisational mistake. 1651 persons were included in the analysis.
The total study sample was composed mainly of middle-aged women with a long duration of CWP and of FMS diagnosis. In 30 patients, FMS had been diagnosed for the first time recently (see table 2). 881/1633 (54.6%) of the participants scored ≥3 on the PHQ-4.

Reliability (internal consistency). Cronbach's alpha of the SSS was 0.65 and of the FS was 0.71.
Face validity. Two patients were unsure where to indicate pain in the elbows and knees in the WPI because these pain sites were not listed. Two physicians were puzzled by the different time frames of the FSQ. One physician wondered why abdominal pain was assessed both in the SSS and in the WPI.
Convergent validity. The Pearson correlation of the SSS with the PHQ-4 total score was 0.56 (p<0.0001) and of the FS with the PHQ-4 total score was 0.48 (p<0.001).
The diagnosis of FMS according to the ACR 1990 criteria was reevaluated at the date of appointment in 128 patients with previously or currently diagnosed FMS in the 4 study centres. The mean tender point count (TPC) was 13.8 (SD 3.5) (range 0-18). 107/128 (83.6%) participants met the ACR 1990 classification criteria of FMS. The concordance rate of the FSDC and ACR 1990 criteria was 72.7% (see table 3).
Summary of main findings
In this study we provided the first translation of the FSQ into another language and validated the FSQ for the first time in a cross-sectional survey, with 1651 FMS patients. The acceptability, reliability and validity of the FSQ met the predefined quality criteria set out in the validation hypotheses.
Relation to other studies
The proportion of missing items in the SSS ranged from 1.9% to 21.1%. The item with 21% missing values was the headache item; we cannot explain its low completion rate. This range was wider than that of a disease-specific health-related questionnaire in a survey of 522 patients with celiac disease (0.2-11.2% [11]) and of 202 patients with chronic liver disease (0.4-2.8% [10]).
The overall concordance rate between the physician-made diagnosis of FMS and the FSDC was 85.5%. In a subsample, the concordance of the FSDC with the ACR 1990 criteria was 72.7%. These concordance rates are similar to those of a study in a US rheumatologic practice, in which the concordance rate between the ACR 1990 and clinical criteria of FMS was 72% [14].
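The concordance rates compared here are simply the proportion of paired assessments on which two diagnostic instruments agree. A minimal sketch (the function name and example data are our own, chosen so the example lands near the reported 72.7%):

```python
def concordance_rate(diagnosis_a, diagnosis_b):
    """Proportion of paired binary diagnoses on which two
    instruments agree (both positive or both negative)."""
    if len(diagnosis_a) != len(diagnosis_b):
        raise ValueError("paired samples must have equal length")
    agreements = sum(a == b for a, b in zip(diagnosis_a, diagnosis_b))
    return agreements / len(diagnosis_a)

# Hypothetical example: the instruments agree on 8 of 11
# paired assessments, i.e. a concordance rate of ~72.7%.
rate = concordance_rate([1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1],
                        [1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1])
```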
The study results highlight the problem of defining cut-off values for continuous symptom disorders such as FMS. 14.5% of the patients who had once been diagnosed with FMS did not meet the FSDC criteria of FMS at the time of the survey. Most notably, 32% of these patients reported an improvement in health status since the FMS diagnosis. In a longitudinal study of 1,555 fibromyalgia patients meeting the FSDC criteria at study entry, conversion from and to criteria-positive status was common during 7,448 semi-annual observations over up to 11 years. During follow-up, 716 patients (44.0%) failed to meet criteria at least once, and at study closure 24.3% failed to meet criteria [1]. In the long-term management of these patients, the amount of distress ("fibromyalgianess" or polysymptomatic distress) can be assessed by summing the SSS and the WPI [3].
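The polysymptomatic distress ("fibromyalgianess") score mentioned above is simply the sum of the two subscales. A minimal sketch, assuming the published scale bounds (WPI 0-19, SSS 0-12; the function name is ours):

```python
def polysymptomatic_distress(wpi: int, sss: int) -> int:
    """Sum the Widespread Pain Index (0-19) and the Symptom
    Severity Scale (0-12) into the 0-31 polysymptomatic
    distress score."""
    if not 0 <= wpi <= 19:
        raise ValueError("WPI must be in 0-19")
    if not 0 <= sss <= 12:
        raise ValueError("SSS must be in 0-12")
    return wpi + sss

# Hypothetical patient: 14 painful sites and moderate symptom
# severity give a distress score of 22 on the 0-31 scale.
score = polysymptomatic_distress(wpi=14, sss=8)  # -> 22
```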
The study confirms the high levels of distress reported by FMS patients and the conceptualisation of FMS as a continuum disorder located at the extreme end of the continuum of distress [16,17].
Limitations
The study did not include FMS-patients from primary care settings.
There is no gold standard for dealing with missing values. We decided not to use imputation methods because the percentage of missing values was one criterion of validation.
Testing of the test-retest reliability and responsiveness to change was not possible within a cross-sectional survey. The test-retest reliability of a WPI ≥ 7 in FMS patients of different clinical settings was 100%, and the intraclass correlation of the WPI was 0.78 over a period of 4-12 weeks [7].
Because there is no gold standard for the clinical diagnosis of FMS [18], assessment of the criterion validity of the FSDC was not possible. We used the ACR 1990 criteria as a common standard for the assessment of convergent validity. Due to the study design, it was not possible to perform a TPC in all patients.
Conclusions
Research. The study demonstrated the feasibility, reliability and validity of the FSQ in a survey with German FMS-patients. Further validation studies of the FSQ in other countries and/or languages are necessary. A standardization of the different time frames of the questions of the FSQ should be considered.
Clinical practice. The FSDC are not to be used for self-diagnosis or as a substitute for a physician's diagnosis. The FSQ can be used to gather information about the key symptoms of FMS and the extent of somatic symptom reporting, but the interpretation and assessment of questionnaire validity belong to the physician [19].
Chylothorax following posterior low lumbar fusion surgery: A case report
BACKGROUND Postoperative chylothorax is usually regarded as a complication associated with cardiothoracic surgery; however, it is a rare complication in orthopedic surgery. This case report describes a female patient who developed chylothorax after a successful L4-S1 transforaminal lumbar interbody fusion surgery. The etiology, diagnosis, and treatment were analyzed and discussed. CASE SUMMARY A 50-year-old woman was admitted with repeated back and leg pain. She was diagnosed with L4 degenerative spondylolisthesis, L4/L5 and L5/S1 intervertebral disc herniation, and L5 instability, and underwent successful posterior L4-S1 instrumentation and fusion surgery. Unfortunately, thoracic effusion was identified 2 d after the operation. The thoracic effusion was finally confirmed to be chylous on the basis of two positive chyle qualitative tests. The patient was discharged after 12 d of continuous drainage, 3 d of total parenteral nutrition and fasting, and other supportive treatments. No recurring symptoms were observed within 12 mo of follow-up. CONCLUSION Differential diagnosis is crucial for unusual thoracic effusion. Comprehensive diagnosis and treatment of chylothorax are necessary. Thorough intraoperative protection to relieve the high thoracic pressure caused by the prone position is important.
INTRODUCTION
Chylothorax, a rare postoperative complication in adult patients, is the result of chyle leaking from the thoracic duct or its collateral branches into the pleural cavity. Chylothorax causes dyspnea, heart failure, hemodynamic disorder, malnutrition, immune suppression, and even death [1]. Chylothorax is usually caused by traumatic and non-traumatic factors, including sharp and even blunt trauma to the thorax, iatrogenic factors following surgery, and thoracic tumors [2,3]. In a retrospective analysis of chylothorax in 203 patients, iatrogenic factors, including esophagectomy, surgery for congenital heart disease, lung cancer resection, and mediastinal mass resection, accounted for 38.9% of cases [3]. To date, there is no clear consensus on the clinical diagnosis and treatment of chylothorax [4-6]. Although postoperative chylothorax is usually regarded as a complication associated with cardiothoracic surgery [4], chylothorax has rarely been reported in spinal surgery, where the anterior surgical approach to the cervical, thoracic, or thoracolumbar vertebrae is a risk factor for lymphatic duct injury [7-9]. Interestingly, in our patient, chylothorax occurred after L4-S1 transforaminal lumbar interbody fusion (TLIF) surgery.
We investigated the diagnosis and treatment of chylothorax. The diagnosis was confirmed by two positive chyle qualitative tests, and the patient was successfully treated with continuous drainage, total parenteral nutrition, and fasting. We speculate that increased intrathoracic pressure caused by the prone position during surgery was involved in the etiology of the chylothorax.
Chief complaints
A 50-year-old female was admitted due to repeated back and leg pain for more than 2 years.
History of present illness
The patient had persistent back pain without inducement for 2 years with recurrent episodes of lower limb pain without numbness. She had visited another hospital and was diagnosed with L4 spondylolisthesis. Conservative treatments were prescribed without relief. Therefore, she visited our hospital for further treatment.
History of past illness
The patient was previously healthy.
Personal and family history
No relevant disorders were identified. The patient had not delivered any children.
Physical examination
At the time of admission, the patient's vital signs included blood pressure of 113/77 mmHg, heart rate of 85 bpm, and temperature of 36.3˚C (head). Her physical status included height of 1.55 m, weight of 55 kg, and body mass index of 22.89 kg/m². Spine examination revealed no spinal curve, no decreased muscle strength, and no abnormal reflex. However, L4-L5 interspinous pressing pain was observed, accompanied by radiating pain and hypoesthesia in the right medial leg.
Laboratory examinations
Routine blood tests revealed normal blood cell contents. Prothrombin, D-dimer and partial thromboplastin time were normal. Serum C-reactive protein was normal at 1.69 mg/dL (reference range < 5 mg/dL) and interleukin-6 was 1.88 pg/mL (reference range 0-7 pg/mL). Blood biochemistry as well as urinary and fecal analysis were normal.
Imaging examinations
Radiologic examinations including X-ray, computed tomography (CT), and magnetic resonance imaging indicated spondylolisthesis of L4, instability of L5, and disc herniation of L4/L5 and L5/S1. Electrocardiogram, B-mode ultrasound of the abdomen, and chest X-ray (Figure 1A and B) were also normal.
OUTCOME AND FOLLOW-UP
On the first postoperative day, her heart rate and peripheral oxygen saturation (SpO2) were normal. Routine postoperative blood and blood biochemistry tests showed that the concentrations of hemoglobin, total protein, and albumin were 108 g/L (reference 115-150 g/L), 48.8 g/L (65.0-85.0 g/L), and 32.3 g/L (40.0-55.0 g/L), respectively. However, the patient developed hypotension that fluctuated between 80-90/50-60 mmHg, which may have been caused by blood and fluid loss, as well as residual muscle relaxant, given the 4.5-h operation time. Consequently, the patient was managed with infection prevention, intravenous infusion of crystalloid and colloid fluids and albumin, as well as oral feeding. On postoperative day 2, even with oxygen at 3 L/min nasally, the patient had severe dyspnea and a dry cough. Physical examination revealed a body temperature of 37˚C (head), heart rate of 80 bpm, blood pressure of 90/60 mmHg, and SpO2 of 90%. After detection of reduced respiratory sounds, a high-resolution chest CT scan was immediately performed, indicating a small to moderate amount of effusion on both sides of the pleural cavity, although her preoperative frontal and lateral chest films had been normal (Figure 1). Therefore, an intercostal tube was inserted into the right pleural cavity for continuous drainage; 250 mL of yellowish, odorless liquid resembling interstitial fluid was drained, and subsequent analysis found no bacterial infection (Figure 2). Her albumin level had decreased sharply from 50.6 g/L pre-operation to 32.3 g/L post-operation, resulting in hypo-osmolality and, consequently, leakage of tissue fluid from the blood vessels into the pleural cavity, which may have been the cause of the pleural effusion and hypotension. Unfortunately, the drainage volumes over the first 4 d were 700 mL, 600 mL, 300 mL, and 500 mL, respectively, with no significant decrease.
Her vital signs were relatively stable, with a temperature of 37˚C, heart rate of 80 bpm, blood pressure of 100-111/60-66 mmHg, and SpO2 of about 95%. Repeated chest CT showed a small amount of effusion in the right hemithorax and a moderate amount in the left (Figure 3). Another tube was inserted into the left hemithorax for continuous drainage, and 200 mL of pleural effusion with the same appearance as that from the right hemithorax was drained (Figure 2B). The concentrations of total protein and albumin were 57.2 g/L and 41.2 g/L, respectively, demonstrating good nutritional status and excluding hypo-osmolality as the etiology. Laboratory analyses showed that the effusion fluid from both hemithoraces was transudative, with no bacteria. The diagnosis of chylothorax was confirmed on the basis of two positive chyle qualitative tests. Conservative treatments were prescribed, including fasting and adequate supportive therapy (such as total parenteral nutrition). The drainage decreased significantly to dozens of milliliters after fasting for 2 d (Figure 4). She was re-fed after 3 d of fasting, with no increase in drainage over the next 3 consecutive days (Figure 5). Finally, the tubes were removed, and no recurrence was observed after 2 d of observation. The patient was successfully discharged on postoperative day 16 and has been followed up for 12 mo without recurrence (Figure 6).
DISCUSSION
Postoperative chylothorax occurs after cardiothoracic surgery and anterior spinal surgery [3]. However, chylothorax after posterior L4-S1 lumbar instrumentation and fusion surgery has not been reported before. Anatomically, the typical path of the thoracic duct runs from the cisterna chyli at the level of L2, up through the aortic hiatus at the T12 level into the thorax. It then ascends along the thoracic spinal column on the right-anterior side, between the thoracic aorta and the azygos vein, to the T5 or T6 level, crosses to the left side of the thoracic vertebrae, and finally joins the left jugular and subclavian veins [10]. Based on this anatomy, rupture or obstruction of the thoracic duct causes chyle leakage, resulting in the development of chylous ascites or pleural effusion. The most common traumatic causes of chylothorax are cardiothoracic surgical procedures; non-traumatic causes include malignancy, tuberculosis, liver cirrhosis, and malformations [3,11,12]. Romero et al [13] reported that chylous ascites can present as chylothorax in patients with liver cirrhosis, owing to communications between the thoracic and peritoneal cavities and the lower pressure in the pleural cavity. Surgery was not the etiology in our case, as the surgical site was L4 to S1 posteriorly, well below the typical location of the cisterna chyli (L2 level), and TLIF surgery does not expose tissues anterior to the lumbar vertebrae. Additionally, although variations in the lymphatic system are complicated, we excluded this cause because the anterior annulus fibrosus was confirmed not to have been breached. Non-traumatic etiologies (underlying disorders) were also excluded, as there was no evidence of any such disorder causing the chylothorax. With conservative management including drainage, fasting, total parenteral nutrition, and other supportive treatments, the patient recovered within 12 d of diagnosis. As the patient refused additional examinations, the specific etiology remains unknown.
The authors have assumed that the 4.5-h prone position during surgery might have induced hyperpressure in the thoracic cavity, damaging the thoracic lymphatic system in the manner of blunt trauma [14,15]. The procedure for placing a patient in the prone position for posterior lumbar fusion surgery in our department is as follows: after the anesthesia takes effect, the patient is turned into the prone position, and a self-made, soft, square-frame cushion (Figure 7A and B) is placed under the abdomen to reduce abdominal pressure. The main load-bearing parts are thus the pubis, bilateral ilia, bilateral abdomen, and lower chest wall. The potential etiology of chylothorax in this patient was therefore thought to be the intraoperative prone position, which induced sustained hyperpressure in the thoracic cavity. We conclude that intraoperative protection is crucial: protection of the abdomen alone is not enough in the prone position during surgery, and the chest should also be protected to reduce thoracic cavity pressure, especially during lengthy operations. The self-made cushion was therefore replaced with a newer and better cushion (Figure 7C and D). This patient was considered to have idiopathic chylothorax.
Patients with chylothorax usually present with dyspnea (the most common symptom), chest pain, and a nonproductive cough, as in most types of pleural effusion [16]. Our patient initially had dyspnea and a dry cough, but no chest pain. A chest CT scan revealed small to moderate effusion in the bilateral pleural cavities. With no previous experience of chylothorax, we initially thought that the effusion was caused by hypoproteinemia (albumin concentration of 32.3 g/L). Diagnostic methods for chylothorax include the typical milky appearance of the pleural fluid, laboratory analysis, and, when localization is necessary, lymphangiography. A white, milky pleural effusion is a characteristic but unreliable diagnostic sign, as fewer than half of patients with chylous effusion present with this appearance, and pseudochylothorax with abundant cholesterol also looks milky [17]. More precise laboratory tests are now available. Chylomicrons are a postprandial marker of chyle, and the presence of chylomicrons in the effusion is the gold standard for the diagnosis of chylothorax. However, the evaluation of chylomicrons is not available in all laboratories. Alternative and differential diagnostic criteria are as follows: pleural fluid triglyceride > 110 mg/dL; ratio of pleural fluid to serum triglyceride > 1.0; and ratio of pleural fluid to serum cholesterol < 1.0 [17,18]. In our case, the two positive chyle qualitative tests indicated chylothorax, even though the color of the effusion was not typically milky but yellowish; in addition, the patient responded well to drainage, total parenteral nutrition, and fasting. Despite the positive qualitative tests, the pleural fluid triglyceride concentration was 14.18 mg/dL, and the ratios of the right and left pleural fluid to serum triglyceride and cholesterol were 0.16 (< 1.0), 0.76/3.02 (< 1.0) and 0.74/3.54 (< 1.0), respectively. Only the cholesterol ratio met the alternative diagnostic criteria.
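The alternative laboratory criteria listed above can be written out as a small decision helper. A minimal sketch using the cited thresholds (the function name is ours; illustrative only, not a clinical tool):

```python
def chylothorax_lab_criteria(pleural_tg, serum_tg,
                             pleural_chol, serum_chol):
    """Evaluate the three alternative laboratory criteria for
    chylothorax (all concentrations in mg/dL): pleural fluid
    triglyceride > 110; pleural/serum triglyceride ratio > 1.0;
    pleural/serum cholesterol ratio < 1.0."""
    return {
        "pleural_tg_gt_110": pleural_tg > 110,
        "tg_ratio_gt_1": pleural_tg / serum_tg > 1.0,
        "chol_ratio_lt_1": pleural_chol / serum_chol < 1.0,
    }

# In the reported case, the pleural fluid triglyceride was only
# 14.18 mg/dL, so the > 110 mg/dL criterion was not met even
# though the chyle qualitative tests were positive.
```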
As chyle derives from digested fat and accumulates in the gastrointestinal lymphatic vessels, and our patient had fasted for 20 h preoperatively and had little food intake postoperatively, the triglyceride concentration in her pleural fluid may have been much lower than the usual reference value. Therefore, the triglyceride concentration in pleural effusion is not always greater than 110 mg/dL, especially in postoperative patients who have fasted [19]. The management of chylothorax includes conservative treatment, pleurodesis, surgical ligation, and interventional embolization [5,20]. For non-traumatic cases, conservative management including drainage, total parenteral nutrition, a medium-chain fatty acid diet, and fasting is effective [4]. Somatostatin or octreotide is used mostly for congenital chylothorax in pediatric patients to reduce lipid absorption and has been reported to be effective [21,22]; however, its dose and course are not well established. We did not prescribe it because strict fasting contributes most to decreasing chyle production. In the review by Schild and Pieper [23], the success rate of conservative management varied from 16% to more than 75%. If conservative treatment fails after 2 wk, or if the initial drainage output exceeds 1000 mL/d, surgical or interventional methods should be considered [23]. In the clinical trial by Haniuda et al [24], involving seven patients with postoperative chylothorax after pulmonary resection, six patients received successful surgical treatment after unsuccessful conservative therapy, supporting aggressive surgical treatment for post-traumatic or post-surgical chylothorax [24]. Jeong et al [25] compared radiological interventional treatment with conservative treatment and concluded that the former resulted in a shorter median drainage time, nil per os duration, and median length of hospital stay. Unfortunately, interventional treatment is available only in a few large centers.
Not all of the above treatments are completely effective, and refractory chylothorax sometimes occurs. Lai et al [26] introduced a safe and effective modified pleurodesis method for treating refractory chylothorax; only one patient suffered a recurrence and was cured by a second pleurodesis [26]. Combined conservative and surgical or interventional management is preferred by most clinicians [4]. Our patient received 12 d of comprehensive conservative management after the detection of chylothorax, which is consistent with previous reports of chylothorax due to other causes.
CONCLUSION
Chylothorax following posterior lumbar fusion surgery is rare. In this case, the unusual thoracic effusion initially led us to a mistaken suspicion of hypo-osmolality. Differential diagnosis is crucial for unusual thoracic effusion. Chylothorax can be diagnosed by a positive chyle qualitative test combined with diagnostic treatment consisting of comprehensive conservative therapies. We believe that thorough intraoperative protection to relieve the high thoracic pressure caused by the prone position is important.
Quantitation of mitochondrial dynamics by photolabeling of individual organelles shows that mitochondrial fusion is blocked during the Bax activation phase of apoptosis
A dynamic balance of organelle fusion and fission regulates mitochondrial morphology. During apoptosis this balance is altered, leading to an extensive fragmentation of the mitochondria. Here, we describe a novel assay of mitochondrial dynamics based on confocal imaging of cells expressing a mitochondrial matrix–targeted photoactivable green fluorescent protein that enables detection and quantification of organelle fusion in living cells. Using this assay, we visualize and quantitate mitochondrial fusion rates in healthy and apoptotic cells. During apoptosis, mitochondrial fusion is blocked independently of caspase activation. The block in mitochondrial fusion occurs within the same time range as Bax coalescence on the mitochondria and outer mitochondrial membrane permeabilization, and it may be a consequence of Bax/Bak activation during apoptosis.
Introduction
Mitochondria form dynamic interconnected networks, and the relative rates of mitochondrial fusion and fission have been implicated in the regulation of their number, size, and shape (Mozdy and Shaw, 2003;Scott et al., 2003). Fragmentation of mitochondria occurs upon induction of apoptosis (Karbowski and Youle, 2003), and it has been suggested that activation of the mitochondrial fission machinery is one of the primary triggers of this process (Frank et al., 2001;Breckenridge et al., 2003). However, under physiological conditions, mitochondrial fission is counteracted by fusion leading to a dynamic stability of the mitochondrial network (Mozdy and Shaw, 2003;Scott et al., 2003), suggesting that mitochondrial fusion may be stimulated in response to the activation of fission machinery during apoptosis. In this report, we analyze the dynamics of mitochondria in healthy and apoptotic cells by visualization and quantification of mitochondrial fusion using a novel assay based on the dilution rate of mitochondria-targeted photo-activable GFP (mito-PAGFP; Patterson and Lippincott-Schwartz, 2002).
Direct visualization of mitochondrial fusion within single living cells
We explored the potential of mitochondrial matrix-targeted photoactivable GFP (PAGFP) to assay individual mitochondrial fusion and fission events. PAGFP is a variant of the Aequorea victoria GFP that, after irradiation with 413-nm light, increases fluorescence ~100 times when excited with 488-nm light (Patterson and Lippincott-Schwartz, 2002). PAGFP was fused to the mitochondrial matrix targeting sequence from subunit VIII of cytochrome c oxidase (mito-PAGFP). Low-intensity 488-nm fluorescent patterns colocalized with the red fluorescent marker of the mitochondrial matrix, mito-DsRED2, in HeLa cells (Fig. 1 A), human primary myocytes (Fig. 1 B), rat hippocampal neurons (Fig. 1 C), and several other cell types (not depicted) transfected with mito-PAGFP (Fig. 1, A-C, pre). Photoactivation of regions of interest (ROIs; Fig. 1, white circles) by a short impulse of 413-nm light within mito-DsRED2-expressing mitochondria, followed by three-dimensional (3D) confocal imaging, revealed a dramatic increase in the green fluorescence localized within the mitochondrial network (Fig. 1, A-C, post) after excitation with 488-nm light, confirming the proper mitochondrial localization and photoactivation of the mito-PAGFP fusion protein. The photoactivated protein redistributed, within seconds, out of the activation ROIs, but within restricted tubular shapes, showing rapid diffusion of GFP in the mitochondrial matrix. The highest degree of mitochondrial connectivity was observed in myocytes (Fig. 1 B), an intermediate degree in HeLa (Fig. 1 A), Cos-7 cells, and primary fibroblasts (not depicted), and the lowest degree in the processes of primary hippocampal neurons (Fig. 1 C).
[Figure legends, recovered from caption text interleaved with the body: Fig. 1 — HeLa cells, human primary myocytes, and primary hippocampal neurons (C) cotransfected with mito-DsRED2 (mitochondria shown with an emboss filter) and mito-PAGFP (green); regions marked with white circles were irradiated with 413-nm light and imaged with 488-nm excitation before (pre) and ~30 s after (post) photoactivation. Fig. 2 — time-lapse 3D confocal imaging of photoactivated mitochondria: (A) activated and nonactivated mitochondria (arrowheads) 30 s before and after fusion and intramitochondrial exchange of matrix contents; (B) a photoactivated mitochondrion in a HeLa cell divides (arrowheads) between 45 and 50 s, followed at 180 s by redistribution of mito-PAGFP to nonactivated mitochondria (arrows), with images false colored to highlight the fluorescence changes; (C) HeLa (a), Cos-7 cells (b), hippocampal neurons (c), WT MEFs (d), and Mfn1−/− MEFs (e) transfected with mito-PAGFP, with one to four ROIs per cell photoactivated and z-stacks covering the entire cell thickness acquired over time; images shown are pseudocolored.]

Mitochondrial fusion assays based on the fusion of two haploid cells of opposite mating types with mitochondria labeled by spectrally different fluorescent probes (e.g., GFP and RFP), followed by detection of the mixing and colocalization of fluorescent probes that occurs on fusion of mitochondria from both parental cells, have been applied in yeast
Mito-PAGFP-transfected HeLa ( preactivation values (Fig. 2 C, a), indicating a very high rate of mitochondrial fusion. A slower fusion rate is seen in neurons (16 out of 60 analyzed mitochondrial ROIs did not fuse after 1 h; Fig. 2 C, c). To confirm that the decrease in fluorescence of individual mitochondria was due to mitochondrial fusion, mixing and dilution of matrix contents were examined in cells lacking a crucial component of the mitochondrial fusion machinery, Mfn1 (Fig. 1, C [e] and D [e]; Chen et al., 2003). These cells that have punctate mitochondria due to inhibition of fusion and unrestrained fission displayed little or no decrease in mito-PAGFP fluorescence within photoactivated mitochondria over time (Fig. 2,C [e] and D [e]), validating the conclusion that the redistribution and changes in the mito-PAGFP fluorescence reflects mitochondrial fusion.
The aforementioned experiments enable for the first time the visualization in real-time of mitochondrial fusion in cultured cells and establish the applicability of confocal imaging of mito-PAGFP as an efficient way to quantitate the dynamics of the mitochondrial network in living cells. It has been reported that complete fusion of mitochondria assayed by the cell fusion method occurred 7-24 h after cytoplasmic fusion, with some mitochondrial fusion events detectable within 90-120 min (Legros et al., 2002;Chen et al., 2003;Mattenberger et al., 2003). Our results suggest that efficient mixing of matrix content under normal cell growth conditions occurs at a much higher rate. Average fluorescence intensities of photoactivated and nonactivated mitochondria in HeLa, Cos-7, primary hippocampal neurons, WT MEFs, and Mfn1 Ϫ / Ϫ MEFs were measured and plotted against time after photoactivation (Fig. 2 D). A gradual decrease in the fluorescence of photoactivated mitochondria leading to an equilibration of activated and nonactivated mitochondria is clearly visible in HeLa ( t 1/2 [mito-PAGFP fluorescence de-crease] ϭ 28.5 Ϯ 8.5 min), Cos-7 cells ( t 1/2 ϭ 27.3 Ϯ 5.8 min), and WT MEFs ( t 1/2 ϭ 31.2 Ϯ 13.5 min), but not in Mfn1 Ϫ / Ϫ cells ( t 1/2 Ͼ 60 min). Distinctly lower values were obtained with hippocampal neurons ( t 1/2 ϭ 60.7 Ϯ 14 min), reflecting a slower fusion rate.
Inhibition of mitochondrial fusion upon activation of apoptosis
Several recent works describe decreases in mitochondrial network connectivity occurring early during apoptosis (Frank et al., 2001; Pinton et al., 2001; Karbowski et al., 2002; Breckenridge et al., 2003; James et al., 2003), suggesting a role of mitochondrial fission/fusion mediators in the regulation of some steps of this process. Although it has been reported that Drp1 and the fission machinery can participate (Frank et al., 2001; Breckenridge et al., 2003; James et al., 2003), the mechanism of the apoptotic fragmentation of mitochondria is not known. Inhibition of caspases by the broad-specificity caspase inhibitor zVAD-fmk, which has been reported to effectively inhibit functional deterioration of mitochondria, including the increase in reactive oxygen species generation and the loss of ΔΨm (Ricci et al., 2003), does not affect mitochondrial fragmentation occurring during apoptosis (Karbowski and Youle, 2003). Mitochondrial shape and networks are a result of precise balancing of fusion and fission events, and it is believed that changes in the activity of fusion affect the dynamics of fission, and vice versa, leading to the tubular morphology of mitochondria. The fragmentation of mitochondria during apoptosis could be due to activation of fission, as has been suggested previously (Frank et al., 2001; Breckenridge et al., 2003), an inhibition of fusion, or both. Therefore, we examined the mitochondrial fusion rate in cells challenged with staurosporine (STS) and actinomycin D (ActD), stimuli known to activate Bax and Bak and, consequently, the mitochondria-dependent apoptotic pathway (Wei et al., 2001).
HeLa cells transfected with mito-PAGFP were pretreated with 75 μM zVAD-fmk and treated with 12.5 μM ActD or 1 μM STS, followed by activation of mito-PAGFP within several cells at 1, 60, and 120 min and imaging of mitochondrial fusion in several cells over time. Control cells show unaltered mitochondrial fusion dynamics after three photoactivations over 180 min (Fig. 3, A and F). However, treatment with ActD (Fig. 3, B and F) or STS (Fig. 3, C and F) leads to the formation of two distinct groups of cells that could be clearly distinguished after the third activation: those with a fusion rate the same as untreated HeLa cells and those showing a complete inhibition of fusion.
During apoptosis, Bax translocates from the cytosol to mitochondria, where it clusters at mitochondrial scission sites (Karbowski et al., 2002). Bax coalesces, together with another proapoptotic protein from the Bcl-2 family, Bak, into foci (Nechushtan et al., 2001) that colocalize with Mfn2 and Drp1 (Karbowski et al., 2002), proteins that participate in the regulation of mitochondrial dynamics. Therefore, we analyzed mito-PAGFP dilution rates in ActD- and STS-treated cells cotransfected with mito-PAGFP and Bax (Fig. 3, D-F). Increased Bax expression distinctly accelerates STS- and ActD-induced loss of mitochondrial fusion, correlating the degree of apoptosis with the degree of inhibition of mitochondrial fusion.
We compared the time of Bax translocation with that of mitochondrial fusion rate changes. HeLa cells were cotransfected with mito-PAGFP and CFP-Bax, treated with the broad-specificity caspase inhibitor zVAD-fmk (75 μM) and 1 μM STS, and analyzed with time-lapse 3D confocal microscopy, starting immediately after addition of STS (Fig. 4). Initially, STS did not affect the mitochondrial fusion rate; however, complete inhibition of fusion occurred abruptly when mitochondrial clustering of Bax became detectable (Fig. 4 A, i), showing that inhibition of the mitochondrial fusion machinery is temporally linked to the activation of proapoptotic members of the Bcl-2 family and is not dependent on caspase activation. We also tested the effect of a CFP-tagged mutant of Bax, Bax S184V, that constitutively localizes to and circumscribes mitochondria in healthy cells before activation and foci formation during apoptosis (Nechushtan et al., 1999). CFP-Bax S184V did not affect mitochondrial fusion dynamics in untreated, healthy cells, suggesting that mitochondrial translocation of Bax, per se, is not sufficient to inhibit fusion and that conformational changes and foci formation are required (unpublished data).
In addition, when ActD- and etoposide-treated cells were stained for activation of endogenous Bax (immunostaining with conformation-specific 6A7 antibodies [Hsu and Youle, 1998]) and for mitochondrial morphology (staining with Mitotracker red), there was no increase in the fragmentation of mitochondrial networks in 6A7-negative etoposide- and ActD-treated cells compared with the control untreated cells. No cells displayed intact, tubular mitochondria among 6A7 Bax-positive cells (Fig. S1; available at http://www.jcb.org/cgi/content/full/jcb.200309082/DC1), indicating a close correlation of Bax activation and mitochondrial fragmentation. Moreover, as mitochondrial accumulation of Mitotracker red is strictly ΔΨm dependent and only a small population of 6A7-positive cells lacked Mitotracker red staining (Fig. S1), apoptotic inhibition of mitochondrial dynamics does not appear to require changes in ΔΨm. We examined the release of Smac/DIABLO-CFP from the intermembrane space, a process that has been reported to occur simultaneously with cytochrome c release (Rehm et al., 2003), relative to the inhibition of mitochondrial fusion. Mitochondrial outer membrane permeabilization (MOMP) occurred within the 15-min window required to quantitate mitochondrial fusion (Fig. 5). Thus, the three events (Bax translocation, inhibition of mitochondrial fusion, and MOMP) appear to be closely linked, temporally and perhaps mechanistically, during apoptosis.
Our results suggest that inhibition of mitochondrial fusion is a general phenomenon during apoptosis that contributes to or mediates the fragmentation of mitochondria and occurs upon activation of proapoptotic members of the Bcl-2 family. Interestingly, mutations in OPA1, a component of the mitochondrial fusion machinery, have been detected in patients with inherited dominant optic atrophy, a neuropathy resulting from the loss of retinal ganglion cells (Mozdy and Shaw, 2003), suggesting that inhibition or slowing of mitochondrial fusion may contribute to the cell loss. Although the nature of the cell death leading to OPA1 mutation-induced optic atrophy is not known, it has been reported that experimental down-regulation of OPA1 by siRNA commits cells to apoptosis without any additional stimuli (Olichon et al., 2002), supporting the potential correlation of down-regulation of the mitochondrial fusion machinery and induction of apoptotic cell death. Therefore, OPA1 and other proteins participating in the regulation of mitochondrial fusion could participate in mitochondrial steps of apoptosis. The findings reported here, that a complete block in mitochondrial fusion normally occurs during apoptosis close in time to Bax translocation and MOMP, and upstream of postmitochondrial caspase activation, support this hypothesis.
Expression vectors
Mito-DsRED2 and CFP-Bax constructs were prepared as described previously (Karbowski et al., 2002). The PAGFP-N1 vector, which was provided by G. Patterson and J. Lippincott-Schwartz (The National Institute of Child Health and Human Development, NIH; Patterson and Lippincott-Schwartz, 2002), was used to make a mitochondria-targeted version of PAGFP. The PAGFP coding region between NotI and BamHI restriction sites was inserted into a mito-GFP (BD Biosciences) vector fragment digested with the same pair of enzymes, resulting in replacement of the WT GFP coding region with PAGFP. The Smac/DIABLO coding region was amplified using primers introducing an XhoI site (5′-NNNNCTCGAGATGGCGGCTCTGAAGAGTTG-3′) and a BamHI site (3′-NNNNGGATCCCCTCCATCCTCACGCAGGTA-5′) and was inserted into the CFP-N1 vector (BD Biosciences).
Confocal microscopy and image analysis
Cells were grown in 2-well chambers for confocal microscopy (Karbowski et al., 2002). Images were captured with a microscope (model LSM 510; Carl Zeiss MicroImaging, Inc.) using a 63× 1.4 NA Apochromat objective (Carl Zeiss MicroImaging, Inc.). The excitation wavelengths for GFP, CFP, and Mitotracker red or DsRED2 were 488, 458, and 543 nm, respectively. 405- or 413-nm light was used for photoactivation of PAGFP (Patterson and Lippincott-Schwartz, 2002). ROIs were selected, and series of z-sections from the top to the bottom of the cell, with intervals between sections set to 0.5-0.75 μm, were irradiated with 405- or 413-nm light. The same intervals between optical sections were used for imaging. Postacquisition processing was performed with MetaMorph software, Microsoft Excel, and Adobe Photoshop. Pixel intensity of selected regions was measured using MetaMorph software. Mitochondrial ROIs were selected in the first image collected after photoactivation (postactivation values). The same regions were transferred without change to the image obtained before photoactivation (preactivation values) or corrected for the movements of mitochondria in the following postactivation images. CFP-Bax translocation and Smac/DIABLO-CFP release from the mitochondria were quantified using increases and decreases, respectively, of the SD of the pixel intensities within analyzed cells. The initial value of SD was normalized. Data were exported to Microsoft Excel and converted into graphs.
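The SD-based readout described above (clustering of CFP-Bax into foci raises the SD of pixel intensities within a cell; release of Smac/DIABLO-CFP into the cytosol lowers it) can be sketched as follows, with synthetic images standing in for the confocal frames (all values illustrative):

```python
import numpy as np

def normalized_sd(frames):
    """Per-frame SD of pixel intensities, normalized to the first frame.

    Foci formation raises the SD; diffuse redistribution lowers it.
    A toy version of the MetaMorph-based quantification in the text.
    """
    sd = np.array([f.std() for f in frames])
    return sd / sd[0]

rng = np.random.default_rng(0)
diffuse = rng.normal(100, 5, (64, 64))   # roughly uniform CFP signal
clustered = diffuse.copy()
clustered[:8, :8] += 200                 # bright Bax-like foci
ratios = normalized_sd([diffuse, clustered])
print(ratios[0] == 1.0, ratios[1] > 1.0)  # → True True
```

Real image series would additionally need background subtraction and a cell mask; the normalization to the first frame mirrors the statement that the initial SD value was normalized.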
Online supplemental material
Fig. S1 shows a correlation between the changes in the mitochondrial morphology and the release of endogenous cytochrome c from mitochondria and a conformational change in the endogenous Bax protein. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.200309082/DC1.
By now it is known that in an s-wave superconductor-ferromagnet-superconductor (SFS) structure the supercurrent induced by spin-singlet pairs can only be transmitted over a short distance, of the order of the magnetic coherence length. The long-range supercurrent, taking place on the length scale of the normal-metal coherence length, is maintained by equal-spin triplet pairs, which can be generated by magnetic inhomogeneities in the system. In this paper, we show an unusual long-range supercurrent, which can take place in a clean SF1F2S junction with non-parallel orientation of the magnetic moments. The mechanism behind the enhancement of the Josephson current is the interference of the opposite-spin triplet states derived from the S/F1 and F2/S interfaces when both ferromagnetic layers have the same values of the length and exchange field. This finding provides a natural explanation for a recent experiment [Robinson et al., Phys. Rev. Lett. 104, 207001 (2010)].
I. INTRODUCTION
The interplay between superconductivity and ferromagnetism in hybrid structures has currently attracted considerable attention because of the rich and unusual physical phenomena 1-4 and potential practical applications [5][6][7][8]. Much effort has been devoted to obtaining a better understanding of the exotic phenomena appearing in heterostructures involving a superconductor (S) and a ferromagnet (F). Among these, it is natural to highlight the experimental and theoretical studies of transport properties in SF heterostructures.
When a conventional s-wave S is adjacent to a homogeneous F, the superconducting proximity effect in this F is rather short ranged due to the differential action of the ferromagnetic exchange field acting on the spin-up and spin-down electrons that form a Cooper pair. In this case, the spin split of the electronic energy bands in the ferromagnetic region will make the opposite-spin Cooper pair acquire a finite center-of-mass momentum Q = 2h_0/ℏv_F, where h_0 and v_F are the exchange field strength and the Fermi velocity, respectively. As a result, the Cooper pair |↑↓⟩e^{iQ·R} − |↓↑⟩e^{−iQ·R} can be decomposed into a spin-singlet component (|↑↓⟩ − |↓↑⟩)cos(Q·R) and a spin-triplet component with zero spin projection along the magnetization axis, i(|↑↓⟩ + |↓↑⟩)sin(Q·R), where R is the distance from the S/F interface. For simplicity, we will hereafter refer to the wave function of this triplet component as the opposite-spin triplet state. Accordingly, the above singlet and triplet components are short ranged and decay at a distance ξ_f from the superconductor 6,7. Here ξ_f is the superconducting coherence length in the F layer, which is much smaller than the correlation length ξ_n in a normal metal (N). Another peculiarity of such systems is the spatial oscillation of these two components inside the F region 9. Owing to this oscillatory nature, the critical current of SFS junctions becomes an oscillating function of the F-layer thickness. This oscillating behavior of the supercurrent corresponds to the transition between the so-called "0 state" and "π state" 5,6.
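The singlet/triplet decomposition quoted above follows from elementary algebra on the amplitudes of |↑↓⟩ and |↓↑⟩; a short numerical check (the values of Q and R are arbitrary):

```python
import numpy as np

def decompose(Q, R):
    """Split the pair state |↑↓>e^{iQR} - |↓↑>e^{-iQR} into the
    coefficients of the singlet (|↑↓> - |↓↑>) and zero-projection
    triplet (|↑↓> + |↓↑>) combinations."""
    a_ud = np.exp(1j * Q * R)    # amplitude of |↑↓>
    a_du = -np.exp(-1j * Q * R)  # amplitude of |↓↑>
    singlet = (a_ud - a_du) / 2  # should equal cos(QR)
    triplet = (a_ud + a_du) / 2  # should equal i*sin(QR)
    return singlet, triplet

Q, R = 0.7, 3.0
s, t = decompose(Q, R)
print(np.isclose(s, np.cos(Q * R)), np.isclose(t, 1j * np.sin(Q * R)))
# → True True
```

The check confirms the text's identity: the singlet weight oscillates as cos(Q·R) and the opposite-spin triplet weight as sin(Q·R).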
In contrast, it is useful to seek ways to enhance the proximity effect. Several options have recently been proposed in the literature. First, the presence of an inhomogeneous magnetization may strongly modify the SF proximity effect 8,9. In the presence of a domain at the S/F interface, the induced spin-triplet pairing with equal spin projection, |↑↑⟩ or |↓↓⟩, can propagate long distances in a ferromagnetic material. The primary reason is that, since two triplet-paired electrons at the Fermi surface have no momentum difference and propagate with the same phase, they are not affected by the exchange field and decay at a distance ξ_n. This long-range proximity effect, giving rise to induced superconducting correlations in ferromagnets and half-metals, is a prime example of the potential that lies within this field of research. It has been observed in Co [10][11][12] and in the half-metal CrO2 13,14. Its origin is related to the presence of spin-flip scattering at the S/F interface, which is induced by a noncollinear magnetic domain or a magnetic impurity.
Recently, a second way to enhance the supercurrent has been proposed in an SFS junction containing a noncollinear thin magnetic domain in the center of the ferromagnetic region 15,16. The magnetic domain will induce a spin-flip scattering process, which reverses the spin orientations of the singlet Cooper pair and simultaneously changes the sign of the corresponding electronic momentum. Under these conditions the singlet Cooper pair will create an exact phase-cancellation effect and gets an additional π phase shift as it passes through the entire ferromagnetic region, so that the supercurrent is not suppressed.

FIG. 1. The Josephson junction consists of two s-wave superconductors and two ferromagnets of the thicknesses L1 and L2. The exchange fields of the ferromagnets, h1 and h2, denoted by the thick arrows, are confined to the x-z plane, but are misaligned by an angle θ. The phase difference between the two superconductors is φ = φR − φL.
The third approach requires the magnetizations in the clean SF1F2S junction to be arranged antiparallel. This situation was previously considered by Blanter et al. 17 through solving the Eilenberger equation. However, the physical origin of this enhanced proximity effect is more subtle. In the simplest picture of this situation, the authors argue that when the Cooper pair propagates from the first F layer to the second between the superconducting electrodes, it first acquires a relative phase δϕ1 = Q·R1, where R1 is the distance traversed in the first ferromagnetic layer. Subsequently, in the second layer with opposite direction of the exchange field, the pair gains another phase δϕ2 = −Q·R2, which can partially compensate for δϕ1. For R1 = R2 the compensation is complete; the ferromagnetic bilayer then behaves as a piece of normal metal, and the proximity effect is fully restored. However, this explanation does not specify which pairing form (|↑↓⟩ − |↓↑⟩ or |↑↓⟩ + |↓↑⟩) provides the main contribution to the long-range Josephson current. Soon afterwards, the same conclusion for the clean junction was reached theoretically by Pajović et al. 18 via solving the Bogoliubov-de Gennes (BdG) equation, but they took into account only a single transverse channel for simplicity, which is inconsistent with the realistic situation. Recently, Robinson et al. 19 observed experimentally that the supercurrent in the antiparallel domain configuration was enhanced with respect to the parallel one.
In this paper, we report a manifestation of the interference effect in a clean SF1F2S junction with non-parallel magnetizations by considering an oblique injection process. Note that, in contrast to the model of Ref. 18, we consider multiple transverse channels, which agrees better with the realistic case of planar junctions. We investigate the dependence of the critical Josephson current on the thicknesses of both F layers. The current shows a slowly decaying characteristic for non-parallel orientation of the magnetizations in the F layers. Furthermore, by changing the relative magnetization direction of the F layers from parallel to antiparallel, the critical current varies from a small to a large value. In this process, the spin-singlet state changes slightly, but the opposite-spin triplet state switches from a finite value to being cancelled out in the central region of the entire F layer. So we attribute the enhancement of the critical current to the interference effect of the opposite-spin triplet wave functions in the F region. This effect can weaken the role of the center-of-mass momentum acquired by the Cooper pair, and the situation is similar to the transmission of the Cooper pair in a normal metal, in which case only the singlet state exists and the opposite-spin triplet state disappears. Moreover, it is found that the critical current is inversely proportional to the exchange field of both F layers. When the two F layers are converted into half-metals, the Josephson current will be prohibited. That is because the singlet and triplet states will all be suppressed by the exchange splitting of the two F layers, and the interference effect completely vanishes in the entire F region.
On the other hand, if the two F layers have different features, the critical current will oscillate and decay with the difference of their lengths or exchange fields, which can be attributed to the variation of the interference between the two opposite-spin triplet states derived from the S/F1 and F2/S interfaces.
II. MODEL AND FORMULA
The SF 1 F 2 S junction we consider is shown schematically in Fig. 1. We denote the ferromagnetic layer thicknesses by L 1 and L 2 , respectively. The y axis is chosen to be perpendicular to the layer interfaces with the origin at the S/F 1 interface, and the whole system satisfies translational invariance in the x-z plane. The exchange field in the F 1 layer is directed along the z axis while within the F 2 layer, it is oriented at an angle θ in the x-z plane.
The BCS mean-field effective Hamiltonian 6,20 is
H_eff = ∫dr { Σ_α ψ†_α(r)(−ℏ²∇²/2m − E_F)ψ_α(r) − Σ_{α,β} ψ†_α(r)(h·σ)_{αβ} ψ_β(r) + [Δ(r)ψ†_↑(r)ψ†_↓(r) + H.c.] },   (1)
where ψ†_α(r) and ψ_α(r) represent creation and annihilation operators with spin α, and the vector σ = (σ_x, σ_y, σ_z) is composed of Pauli spin matrices. m is the effective mass of the quasiparticles in both Ss and Fs, and E_F is the Fermi energy. Δ(r) = Δ(T)[e^{iφ_L}Θ(−y) + e^{iφ_R}Θ(y − L_F)] describes the superconducting pair potential with L_F = L1 + L2. Here Δ(T) accounts for the temperature-dependent energy gap. It satisfies the BCS relation Δ(T) = Δ_0 tanh(1.74 √(T_c/T − 1)), where Δ_0 is the energy gap at zero temperature and T_c is the superconducting critical temperature. φ_{L(R)} is the phase of the left (right) S, and Θ(y) is the unit step function. The exchange field h due to the ferromagnetic magnetizations in the F region can be written as
h = h1 ẑ for 0 < y < L1, and h = h2 (sinθ x̂ + cosθ ẑ) for L1 < y < L_F.
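As a quick numerical illustration of the gap's temperature dependence, the standard BCS interpolation formula Δ(T) = Δ_0 tanh(1.74√(T_c/T − 1)) can be evaluated directly (a sketch; Δ_0 is set to 1):

```python
import numpy as np

def bcs_gap(T, Tc, delta0=1.0):
    """Interpolation formula Δ(T) = Δ0 tanh(1.74 sqrt(Tc/T - 1))
    for the temperature-dependent gap (Δ0 = gap at T = 0)."""
    T = np.asarray(T, dtype=float)
    return delta0 * np.tanh(1.74 * np.sqrt(np.maximum(Tc / T - 1.0, 0.0)))

# The gap closes at T = Tc and saturates at Δ0 for T << Tc:
print(bcs_gap(1.0, 1.0))          # → 0.0
print(bcs_gap(0.1, 1.0) > 0.999)  # → True
```

The clipping with `np.maximum` simply keeps the argument of the square root non-negative for T ≥ T_c, where the gap vanishes.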
To diagonalize the effective Hamiltonian, we make use of the Bogoliubov transformation ψ_α(r) = Σ_n [u_nα(r)γ̂_n + v*_nα(r)γ̂†_n] and take into account the anticommutation relations of the quasiparticle annihilation operator γ̂_n and creation operator γ̂†_n. The resulting BdG equation can be written compactly as
Ĥ_BdG Ψ(r) = E Ψ(r),   (2)
where Ψ(r) = [u_↑(r), u_↓(r), v_↑(r), v_↓(r)]^T collects the quasiparticle (u) and quasihole (v) wave functions, respectively. In order to calculate the Josephson current, we adopt the Blonder-Tinkham-Klapwijk (BTK) approach. The BdG equation (2) can be solved for each superconducting electrode and each ferromagnetic layer separately. We have four different types of incoming quasiparticles: electronlike quasiparticles (ELQs) and holelike quasiparticles (HLQs), each with spin-up and spin-down. For an incident spin-up electron in the left superconducting electrode, the scattering state is a superposition of the incident wave and four reflected waves. In this particular process, the coefficients b_1, b′_1, a′_1, and a_1 correspond to the normal reflection, the normal reflection with spin flip, the novel Andreev reflection, and the usual Andreev reflection, respectively. The perpendicular components of the ELQ (HLQ) wave vectors are fixed by energy conservation, with k_∥ as the parallel component. The corresponding wave function in the right superconducting electrode contains the transmission coefficients c_1, d_1, c′_1, and d′_1, which correspond to the processes described above. The basis wave functions N̂_p (p = 1-4) in the right S can be obtained from M̂_p by the corresponding substitution of phases and coordinates. The wave function in the F_2 layer can be described by means of a spin-rotation transformation matrix 21, with the perpendicular wave-vector components of the ELQs and HLQs determined by the exchange splitting. It is worth noting that the parallel component k_∥ is conserved in the transport processes of the quasiparticles. The transformation matrix has been defined as T̂ = 1̂ ⊗ [cos(θ/2)·1̂ − i sin(θ/2)·σ̂_y]. Setting θ → 0 and h_2 → h_1, we obtain the wave function Ψ_F1(y) in the F_1 layer.
All scattering coefficients can be obtained from the continuity of the wave functions and their derivatives at the interfaces, with the interfacial barriers parametrized in the usual BTK way. Here, Z_1-Z_3 are dimensionless parameters describing the magnitude of the interfacial resistances, y_{1-3} = 0, L1, L_F are the local coordinate values at the layer interfaces, and k_F = √(2mE_F) is the Fermi wave vector. The wave functions for the other types of quasiparticle injection processes can be obtained in a similar way. From the boundary conditions, we obtain a system of linear equations that yields the scattering coefficients. With these coefficients at hand, we can use the finite-temperature Green's function formalism [22][23][24] to calculate the dc Josephson current, where ω_n = πk_B T(2n + 1) are the Matsubara frequencies with n = 0, 1, 2, ... and Ω_n = √(ω_n² + Δ²(T)). k_e(ω_n), k_h(ω_n), and a_j(ω_n, φ) with j = 1, 2, 3, 4 are obtained from k_e, k_h, and a_j by the analytic continuation E → iω_n. In this case the critical current is defined by I_c = max_φ |I_e(φ)|. To acquire the time-dependent triplet amplitude functions and the local density of states (LDOS), we solve the BdG equation (2) by Bogoliubov's self-consistent field method 20,25-27. The SF1F2S junction is placed in a one-dimensional square potential well with infinitely high walls, so that the eigenfunctions of equation (2) vanish at the outer boundaries. Accordingly, the corresponding quasiparticle amplitudes can be expanded in terms of a set of basis vectors of the stationary states 28, u_nα(r) = Σ_q u^α_{nq} ζ_q(y) and v_nα(r) = Σ_q v^α_{nq} ζ_q(y) with ζ_q(y) = √(2/L) sin(qπy/L). Here q is a positive integer and L = L_S1 + L_F + L_S2, where L_S1 and L_S2 are the thicknesses of the left and right superconductors, respectively.
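The critical current is simply the maximum of |I_e(φ)| over the phase difference (the absolute value is taken explicitly in the discussion of Fig. 2); a minimal sketch of that maximization, with a toy two-harmonic current-phase relation standing in for the full BTK result:

```python
import numpy as np

def critical_current(current_phase, n_phi=2001):
    """I_c = max over the phase difference φ of |I_e(φ)|,
    evaluated on a dense grid over one period."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    return np.max(np.abs(current_phase(phi)))

# Toy current-phase relation I(φ) = I1 sin(φ) + I2 sin(2φ), the form
# used later when discussing symmetric vs. asymmetric junctions.
I1, I2 = 1.0, 0.2
Ic = critical_current(lambda p: I1 * np.sin(p) + I2 * np.sin(2 * p))
print(Ic > I1)  # → True
```

For a pure first harmonic the routine returns I_1, while a sizable second harmonic shifts the maximum away from φ = π/2 and raises I_c above I_1, as the printout shows.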
The pair potential in the BdG equation (2) satisfies the self-consistency condition 20
Δ(y) = (g(y)/2) Σ′_n [u_n↑(y)v*_n↓(y) + u_n↓(y)v*_n↑(y)] tanh(E_n/2k_B T),
where the primed sum over E_n is over eigenstates corresponding to positive energies smaller than or equal to the Debye cutoff energy ω_D, and the superconducting coupling parameter g(y) is a constant in the superconducting regions and zero elsewhere. The BdG equation (2) is solved by an iterative scheme. One first starts from a stepwise approximation for the pair potential, and iterations are performed until the change in the value obtained for Δ(y) does not exceed a small threshold value. The amplitude functions of the spin-triplet state with zero and net spin projection are defined, respectively, as follows 26:
f_0(y, t) = (1/2) Σ_n [u_n↑(y)v*_n↓(y) − u_n↓(y)v*_n↑(y)] η_n(t),
f_1(y, t) = (1/2) Σ_n [u_n↑(y)v*_n↑(y) + u_n↓(y)v*_n↓(y)] η_n(t),
where the sum over E_n is in general performed over all positive energies, and η_n(t) = cos(E_n t) − i sin(E_n t) tanh(E_n/2k_B T). Additionally, the amplitude function of the spin-singlet state can be written as f_3 ≡ Δ(y)/g(y). In this paper the singlet and triplet amplitude functions are all normalized to the value of the singlet pairing amplitude in a bulk superconducting material. The LDOS is given by 26
N(y, ε) = −Σ_n { [u_n↑(y)² + u_n↓(y)²] f′(ε − E_n) + [v_n↑(y)² + v_n↓(y)²] f′(ε + E_n) },
where f′(ε) = ∂f/∂ε is the derivative of the Fermi function. The LDOS is normalized by its value at ε = 3Δ_0, beyond which the LDOS is almost constant.
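The self-consistent iteration described above is a plain fixed-point loop; a generic sketch (the update function here is a toy contraction, not the actual reconstruction of Δ(y) from the BdG eigenfunctions):

```python
import numpy as np

def iterate_gap(update, delta0, tol=1e-10, max_iter=500):
    """Fixed-point loop for the self-consistency condition
    Δ_{k+1}(y) = F[Δ_k(y)]: start from a stepwise profile and
    iterate until the change in Δ(y) falls below a threshold."""
    delta = delta0.copy()
    for _ in range(max_iter):
        new = update(delta)
        if np.max(np.abs(new - delta)) < tol:
            return new
        delta = new
    return delta

# Toy stand-in for F[Δ]: a contraction with fixed point Δ* = 1
# (the real F would rebuild Δ(y) from the u_n, v_n amplitudes).
delta = iterate_gap(lambda d: 0.5 * (d + 1.0), np.zeros(5))
print(np.allclose(delta, 1.0))  # → True
```

In the actual calculation the `update` step means rediagonalizing equation (2) with the current Δ(y) and resumming the quasiparticle amplitudes, which is by far the expensive part of the loop.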
III. RESULTS AND DISCUSSIONS
Unless otherwise stated, in the BTK approach we use the superconducting gap Δ_0 as the unit of energy. The Fermi energy is defined as E_F = 1000Δ_0, and the temperature is taken to be T/T_c = 0.1. We assume all interfaces between the layers are transparent for electrons (Z_{1-3} = 0). All lengths and exchange field strengths are measured in units of the inverse Fermi wave vector k_F and the Fermi energy E_F, respectively. In Bogoliubov's self-consistent field method, we consider the low-temperature limit and set k_F L_S1 = k_F L_S2 = 400 and ω_D/E_F = 0.1; the other parameters are the same as the ones described above.
The detailed dependence of the critical current on the thickness k_F L1 (= k_F L2) is shown in Fig. 2(a) for different misorientation angles θ. We find a significant change in the magnitude of the critical current depending on the mutual orientation of the two ferromagnetic magnetizations. Considering first the parallel orientation (θ = 0), the well-known 0-π oscillations are reproduced, where the current changes sign for certain values of the thickness. It should, however, be noted that we have taken the absolute value of I_e(φ) to define the critical current I_c, because that is what is most commonly measured in experiments. Increasing the misorientation angle θ tends to enhance the amplitude of the current. Meanwhile, the oscillations of the critical current with ferromagnetic layer thickness diminish. For the perpendicular case (θ = 0.5π), the oscillations almost cease, leaving the junction in the 0 state for larger values of k_F L1. In addition, we observe a clear maximum of the critical current for antiparallel magnetizations (θ = π), but it is significantly smaller than that in an SNS junction for all values of k_F L1. This conclusion is inconsistent with the previous results of Refs. 17,18.
By comparison, the dependence of the critical current I_c on the exchange field h1/E_F (= h2/E_F) is plotted in Fig. 2(b). It can be clearly seen that for various θ the critical current I_c decreases monotonically with increasing h1/E_F, and it decreases down to zero at h1/E_F = 1, which indicates a vanishing of the Josephson current. This phenomenon shows that the strong exchange splitting of the energy bands inside the F layers can effectively damp the tunneling of paired electrons. For θ = 0, the critical current becomes an oscillating function of h1/E_F, accompanied by an exponential decay. This oscillating effect diminishes as θ is enhanced and disappears at some larger θ. We confirm the obvious fact that the critical current increases with θ for any fixed h1/E_F. The inset of Fig. 2(b) shows this behavior of the critical current for h1/E_F = 0.1. It displays a nonmonotonic dependence of the critical current on θ, where a low dip corresponds to θ = 0.12π and the maximum is located at θ = π. The main reason is that the junction starts out in the π state for the parallel orientation, and a transition from the π state to the 0 state takes place as θ increases. In contrast, if the 0 state is the equilibrium state of the junction for θ = 0, we acquire a monotonic variation of the critical current when θ varies from 0 to π. These behaviors agree with the statements made in Refs. 2,29.
In order to clearly illustrate the above features of the critical current, we plot the current-phase relation I_e(φ) and the LDOS in Figs. 3(a) and 3(b), respectively, for several misorientation angles θ. If the two ferromagnetic layers have the same direction (θ = 0), the Josephson current I_e(φ) is negative and its amplitude is quite small, and the LDOS displays a very small conductance peak at the Fermi level (ε = 0), as plotted in Fig. 3(b). These features indicate that the junction is situated in the π state. By contrast, the current turns positive and its amplitude is correspondingly enhanced by increasing the misorientation angle θ. Under such circumstances, the LDOS at ε = 0 turns from a peak into a valley. When θ increases to π, the LDOS is strongly enhanced, with two distinguishable peaks nearly at ε = ±0.5Δ. Such LDOS shapes indicate that the ground state of the junction is converted into the 0 state. These behaviors demonstrate that the transition between the π state and the 0 state can be realized by tuning the relative orientation of the magnetizations for appropriate ferromagnetic thicknesses.
In searching for the main reason for the enhancement of the critical current, we first focus on the transmission of the singlet and triplet components in the antiparallel orientation of the magnetic moments. In Fig. 4, we show the spatial distribution of the singlet component and the imaginary parts of the opposite-spin triplet component for three different lengths k_F L1 = k_F L2 = 70, 87 and 100. In the panels, f_3(0), f→_3(0) and f←_3(0) represent the wave functions in the SF1F2S, SF1F2 and F1F2S configurations, respectively. It is found that the singlet components f_3 are symmetrical about the F1/F2 interface, but the triplet components f_0 are antisymmetric and their amplitudes diminish near the central region of the F layer. The physical origin of these effects can be described as follows. Due to the exchange splitting, the original Cooper pair |↑↓⟩ − |↓↑⟩ in the left superconducting electrode acquires a center-of-mass momentum Q in the F1 region, so this pair is transformed into |↑↓⟩e^{iQ·R} − |↓↑⟩e^{−iQ·R}, where R represents the transmission distance from the S/F1 interface. Additionally, for the antiparallel magnetic moments the wave-vector mismatches for spin-up and spin-down particles at the two sides of the F1/F2 interface result in an interface scattering 30. The right-going particle wave transmitted from the F1 layer takes the F1/F2 interface as the wave source and continues to propagate into the F2 layer. In addition, at the location of the F1/F2 interface the phase of the wave function is maintained continuously in the above transmission process, but the center-of-mass momentum Q is transformed into −Q in the F2 layer. As a result, the right-going wave function of the Cooper pair arising from the S/F1 interface can be written as
χ→ = |↑↓⟩e^{iQ(R_r − R′_r)} − |↓↑⟩e^{−iQ(R_r − R′_r)},
where R_r and R′_r denote the distances from the S/F1 and F1/F2 interfaces, respectively.
This wave function can be decomposed into singlet and triplet components. Accordingly, the right-going singlet component is given by
f→_3 ∝ (|↑↓⟩ − |↓↑⟩) cos[Q(R_r − R′_r)],
and the associated right-going triplet component reads
f→_0 ∝ i(|↑↓⟩ + |↓↑⟩) sin[Q(R_r − R′_r)].
From the above descriptions, we can demonstrate that f→_3 and f→_0 are both symmetrical about the F1/F2 interface.
On the other hand, the left-going wave function χ← has the same transmission characteristic, the only difference being that it is generated at the F2/S interface, in which case its original center-of-mass momentum becomes −Q in the F2 region. It can be expressed as
χ← = |↑↓⟩e^{−iQ(R′_l − R_l)} − |↓↑⟩e^{iQ(R′_l − R_l)},
where R_l and R′_l represent the distances from the F1/F2 and F2/S interfaces, respectively. Hence we can get the left-going singlet component
f←_3 ∝ (|↑↓⟩ − |↓↑⟩) cos[Q(R′_l − R_l)]
and the left-going triplet component
f←_0 ∝ −i(|↑↓⟩ + |↓↑⟩) sin[Q(R′_l − R_l)].
From the above equations, we find that because the factor cos(QR′_l) of the singlet component f←_3 is an even function of the center-of-mass momentum, f←_3 does not change its sign when passing from the F2 layer into the F1 layer, and it overlaps with f→_3. In contrast, the triplet component f←_0 acquires a negative sign because the factor sin(QR′_l) of this component is an odd function of the center-of-mass momentum. Consequently, the sign of f←_0 is opposite to that of f→_0, and these two components cancel each other out. In addition, it is known that in a normal metal the singlet component decays more slowly and the triplet component does not exist, so the supercurrent can be transmitted over a long distance in an SNS junction. Compared with this situation, the long-range Josephson current can be induced in the SF1F2S junction with antiparallel magnetizations by the interference effect, which modifies the configurations of the singlet and triplet components and makes their behavior closer to that in the normal metal. In Fig. 4, we show the numerical results for the singlet and triplet components obtained by solving the BdG equation (2), which further support the above discussion. In this case, the total f_3 is enhanced by the coherent superposition of f→_3 and f←_3, but f_0 is cancelled out in the central region of the F layer due to the opposite signs of f→_0 and f←_0. In the following, we wish to know which components make the crucial contribution to the enhancement of the Josephson current. So we turn to discuss the spatial dependence of the singlet and triplet components on the direction of the magnetizations. As shown in Fig. 5, we plot the corresponding singlet component f_3 as a function of the coordinate k_F y for several values of θ. It is found that the amplitudes of f_3 appreciably increase as θ increases from 0 up to π.
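The parity argument above (cos is even and sin is odd in the center-of-mass momentum, so the left-going pair, which starts with momentum −Q, contributes a triplet part of opposite sign) can be checked in a few lines; the amplitudes and numbers here are illustrative:

```python
import numpy as np

# A pair entering the F bilayer with center-of-mass momentum q and
# accumulated phase q*d has singlet weight ~ cos(q*d) and opposite-spin
# triplet weight ~ sin(q*d).  cos is even in q; sin is odd.
Q = 0.05
d = np.linspace(0.0, 140.0, 201)   # accumulated-phase distances
f0_right = np.sin(+Q * d)          # triplet, pair from the S/F1 side
f0_left  = np.sin(-Q * d)          # triplet, pair from the F2/S side (-Q)
f3_right = np.cos(+Q * d)          # singlet, pair from the S/F1 side
f3_left  = np.cos(-Q * d)          # singlet, pair from the F2/S side

print(np.allclose(f0_right + f0_left, 0.0))  # → True (triplets cancel)
print(np.allclose(f3_right, f3_left))        # → True (singlets overlap)
```

This is exactly the mechanism invoked in the text: in the symmetric antiparallel junction the two opposite-spin triplet contributions annihilate while the singlet contributions add coherently.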
This is because f3 is an even function of Q, so the two singlet components (f→3 and f←3) originating from the left and right superconducting electrodes are nearly symmetric to each other for different orientations of the magnetic moments. From these features we can exclude the contribution of singlet-component interference to the long-range proximity effect when the magnetization direction switches from parallel to antiparallel. Now let us analyze the dependence of the triplet components on the misorientation angle θ. As illustrated in Fig. 6, for the parallel orientation (θ = 0) f0 is symmetric about the center of the F layer. In this case the equal-spin triplet component f1 does not exist in the entire ferromagnetic region because the magnetization is homogeneous. When the magnetization direction of the F2 layer rotates from the z-axis to the x-axis, the right part of f0 gradually decreases, while f1 correspondingly increases in this region and reaches its maximum at θ = 0.5π. In this situation, the F2 layer magnetized in the x-direction generates the opposite-spin triplet component with respect to the x-axis, (|↑↓ −|↓↑ ) x . Viewed with respect to the z-axis, this state is equivalent to the equal-spin triplet component −(|↑↑ −|↓↓ ) z 8,9 . It is interesting to note that for this perpendicular case the spatial oscillations of f0 in the F1 region give way to a monotonic spatial variation upon entering the F2 region. Meanwhile, f1 shows the same characteristics as it passes from the F2 layer into the F1 layer. It should be noted that there are two important effects that enhance the supercurrent: (i) the emergence of the long-range component f1, and (ii) the interference of f0 and f1. It is well known that f1 can induce a long-range supercurrent. However, if the two F layers are highly asymmetric, f1 becomes much larger than f0, and the interference between them is reduced accordingly.
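The parity argument above (an even cos(QR) factor for the singlet component, an odd sin(QR) factor for the opposite-spin triplet) can be checked with a toy numerical sketch. The amplitudes below are simplified illustrations, not the BdG solution of the paper:

```python
import numpy as np

# Toy sketch (not the paper's BdG solution): right- and left-going pair
# amplitudes inside an F region of length L for the antiparallel case.
# The right-going wave carries center-of-mass momentum +Q (generated at
# the S/F1 interface); the left-going wave carries -Q (F2/S interface).
Q, L = 0.3, 50.0
y = np.linspace(0.0, L, 501)

R_r = y          # distance from the S/F1 interface (right-going)
R_l = L - y      # distance from the F2/S interface (left-going)

# singlet factors: cos is even in Q, so both directions share one sign
f3_right = np.cos(Q * R_r)
f3_left = np.cos(-Q * R_l)    # equals np.cos(Q * R_l)

# opposite-spin triplet factors: sin is odd in Q, so the left-going
# component picks up a minus sign
f0_right = np.sin(Q * R_r)
f0_left = np.sin(-Q * R_l)    # equals -np.sin(Q * R_l)

# at the midpoint both distances are equal: the triplet parts cancel
# while the singlet parts coincide
mid = len(y) // 2
print(np.isclose(f0_right[mid] + f0_left[mid], 0.0))   # True
print(np.isclose(f3_right[mid] - f3_left[mid], 0.0))   # True
```

The cancellation is exact only where the two path lengths match; elsewhere the partial overlap of the two waves reproduces the qualitative profile discussed in the text.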
In this case, the long-range proximity effect manifests itself as a large second harmonic (I 2 ≫ I 1 ) in the spectral decomposition of the Josephson current-phase relation I(φ) = I 1 sin(φ) + I 2 sin(2φ) + · · · . This phenomenon has been proposed in Refs. 31 and 32. In contrast, the first harmonic prevails when the interference of f0 and f1 is restored in a symmetric junction with equal ferromagnetic layers. A comparison of these two cases is shown in Fig. 7. On the other hand, as θ turns from 0.5π to π, f1 gradually decreases while f0 in the F2 region increases instead, which enhances the interference effect. In the antiparallel configuration f1 vanishes completely, but the interference effect becomes most apparent, as displayed by the cancellation of f0 in the middle region of the F layer. As a result, during this process the critical current continues to increase and reaches its maximum in the antiparallel configuration. We emphasize that the Josephson current in the antiparallel configuration is clearly smaller than that in an SNS junction with the same distance between the two superconducting electrodes, as described in the introduction. This is because the interference effect neither cancels the triplet component f0 completely over the entire F region nor lets the singlet component f3 grow large enough.
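The harmonic content of a sampled current-phase relation can be extracted numerically with the usual Fourier projection; a short sketch follows, with illustrative coefficients in the I2 ≫ I1 regime rather than values computed in the paper:

```python
import numpy as np

# Extract the first and second harmonics I1, I2 from a sampled
# current-phase relation I(phi) = I1*sin(phi) + I2*sin(2*phi) + ...
# via the projection I_n = (1/pi) * integral_0^{2pi} I(phi) sin(n phi) dphi.
def sin_harmonics(phi, current, n_max=2):
    dphi = phi[1] - phi[0]
    return [dphi / np.pi * np.sum(current * np.sin(n * phi))
            for n in range(1, n_max + 1)]

phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
current = 0.1 * np.sin(phi) + 0.8 * np.sin(2.0 * phi)  # I2 >> I1 regime

I1, I2 = sin_harmonics(phi, current)
print(round(I1, 4), round(I2, 4))   # 0.1 0.8
```

On a uniform periodic grid this projection is exact for low harmonics, so the recovered I1 and I2 match the input coefficients to machine precision.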
To understand the interference effect of the opposite-spin triplet state further, we investigate the influence of the layer thickness and the exchange field on the Josephson current when the two ferromagnetic layers have different physical features, as illustrated in Figs. 8(a) and 8(b), respectively. Taking the first case as an example, the variation of Ic with the thickness kFL2 resembles a Fraunhofer pattern. This pattern becomes increasingly pronounced as the misorientation angle θ increases from 0 to π. In the parallel orientation (θ = 0), the critical current shows the 0-π conversion in the absence of the interference effect, and its amplitude is quite weak. It is important to note that for the perpendicular orientation (θ = 0.5π) a long-range second-harmonic current is induced in the highly asymmetric junction, corresponding to the circular regions marked in Fig. 8, in which case the interference effect is almost negligible. By contrast, the Ic dependence exhibits remarkable oscillating behavior in the thickness range 70 < kFL2 < 130, which marks the enhancement of the interference effect. Moreover, Ic reaches its maximum value at kFL2 = 100, and its amplitude decreases above or below this thickness. If the two F layers are arranged antiparallel to each other (θ = π), the interference effect is most likely to occur, and its contribution to the Josephson current reaches its maximum. For this configuration, we consider in Fig. 9 the current-phase relations Ie(φ) and the corresponding LDOS at the particular points A, B, C and D in Fig. 8(a). If the F1 and F2 layers have identical thickness, as at point A, the Josephson current is positive and the LDOS displays a valley at ε = 0 and two distinguishable peaks at ε = ±0.5∆. When the thickness kFL2 decreases to 91, corresponding to point B, the Josephson junction is located at the 0-π transition point.
The first harmonic current vanishes, and the second harmonic is fully revealed. Subsequently, the sign of Ic turns negative at kFL2 = 83 (point C), and the LDOS at ε = 0 converts from a valley to a peak, indicating that the ground state of the junction becomes the π state. Finally, the junction returns to the critical point of the 0-π transition at kFL2 = 66 (point D). From Fig. 8(a), we can clearly see that the critical current oscillates with kFL2 with an unequal period; the detailed explanation is given in the following paragraph. In addition, if both F layers have the same length but different exchange fields, the critical current Ic shows similar characteristics (see Fig. 8(b)). This feature illustrates that the interference effect is simultaneously related to the difference of the center-of-mass momenta acquired by the opposite-spin triplet pair from the F1 and F2 layers. As mentioned before, the interference of f→0 and f←0 provides the main contribution to the Josephson current. To gain further insight into the interference effect in asymmetric junctions, we take the antiparallel configuration as an example. In Fig. 10, we present results for the dependence of the triplet component f0 on kFL2 when the thickness of the F1 layer is fixed at kFL1 = 100. The strength of the interference effect is related to the phase difference and the amplitudes of the two wave functions propagating in opposite directions.
We first discuss the contribution of the phase difference between f→0 and f←0 to the oscillation of the critical current. Here we fix the thickness of the F1 layer and shorten that of the F2 layer, which is equivalent to keeping f→0 constant and shifting f←0 from left to right. When both F layers have the same length, the phase difference between the two triplet components f→0 and f←0 is π at every position in the F region. Under this condition, the interference effect manifests itself clearly and can induce an enhancement of the Josephson current. As the thickness kFL2 is reduced to 91, f←0 moves by a quarter period, and the F2/S interface shifts from the red vertical dash-dotted line to the green one. Correspondingly, the junction sits at the critical point of the 0-π phase transition. For kFL2 = 83, f←0 moves half a period to the right, and accordingly the junction converts to the π state. Decreasing the F2 layer thickness down to kFL2 = 66 means that f←0 shifts by three quarters of a period, and the junction returns to the critical point of the phase transition. It is worth noting that the critical current has an unequal oscillation period as kFL2 varies, which is determined by the inhomogeneous spatial oscillation of f←0. On the other hand, as the length kFL2 decreases from 100 to 0, the mutual cancellation between f→0 and f←0 decreases, and the magnitude of f0 is then enhanced by the superposition of these two triplet components. This indicates a weakening of the interference effect, which makes the Josephson current diminish.
IV. CONCLUSION
In this paper, we have investigated the relationship between the long-range Josephson current and the pairing correlations in clean SF1F2S junctions with misoriented magnetizations by solving the BdG equations. The interference effect of the opposite-spin triplet component was identified as a source of this current. The main evidence is that the Josephson critical current is enhanced when the magnetizations rotate from the parallel to the antiparallel orientation. In this process, the singlet component changes only slightly, but the interference of the triplet components f→0 and f←0 increases correspondingly, and in the antiparallel configuration these two components nearly cancel each other in the central ferromagnetic region. This behavior can be attributed to two facts: (i) the triplet components f→0 and f←0 originate from the S/F1 and F2/S interfaces and propagate in opposite directions; they are scattered at the F1/F2 interface and take this interface as an emission source from which they continue to spread into the other ferromagnetic layer. (ii) The antiparallel magnetizations provide opposite center-of-mass momenta to the Cooper pair, so the two singlet components f→3 and f←3 remain almost invariant, while the triplet components f→0 and f←0 have opposite signs and can cancel by interference in the F region. In addition, if the properties of the F1 layer remain unchanged, the interference effect makes the critical current oscillate with the length and exchange field of the F2 layer. This finding therefore provides new insight into the physical mechanism of the long-range proximity effect in Josephson junctions with non-parallel magnetizations and can be important for exploiting the interference effect in superconducting spintronic devices. | 2016-01-22T15:39:09.000Z | 2016-01-22T00:00:00.000 | {
"year": 2016,
"sha1": "d6f86ebd1ec81f456046725a9fb3d80dc2980a6b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1601.06045",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d6f86ebd1ec81f456046725a9fb3d80dc2980a6b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236180588 | pes2o/s2orc | v3-fos-license | Mortality and Clinical Interventions in Critically ill Patient With Coronavirus Disease 2019: A Systematic Review and Meta-Analysis
Objective: The aims of this systematic review and meta-analysis were to summarize the current existing evidence on the outcome of critically ill patients with COVID-19 as well as to evaluate the effectiveness of clinical interventions. Data Sources: We searched MEDLINE, the Cochrane library, Web of Science, the China Biology Medicine disc, China National Knowledge Infrastructure, and Wanfang Data from their inception to May 15, 2021. The search strings consisted of various search terms related to the concepts of mortality of critically ill patients and clinical interventions. Study Selection: After eliminating duplicates, two reviewers independently screened all titles and abstracts first, and then the full texts of potentially relevant articles were reviewed to identify cohort studies and case series that focus on the mortality of critically ill patients and clinical interventions. Main Outcomes and Measures: The primary outcome was the mortality of critically ill patients with COVID-19. The secondary outcomes included all sorts of supportive care. Results: There were 27 cohort studies and six case series involving 42,219 participants that met our inclusion criteria. All-cause mortality in the intensive care unit (ICU) was 35% and mortality in hospital was 32% in critically ill patients with COVID-19 for the year 2020, with very high between-study heterogeneity (I2 = 97%; p < 0.01). In a subgroup analysis, the mortality during ICU hospitalization in China was 39%, in Asia—except for China—it was 48%, in Europe it was 34%, in America it was 15%, and in the Middle East it was 39%. Non-surviving patients had an older age [−8.10, 95% CI (−9.31 to −6.90)], a higher APACHE II score [−4.90, 95% CI (−6.54 to −3.27)], a higher SOFA score [−2.27, 95% CI (−2.95 to −1.59)], and a lower PaO2/FiO2 ratio [34.77, 95% CI (14.68 to 54.85)] than those who survived.
Among clinical interventions, invasive mechanical ventilation [risk ratio (RR) 0.49, 95% CI (0.39–0.61)], kidney replacement therapy [RR 0.34, 95% CI (0.26–0.43)], and vasopressor [RR 0.54, 95% CI (0.34–0.88)] were used more in surviving patients. Conclusions: Mortality was high in critically ill patients with COVID-19 based on low-quality evidence, and regional differences existed. The early identification of critical characteristics and the use of support care help to indicate the outcome of critically ill patients.
INTRODUCTION
With the rapid spread of coronavirus disease 2019 (COVID-19) globally, as of June 2, 2021, a total of 171,222,477 confirmed cases had been reported in 215 countries, areas, or territories, and COVID-19 has been responsible for at least 3,686,142 deaths (1). Critically ill patients always face a high risk of death, and their course may be complicated by an uncontrolled systemic inflammatory response leading to acute respiratory distress syndrome (ARDS) and multiple organ dysfunction. Patients with ARDS who require respiratory support need to be transferred urgently to the intensive care unit (ICU). It has been reported that nasal cannula or mask, high-flow nasal cannula, non-invasive ventilation (NIV), invasive mechanical ventilation (IMV), and veno-venous extracorporeal membrane oxygenation (VV-ECMO) were widely used in COVID-19 according to the severity of respiratory dysfunction (2)(3)(4). Cardiac injury is common in COVID-19, with an incidence of 36%, and is closely related to a higher risk of mortality (5). In a systematic review and meta-analysis, the pooled incidence of acute kidney injury (AKI) was 28.6% among hospitalized COVID-19 patients from the USA and Europe and 5.5% among patients from China. Kidney replacement therapy (KRT) was used in 20.6% of patients admitted to the intensive care unit (6).
As is universally known, the mortality of critically ill patients is higher than that of ordinary patients. A systematic review reported that the summary estimate for all-cause mortality was 10% for adult patients with COVID-19 and 34% for critically ill patients within minor countries (7). In order to gain a clearer picture of the mortality of critically ill patients within major countries and clinical interventions or supportive care for organ dysfunction in the ICU, we meta-analyzed the relevant literature. The results may provide a narrative for the mortality of critically ill patients with COVID-19 as well as the effect of clinical characteristics and interventions between surviving and non-surviving patient groups.
METHODS
This systematic review was performed in compliance with the Centre of Reviews and Dissemination guidelines (8) and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) statement (9). In order to complete the systematic review and provide some references for clinical intervention during COVID-19 as soon as possible, this review was not registered.
Eligibility Criteria
We included studies that focused on the mortality of critically ill patients with laboratory-confirmed COVID-19, clinical characteristics, and interventions or supportive care of organ dysfunction.
We included original studies that fulfilled the following criteria: (1) the type of study was a cohort, case-control, or case-series design; (2) the study topic was related to the mortality, clinical characteristics, and interventions or supportive care of critically ill patients with COVID-19, which is defined as a positive result of a real-time reverse transcriptase-polymerase chain reaction (RT-PCR) assay of nasal and pharyngeal swabs (10); and (3) the study was published or posted in English or Chinese. We excluded duplicates, conference abstracts, letters, and studies for which we could not access the full text or that had missing outcome data. To avoid small sample sizes, only studies with more than 50 patients were included. If two or more studies included the same population, only the study with the largest sample size was chosen.
In this review, the primary outcome was the mortality of critically ill patients with COVID-19. The secondary outcomes included all sorts of supportive care, including non-invasive respiratory support, IMV, KRT, and vasopressor. Critically or severely ill patients were defined as those patients who were admitted to the ICU or required respiratory support. Surviving patients were defined as those discharged from the ICU or hospital or who remained hospitalized. Non-surviving patients were defined as those who died in the ICU or hospital. Immunoregulation therapy includes corticosteroids, interferon, and intravenous immunoglobulin G.
Selection of Studies
After eliminating duplicates by using EndNote X9.3.2 software, two reviewers independently screened all titles and abstracts first, and then the full texts of potentially relevant articles were reviewed to identify the final inclusion. Discrepancies were settled by discussion or consultation with a third reviewer. All reasons for exclusion of ineligible studies were recorded, and the process of study selection was documented using a PRISMA flow diagram (11).
Data Extraction
Two reviewers (ZQ and SL) extracted data independently with a standard data collection form. Any disagreements were resolved by consensus, and a third reviewer (XL) checked the consistency and accuracy of all data. The following data and information were extracted for each included study: basic information (title, first author, publication year, funding, and study design), information on the participants (sample size, age, and inclusion/exclusion criteria of participants), details of the intervention and control conditions, outcome information [for dichotomous data, we abstracted the number of events and total participants per group; for continuous data, we abstracted the means, standard deviations (SD), and number of total participants per group].
Risk of Bias in Individual Studies
Two reviewers (ZQ and SL) assessed the potential risk of bias of each included study independently. Discrepancies were resolved by discussion and consensus with a third researcher (XL). We assessed the risk of bias in cohort studies using the Newcastle-Ottawa Scale (12), which contains eight domains: representativeness of exposure cohorts, selection of non-exposure cohorts, determination of exposure, outcome events that did not occur before study initiation, comparability of cohorts based on design or analysis, assessment of outcome events, adequacy of follow-up time, and completeness of follow-up. For case series, we used the Joanna Briggs Institute critical appraisal checklist for case series (13), which consists of 10 domains. Each domain was graded as one score if reported.
Statistical Analysis
All statistical analyses were performed using RStudio, version 1.3.1056. Comparable data from studies reporting the same outcome were pooled in forest plots using random-effects models, in accordance with the Cochrane Handbook (14). Mortality in the ICU and in hospital was described in detail. A subgroup analysis was performed according to different regions. For dichotomous outcomes, we calculated the risk ratios (RR) with the corresponding 95% confidence intervals (CI) and P-values. For continuous outcomes, we calculated the standardized mean difference and its corresponding 95% CI if means and SD were reported. Furthermore, the 95% prediction interval (PI) was used to estimate the range into which, with 95% certainty, the effect of a future study will fall. We reported the effect sizes with 95% CI using random-effects models. Two-sided P < 0.05 was considered statistically significant. Heterogeneity was defined as P < 0.10 and I2 > 50%. When effect sizes could not be pooled because only one study was available for a comparison, we reported the study findings narratively. We used sensitivity analyses to evaluate the stability of the mortality outcomes of the included studies. For a result that included more than 10 studies, publication bias was assessed by visual funnel plots.
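As an illustration of the random-effects pooling described above, a minimal DerSimonian-Laird estimator with the I2 statistic can be sketched as follows. The effect sizes and variances below are hypothetical, not data from the included studies:

```python
import math

# Minimal DerSimonian-Laird random-effects pooling of log risk ratios with
# the I^2 heterogeneity statistic. The effect sizes and variances below are
# hypothetical illustration values, not data from the included studies.
def pool_random_effects(effects, variances):
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / C)                  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = (max(0.0, (Q - df) / Q) * 100.0) if Q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

log_rr = [-1.2, -0.2, -0.9, -0.1, -0.7]            # hypothetical log-RRs
var = [0.02, 0.03, 0.02, 0.04, 0.03]               # their variances
pooled, ci, i2 = pool_random_effects(log_rr, var)
print(round(math.exp(pooled), 2),                  # pooled RR
      tuple(round(math.exp(x), 2) for x in ci),    # 95% CI on RR scale
      round(i2, 1))                                # I^2 (%)
```

With these spread-out effects the estimator reports I2 well above the 50% heterogeneity threshold used in this review, which is the situation in which the random-effects weights differ most from fixed-effect weights.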
Quality of the Evidence
The quality of evidence for each outcome was assessed by using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. The judgments of quality for specific outcomes were based on five main factors: study design and execution limitations, inconsistency, indirectness, imprecision of results (random-effects model), and publication bias across all studies (15,16). The quality of evidence for each outcome was graded as high, moderate, low, or very low (17) and presented in "GRADE Evidence Profiles" (18).
Search Results
The literature search retrieved 9,362 records through database searching and 51 additional records through other sources, which included 36 from Google Scholar and 15 from preprint platforms. After removing duplicates, we screened the titles and abstracts of 5,138 records and reviewed the full text of 101 articles. Finally, we included 33 studies (cohort studies and case series) that reported either the mortality of critically ill patients or the clinical interventions between surviving and non-surviving patients with COVID-19 (Figure 1). All of them were published in English.
The Characteristics of the Included Studies
The basic characteristics of the included studies on the mortality of critically ill patients are summarized in Supplementary File 3. A visual analysis of the funnel plot indicated that no publication bias was suspected in the results for age and mortality in the ICU. The results for IMV, PaO2/FiO2 ratio, and SOFA score were suggestive of publication bias (Supplementary File 4).
Quality of Evidence
We evaluated the quality of evidence for 11 outcomes. Among them, two outcomes (18%) were graded as of moderate quality, four outcomes (36%) were graded as of low quality, and five outcomes (45%) were graded as of very low quality. We produced "GRADE evidence profiles," and the details of GRADE can be found in Supplementary File 5.
Sensitivity Analysis
We conducted a sensitivity analysis on each result by omitting one study at a time. No study had a significant impact on the results of the meta-analysis (Supplementary File 6). A sensitivity analysis showed that all studies had little or acceptable effect on the total combined effect and that the results were stable.
DISCUSSION
The COVID-19 epidemic has not yet stopped, especially in western countries. In previous reports, the mortality of critically ill patients remained poorly characterized. The novel findings of this study include the mortality of critically ill patients with laboratory-confirmed COVID-19 worldwide and the clinical interventions between surviving and non-surviving patients. The results show that all-cause mortality in the ICU was 35% and mortality in hospital was 32% around the world for the year 2020. Differences were distinct between regions. The incidence of mortality in Southeast Asia was as high as 48%, followed by 39% in China and the Middle East. The lowest incidence, 15%, occurred in America. Plausible explanations for the high mortality in China and other Asian countries are that the arrival and peak of the COVID-19 pandemic in Asia were earlier than in any other region, and there was a shortage of ICU resources and experience. Moreover, the data may be subject to patient selection for ICU admission, and some nations adopted a stringent strategy (19). In addition, mortality also relates to the length of follow-up: some of the participants remained in the hospital on mechanical ventilation even at the end of follow-up. A recent meta-analysis reported that all-cause mortality associated with COVID-19 was 10% overall and 34% in patients admitted to the ICU (7), but most of their participants were from China; our result for this subgroup was similar. This new meta-analysis included more participants and covered much wider regions. Early identification and prompt organ-function support care would provide relief in critical cases (53). Among the included studies, five identified independent risk factors associated with ICU mortality, ranging from laboratory parameters to clinical interventions, but their results are not the same (22,25,38,50,51,54).
We compared the baseline clinical characteristics between surviving and non-surviving patients. Based on the univariate analysis, we found that old age, APACHE II score, and SOFA score were consistent with the multivariate Cox regression analyses in these five studies. In addition, the PaO2/FiO2 ratio is an important index of the severity of respiratory failure, and our results showed that it is helpful for predicting the outcome.
With regard to the outcome of the clinical interventions of this meta-analysis, respiratory support is the most important part of life sustaining treatments. According to this study, HFNO during ICU hospitalization was more often used in non-surviving patients, and IMV was more often used in surviving patients.
In previous studies, Auld and Capone (22,54) reported that receipt of IMV was associated with a decreased likelihood of survival. When discussing differences in respiratory support, the use of respiratory support as rescue therapy and the different severity levels of the two groups should not be ignored. HFNO and NIV can be safely used in COVID-19-related mild-moderate ARDS. In studies of non-COVID-19 patients, HFNO has been associated with lower mortality in hypoxemic respiratory failure (55), but in some moderate-severe ARDS patients, HFNO or NIV should be used cautiously because of rapid progression to the severe type and a high risk of treatment failure. According to Mukhtar et al. (56), the use of NIV with a predefined algorithm in subjects with moderate-severe COVID-19 ARDS was successful in 77% of the subjects. IMV is the most widely used therapy for severe hypoxemia. The population with IMV was larger than that with non-invasive support in this study. The need for endotracheal intubation and invasive mechanical ventilation was eight times that of non-invasive ventilation in a previous study (30). Although the timing of IMV is disputed, a recent meta-analysis reported that early intubation was not associated with improved survival (57). The latest meta-analysis (42) reported that the timing of intubation may not have influenced the mortality of critically ill patients with COVID-19. ECMO can be considered if the respiratory dysfunction of a patient develops into severe ARDS that cannot be sustained with IMV, but this salvage treatment did not show a statistically significant difference between the two groups. In a study with a small sample (3), two of five patients survived with the support of ECMO. The appropriate timing and eligible patients still need to be evaluated.
In previous research, as many as 31% of patients in a cohort developed severe acute kidney injury requiring renal replacement therapy during hospitalization (25). A high creatinine level, AKI, and receipt of RRT were independent risk factors for the in-hospital mortality of patients (22,51,58). Similarly, a high high-sensitivity cardiac troponin I level, ischemic heart disease, cardiac injury, and vasopressor support were associated with death (22,38,50,51,54). In the present study, the results show that vasopressors and RRT were more often used in the surviving group.
There were some limitations in the current study that must be acknowledged. First is the high level of heterogeneity. Plausible explanations for the heterogeneous risks of mortality include differences in age, nation and race, disease severity, and insufficient length of follow-up. It was difficult for us to control for the effects of these confounding factors; the heterogeneity in the component studies was addressed with random-effects models. Second, the data on clinical interventions were derived from observational cohorts rather than randomized controlled trials, so these results should be treated cautiously. The key purpose of this study was to describe the actual use of various clinical interventions in the surviving and non-surviving groups rather than the impact of individual measures on prognosis. Third, most studies were retrospective, and recall bias might have occurred.
CONCLUSIONS
Mortality was high in critically ill patients with COVID-19 based on low-quality evidence, and intercontinental differences existed. The early identification of critical characteristics and the use of support care help to indicate the outcome of critically ill patients.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material. | 2021-07-23T13:25:44.011Z | 2021-07-23T00:00:00.000 | {
"year": 2021,
"sha1": "d4a088bebfd7713a9e094deeca18591930fc18fe",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2021.635560/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d4a088bebfd7713a9e094deeca18591930fc18fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25888080 | pes2o/s2orc | v3-fos-license | The dynamics of intracellular water constrains glycolytic oscillations in Saccharomyces cerevisiae
We explored the dynamic coupling of intracellular water with metabolism in yeast cells. Using the polarity-sensitive probe 6-acetyl-2-dimethylaminonaphthalene (ACDAN), we show that glycolytic oscillations in the yeast S. cerevisiae BY4743 wild-type strain are coupled to the generalized polarization (GP) function of ACDAN, which measures the physical state of intracellular water. We analysed the oscillatory dynamics in wild type and 24 mutant strains with mutations in many different enzymes and proteins. Using fluorescence spectroscopy, we measured the amplitude and frequency of the metabolic oscillations and ACDAN GP in the resting state of all 25 strains. The results showed that there is a lower and an upper threshold of ACDAN GP, beyond which oscillations do not occur. This critical GP range is also phenomenologically linked to the occurrence of oscillations when cells are grown at different temperatures. Furthermore, the link between glycolytic oscillations and the ACDAN GP value also holds when ATP synthesis or the integrity of the cell cytoskeleton is perturbed. Our results represent the first demonstration that the dynamic behaviour of a metabolic process can be regulated by a cell-wide physical property: the dynamic state of intracellular water, which represents an emergent property.
Supporting Methods:
The generalized polarization (GP) function was originally introduced as an analytical method to quantitatively determine the relative amounts and temporal fluctuations of two distinct lipid phases when they coexist in a model membrane; for reviews see (1,2). This function was originally defined as GP = (I_B − I_R)/(I_B + I_R), where I_B and I_R are the measured fluorescence intensities under conditions in which a wavelength (or a band of wavelengths) B (for blue-shifted) and R (for red-shifted) are both observed using a given excitation wavelength. Being a weighted difference, the values of the GP must fall within −1 and 1; the lower the value, the greater the extent of relaxation (or bathochromic shift of the spectrum). This definition is formally identical to the classical definition of fluorescence polarization, in which B and R represent two orthogonal orientations of the observation polarizers in the fluorimeter. The advantage of the GP function for the analysis of the spectral properties of the DAN probes derives from the well-known properties of the classical polarization function, which contains information on the interconversion between two different "states" of the emitting dipole of the fluorophore. In the original studies, the LAURDAN GP was shown to distinguish between the extent of water relaxation in solid-ordered (s_o) and liquid-disordered (l_d) phases in phospholipid membranes (1,2). In the GP function as used here, the two states correspond to the unrelaxed and relaxed environments sensed by the probes. Our approach to the study of intracellular water dynamics in yeast therefore constitutes a generalization of the use of the GP function (3,4). In this case, however, we explore fluctuations in water relaxation throughout the cell rather than in just membrane-associated water.
The oscillations of the GP function in the cell (Figure S1C), which yield the measured changes in the emission intensity (quantum yield; Figure S1B) of the probes at any given wavelength, can be explained only if solvent relaxation is the dominant mechanism. In the classical definition of GP, B and R correspond to 440 and 490 nm, respectively, of the ACDAN fluorescence emission spectrum (1,2).
Modelling the coupling of glycolytic oscillations with water dynamics
In order to put our experimental results into a more rigorous theoretical framework, we use the Association-Induction hypothesis proposed by G. N. Ling (5). This hypothesis builds on the assumptions (i) that the bulk of water and various solutes are adsorbed on cellular proteins, (ii) that this adsorption is synchronized as a result of interactions of neighbouring adsorption sites (cooperativity), and (iii) that the cooperative adsorptions are controlled by a smaller number of molecular species referred to as cardinal adsorbents, which exert their control by interacting with certain key sites (cardinal sites) on the same proteins (5).
Here we give a brief outline of a recently developed model for the coupling of glycolytic oscillations with intracellular water dynamics. The model is based on an earlier model of glycolysis (6) and on the generally accepted view that phosphofructokinase is a key enzyme in controlling the pace of glycolysis, that the enzyme shows cooperativity with respect to binding of ATP, and that it is activated by its product ADP (6). However, instead of using the classical mass-action-based Michaelis-Menten (or, for cooperative enzymes, Monod-Wyman-Changeux) approach, we use Yang-Ling isotherms (5,7,8) to describe the transformation of ATP to ADP and its coupling to the state of water, which is denoted p (for polarized). As opposed to the Monod-Wyman-Changeux and the related Koshland-Nemethy-Filmer models, the Yang-Ling isotherm has a statistical-mechanical origin and is general.
The full description of the model will be presented elsewhere (manuscript in preparation). Briefly, the model consists of equations S2–S4, in which v_PFK is the phosphofructokinase rate, V_1 is a maximum rate for the ATP-induced transition of water from a less to a more polarized state (p), and the remaining parameters represent the dissociation constant and the nearest-neighbour interaction energy for binding of ATP to fibrillar proteins, e.g. actin.
We now assume that the maximum activity of PFK (V) and the dissociation constant for binding of ATP depend on the variable p, i.e. both become functions of p. It is well documented that for enzymes in viscous solutions the maximum activity and the binding constants of substrates change with crowding (9,10). For simplicity we assume that V is inversely proportional to p, while the dissociation constant is proportional to p; however, other relations between the two parameters and p will yield similar behaviour. Furthermore, it is quite possible that many of the other parameters in equations S2-S3 may depend on p, but again this will only have a qualitative effect on the behaviour of the model. In the current form of the model, ATP, ADP, p and time t appear as dimensionless variables. The model was simulated using the Berkeley-Madonna software (Berkeley-Madonna, Berkeley, CA). Simulations of the model are shown in Figs. S16 and S17. The data in Fig. S16 reveal that ATP and p oscillate in phase, as revealed by the phase plot in Fig. S16C. Note that this phase plot is similar to that of ATP and ACDAN fluorescence (Fig. S3C). Furthermore, reducing the maximum velocity V_1 or increasing the rate constant k_o will destroy the oscillations. This situation corresponds to that in Fig. 6, where the formation of actin filaments is inhibited by Latrunculin B. Changing the rate constant k_o will change the steady-state level of p, which in turn will affect the amplitude and the frequency of the oscillations. A plot of the relative amplitude of oscillations of ATP against the steady-state value of p reveals a double Hopf bifurcation (Fig. S17), similar to the experimental Hopf bifurcation shown in Fig. 4A. Finally, the model predicts that simple mechanical coupling of p in a region where glycolytic oscillations occur to the polarization of water (p_1) in a region where glycolytic oscillations are absent would result in a slight phase shift in the oscillations of p and p_1 (Fig. S18), which is confirmed experimentally (Fig. S19).
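The authors' Yang-Ling-based equations are not reproduced in this excerpt, but the qualitative behaviour they describe — a limit cycle that appears and disappears as a parameter crosses Hopf bifurcations — can be illustrated with the classic Sel'kov model of glycolytic oscillations, a standard two-variable PFK caricature. This sketch is illustrative only; the parameter values (a = 0.08, b = 0.6) are a textbook choice inside the oscillatory regime, not values from the present model:

```python
def selkov_step(x, y, a, b, dt):
    # Sel'kov's reduced PFK model: x ~ product (ADP), y ~ substrate (F6P)
    dx = -x + a * y + x * x * y
    dy = b - a * y - x * x * y
    return x + dt * dx, y + dt * dy

def simulate(a=0.08, b=0.6, dt=0.005, steps=60_000):
    # start slightly off the unstable fixed point (x*, y*) = (b, b/(a + b^2))
    x, y = b + 0.1, b / (a + b * b)
    xs = []
    for _ in range(steps):
        x, y = selkov_step(x, y, a, b, dt)
        xs.append(x)
    return xs

xs = simulate()
tail = xs[len(xs) // 2 :]          # discard the transient
print(max(tail) - min(tail) > 0.1)  # sustained oscillation on the limit cycle
```

Varying b moves this system across Hopf bifurcations, so oscillations exist only within an intermediate parameter window — analogous to the critical GP window described in the main text, within which glycolytic oscillations occur.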
It should be emphasized that the variables in the model constitute a network (11), and hence one cannot say that metabolic oscillations drive oscillations in the polarization of water or the other way around. Furthermore, in such a network one cannot study the individual components in isolation, e.g. by assuming that situations may exist in which one variable shows oscillations while others do not.
To investigate whether the results obtained with the simple model (equations S2-S4) are general, we also implemented the Yang-Ling approach to the coupling of the polarity of water to glycolytic oscillations in a detailed model of glycolysis adapted from Hald and Sørensen (12). The model, which involves 24 reactions and 32 chemical species, is shown in Fig. S20, and Fig. S21 shows two phase plots, of p versus [ATP] and of p versus [NADH]. We note that the first phase plot is similar to those in Fig. S3C and S16C, showing that p is in phase with [ATP], as observed experimentally and with our simple model. While the simple model (equations S2-S4) does not involve NADH, it is interesting to compare the phase plot of p versus [NADH] in the detailed model (Fig. S21B) with the corresponding plot of ACDAN versus NADH fluorescence (Fig. S3D). Both plots show that the oscillations of ACDAN GP are in antiphase with oscillations in NADH.
Table S1
Corresponding values of growth temperature, measurement temperature, ACDAN GP and oscillation frequency for the wild type BY4743 S. cerevisiae strain. A frequency of 0 s −1 means that no oscillations were obtained at the particular temperature.
Figure S2
Oscillations in NADH and glucose 6-phosphate in S. cerevisiae cells grown at 30 °C. Yeast cells (10% w/v) were suspended at 25 °C in 100 mM potassium phosphate, pH 6.8, and oscillations were induced as described in Fig. S1. Measurements of intracellular glucose 6-phosphate were made by quenching the cells with boiling buffered ethanol and subsequently extracting the metabolites (13). The concentration of glucose 6-phosphate was then measured by addition of NADP+ and glucose 6-phosphate dehydrogenase to the extract. The concentration of glucose 6-phosphate in the extract was determined from the concentration of formed NADPH using standard curves measured on solutions with known concentrations of glucose 6-phosphate. In the estimation of the intracellular concentration of glucose 6-phosphate we assumed that 1 mg protein corresponds to a cytoplasmic volume of 3.7 µl. Note that the cells grown on glycerol exhibit far more mitochondria and their respiration rate is more than twice that of cells grown on glucose. Images were obtained on a Leica DMRE epifluorescence microscope using a CoolLed illumination system (CoolLed, Andover, U.K.) through a 100× Leica oil-immersion objective (NA = 1.4). Images of ACDAN fluorescence were obtained using a Leica Microsystems A4 filter cube, while images of MitoTracker Red were obtained using a Leica Microsystems Y3 filter cube. In the emission range 462 ± 24 nm there is no visible contribution of NADH fluorescence from the control cells upon excitation at 810 nm (g), 780 nm (h) or 740 nm (i) at the settings used. Intensity profiles reveal that the intensities in (g)-(i) are not above the background noise. A comparison of (j) and (e) reveals that in order to visualize NADH in unstained cells (j), the ACDAN signal becomes significantly oversaturated (e). The maximum laser powers (i.e., the power output from the laser before interaction with filters and optics) are 790 mW at 810 nm, 740 mW at 780 nm and 675 mW at 740 nm.
In (b)-(d) and (g)-(i) the laser power used was 40% of the maximum, while in (e) and (j) it was 60%. Scale bars are 10 µm.
Table S2. The ethanol production rate is normalized to that of the wild type strain. Calibration of the mass spectrometry ethanol signal showed that more than 80% of the added glucose is converted to ethanol in all strains. The dashed line is a linear regression to the data; it has a slope of 7.2 × 10⁻³ and an R² value of 1.6 × 10⁻³.
Figure S15.
Figure S18
Phase plot of polarization of water in two different regions of the cell: one in which glycolytic oscillations occur (p) and another in which glycolytic oscillations are absent (p 1 ). The simulation assumes simple mechanical coupling of p to p 1 . Parameters as in Fig. S15.
Figure S19
Simultaneous measurements of oscillations in ACDAN (A) and Nile Red (B) fluorescence in the wild type S. cerevisiae BY4743 strain. A 10% (w/v) cell suspension in 100 mM potassium phosphate buffer, pH 6.8, was incubated at room temperature for 1 h with 10 µM ACDAN and 5 µM Nile Red. Then the cells were washed twice and resuspended in the same buffer. ACDAN was excited at 365 nm and its emission was measured at 450 nm, while Nile Red was excited at 550 nm with its emission measured at 630 nm. The plot in C is a phase plot of the two measurements. Yeasts were grown at 30 °C. Oscillations were induced by addition of first 30 mM glucose (arrow) and 60 s later 5 mM KCN. Measurement temperature was 25 °C.
Akkermansia muciniphila and Lactobacillus plantarum ameliorate systemic lupus erythematosus by possibly regulating immune response and remodeling gut microbiota
ABSTRACT Systemic lupus erythematosus (SLE), characterized by persistent inflammation, is a complex autoimmune disorder that affects all organs, challenging clinical treatment. Dysbiosis of gut microbiota promotes autoimmune disorders that damage extraintestinal organs. Modulating the gut microbiome is proposed as a promising approach for fine-tuning parts of the immune system, relieving systemic inflammation in multiple diseases. This study demonstrated that the administration of Akkermansia muciniphila and Lactobacillus plantarum contributed to an anti-inflammatory environment by decreasing IL-6 and IL-17 and increasing IL-10 levels in the circulation. The treatment with A. muciniphila and L. plantarum restored intestinal barrier integrity to different extents. In addition, both strains reduced the deposition of IgG in the kidney and significantly improved renal function. Further studies revealed distinct remodeling roles of A. muciniphila and L. plantarum administration on the gut microbiome. This work demonstrated essential mechanisms of how A. muciniphila and L. plantarum remodel the gut microbiota and regulate immune responses in the SLE mouse model. IMPORTANCE Several pieces of research have demonstrated that certain probiotic strains contribute to regulating excessive inflammation and restoring tolerance in SLE animal models. More animal trials combined with clinical studies are urgently needed to further elucidate the mechanisms by which specific probiotic bacteria prevent SLE symptoms and to develop novel therapeutic targets. In this study, we explored the role of A. muciniphila and L. plantarum in ameliorating SLE disease activity. Both A. muciniphila and L. plantarum treatment relieved the systemic inflammation and improved renal function in the SLE mouse model. We demonstrated that A. muciniphila and L. plantarum contributed to an anti-inflammatory environment by regulating cytokine levels in the circulation, restoring intestinal barrier integrity, and remodeling the gut microbiome, although to different extents.
in lupus-prone Murphy Roths Large (MRL)/Mp-Faslpr (lpr) mice. For example, several studies have reported an increase in Lactobacillus in SLE mice compared with the control mice (9,10). Conversely, Zhang et al. reported a decrease in Lactobacillaceae in the MRL/lpr mouse model versus healthy controls (11). Meanwhile, Akkermansia muciniphila significantly decreased from the pre-disease stage to the diseased stage in mice (9).
Currently, probiotics have been investigated experimentally and mechanistically for their possible effectiveness in treating cancer, metabolic diseases, and autoimmune diseases, including SLE (12). Mardani et al. showed that administering the probiotics Lactobacillus delbrueckii or L. rhamnosus to a pristane-induced SLE mouse model was able to prevent the initiation or the progression of the SLE disease (12). Luo et al. observed a striking effect of Lactobacillus spp. administration in ameliorating lupus nephritis in MRL/lpr mice (9). L. plantarum is a lactic acid bacterium with a particular capability of producing diverse and potent bacteriocins, which have antibacterial properties (13). Moreover, Cabana-Puig et al. described that Lactobacillus spp. act in synergy to attenuate splenomegaly and lymphadenopathy in lupus-prone MRL/lpr mice (14). To date, a body of evidence has accumulated on the role of L. plantarum in medical cases such as diarrhea prevention, cholesterol lowering, and reduction of irritable bowel syndrome symptoms (15)(16)(17). In addition, A. muciniphila is considered one of the most promising probiotic candidates, with essential value in improving the host's metabolic functions and immune responses (18,19). Hänninen et al. found that A. muciniphila remodels gut microbiota and controls islet autoimmunity in non-obese diabetic mice (20).
Our previous study has shown that the genera Lactobacillus and Akkermansia were enriched in glucocorticoid-treated SLE patients (7). We have recently demonstrated that L. plantarum could restore intestinal permeability and regulate immunity-related pathways in Drosophila (21). In the present study, we tested the hypothesis that treating with A. muciniphila and L. plantarum might ameliorate SLE disease activity by regulating the gut microenvironment and immune response in a classical SLE mouse model.
Bacterial strains and growth conditions
A. muciniphila (ATCC BAA-835) was purchased from Biobw (China). The L. plantarum used in this study was obtained from our lab (21). L. plantarum and A. muciniphila were cultured on de Man, Rogosa, and Sharpe (MRS) medium and Brain Heart Infusion medium with 2 g/L mucoprotein, respectively. Both strains were cultured at 37°C under anaerobic conditions.
Animal and experimental groups
Female MRL/lpr mice were originally obtained from Dr. Qian Zhang of the NHC Key Laboratory of Antibody Technique (Nanjing Medical University). All animals were bred and maintained in a specific pathogen-free facility according to the requirements of the Institutional Animal Care and Use Committee at Nanjing Medical University (IACUC 1812014). The mice were housed under a standard 12 h light/dark cycle with controlled temperature (22 ± 2°C) and given water and food ad libitum.
Forty-one female MRL/lpr mice were randomly divided into the following three groups: SLE controls (Con; n = 13), SLE mice treated with L. plantarum (LP; n = 14), and SLE mice treated with A. muciniphila (Akk; n = 14). A. muciniphila and L. plantarum were suspended in a sterile phosphate-buffered saline (PBS) solution and diluted to obtain a concentration of 1 × 10⁹ CFU. The SLE controls were treated with sterile PBS. Probiotics gavage was performed every 2 days from the 8th week to the 15th week (Fig. 1A). Mice were euthanized at 15 weeks old, and spleen weight was measured. Body weight was measured each week after treatment began.
range from 0 to 4 based on previously described parameters: inflammation, depth of inflammation, crypt damage, loss of goblet cells, and thickness of the colon wall (22,23).
Renal function
Urine samples were tested twice for proteinuria. Mice at 8 and 15 weeks of age were placed in individual metabolic cages for urine collection over a period of 12 h. All samples were stored at −20°C until being processed simultaneously. Urine samples were analyzed using an ELISA kit to measure total protein level (Cat#RJ17462, Renjie Bio, China). Meanwhile, serum creatinine and blood urea nitrogen (BUN) were also measured using ELISA kits (Cat#RJ17464, Cat#RJ17469, Renjie Bio, China).
The HE staining of kidneys was also conducted at the Servicebio company (China). One kidney section per mouse was evaluated. Each glomerulus was examined at 400× magnification and scored from 0 (normal) to 4 (severe) based on the glomerular size and lobulation, presence of karyorrhectic nuclear debris, capillary basement membrane thickening, and the degree of mesangial matrix expansion and mesangial cell proliferation as described (24). All measurements and analyses were scored in a blinded fashion by two pathologists. Images were acquired using a BX53 light microscope (Olympus, Japan).
Fecal sample collection and DNA extraction
Fecal samples were collected in sterile stool containers and frozen at −80°C within 2 h of sample collection. About 100 mg of stool sample was used to extract total genomic DNA following the protocol of the DNA extraction kit (Cat#DP328, Tiangen, China). The concentration and purity of the extracted bacterial DNA were detected using a Qubit 2.0 Fluorometer (Thermo Scientific, USA). DNA quality and quantity were determined by agarose gel electrophoresis.
16S rRNA gene amplicon sequencing and analysis
Polymerase chain reaction (PCR) was performed to produce V4 regions of the 16S rRNA gene using the conserved primers 515F (5′-GTGCCAGCMGCCGCGGTAA-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′), and no template DNA reaction was used as a negative control. PCR products were purified using the GeneJET Gel Extraction Kit (Thermo Scientific, USA). Following manufacturer's recommendation, sequencing libraries were generated using the Illumina TruSeq DNA PCR-Free Library Preparation Kit (Illumina, USA). PCR fragments were sequenced in the Illumina NovaSeq platform (Novogene, China).
Bioinformatics analysis of 16S rRNA gene amplicons was performed with QIIME 2 (version 2020.8.0) (27). Briefly, fastq reads were processed by the dada2 plugin, and the dada2 denoise-paired command was used to remove low-quality reads. Dada2 generates unique features that can be compared between different studies. The taxonomy of these features was assigned with the SILVA reference database (version 138) classifier at 99% similarity (28). At each taxonomic level, taxa with relative abundance below 0.0001 were filtered out. Alpha and beta diversity were determined using the R package vegan.
Functional analysis
The functional capacity of the gut microbial community was predicted using PICRUSt2. Predicted functional genes were categorized into MetaCyc pathways. The relative pathway abundance change (denoted delta) between the pre- and post-treatment samples for each mouse was calculated. The delta values for each treatment were compared with the Wilcoxon rank-sum test. MetaCyc pathway changes with FDR-adjusted P values <0.05 were considered significant.
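The per-pathway delta and the multiple-testing correction described above can be sketched as follows (a minimal illustration, not the authors' code; the Wilcoxon test itself is omitted and would typically come from a statistics library, while the Benjamini-Hochberg FDR step is written out explicitly; mouse ids and p-values are hypothetical):

```python
def pathway_deltas(pre, post):
    """Per-mouse change in the relative abundance of one pathway.

    pre, post: dicts mapping mouse id -> relative abundance; returns
    post - pre for each mouse, in sorted mouse-id order.
    """
    return [post[m] - pre[m] for m in sorted(pre)]

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment; returns adjusted p in input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # ascending p
    adjusted = [0.0] * n
    running_min = 1.0
    for offset, i in enumerate(reversed(order)):
        rank = n - offset                 # 1-based rank in ascending order
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min         # enforce monotonicity from the top
    return adjusted

print(bh_adjust([0.01, 0.04, 0.03, 0.002]))  # ≈ [0.02, 0.04, 0.04, 0.008]
```

Pathways whose adjusted p falls below 0.05 would then be flagged as significantly shifted by a treatment.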
Microbial network analysis
The co-occurrence microbial network of each experimental group was constructed from Spearman correlations based on the relative abundance of each genus. Correlations with P value < 0.004 and correlation coefficient > 0.8 are represented in the figure.
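A minimal sketch of this construction (illustrative only; the genus names are hypothetical, and the P-value filter applied in the study is omitted here — only the correlation-coefficient threshold is shown):

```python
def ranks(values):
    """1-based ranks with average ranks for ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def cooccurrence_edges(abundance, threshold=0.8):
    """abundance: dict genus -> per-sample relative abundances; returns edges."""
    genera = sorted(abundance)
    edges = []
    for i, g1 in enumerate(genera):
        for g2 in genera[i + 1:]:
            rho = spearman(abundance[g1], abundance[g2])
            if abs(rho) > threshold:
                edges.append((g1, g2, rho))
    return edges
```

The resulting edge list is what network viewers (e.g. Cytoscape-style tools) render, and edge counts and neighborhood connectivity summarize the network complexity compared across groups.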
Statistical analysis
Student's t-test was used for the comparison of two groups. For comparison of more than two groups, one-way analysis of variance was performed. The Kruskal-Wallis test was applied for data that did not meet a normal distribution. All measured data are displayed as means ± SD, and the analysis was performed using GraphPad Prism software. Significance was defined as: * P < 0.05; ** P < 0.01; and *** P < 0.001. Bacterial taxonomic analysis between any two groups was conducted using the two-sided Wilcoxon rank-sum test in R.
A. muciniphila and L. plantarum treatment relieved systemic inflammation
To determine the effects of probiotics on active disease in MRL/lpr mice, female mice were gavaged with A. muciniphila or L. plantarum from 8 to 15 weeks of age (Fig. 1A). The probiotics treatment increased body weight as expected (Fig. 1B), whereas the spleen weight did not change (Fig. 1C). It is worth noting that mesenteric lymph node weights were significantly decreased with LP treatment compared with SLE controls (Fig. 1D). The production of antinuclear antibodies (ANA) is the immunological hallmark of SLE (29). Anti-dsDNA antibodies are one group of ANA. We next assessed the serum anti-dsDNA titers. The results showed that both the Akk- and LP-treated groups had significantly lower levels of serum anti-dsDNA than SLE controls, especially the Akk-treated group (Fig. 1E). In addition, mice treated with both probiotics secreted lower levels of serum IgG, and the decrease in the Akk-treated group was much more apparent (Fig. 1F).
It has been widely reported that overexpression of cytokines plays a critical role in SLE pathogenesis (30). The proinflammatory cytokine IL-17, secreted by autoimmune Th17 cells, has been shown to facilitate SLE development (31). Consistent with the importance of IL-17 in SLE, Th17 cells polarized by stimulation with IL-6 can acquire pathogenicity and elicit SLE (32). As might be expected, the serum expression of IL-17 and IL-6 was significantly lower in probiotics-treated groups, especially in the Akk-treated group (Fig. 1G and H). IL-10 has been shown to protect against SLE by suppressing pathogenic Th1 responses, including IFN-γ-mediated autoantibody production and renal inflammation (33). Of particular interest, the anti-inflammatory cytokine IL-10 was increased in probiotics-treated mice compared to SLE controls, particularly in the Akk-treated group (Fig. 1I). These results indicated that A. muciniphila and L. plantarum administration indeed relieved the inflammatory response in the SLE model.
A. muciniphila and L. plantarum treatment improved renal function
The disease phenotype in MRL/lpr mice resembles human SLE, which is characterized by an increased level of proteinuria and progressive immune complex glomerulonephritis (34). Lupus nephritis is the most common cause of renal injury in SLE and the most important predictor of mortality in patients with SLE (35). Next, we determined renal function by measuring proteinuria and the kidney histopathology scores. Compared with the SLE controls, mice in probiotics-treated groups exhibited improved renal physiology characterized by decreased levels of proteinuria (Fig. 2A), creatinine (Cr) (Fig. 2B), and BUN (Fig. 2C). Moreover, the administration of both Akk and LP significantly ameliorated the kidney injury, as shown by reduced scores for crescents, tubular inflammatory infiltrates, tubular atrophy, tubular dilatation, and interstitial infiltration in the renal histopathology scoring (Fig. 2D through I). In addition, the mesangial proliferation score decreased significantly in the LP-treated group and decreased with marginal significance in the Akk-treated group (P = 0.059) (Fig. 2J). IgG autoantibodies are major immune deposits in the kidney and trigger lupus nephritis (36). Both Akk and LP reduced the IgG autoantibody, but not the IgM autoantibody, deposits in the kidney (Fig. 2K and L). Moreover, the expression of IgA protein was also dramatically decreased after probiotics treatment (Fig. 2M). Thus, A. muciniphila and L. plantarum administration could improve renal function in the SLE model.
A. muciniphila and L. plantarum treatment exerted protective effects in the intestinal barrier integrity
Gastrointestinal symptoms have been reported to occur in >50% of SLE patients, and lupus enteritis has been identified as a possible initial manifestation of SLE (37,38). Histological examination showed that both probiotics restored the colonic histomorphology to a certain extent (Fig. 3A). Impressively, epithelial damage was pronounced in control mice, which presented massive loss of goblet cells and crypts. Histopathological scoring confirmed significantly higher scores in the control group than in the probiotics-treated groups according to the previously described criteria (22,23). The high score of the control group was due to the crypts being nearly destroyed. All colonic HE staining images are listed in the supplementary information. To assess intestinal permeability, we examined the effects of probiotics on tight junction structure stained with the marker claudin-7 (Fig. 3B). The immunofluorescence analysis demonstrated that claudin-7 redistributed to the cytoplasm owing to dysplasia of the intestinal crypts in the SLE control group. Nevertheless, the tight junction structure of colonic epithelial cells improved markedly after probiotics intake. Claudin-7 was localized to the intact intestinal epithelium in the Akk and LP groups, extending from the base to the tip of the colonic crypts. Taken together, our data showed that A. muciniphila and L. plantarum help maintain intestinal function and barrier integrity in the SLE model.
A. muciniphila and L. plantarum treatment altered the structure and diversity of the gut microbiota
A growing number of studies have demonstrated that dysbiosis of the gut microbiome may be involved in SLE development and progression (39). We analyzed fecal DNA isolated from all experimental mouse groups to determine the dynamics of gut microbiota before and after probiotic treatment. To explore the bacterial composition alteration upon probiotic treatment, we evaluated multiple ecological parameters, including Shannon and Simpson diversity (combined measures of richness and evenness), Pielou evenness (showing how evenly the individuals in the community are distributed over different operational taxonomic units [OTUs]), Chao richness (an estimate of the total number of OTUs present in the given community), and Richness (the number of observed OTUs). In the LP-treated group, the Shannon and Pielou indices were significantly increased after the treatment compared with the pre-treatment samples (Fig. 4A), indicating an increased evenness of the gut microbial community after the treatment of LP. In contrast, the Akk treatment demonstrated no influence on alpha diversity. Of note, the number of taxonomies, represented by the Richness and Chao indices, was increased in the post-treatment samples of the control group (Fig. S2A), which might represent the microbiota dynamic (7). The administration of both A. muciniphila and L. plantarum reversed this tendency (Fig. S2A). Analysis of Bray-Curtis distance based on the OTU-level composition revealed different microbiome structures between the pre-treatment and post-treatment samples in each experimental group (Fig. 4B). Moreover, the administration of LP decreased the Firmicutes/Bacteroidetes ratio (Fig. 4C) as a result of a decrease in the proportion of Firmicutes and an increase in the proportion of Bacteroidetes (Fig. S2B).
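The ecological parameters named above follow standard definitions; as an illustrative sketch (textbook formulas computed from OTU count vectors, not the vegan implementation itself, though vegan follows the same natural-log convention):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum p_i ln p_i (natural log)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Gini-Simpson diversity 1 - sum p_i^2."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def pielou(counts):
    """Pielou evenness J = H' / ln(S_obs); 1 = perfectly even community."""
    observed = sum(1 for c in counts if c > 0)
    return shannon(counts) / math.log(observed)

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from singletons/doubletons."""
    observed = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return observed + f1 * (f1 - 1) / (2 * (f2 + 1))

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(x, y)) / sum(a + b for a, b in zip(x, y))

counts = [10, 10, 10, 10]  # a perfectly even 4-OTU community (hypothetical)
print(round(pielou(counts), 6))  # 1.0
```

Per-sample indices like these are what the pre- versus post-treatment alpha-diversity comparisons in Fig. 4A summarize, while pairwise Bray-Curtis distances underlie the beta-diversity ordination in Fig. 4B.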
Furthermore, differential bacterial genera between the pre-treatment and post-treatment samples were defined with a paired Wilcoxon rank-sum test (Fig. S2C). There were 21 genera with different proportions in the control group, representing the microbes that altered during the progress of the disease. Of these genera, six and nine were also changed upon the administration of Akk and LP, respectively (Fig. 3D). Parasutterella, for example, was increased consistently in all three groups. Parasutterella is a harmful bacterium that increases with age in mice (40). The abundance of a short-chain fatty acid producer, Faecalibaculum, was reversed by the Akk and LP administration. In addition, LP treatment specifically increased the accumulation of butyrate-producing Lachnospiraceae, i.e., Lachnospiraceae_UCG-006 and Roseburia, which is beneficial for the intestinal barrier. Multiple species of Roseburia were enriched in healthy samples compared with SLE patients (6). These results indicated the role of L. plantarum in improving the gut barrier integrity.
and post-treatment, respectively. Con, control group. Green, red, and blue labels indicate common differential genera in the control and LP-treated group, the control and Akk-treated group, and all three groups, respectively.
A. muciniphila and L. plantarum treatment exerted different impacts on gut microbiota
To further explore the effects of Akk and LP administration on the gut microbiota, we assessed the interactions among the genera in each experimental group. In the control and LP-treated groups, the complexity of the network structure decreased slightly in the post-treatment mice (Fig. 5). However, in the Akk-treated group, the complexity increased dramatically, with a sharply increased number of edges and neighborhood connectivity (Fig. 5; Fig. S3A). These results indicated a notable impact of A. muciniphila treatment on the microbial community. Moreover, to predict whether A. muciniphila and L. plantarum could modulate the microbial metabolic function of SLE mice, we conducted a functional metagenomics prediction based on 16S rRNA sequencing using PICRUSt2 (41). We calculated the alteration of the relative abundance of each metabolic pathway between the pre- and post-treatment samples. Compared with the control group, the Akk treatment altered a small number of metabolic functions, most of which changed in the same direction as in the control group (Fig. 6A). Impressively, the LP treatment differentially changed multiple metabolic pathways in the opposite direction compared with the control group (Fig. 6B). For example, the tricarboxylic acid (TCA) cycle was decreased in the post-PBS-treated samples but increased in the post-LP-treated mice. The TCA cycle is an essential pathway for biosynthesis and energy metabolism. Abnormal T cell activation and apoptosis are involved in the pathogenesis of SLE, which is highly energy dependent (42,43). Similarly, the aerobic respiration pathway, one of the most critical energy production processes, increased after the LP treatment.
Furthermore, to find the potential bacteria associated with the immune disorder in SLE, we conducted a correlation analysis between microbial genera and cytokine levels in each treatment group. As shown in Fig. 6C, A. muciniphila was positively correlated with the level of IL-10 in the Akk-treated group. Conversely, no correlation was observed between LP and any cytokines in the LP-treated group. These results were consistent with the report by Guo et al. in SLE patients that A. muciniphila, but not L. plantarum, was extensively associated with cytokines (7).
In summary, according to the above analyses, A. muciniphila and L. plantarum treatment may reduce SLE symptoms through distinct pathways that mediate the interaction of host and microbiota.
DISCUSSION
In the present study, we demonstrated for the first time that treatment with A. muciniphila and L. plantarum can improve the inflammation, intestinal tract damage, and renal damage occurring in an experimental SLE mouse model. Both Akk and LP showed a protective role in the MRL/lpr mice, represented by reduced overall inflammation, restored intestinal tight junctions, and improved renal function (Fig. 7). Herein, we suggest the critical role of gut microbiota manipulation in relieving the systemic symptoms in a mouse model of SLE.
Both L. plantarum and A. muciniphila are promising probiotics that benefit multiple diseases. L. plantarum is well known for its antibacterial properties (44)(45)(46). In animal models and clinical samples, multiple strains of L. plantarum have been demonstrated to improve intestinal barrier integrity and modulate immune responses (16,(47)(48)(49). A. muciniphila is an abundant member of the human intestinal microbiota (50) that can degrade mucin (51). A. muciniphila has been reported to participate in host immune regulation (52). Additional evidence has shown that A. muciniphila also enhances the integrity of the intestinal epithelial cells and the thickness of the mucus layer, thereby promoting intestinal health (53). In this study, we compared the protective roles and possible mechanisms of A. muciniphila and L. plantarum in MRL/lpr mice.
First, we observed that A. muciniphila and L. plantarum had similar effectiveness in ameliorating the progression of SLE according to the inflammatory cytokines and intestinal inflammation (Fig. 1 and 2). Administration of either probiotic promoted an anti-inflammatory environment, suppressing the expression of the proinflammatory cytokines IL-6 and IL-17 while increasing the level of the anti-inflammatory cytokine IL-10 in circulation; A. muciniphila introduced the more pronounced alterations (Fig. 1G through I). These results indicated a more vital function of A. muciniphila in immunity regulation than L. plantarum. In MRL/lpr mice, increased intestinal permeability has been described (54). Our data showed that the intestinal epithelium was compromised in lupus mice. Treatment with A. muciniphila and L. plantarum restored mucosal barrier integrity (Fig. 3). However, lower cumulative scores, characterized by crypt hyperplasia, epithelial injury, and inflammation, were observed in L. plantarum-treated mice compared with control and A. muciniphila-treated mice, indicating a better ability of L. plantarum to improve intestinal barrier integrity than A. muciniphila. Dysbiosis of the gut microbiota has been reported repeatedly to contribute to SLE in humans (6-9, 55) and to a lupus-like autoimmune disease in mice (9, 10, 54, 56). From the perspective of restoring the gut microbiome community, several probiotics, including A. muciniphila and L. plantarum, can correct gut microbiota imbalance (20, 57, 58). We found quite different modes of microbiota remodeling between A. muciniphila and L. plantarum. L. plantarum administration significantly increased the alpha diversity of the gut microbiota (Fig. 4A), which is reduced in SLE mice and patients (6, 10), whereas SLE mice treated with A. muciniphila showed no change in alpha diversity (Fig. 4A).
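The alpha-diversity comparisons above (richness, Chao1, and Simpson; Fig. 4A) are computed from per-sample taxon count tables. A minimal sketch of the standard formulas, using a hypothetical genus-level count vector (not data from this study):

```python
import math

def richness(counts):
    """Number of taxa observed at least once."""
    return sum(1 for c in counts if c > 0)

def shannon(counts):
    """Shannon diversity H' = -sum(p * ln p) over non-zero taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum(p^2); higher values mean more diversity."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1))."""
    f1 = sum(1 for c in counts if c == 1)  # singletons
    f2 = sum(1 for c in counts if c == 2)  # doubletons
    return richness(counts) + (f1 * (f1 - 1)) / (2 * (f2 + 1))

# Hypothetical genus-level read counts for one fecal sample
sample = [120, 85, 40, 7, 3, 1, 1, 0]
print(richness(sample), round(shannon(sample), 3),
      round(simpson(sample), 3), round(chao1(sample), 3))
```

In practice these metrics are usually computed after rarefying samples to equal sequencing depth, since richness and Chao1 in particular are sensitive to library size.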
In addition, the LP treatment altered the microbial metabolic function considerably compared with the PBS-treated group (Fig. 6B), whereas Akk treatment only slightly remodeled the microbial metabolic function (Fig. 6A). On the other hand, Akk administration significantly increased the network complexity of the microbial community (Fig. 5B). Akk was found to be correlated with the level of cytokines (Fig. 6C), while LP was not (59). Moreover, using a new approach to study epitopes and identify T cell receptors expressed by CD4+ Foxp3+ (Treg) cells specific for commensal-derived antigens, Kuczmus et al. found that antigens from Akk reprogram naïve CD4+ T cells to the Treg lineage and expand pre-existing microbe-specific Tregs (60). Additionally, a recent study revealed that Akk promotes the accumulation of Th1 and Th17 cells in the gut (61).
In this study, we performed probiotic treatment at 8 weeks of age, when the MRL/lpr mice started to develop lupus-like symptoms. However, in clinical cases, patients with developed SLE should be considered. Studies of SLE patients showed intense alteration in the gut microbiome (6, 7). Moreover, a recent clinical trial performed in lupus patients with developed SLE and gastrointestinal symptoms revealed that supplementing synbiotics could improve gut microbiota and systemic inflammation (62). Future studies to conduct probiotic treatment in the MRL/lpr mice with progressed SLE will provide additional valuable evidence for clinical application.
In brief, the present study demonstrated an essential role of A. muciniphila and L. plantarum in ameliorating the inflammation and renal damage of the MRL/lpr mice. Future studies are necessary to further explore the molecular mechanisms involved.

The authors declare no conflict of interest.
ADDITIONAL FILES
The following material is available online. Fig. S1 (mSphere00070-23-s0001.pdf). Representative colonic HE-staining images. Fig. S2 (mSphere00070-23-s0002.pdf). The gut microbiota changes in the control, A. muciniphila-, and L. plantarum-treated groups. (A) The alpha diversity (richness, Chao1, and Simpson) of the pre- and post-treatment samples in each treatment group. (B) The relative abundance of bacterial phyla in all experimental groups. (C) Venn plot of differential bacterial genera in control, pre-treatment, and post-treatment samples. Fig. S3 (mSphere00070-23-s0003.pdf). The network statistics for the microbial community networks of the pre- and post-treatment groups of control, Akk-treated, and LP-treated mice.
"year": 2023,
"sha1": "904867955e176f9cd31f5fbf7c30ed6b456b8b93",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/msphere.00070-23",
"oa_status": "GOLD",
"pdf_src": "ASMUSA",
"pdf_hash": "263a3013a59f40fa1f6c12417920efe20beab242",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
New Insights on CMV Management in Solid Organ Transplant Patients: Prevention, Treatment, and Management of Resistant/Refractory Disease
Cytomegalovirus (CMV) infection can have both direct and indirect effects after solid-organ transplantation, with a significant impact on transplant outcomes. Prevention strategies decrease the risk of CMV disease, although CMV still occurs in up to 50% of high-risk patients. Ganciclovir (GCV) and valganciclovir (VGCV) are the main drugs currently used for preventing and treating CMV. Emerging data suggest that letermovir is as effective as VGCV with fewer hematological side effects. Refractory and resistant CMV also still occur in solid-organ-transplant patients. Maribavir has been shown to be effective and have less toxicity in the treatment of refractory and resistant CMV. In this review paper, we discuss prevention strategies, refractory and resistant CMV, and drug-related side effects and their impact, as well as optimal use of novel anti-CMV therapies.
INTRODUCTION
Cytomegalovirus (CMV) infection can be responsible for direct and indirect effects after solid-organ transplantation [1]. Direct effects include CMV syndrome and tissue-invasive organ disease, such as gastrointestinal CMV invasive disease in kidney transplant patients or CMV-induced hepatitis in liver transplant patients. In 1989, Robert H. Rubin evoked for the first time the indirect effects of CMV [2]. Indirect effects are independent of a high viral load and result in part from the effect of the virus on the host's immune response in the setting of long periods of low level of CMV replication. Several indirect effects are associated with CMV, including acute and chronic rejection, arteriosclerosis and cardiovascular disease, opportunistic infections, malignancies, and diabetes mellitus [1].
Despite prevention strategies that are currently used after transplantation and that decrease the risk of CMV disease [3], CMV disease can still occur in up to 50% of high-risk solid-organ transplant (SOT) patients (CMV-seropositive donor/CMV-seronegative recipient, D+/R-) and 17% of CMV-seropositive recipients (R+) [4]. In a recent meta-analysis, several risk factors for CMV infection or disease were identified: D+/R- serological status, seropositive recipients, use of polyclonal antibodies for induction and/or mycophenolic acid and/or steroids, donors' and recipients' advanced age, and history of acute rejection [5]. In a nationwide retrospective French study, it has been shown that, despite preventive strategies, CMV infection after SOT is associated with an increased risk of acute rejection and graft failure, a higher mortality, and increased costs related to a higher number of inpatient days, number of hospital readmissions, and hospital costs [6].
Prevention of Cytomegalovirus Infection
There are two strategies for CMV prevention after SOT: universal prophylaxis and preemptive therapy [7]. Universal prophylaxis relies on giving antiviral therapy to all at-risk recipients (except D-/R-) for 3-12 months according to the type of transplantation and serological status [7]. For instance, 6 months of prophylaxis is recommended for D+/R- kidney transplant patients and for seropositive kidney-transplant patients given polyclonal antibody induction therapy, while 3 months of prophylaxis is recommended for kidney transplant patients not given T-cell-depleting agents. In liver-transplant patients, 3-6 months of prophylaxis can be given. Conversely, in lung transplant patients, a longer duration of prophylaxis is recommended (up to 1 year) [7]. Valganciclovir (VGCV) is usually used in this setting. In the early period post-transplantation, intravenous ganciclovir (GCV) can be given for a few days before it is replaced by oral VGCV. Prophylaxis is quite easy to implement, and very rare cases of early CMV replication/infection occur. Conversely, late CMV infection/disease after the end of prophylaxis is common.
A preemptive strategy relies on the weekly monitoring of CMV DNAemia and the initiation of antiviral therapy when the viral load rises above a predetermined threshold. It requires more complicated logistics, which makes this strategy more difficult to implement in centers with a large number of transplantations. Early CMV replication/infection is common. VGCV is the most common antiviral used for treatment. Both strategies prevent CMV disease. However, the effect of preemptive therapy on CMV indirect effects is uncertain, including on preventing opportunistic infections [7]. In a recent survey that assessed prevention strategies from 224 transplant centers, it was shown that universal prophylaxis is used in 90% of centers in D+/R- SOT patients [8]. Kidney and heart-transplant patients are mostly treated for 6 months and lung-transplant patients are given 12 months of prophylaxis, while 50% of liver-transplant patients were treated for 3 months and 50% for 6 months. Among CMV-seropositive patients, 50% of centers use a prophylaxis strategy while the others prefer a preemptive strategy [8]. In liver-transplant patients, preemptive therapy is preferred in seropositive patients. VGCV is the anti-CMV drug most commonly used to prevent CMV after SOT. The main side effect reported by different centers is VGCV-induced myelotoxicity, which can lead to its discontinuation in at least 10% of patients [8].
Treatment of CMV Infection
The treatment of CMV infection in SOT patients relies mainly on oral VGCV (900 mg twice a day, renally adjusted) or intravenous GCV (5 mg/kg twice a day, renally adjusted). Intravenous GCV is recommended in cases of sight- or life-threatening disease, very high viral load, or questionable gastrointestinal absorption. CMV DNAemia should be monitored weekly to detect refractory/resistant CMV. Treatment is recommended until resolution of clinical symptoms and until virological clearance is obtained (or very low results with ultrasensitive testing) on one or two samples obtained at 1-week intervals. The minimum duration of therapy is 2 weeks [7]. In the real-life setting, this recommendation seems to be followed by the large majority of centers [8]. However, nearly 14% of centers add anti-CMV immunoglobulins to antiviral therapy in the following indications: primary CMV infection in D+/R- patients, in case of hypogammaglobulinemia (< 500 mg/dL), and in case of severe clinical manifestations such as pneumonia, enteritis, or severe leukopenia [8].
Neutropenia in Transplant-Patients
Neutropenia is frequently observed after solid-organ transplantation. It occurs in up to 30-40% of patients within the first year after transplantation [9-11]. It is mainly related to the use of myelotoxic drugs such as polyclonal antibodies, mycophenolic acid, mammalian target of rapamycin inhibitors, VGCV, and trimethoprim/sulfamethoxazole.
In a recent study that included 572 adults who received a kidney transplant and were CMV mismatched or had a panel-reactive antibody rate ≥ 80%, 208 (36.3%) participants had neutropenia, defined as an absolute neutrophil count < 1000 cells/µL. In a pediatric cohort of SOT patients, VGCV prophylaxis was associated with neutropenia [11]. In patients presenting with neutropenia, physicians are prompted to either decrease or stop VGCV, decrease or stop mycophenolic acid, stop trimethoprim/sulfamethoxazole, or use granulocyte colony-stimulating factor (G-CSF). In a cohort of 721 kidney-transplant patients, 31% developed at least one neutropenic episode within the first year after kidney transplantation [12]. Most neutropenia episodes were presumably drug related (71%) and managed by reduction/discontinuation of potentially responsible drugs [mycophenolic acid (MPA) 51%, VGCV 25%, trimethoprim/sulfamethoxazole 19%]. Granulocyte colony-stimulating factor was used in 0.6% of patients [12]. The incidence of infections was about three times higher during grade 3 and 4 neutropenia [12]. In a retrospective study, neutropenic patients experienced more bacterial infections compared with those who did not (43% versus 32%, p = 0.04) [9]. The grade of neutropenia correlated with the global risk of infection [9]. Stopping VGCV can increase the risk of CMV infection, especially in D+/R- patients, and requires starting strict weekly CMV DNAemia monitoring to prevent CMV disease [13]. Reducing the dose of VGCV means giving a dose below the recommended dose adapted to kidney function, which can increase the risk of antiviral drug resistance [1, 14]. Therefore, VGCV dose reduction should be avoided. With respect to mycophenolic acid discontinuation and dose reduction, several studies have previously shown an increased risk of acute rejection and even graft loss when transplant patients are not given a complete dose [9, 10, 15-17]. In a study by Brar et al. [10], neutropenia in kidney transplant patients was associated with increased risks of VGCV or mycophenolic acid dose reductions or discontinuations, of acute rejection, and of hospitalization.
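The neutropenia grades referred to above map onto absolute neutrophil count (ANC) ranges. The CTCAE-style cut-offs used in this sketch, and the lower limit of normal, are assumptions for illustration and should be checked against the current CTCAE tables before any clinical use:

```python
def neutropenia_grade(anc_cells_per_ul, lower_limit_normal=2000):
    """Classify an absolute neutrophil count (cells/µL) into a
    CTCAE-style grade (0 = not neutropenic ... 4 = most severe).
    Thresholds here are illustrative assumptions, not clinical guidance."""
    if anc_cells_per_ul < 500:
        return 4
    if anc_cells_per_ul < 1000:
        return 3
    if anc_cells_per_ul < 1500:
        return 2
    if anc_cells_per_ul < lower_limit_normal:
        return 1
    return 0

# The studies cited above defined neutropenia as ANC < 1000 cells/µL,
# i.e., grade 3 or worse under this classification.
print(neutropenia_grade(800))
```

Under these cut-offs, the "grade 3 and 4" episodes associated with a roughly threefold infection risk correspond to ANC below 1000 cells/µL.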
Hence, there is a need for an antiviral drug that is as effective as VGCV for preventing CMV but does not share its main side effect, namely myelotoxicity. Letermovir, a selective terminase inhibitor, is a new anti-CMV drug that inhibits formation and release of viral particles. It was previously approved for prophylaxis in hematopoietic stem-cell transplant (HSCT) patients. Significantly fewer clinically significant CMV events were observed in HSCT patients given letermovir compared with those who received placebo [18]. No letermovir-related myelotoxicity was observed in letermovir-treated patients. A phase III trial was conducted in 600 D+/R- kidney-transplant patients to compare letermovir with VGCV prophylaxis to prevent CMV for 28 weeks after transplantation [19]. The results were recently reported. The proportion of patients with CMV disease through the first year after transplantation was similar with both drugs, i.e., 10.4% with letermovir and 11.8% with VGCV. Conversely, drug-related adverse events during the 28 weeks after transplantation were reported more often with VGCV (35%) than with letermovir (19.9%). The incidence of neutropenia, defined as an absolute neutrophil count < 1000/µL, during the treatment phase was lower with letermovir than with VGCV (4.1% versus 19.5%; difference, -15.4%; 95% CI, -20.7, -10.5). This study shows that letermovir was not inferior to VGCV for preventing CMV disease during the first year after transplantation and had a lower rate of myelotoxicity [19].
Development of Resistant/Refractory CMV Infection
Resistant CMV infection is defined as detection of a known viral genetic mutation(s) that decreases the susceptibility to one or more anti-CMV medications, while refractory CMV infection is characterized by persistent signs and symptoms of CMV disease and/or persistent CMV DNAemia that fails to improve, defined as a < 1 log10 (i.e., < 10-fold) decrease in CMV viral load, or that increases, after at least 2 weeks of appropriately dosed antiviral therapy [14].
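The quantitative part of the refractory definition (< 1 log10, i.e., < 10-fold, decline after at least 2 weeks of adequately dosed therapy) can be checked directly from two quantitative PCR results. This is a screening sketch, not a diagnostic rule:

```python
import math

def log10_decline(baseline_iu_ml, followup_iu_ml):
    """log10 drop in CMV DNAemia between two quantitative PCR results."""
    return math.log10(baseline_iu_ml) - math.log10(followup_iu_ml)

def suggests_refractory(baseline_iu_ml, followup_iu_ml, weeks_on_therapy):
    """Flag a pattern consistent with the refractory definition:
    < 1 log10 (< 10-fold) decline after >= 2 weeks of appropriately
    dosed antiviral therapy. Dosing adequacy must be assessed separately."""
    return (weeks_on_therapy >= 2
            and log10_decline(baseline_iu_ml, followup_iu_ml) < 1.0)

# Example: 50,000 IU/mL falling only to 20,000 IU/mL after 2 weeks
# (a ~0.4 log10 decline) would meet the quantitative criterion.
print(suggests_refractory(50_000, 20_000, weeks_on_therapy=2))
```

Note that, per the guidance above, a rising DNAemia one week into therapy does not yet qualify: the definition requires at least 2 weeks of full-dose treatment.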
Clinical disease from resistant/refractory (R/R) CMV ranges from asymptomatic infection to severe or even fatal tissue-invasive disease. Across multiple studies, it is associated with poor outcomes, including higher rates of hospitalization, increased length of hospital stay, higher costs, increased adverse events from alternative CMV therapies, increased rates of rejection and allograft loss, and increased mortality [7, 20-22]. The most significant risk factor for resistant CMV across numerous trials is the lack of prior CMV immunity, seen in CMV-mismatched D+/R- recipients; other risk factors for development of resistant CMV include inadequate antiviral drug dose or delivery, prolonged antiviral drug exposure (usually > 5 months), ongoing active viral replication while on antiviral therapy, intense immunosuppressive therapy, and exposure to therapeutic antiviral drugs with a lower barrier to resistance. When used for treatment, letermovir seems to have the lowest barrier, followed by maribavir and GCV/VGCV [20, 23-27]. Robust data on rates of letermovir resistance are not available, as the drug has not been used for treatment; low rates were seen after prophylaxis, similar to prophylaxis trials with other agents [28].
The frequency of CMV resistance in the SOT population is quite variable across different organs and programs. The incidence of resistance after GCV therapy in SOT patients is generally low (< 5%), although it seems to be higher in some published reports, ranging from 5% to 12% [25, 29, 30], and as high as 18% in lung recipients [31, 32] and 31% in intestinal and multivisceral organ transplant recipients [33, 34]. Rates of genetic resistance have been measured routinely in only a few large trials. In the IMPACT trial comparing 100 days versus 200 days of VGCV prophylaxis in D+/R- kidney recipients, the incidence of resistance was similar, at ~2%, after 100-200 days of either GCV or VGCV prophylaxis [35]. In the VICTOR trial comparing treatment with intravenous GCV versus oral VGCV, 3% of both groups (almost half of whom had prior prophylaxis with GCV or VGCV) had documented resistance testing at the time of treatment initiation [36].
After GCV/VGCV exposure, the most common mutations occur in the UL97 gene, followed by the UL54 DNA polymerase gene. Seven canonical mutations (M460V/I, H520Q, A594V, L595S, C603W, and C592G) account for the majority of the UL97 mutations, most of which convey high-level GCV resistance [7]. Mutations in the UL56 gene are seen after exposure to letermovir (more rarely in the UL89 and UL51 genes) [37].
Diagnosis of Resistant/Refractory CMV Infection
Antiviral drug resistance should be suspected when there is persistent or recurrent CMV DNAemia or disease during prolonged antiviral therapy; it very rarely occurs after brief exposure to treatment. For GCV, prolonged therapy is usually 6 or more weeks of cumulative drug exposure, including at least 2 weeks of ongoing full-dose therapy [20,29]. Although a higher level of CMV DNAemia may commonly be noted a week into therapy, guidelines suggest that this is not yet concerning for R/R disease, and do not recommend sending testing or switching therapy, unless there is severe disease; by definition, R/R disease is after at least 2 weeks of full-dose antiviral therapy [7,14]. Clinicians should be aware that the kinetics of CMV DNAemia response and the risk for early emergence of resistance may be different with newer antiviral drugs, especially those that have a lower barrier to resistance.
Sequencing of each genetic locus (UL97, UL54, UL56) is necessary to detect resistance mutations, and should be determined on the basis of prior drug exposure, as this predicts the likelihood of a mutation. A sample (most often from blood, although also possible from viral culture; sequencing from tissue biopsies is rarely possible) should be sent for mutation sequencing analysis, most commonly in UL97 after VGCV exposure, but also in UL54 with more complex or prolonged exposures, and in UL56 after letermovir exposure. Sequencing of each gene adds cost. Results are more feasible and reliable if the CMV DNAemia in the specimen is at least 1000 IU/mL [39].
False-negative resistance sequencing can occur, due to insensitivity in detecting mutant subpopulations representing less than 20-30% of the total, which may still be clinically significant [39, 40]. Emerging next-generation deep-sequencing technologies offer the possibility of detecting small mutant subpopulations [41]. There have been reports of discordant findings of resistance mutations in varied body sites (e.g., eye, spinal fluid) [42-44]. Progressive disease at tissue sites despite negative testing in blood may warrant the genotypic testing of tissue-specific specimens, when virus is detectable at adequate levels.
Treatment and Prophylaxis of Resistant/ Refractory CMV Infection
While no controlled trial data define a best practice for treatment of R/R CMV infection, clinically useful published algorithms are based on expert opinion and experience [7,45,46]. In general, the first step is to consider reducing the transplant-related immunosuppressive therapy to the lowest feasible amount, often after careful discussions with the transplant team.
Therapeutic choices, often decided prior to return of sequencing data, often depend on the extent of disease. For asymptomatic or mildly symptomatic disease, or with low-level DNAemia, guidelines recommend the use of high-dose GCV (from 7.5 to 10 mg/kg every 12 h in normal renal function) [7]. Data supporting this in SOT patients are limited; one series showed successful outcomes in six patients with low-level DNAemia [47]. In general, given that most of the common mutations convey high-level resistance to VGCV, this therapeutic approach has a narrow applicability but may be useful in the setting of refractory infection (i.e., perhaps with malabsorption or other issues with drug delivery), and cases of low-level resistance mutations (i.e., UL97 gene C592G).
For severe, life-threatening, or sight-threatening disease, international guidelines recommend the use of foscarnet [7]. An updated clinical decision support tool, developed by several of the guidelines authors, also recommends maribavir, although not with retinitis or encephalitis due to poor drug penetration, where foscarnet would be preferred [45]. Unfortunately, a review of foscarnet for R/R CMV showed a mortality of 31%, with significant renal toxicities, highlighting the need for new therapies [22]. Maribavir has recently been approved in the USA and Europe for treatment of R/R CMV. This oral drug is a safe and effective therapeutic agent, based on a recent phase 3 trial [48]. The main side effect was dysgeusia, seen in 37%. Although those subjects were treated for 8 weeks, it is possible that shorter treatments may be effective, similar to those standardly used with GCV/VGCV [7]; such research has not yet been done. Furthermore, only 6% of the phase 3 trial subjects had high viral loads, with limited severe end-organ disease, such that some experts suggest using foscarnet induction therapy followed by maribavir treatment. Twenty-five percent of subjects underwent sequencing and developed mutations conveying resistance to maribavir [38]. Clinicians should be aware that maribavir treats only CMV, and may wish to provide acyclovir or another similar agent to protect against reactivation of varicella-zoster and herpes simplex viruses. Brincidofovir was previously evaluated for CMV treatment, but is not currently available for that indication at the time of this review. Letermovir, approved for prophylaxis after stem cell transplant, is not being developed as a treatment agent. Small, uncontrolled studies have shown that it may be helpful in R/R CMV, although it has a very low barrier to resistance and is probably better used as prophylaxis [49].
Additional adjunctive therapies, such as the use of CMV immunoglobulin, may be useful. Other agents, such as mTOR inhibitors (e.g., sirolimus and everolimus), leflunomide, and artesunate, have anti-CMV effects in vitro that may sometimes act synergistically with conventional antivirals [50, 51], although none of these is strongly evidence based [7]. Given the mechanisms of action, the combination of maribavir and GCV/VGCV may be antagonistic and should be avoided [52]. Early data suggest that infusions of CMV-specific T cells may improve antiviral host defenses [53, 54].
Prophylaxis after treatment of R/R CMV infection can be challenging, especially if there is multidrug resistance. Maribavir is rarely available and not approved for prophylaxis, VGCV is usually ineffective, and foscarnet is often considered impractical and too toxic. In general, we recommend preserving letermovir for prophylaxis after treatment of R/R CMV, rather than using it for treatment, given the lower barrier to developing resistance with letermovir treatment. Other options that may be effective, depending on prior exposures and resistance mutations, include CMV immunoglobulin and cidofovir every 2 weeks.
Very Low DNAemia and Diagnosis of Resistant/Refractory Disease
The advent of ultrasensitive CMV DNAemia testing has proven to be somewhat enigmatic for transplant clinicians. The use of real-time PCR seems to have created more artifact, or at least results of unclear clinical significance, in the lower ranges (generally below 500 IU/mL in whole blood or plasma). In the absence of signs and symptoms of disease, this may not represent R/R CMV but rather a diagnostic artifact of DNAemia of unclear significance; in the right clinical setting, clinicians may wish to monitor this with weekly CMV DNAemia testing and consider a possible slight reduction of immunosuppression, which, in our experience, can often resolve this low-level DNAemia. In one series, almost half of patients with a CMV DNAemia of < 1000 IU/mL resolved without treatment [55]. A recent study on the use of letermovir in 37 subjects with very low viral loads (< 1000 IU/mL) showed good virologic outcomes, although the DNAemia may also have resolved without treatment [49].
While earlier guidelines recommended treating until the CMV DNAemia was negative or undetectable [56], when using ultrasensitive CMV DNAemia testing, newer guidelines recommend treating until there are one or two negative or very low CMV DNAemia tests a week apart [7]. Clinicians should be aware of the impact of ultrasensitive CMV DNAemia testing, and should not overdiagnose R/R CMV at these lower levels of DNAemia.
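The newer stopping rule (one or two consecutive negative or very low weekly DNAemia results) amounts to scanning the weekly series for a qualifying run. In this sketch the 200 IU/mL "very low" cut-off is an illustrative assumption, not a guideline value:

```python
def may_stop_therapy(weekly_dnaemia_iu_ml, very_low_cutoff=200,
                     required_consecutive=2):
    """Return True once the weekly CMV DNAemia series contains the
    required number of consecutive negative/very-low results.
    Cut-off and run length are configurable assumptions; clinical
    symptoms must also have resolved before stopping therapy."""
    run = 0
    for value in weekly_dnaemia_iu_ml:
        run = run + 1 if value < very_low_cutoff else 0
        if run >= required_consecutive:
            return True
    return False

# Example: high viral load declining to two consecutive very-low results
print(may_stop_therapy([50_000, 5_000, 150, 100]))
```

A single very-low value followed by a rebound resets the run, so intermittent low results alone do not satisfy the rule.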
CONCLUSIONS
Novel therapies for the prevention and treatment of CMV have emerged as beneficial within the last few years. While VGCV has been very effective for more than two decades, letermovir may be as efficient as VGCV for preventing CMV disease, with fewer hematological side effects. Maribavir is now approved for treating refractory/resistant CMV infection. Further studies are still required to improve the rate of sustained virological clearance and outcomes in this setting.
ACKNOWLEDGEMENTS
Funding. No funding or sponsorship was received for this study, or publication of this article.
Author Contributions. Both named authors were involved in the study and drafting of the manuscript.
Compliance with Ethics Guidelines. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.
Data Availability. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
"year": 2022,
"sha1": "bf8ded2077f14f24c070df00d5045df09b56302a",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40121-022-00746-1.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "ce4ffb72efbd5656cb1a7f99717b8dc35f0b995d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Antifungal activity of terrestrial Streptomyces rochei strain HF391 against clinical azole-resistant Aspergillus fumigatus
Background and Purpose: Actinomycetes have been discovered as a source of antifungal compounds that are currently in clinical use. Invasive aspergillosis (IA) due to Aspergillus fumigatus, including infection by azole-resistant isolates, has emerged as an opportunistic threat on a global scale. This paper describes the antifungal activity of one terrestrial actinomycete against a clinically isolated azole-resistant A. fumigatus. Materials and Methods: Soil samples were collected from various locations in Kerman, Iran. Thereafter, actinomycetes were isolated using starch-casein-nitrate-agar medium, and the most efficient actinomycetes (capable of inhibiting A. fumigatus) were screened using the agar block method. In the next step, the selected actinomycete was cultivated in starch-casein broth medium and the inhibitory activity of the obtained culture broth was evaluated using the agar well diffusion method. Results: The selected actinomycete, identified as Streptomyces rochei strain HF391, could suppress the growth of A. fumigatus isolates obtained from the clinical samples of patients treated with azoles. This strain showed inhibition zones of more than 15 mm in the agar diffusion assay. Conclusion: The results of the present study introduce Streptomyces rochei strain HF391 as a terrestrial actinomycete that can inhibit the growth of clinically isolated A. fumigatus.
Introduction
Among the human pathogenic species of Aspergillus, A. fumigatus is perhaps the most devastating cause of Aspergillus-related diseases, followed by A. flavus, A. terreus, A. niger, and the model organism A. nidulans [1, 2]. Aspergillus fumigatus is a ubiquitous saprophytic mold that forms airborne spores (conidia). Humans inhale, on average, hundreds of these infectious propagules daily [3]. A. fumigatus pathogenesis and progression are the result of both fungal growth and the host response [4]. Invasive aspergillosis (IA) can cause a wide range of human ailments depending on host immune function [5]. Pathogenesis and virulence of Aspergillus occur when the host response is either too strong or too weak [6]. The types of hosts that are susceptible to invasive aspergillosis are patients with leukemia; hematopoietic stem cell transplant recipients; patients on prolonged corticosteroid therapy, which is commonly utilized for the prevention and/or treatment of graft-versus-host disease in transplant patients; individuals with genetic immunodeficiencies such as chronic granulomatous disease (CGD); and individuals infected with human immunodeficiency virus [7]. In these patients, resistance is most commonly observed in A. fumigatus, and the isolates may be resistant to only itraconazole (ITZ) or exhibit a multi-azole or pan-azole-resistant phenotype. The phenotype depends on the underlying resistance mechanism, which commonly involves point mutations in the cyp51A gene, the target of antifungal azoles [8].
In recent years, microorganisms have become important in the search for novel active compounds; secondary metabolites with diverse chemical structures exhibiting antimicrobial activity may serve as model systems in the discovery of new drugs [9]. The use of chemical fungicides has led to deteriorating human health and the development of pathogen resistance to fungicides. Actinomycetes are a major source of antifungal agents, and their antagonistic activity is used for the bio-control of fungal diseases [10]. Actinomycetes produce about 75% of commercially and medically useful antibiotics [11][12]. Thus, the search for new antibiotics from these bacteria has gained importance. For example, a strain of Streptomyces spp. producing strong antifungal antibiotics was discovered in Egypt [13]. Furthermore, a study in Turkey reported an antibacterial agent-producing Streptomyces spp. [14], and in China a new strain of Streptomyces was discovered that kills certain pathogenic fungi [15].
Among the different types of drugs, secondary metabolites of actinomycetes, including antibiotics with diverse chemical structures and biological activities, have occupied a prominent position in the pharmaceutical industry [16]. This study explored the isolation and characterization of native actinomycetes producing antifungal metabolites, in order to screen for a new antifungal compound against drug-resistant A. fumigatus.
Fungal strains
In the current study, an azole-resistant strain (IFRC 500, Invasive Fungi Research Center, Mazandaran University of Medical Sciences), previously isolated from bronchoalveolar lavage (BAL) fluid and identified by molecular methods, was used. The resistant strain harbored an L98H amino acid substitution and a 34-bp tandem repeat in the cyp51A gene promoter region, and exhibited an itraconazole minimum inhibitory concentration (MIC) of >16 µg/ml. Stock cultures for the transient working collections were grown on malt extract agar (MEA, Difco, Becton, Dickinson and Company, Franklin Lakes, NJ, USA) at 35°C for 48 h until use.
Collection of soil samples
One hundred soil samples were collected from different points of Kerman City, Iran. The samples were taken at a depth of up to 20 cm after removing approximately 3 cm of the soil surface, placed in polyethylene bags to avoid external contamination, and kept at 4°C until pretreatment.
Isolation of actinomycetes
For the isolation of actinomycetes, various methods were performed on the basis of different sources and media [17]. Soil samples were processed by the serial dilution method and cultured by the spread plate technique on starch-casein agar (SCB), then incubated at 37°C for 2 weeks. Slants containing pure cultures were stored at 4°C until further examination [18].
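The serial dilution plating described above implies a simple back-calculation from colony counts to microbial load. The sketch below is illustrative only (the function name and example numbers are hypothetical, not from the paper); it assumes the usual CFU formula of colonies divided by (dilution factor × plated volume).

```python
def cfu_per_gram(colony_count, dilution_factor, plated_volume_ml):
    """Estimate colony-forming units per gram of soil from a spread plate.

    Assumes 1 g of soil was suspended in 1 ml at the first dilution step.
    colony_count: colonies counted on the plate
    dilution_factor: e.g. 1e-4 for the 10^-4 serial dilution
    plated_volume_ml: volume of diluted suspension spread on the plate
    """
    return colony_count / (dilution_factor * plated_volume_ml)

# Hypothetical example: 42 colonies from 0.1 ml of a 10^-4 dilution,
# i.e. roughly 4.2 million CFU per gram of soil.
print(cfu_per_gram(42, 1e-4, 0.1))
```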
Identification of active actinomycetes
Several levels of identification were used for the actinomycetes: (i) chemotaxonomic level: identification based on chemical variation and characters across the genera of actinomycetes; (ii) classical level: identification based on macroscopic and microscopic methods and other properties such as colony color in culture; (iii) molecular level: the partial 16S rRNA gene sequence obtained from the active isolate was compared with other bacterial sequences using the NCBI BLAST search [17].
Screening of the antifungal activity

Agar plug method
The antifungal activity of the actinomycetes was tested by the agar plug method [19]. From actinomycetes grown on the surface of SCB medium in Petri dishes, agar discs were cut out and transferred to the surface of PDA plates seeded with azole-resistant A. fumigatus. The Petri dishes were incubated at 25°C to allow the growth of the test organisms.
Well Diffusion Method
The isolated strains were transferred into CG (casein-glycerin) medium in a 250 ml flask and incubated at 25°C for 15 days. Wells were made in the center of PDA plates seeded with azole-resistant A. fumigatus, 100 μl of each test sample was transferred into the wells, and the plates were incubated at 25°C. The plates were then observed for zones of inhibition.
Assay for antifungal activity by minimum inhibitory concentration (MIC)
The MIC was determined using antimicrobial concentrations of 1.25, 2.5, 5, 10, 20, 40 and 80 mg/ml prepared in DMSO:MeOH (1:1, v/v) and tested by the well-diffusion technique against the pathogen. The lowest concentration showing growth inhibition was taken as the MIC [18].
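The MIC read-out described here reduces to picking the lowest tested concentration that inhibits growth. A minimal sketch, with a hypothetical well-plate read-out (the inhibition flags are invented for illustration, using the two-fold concentration series from the text):

```python
def minimum_inhibitory_concentration(results):
    """Return the lowest concentration (mg/ml) showing growth inhibition.

    results: dict mapping concentration -> True if growth was inhibited.
    Returns None if no tested concentration inhibits growth.
    """
    inhibitory = [c for c, inhibited in results.items() if inhibited]
    return min(inhibitory) if inhibitory else None

# Hypothetical read-out over the series used in the text
readout = {1.25: False, 2.5: False, 5: False, 10: False,
           20: False, 40: False, 80: True}
print(minimum_inhibitory_concentration(readout))  # 80
```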
Identification of active strains
The active strain was identified at the chemotaxonomic level as well as the classical level. The results are shown in Tables 1 and 2.
Molecular level
A BLAST search of the 16S rRNA gene sequence of isolate KP137826.1 in the NCBI data bank showed a maximum similarity of 86% with Streptomyces rochei strain HF391.
Actinomyces spp. kp137826 alone showed significant, strong antifungal activity against the azole-resistant A. fumigatus. The diameter of the zone of complete inhibition was measured to the nearest millimeter. Antibiotic production was not detected in the 7-day culture filtrate, but maximum antibiotic production was observed after 9 days of incubation (Figure 2).
Minimum Inhibitory Concentration (MIC) determination
The optimal concentration of the pure antifungal compound from Streptomyces rochei strain HF391 against azole-resistant A. fumigatus was 80 mg/ml, at which an inhibition zone of 35 mm was measured (Figure 3).
Discussion
In our study, among fifty BAL samples, only one azole (clotrimazole, itraconazole and ketoconazole)-resistant A. fumigatus was found. Another study that investigated the prevalence of azole-resistant Aspergillus spp. found only 4 azole-resistant isolates, corresponding to a prevalence of 1.9% [20]. A further study showed a prevalence of 12.8% among A. fumigatus isolates that had been sent to hospitals in the Netherlands [21]. For patients with aspergillosis caused by azole-resistant A. fumigatus and treated with voriconazole, the proportion of deaths was 48% [22]. Another study investigating the prevalence of azole-resistant Aspergillus spp. described the emergence of acquired resistance of A. fumigatus to azole compounds [23].
In our research, among the 100 actinomycete isolates, Actinomyces spp. kp137826 exhibited strong antifungal activity against azole-resistant A. fumigatus. The rate of antifungal metabolite production correlated with the growth rate of Actinomyces spp. kp137826. Among bacteria, actinomycetes are an important source of bioactive compounds and of many clinically relevant antibiotics in use today, and they may continue to be so. Another study performed on 153 isolates showed broad-spectrum antifungal activity [24]. Augustine reported that, out of 335 isolates, 230 (69%) were active against bacteria, fungi and yeast [25]. Of 312 actinomycete strains from different regions, 22% exhibited antifungal activity against fungi [26]. Michael et al. and Gomes et al. isolated chitinolytic actinomycetes and demonstrated their antifungal activity [27][28].
Our study shows that only Actinomyces spp. kp137826 exhibited antifungal activity against azole-resistant A. fumigatus. The MIC of the antifungal compound was determined as 80 mg/ml, at which it showed the highest zone of inhibition against A. fumigatus (35 mm). Another study also used different concentrations, e.g. 2, 4, 6, and 10% of extract, to check antifungal activity and the minimum inhibitory concentration [29].
Streptomyces spp. and Nocardia spp. with anti-Aspergillus activity were also observed in the Netherlands in 1999 [30]. Another study screened 287 isolates from various habitats and recorded 166, 164, 134, and 132 actinomycete isolates active against C. albicans, A.
The extraction and process optimization of Cu (II) and Cd (II) using Pickering emulsion liquid membrane
In the present study, the extraction of divalent heavy metals, copper [Cu (II)] and cadmium [Cd (II)], using a Pickering Emulsion Liquid Membrane (PELM) has been investigated with three different surfactants: amphiphilic silica nanowires (ASNWs), aluminum oxide nanoparticles (alumina) and sorbitan monooleate (SPAN 80). The influence of process parameters such as pH, the stripping phase concentration, the agitation speed, and the carrier concentration on the extraction efficiency has been examined to find the optimum conditions at which the maximum recovery of Cu (II) and Cd (II) could take place. At optimum conditions, extraction efficiencies of 89.77% and 91.19% for Cu (II) and Cd (II) ions were achieved. Non-edible oils were used as the diluent in the present study to reduce the need for toxic organic solvents in preparing the PELM. The impact of each process factor on the extraction efficiency of Cu (II) and Cd (II) ions has been verified using analysis of variance (ANOVA). The higher values of F and lower values of P (less than 0.05) indicate that pH is the most significant parameter for the percentage extraction of Cu (II) and Cd (II) under the Taguchi design.
INTRODUCTION
Divalent copper, Cu (II) ions are toxic above the concentration of 2 mg/L as per the general standards for the discharge of the environmental pollutants. Cu (II) ions are being discharged in larger quantities to water bodies from many industries such as plating, metallurgy, fertilizer, paper and pulp, mining, steel works, circuit printing, wood preservatives, petroleum refining, and so on. Cu (II) ions accumulate and biomagnify in the human body and cause certain serious health issues like kidney damage, stomach disorders, intestinal problems, anaemia, coma and can even lead to death. Therefore, it is necessary to develop economically innovative techniques for an effective removal of Cu (II) ions from wastewater (Tofighy & Mohammadi ).
Naturally, cadmium is found in the Earth's crust. Divalent cadmium, Cd (II), ions are toxic even at extremely low concentrations. The World Health Organization (WHO) has set a standard of 0.05 ppm for Cd (II) release to water bodies. Cd (II) ions are found in industrial wastewater from manufacturing processes such as nickel-cadmium batteries, cadmium alloys, cadmium plating, ferrous metallurgy, and so on (Awual et al. ). Excessive Cd (II) exposure could result in health risks like kidney damage, lethargy, headache, vomiting, nausea, ataxia, renal disorder and increased thirst (Mortaheb et al. ). Hence, it is the responsibility of the research community working on heavy metal toxicity to find a feasible and effective method for the removal of Cu (II) and Cd (II) from industrial effluents before discharging them to surface water (Ahmad et al. ).
The Emulsion Liquid Membrane (ELM) is an effective technique for the recovery of various toxic organic and inorganic compounds from wastewater when compared to conventional methods like liquid-liquid extraction, solid-liquid extraction, and so on. The ELM system consists of three phases: (i) wastewater as the feed phase, (ii) a diluent along with an extractant and a surfactant as the membrane phase and (iii) NaOH as the stripping phase. The ELM is a double emulsion type membrane with either a water-in-oil-in-water (w/o/w) emulsion or an oil-in-water-in-oil (o/w/o) emulsion. The surfactants are repeatedly adsorbed and desorbed at the interface of the ELM, which leads to phase separation (Zeng et al. ). The pioneering work of Ramsden, which resulted in the Pickering emulsion, opened a new window onto emulsion science, and it is now recognized that solid colloid particles can be the best alternative to conventional emulsion stabilizers. Emulsions formulated using micro- and nano-sized solid particles as stabilizers are called Pickering emulsions (Hedjazi & Razavi ). The Pickering Emulsion Liquid Membrane (PELM), stabilized by solid particles, has received great attention in areas like wastewater treatment, food science, drug delivery, biphasic catalysis, and the preparation of porous materials (Xie et al. ).
Until now, to the best of our knowledge, the removal of Cu (II) and Cd (II) from aqueous solution by the PELM technique has not been reported by the research community. So, a comprehensive and detailed study on PELM preparation and its application for the extraction of Cu (II) and Cd (II) from aqueous solution has been carried out and, additionally, an experimental design for multivariable optimization by the Taguchi method has also been investigated. Genichi Taguchi developed the Taguchi experimental design method to evaluate the effect of the process parameters and to determine the optimal process conditions in order to achieve the maximum efficiency with minimal experimental runs (Hsu et al. ). Taguchi suggested that orthogonal arrays for the controlling parameters should be applied in order to determine the levels of the factors for the experimental design (Nagpal et al. ). This method reduces the experimental cost and enhances the reliability of the results. S/N ratios have been used to analyse the contribution of each experimental factor and have been calculated for the batch process optimization using one of three criteria, namely nominal-the-better, smaller-the-better and larger-the-better (Shahavi et al. ). The contribution level of each of these process parameters and its significance have been analysed using analysis of variance (ANOVA) (Reyhani & Meighani ).
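The larger-the-better criterion mentioned above has a standard closed form, S/N = −10·log10((1/n)·Σ 1/y_i²), which is the natural choice when the response (extraction efficiency) should be maximized. A minimal sketch, assuming that formula; the example efficiency value is illustrative:

```python
import math

def sn_larger_the_better(values):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10( (1/n) * sum(1/y_i^2) )."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in values) / n)

# With a single response value y, this reduces to 20*log10(y).
print(round(sn_larger_the_better([89.77]), 2))  # about 39.06 dB
```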
In the present study, the extraction of Cu (II) and Cd (II) from the aqueous solution using a Pickering emulsion liquid membrane and the optimization of the process parameters using the Taguchi experimental design method have been investigated. The influence of the various parameters like pH, the stripping phase concentration, the agitation speed, the initial feed concentration, the treat ratio, the surfactant concentration, the carrier concentration and the M/S (Membrane to Stripping) ratio on the percentage extraction of Cu (II) and Cd (II) have been studied in detail and the optimum values for each and every factor has been identified. The experiments have been performed at optimised conditions to find out the deviation between the experimental and predicted values.
Preparation of amphiphilic silica nanowires (ASNWs)
The ASNWs were prepared by a wet chemical process. Fifteen grams of PVP were dissolved in 150 ml of 1-pentanol and sonicated for 35 min. To this solution, 5 ml of deionized water, 1 ml of 0.18 M sodium citrate aqueous solution, 1.15 ml of ammonium hydroxide solution and 30 ml of ethanol were added, and the mixture was immediately shaken gently for 30 s to keep all the components well mixed. Then 2 ml of TEOS was added to introduce hydrophilic character to the prepared surfactants. The mixture was left undisturbed overnight at ambient temperature for the hydrolysis of TEOS. For the synthesis of ASNWs, 0.4 ml of HDTMOS was added and 12 h of reaction time was provided for the hydrolysis of HDTMOS. The obtained reaction mixture was washed five times with ethanol and centrifuged at 5,000 rpm for 15 min. The prepared ASNWs were placed in Petri dishes, dried in a hot air oven at 80 °C and stored in closed containers. The average particle size of the ASNWs was 172-238 nm (Perumal et al. ).
Preparation of PELM
The membrane phase was prepared by mixing an appropriate ratio of the carrier (Aliquat 336) and surfactant with the diluent, Pungai oil. The extraction studies were conducted individually for each surfactant, namely ASNWs, alumina and SPAN 80. 3 ml of 0.3 M NaOH solution was added as the stripping phase to a membrane phase consisting of 6.5 ml of Pungai oil, 0.5 ml of the carrier and 25 mg of surfactant. The mixture was homogenized using a high-speed homogenizer (Ultra-Turrax T10 basic, IKA) at an agitation speed of 8,000 rpm for 1 min (Yan et al. ). Thus, a water-in-oil (w/o) emulsion was obtained with a membrane phase to stripping phase ratio of 7:3 (Perumal et al. ).
Extraction of Cu (II) and Cd (II)
The individual PELM extraction studies were carried out for 10 ppm Cu (II) and 10 ppm Cd (II) as feed solutions. The mixture (PELM and feed solution) was continuously stirred at an agitation speed of 400 rpm to provide an intimate contact between the two phases for a contact time of 2 minutes and transferred into a separating funnel. It was left undisturbed until the separation of the two product phases was obtained. The product phases were analysed for the residual concentration of Cu (II) and Cd (II) by using an atomic absorption spectrophotometer (AAS; Thermo Fisher Scientific, Model AA 303).
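The percentage extraction reported throughout the results is computed from the initial feed concentration and the residual concentration measured by AAS. A minimal sketch of that calculation (the residual value shown is a hypothetical number chosen to illustrate the arithmetic, not a measured datum):

```python
def extraction_efficiency(c_initial, c_residual):
    """Percentage extraction from initial and residual (AAS-measured)
    metal-ion concentrations: E% = (C0 - Cf) / C0 * 100."""
    return (c_initial - c_residual) / c_initial * 100.0

# Hypothetical example: a 10 ppm feed reduced to 1.023 ppm residual
print(round(extraction_efficiency(10.0, 1.023), 2))  # 89.77
```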
RESULTS AND DISCUSSION
Extraction of Cu (II) using three different surfactants

Effect of pH on the percentage extraction of Cu (II)

The effect of pH on Cu (II) extraction was studied in the pH range from 1 to 6. Cupric hydroxides begin to precipitate above pH 6, which was the reason for restricting the pH range to 1-6. From the experimental results, it has been observed that the percentage extraction efficiency of Cu (II) decreases from 85.88% to 56.63% with an increase in pH from 2 to 6 for ASNWs. Likewise, the percentage extraction efficiency of Cu (II) decreases from 67.74% to 49.84% and from 79.68% to 51.49% for alumina and SPAN 80 as surfactants, respectively, as represented in Figure 1(a). A larger difference in hydrogen ion concentration between the stripping phase and the feed phase increases the driving force; hence, Cu (II) extraction is reduced when the pH of the feed solution increases, whereas at pH 1 the accumulation of the Cu (II)-Aliquat 336 complex in the membrane phase is retarded (Leon et al. ).
Effect of the agitation speed on the percentage extraction of Cu (II)
The higher stirring rate would induce the formation of smaller emulsion droplets, thereby increasing the interfacial mass transfer area in the membrane phase and leading to the intensification of the mass transfer rates. However, too high an agitation speed might result in swelling of the membrane and rupturing of the emulsion droplets. The significance of the agitation speed for the extraction of Cu (II) ions is shown in Figure 1(b). It is clear from the results that a rise in the agitation speed from 100 to 400 rpm increases the extraction efficiency of Cu (II) ions from 66.69% to 82.51%, 47.23% to 62.98%, and 54.72% to 75.34% for ASNWs, alumina and SPAN 80 as surfactants, respectively. A further increase in the stirring speed resulted in a reduction of the extraction efficiency of Cu (II) ions (Alaguraj et al. ), confirming that higher shear rates ruptured the membrane and reduced the extraction efficiency.
Effect of the initial feed concentration on the percentage extraction of Cu (II)
The effect of the initial feed phase concentration has been studied by varying the concentration of Cu (II) ions over the range from 10 to 50 ppm, as shown in Figure 1(c). From the results, it has been observed that the extraction efficiency of Cu (II) ions decreases with an increase in the initial concentration of Cu (II) ions in the feed phase. When the initial concentration of Cu (II) ions was very high, the internal droplets at the boundary of the emulsion globules would easily become saturated. As a result, the metal-carrier complex did not have proper chances to migrate from the membrane phase to the inner region of the stripping phase so as to release the metal ions into the stripping phase (Alaguraj et al. ). The PELM process would be more significant at lower concentrations of Cu (II) ions in the aqueous/feed solution.
Effect of the treat ratio on the percentage extraction of Cu (II)
The effect of the treat ratio on the extraction efficiency of Cu (II) ions has been studied for a constant Pickering emulsion volume of 10 ml by varying the feed volume over the range of 10 to 50 ml, while the feed concentration was maintained constant (10 ppm Cu (II)). The appropriate ratio of the feed phase and the Pickering emulsion was stirred at an agitation speed of 400 rpm for 2 minutes. From the experimental results, the percentage extraction efficiency of Cu (II) ions decreases with an increase in the treat ratio, as shown in Figure 1(d). A higher degree of extraction is possible at a lower treat ratio, due to the high emulsion volume with a large surface area. Although a low emulsion volume is economically favoured, the removal efficiency of Cu (II) ions was found to be high for high emulsion volumes; therefore, the higher extraction efficiency occurs at the lower treat ratio (Alaguraj et al. ). The treat ratio of 1:1 has been found to be optimum for the maximum extraction of Cu (II).
Effect of carrier concentration on the percentage extraction of Cu (II)
Aliquat 336 used as a carrier which varied from the range of 0.014 to 0.129 vol. % in the membrane phase. The extraction efficiency of Cu (II) ions has been determined for each carrier concentration by keeping the constant feed volume of 10 ml. The 6.5 ml diluent and the 25 mg surfactant in the membrane phase were maintained constant throughout the experiments. The significance of the carrier concentration on the extraction efficiency of Cu (II) ions are represented in Figure 1(e). The extraction efficiency increases with an increase in the carrier concentration, since the carrier would facilitate the transportation of the solute from the feed phase to the membrane phase. It is also inferred from the literatures that the carrier concentration would significantly affect the emulsion stability as well (Lu et al. ).
Effect of the membrane to stripping phase (M/S) ratio on the percentage extraction of Cu (II)

The membrane to stripping phase ratio (v/v) has been studied (Figure 1(f)) at 9:1, 8:2, 7:3, 6:4, and 5:5, and its effect on the extraction of Cu (II) ions has been investigated by adding 10 ml of Cu (II) solution. From the results, it is inferred that the percentage extraction of Cu (II) ions increases with the increase in the volume of the membrane phase up to the 7:3 ratio. This is because the increase in the emulsion volume not only increases the number of emulsion globules and the surface area of the PELM, but also increases the number of active sites for the carriers in the membrane phase, which could increase the total number of Cu (II)/Aliquat 336 complexes at the feed/membrane interface (Lu et al. ). A further increase in the volume of the membrane phase decreases the extraction efficiency, since the heavy metal ions cannot migrate effectively in the highly viscous membrane phase.
Effect of the stripping phase concentration on the percentage extraction of Cu (II)
The effect of the stripping phase concentration on the extraction efficiency of Cu (II) ions has been studied by changing the NaOH concentration from 0.1 to 0.5 M. It is clear from the results that an increase in NaOH concentration from 0.1 to 0.3 M increases the percentage extraction efficiency of Cu (II) ions from 74.81% to 86.63%, 62.23% to 69.65% and from 71.55% to 79.52% for ASNWs, Alumina and SPAN 80 respectively. However, an increase in the concentration of NaOH in the stripping solution from 0.4 M reduces the percentage removal efficiency of Cu (II) ions as represented in Figure 2
Extraction of Cd (II) using the three different surfactants
Effect of pH on the percentage extraction of Cd (II)

The effect of pH on the extraction of Cd (II) ions has been studied within the range of pH 1 to 6. From the results, it has been inferred that at higher pH values the rate of association of Cd (II) ions with Aliquat 336 decreases. Figure 4(a) shows that pH 1 is the most appropriate feed-phase pH for transferring the solute. On comparing the different experiments over the pH range of 1 to 6, the removal efficiency at pH 1 has been found to be optimum (Mortaheb et al. ). Hence, the percentage extraction of Cd (II) ions decreases with an increase in the pH of the feed solution (Coelhoso et al. ).
Effect of the agitation speed on the percentage extraction of Cd (II)
The effect of the agitation speed on the percentage extraction of Cd (II) ions is shown in Figure 4(b). From the results, it is conclusive that the extraction efficiency of Cd (II) ions increases with an increase in the agitation speed up to a certain point. An increase in the agitation speed results in smaller emulsion globules, which in turn enhances the surface area for mass transfer, and thereby the percentage extraction of Cd (II) ions increases (Ahmad et al. ). It is found that a speed of 400 rpm could achieve the maximum removal efficiency of Cd (II); 88.99%, 85.49%, and 73.38% for ASNWs, alumina, and SPAN 80 respectively. On the other hand, increasing the agitation speed beyond 400 rpm reduces the extraction performances. The extraction efficiency has gradually decreased from higher speeds as the membrane gets ruptured due to the excessive shear induced by the impeller tip during the extraction process.
Effect of the initial feed concentration on the percentage extraction of Cd (II)
Figure 4(c) represents the effect of the initial Cd (II) concentration on the extraction efficiency of Cd (II) and it is evident from the results that the feed phase containing 10 ppm of Cd (II) concentration yields the highest percentage removal for all three surfactants examined. The extraction efficiency of Cd (II) ions was found to be 85.96%, 74.26%, and 80.57% for ASNWs, alumina, and SPAN 80 respectively for a contact time of 2 minutes. As the concentration increases beyond 10 ppm, the percentage extraction of Cd (II) ions decreases due to the saturation of the active sites. At a higher concentration of solute in the feed phase, the stripping phase needs to strip the targeted solute very quickly; otherwise, the membrane-stripping interface undergoes saturation. This phenomenon also reduces the driving force of a system that is exploited by the reactions with the stripping agent at the membranestripping interface, thereby resulting in a low performance (Ahmad et al. ).
Effect of the treat ratio on the percentage extraction of Cd (II)
The impact of the treat ratio on the extraction efficiency of Cd (II) ions has been studied by varying the feed volume within the range of 10 to 50 ml. A constant volume of Pickering emulsion (10 ml) was added to different volumes of the feed solution. The appropriate ratio of the feed phase and the Pickering emulsion was stirred at an agitation speed of 400 rpm for 2 minutes. Figure 4(d) shows that the percentage extraction of Cd (II) decreases at higher treat ratios. At the treat ratio of 1:1, the volume of the emulsion is sufficient to provide a large surface area for mass transfer; as a result, higher extraction performances were possible (Ahmad et al. ). At higher treat ratios, less emulsion is used for the extraction of Cd (II); although this is desirable from an economic point of view, the extraction performance decreases (Medjahed et al. ). Hence, a treat ratio of 1:1 has been found to be the optimum for the extraction of Cd (II) from the synthetic Cd (II) solutions.
Effect of the carrier concentration on the percentage extraction of Cd (II)
The impact of the carrier concentration on the percentage extraction of Cd (II) ions is represented in Figure 4(e). Several experiments have been carried out for metal transport through the PELM by varying the carrier concentration within the range of 0.014 to 0.129 vol. %. As the carrier concentration increases, the extraction efficiency increases too, since a higher extractant content in the membrane phase enhances metal transport through the PELM. An increase in the carrier concentration from 0.014 to 0.129 vol. % resulted in an increase of the percentage extraction of Cd (II) from 65.94% to 88.96%, 54.27% to 64.27%, and 64.58% to 82.62% for ASNWs, alumina and SPAN 80 as surfactants, respectively. However, an increase in the carrier concentration beyond 0.10 vol. % showed no significant difference in the percentage extraction of Cd (II). The results of Medjahed et al. showed the same trend.
Effect of the membrane to stripping phase (M/S) ratio on the percentage extraction of Cd (II)
The mesoscale structure and the performance of the emulsion strongly depend on the nature of the emulsifier and its proportion in the mixture. By increasing the M/S ratio, the percentage extraction of Cd (II) increases. On the other hand, increasing the M/S ratio would also increase the mass transfer resistance. The impact of the M/S ratio on extraction efficiency of Cd (II) ions is studied in a series of experiments and reported in Figure 4(f). The result shows that for the M/S ratios of 9:1, 8:2, and 7:3, the removal efficiency increased (Ahmad et al. ). However, the extraction efficiency of Cd (II) ions decreased for the M/S ratios of 6:4 and 5:5. Hence, the M/S ratio of 7:3 has been found to be optimum for the present studies.
Effect of the stripping phase concentration on the percentage extraction of Cd (II)
The influence of the stripping phase concentration on Cd (II) extraction was studied at different NaOH concentrations ranging from 0.1 to 0.5 M. The results show that 0.3 M NaOH was best for Cd (II) extraction, as illustrated in Figure 5(a)-5(c). It is evident from the results that an increase in the stripping concentration from 0.1 to 0.3 M increased the Cd (II) removal from 79.35% to 88.16% for ASNWs, from 61.81% to 71.37% for alumina and from 68.39% to 79.38% for SPAN 80. On increasing the NaOH concentration in the stripping solution beyond 0.3 M, the extraction efficiency of Cd (II) was reduced. Increasing the stripping phase concentration does not provide better Cd (II) extraction, as the number of moles of NaOH available to react during the stripping process proved to be in excess beyond 0.3 M of NaOH (Ahmad et al. ). Hence, it is not necessary to use a high concentration of the stripping phase, as it benefits neither the emulsion stability nor the extraction efficiency.
Effect of the surfactant concentration on the percentage extraction of Cd (II)
The surfactant concentration was varied from 5 to 30 mg per 7 ml of the membrane phase for ASNWs and alumina. For SPAN 80, however, the surfactant concentration was varied from 0.0143 to 0.0857 vol. % per 7 ml of the membrane phase, and the extraction studies were carried out for each emulsion. The percentage extraction was found to increase along with an increase in the surfactant concentration, as shown in Figure 6(a) and 6(b). It has been noted that the emulsion was stable, and the percentage extraction remained almost constant beyond 20 mg for ASNWs and alumina. Similar effects were observed for SPAN 80 beyond 0.06 vol. % (Mortaheb et al. ). From the experiments carried out, it is understood that the high interfacial area, high stability and good regeneration possibility of a Pickering Emulsion Liquid Membrane (PELM) made from non-edible oil are advantages of the present system. At optimum conditions, the extraction efficiencies of Cu (II) and Cd (II) were 89.77% and 91.19%, respectively. Improvement of existing formulations and identification of new fields of application are the latest challenges.
Optimization and extraction of the heavy metals using the Taguchi method

The Taguchi method has been employed for the extraction of Cu (II) and Cd (II) ions from wastewater by the ASNW-stabilized PELM. The optimization of the controllable factors, including the initial pH of the solution, the agitation speed, the carrier concentration and the stripping phase concentration, has been investigated at three levels (Hsu et al. ), as shown in Table 1.
Using Taguchi's design, the number of experiments was reduced to nine while assigning all three levels to each of the four factors (Deepanraj et al. ). The L9 orthogonal array of Taguchi's design is represented in Tables 2 and 3. Nine experiments have been performed as per the design, neglecting the interactions between the main factors. The S/N ratios have been calculated for all nine trials, and their values are given in Tables 4 and 5.
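For a "larger-the-better" response such as percentage extraction, the Taguchi S/N ratio is commonly computed as S/N = −10·log10((1/n)·Σ 1/yᵢ²). A sketch of that calculation (the trial values below are hypothetical, not the entries of Tables 4 and 5):

```python
import math

def sn_larger_is_better(responses):
    """Taguchi S/N ratio for a larger-the-better characteristic:
    S/N = -10 * log10(mean(1 / y_i^2)) over the n replicates of one trial."""
    n = len(responses)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in responses) / n)

# One S/N value per trial; with a single replicate this reduces to 20*log10(y).
hypothetical_trials = [89.77, 71.37, 61.81]   # percent extraction per trial
sn_ratios = [sn_larger_is_better([y]) for y in hypothetical_trials]
```

The factor levels whose trials give the highest mean S/N ratio are the optimum settings in this scheme.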
The main effects plots of the S/N ratios for the percentage extraction of Cu (II) and Cd (II) ions are shown in Figure 7.
Analysis of variance (ANOVA)
The ANOVA procedure was used to determine the percentage contribution of each of the factors studied.
Degree of freedom (DOF)
The DOF denotes the number of independent variables. The degrees of freedom for each factor equal the number of its levels minus one. The total degrees of freedom equal the total number of trials times the number of repetitions, minus one.
Contribution factor
The percentage contribution of each factor was calculated using Equation (1):

% contribution = (SS / MS) × 100 (1)
where SS is the sum of squares of the factor and MS is the total sum of squares of all factors. In addition, the Fisher test (F-value) was used to determine the significance of the effect of each factor on the performance characteristics (Patel & Murthy ).
F-test
This is the ratio of the mean square of each factor to the error, or pooled error (P e ), mean square; the F-value was calculated using Equation (2). The actual impact of each process factor on the extraction efficiency of Cu (II) and Cd (II) ions has been verified using ANOVA and is represented in Tables 6 and 7. In this study, all degrees of freedom are consumed by the factors, leaving no information for the error calculation; therefore, the error variance was initially zero. The impact of each factor can be estimated by comparing the factor variances (Reyhani & Meighani ).

From Table 6, the variance for the stripping phase concentration (Sc) was observed to be the lowest and was recognized as insignificant because of its lowest contribution during the Cu (II) extraction. Therefore, the variance and the degrees of freedom of the stripping phase concentration were pooled into the error to estimate the error variance. The F-test has been performed for the extraction of the heavy metals; higher F values indicate a greater effect on the percentage extraction of these metals. Table 6 provides the F values for the four factors (pH, N, Sc, and Cc); the percentage contribution of each factor was as follows: pH (59.09%) > Cc (19.34%) > N (11.58%) > Sc (9.9%) for the extraction of Cu (II) ions. pH is thus the most significant contributing factor for the extraction performance of Cu (II) ions compared to the other factors. The optimum extraction efficiency of 89.77% for Cu (II) has been obtained at pH 2, a carrier concentration of 0.129 vol.% and an agitation speed of 400 rpm. Similarly, the DOF, F-value, and the percentage contribution of the four factors towards Cd (II) extraction have been calculated and are shown in Table 7. In the case of Cd (II) extraction, the percentage contribution of each factor was in the order: pH (52.86%) > Sc (19.35%) > Cc (19.11%) > N (8.68%).
It is observed that pH has the most significant contribution to the extraction of Cd (II) ions compared to the other factors. The optimum extraction efficiency of 81.59% for Cd (II) ions has been obtained at pH 2, a stripping phase concentration of 0.3 M and a carrier concentration of 0.073 vol.%.
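The contribution and F-value arithmetic described above can be sketched as follows; this is a hedged illustration of Equations (1) and (2), and the sums of squares are hypothetical placeholders chosen to mimic the reported Cu (II) ordering, not the values of Tables 6 and 7:

```python
def percent_contributions(ss_by_factor):
    """Equation (1): contribution of a factor = SS_factor / (total SS) * 100."""
    total_ss = sum(ss_by_factor.values())
    return {name: 100.0 * ss / total_ss for name, ss in ss_by_factor.items()}

def f_value(ss_factor, dof_factor, ss_error, dof_error):
    """Equation (2) in its usual ANOVA form:
    F = (SS_factor / DOF_factor) / (SS_error / DOF_error)."""
    return (ss_factor / dof_factor) / (ss_error / dof_error)

# Hypothetical sums of squares reproducing the ordering pH > Cc > N > Sc.
ss = {"pH": 590.9, "Cc": 193.4, "N": 115.8, "Sc": 99.9}
contrib = percent_contributions(ss)

# The least significant factor (Sc) is pooled into the error term;
# three levels give 2 DOF for each factor.
f_ph = f_value(ss["pH"], 2, ss["Sc"], 2)
```

With all error DOF pooled from the weakest factor, each remaining factor's F-value is its variance divided by the pooled-error variance, as done for Tables 6 and 7.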
Interactions in the Cu (II) and Cd (II) extraction process
The probable interactions among the four parameters have been studied using the interaction plots and are depicted in Figure 8(a) and 8(b).
Figure 8(a) shows the interaction matrix plot for the extraction efficiency of Cu (II) ions. The interactions between pH and the stripping phase concentration are shown in Rows 1 and 2. The interactions between the stripping phase concentration and the agitation speed are indicated in Rows 2 and 3, whereas the interactions between the agitation speed and the carrier concentration can be seen in Rows 3 and 4. Similarly, Rows 1 and 4 indicate the interactions between pH and the carrier concentration, Rows 1 and 3 those between pH and the agitation speed, and Rows 2 and 4 those between the stripping phase concentration and the carrier concentration. The likely interactions among the process parameters were studied by Sohrabi et al. (). The findings of Patel and Murthy also support the present study (Patel & Murthy ).
From the matrix plot of Cu (II) extraction, it can be confirmed that the interactions of pH with the stripping phase concentration, the agitation speed and the carrier concentration are strong at pH 2 and 1, whereas the interactions of the stripping phase concentration with pH, the agitation speed and the carrier concentration are strong at 0.2 and 0.4 M. The interactions of the agitation speed with pH, the stripping phase concentration and the carrier concentration are strong at 300 and 400 rpm, and the interactions of the carrier concentration with pH, the stripping phase concentration and the agitation speed are strong at 0.114 and 0.129 vol.%. Similarly, from the matrix plot for Cd (II) extraction (Figure 8(b)), it can be concluded that the interactions of pH with the stripping phase concentration, the agitation speed and the carrier concentration are strong at pH 1 and 2; the interactions of the stripping phase concentration with pH, the agitation speed and the carrier concentration are strong at 0.3 and 0.2 M; the interactions of the agitation speed with pH, the stripping phase concentration and the carrier concentration are strong at 400 and 500 rpm; and the interactions of the carrier concentration with pH, the stripping phase concentration and the agitation speed are strong at 0.114 and 0.071 vol.%.
CONCLUSIONS
In this study, the PELM process has been used for the extraction of Cu (II) and Cd (II) ions from a synthetic wastewater solution. The Taguchi design approach was employed to determine the optimal conditions for the extraction of Cu (II) and Cd (II) ions from the wastewater. The ANOVA results confirmed that pH has the most significant effect on the extraction efficiency of Cu (II) and Cd (II) ions. At the optimum conditions, the extraction efficiencies of Cu (II) and Cd (II) ions were found to be 89.77% and 91.19%, respectively. The relative errors between the predicted and the experimental values were estimated as 9.99% and 3.01% for the extraction of Cu (II) and Cd (II) ions, respectively. The results revealed that the contributions of the parameters to the extraction efficiency of Cu (II) and Cd (II) ions were in the order pH (59.09%) > Cc (19.34%) > N (11.58%) > Sc (9.9%) and pH (52.86%) > Sc (19.35%) > Cc (19.11%) > N (8.68%), respectively. The results confirmed that there were probable interactions among the operating parameters that influenced the extraction efficiencies.
THE NEUMANN PROBLEM ON LIPSCHITZ DOMAINS
Let $D$ be a Lipschitz domain in $\mathbf{R}^n$, $n \geq 2$. Let $\sigma$ denote surface measure on $\partial D$, and let $\partial/\partial n$ denote the normal derivative on $\partial D$. In this note we use an a priori estimate due to Payne and Weinberger [6] to bound the nontangential maximal function of the gradient $\nabla u$ of a (generalized) solution to the Neumann problem

$$\Delta u = 0 \ \text{in } D; \qquad \frac{\partial u}{\partial n} = g \ \text{on } \partial D \tag{1}$$

for boundary data $g$ in $L^2(d\sigma)$. A corollary is that $\nabla u$ attains its boundary values nontangentially pointwise almost everywhere and through dominated convergence in $L^2$ on level sets that tend to $\partial D$. Moreover, $u$ belongs to the Sobolev space $H^{3/2}(D)$. We obtain the same bound and corollary when $u$ is the solution to the Dirichlet problem $\Delta u = 0$ in $D$; $u = f$ on $\partial D$, where $f$ and its gradient on $\partial D$ belong to $L^2(d\sigma)$. For $C^1$ domains, these estimates were obtained by A. P. Calderón et al. [1]. For dimension 2, see (d) below.
In [4] and [5] we found an elementary integral formula (7) and used it to prove a theorem of Dahlberg (Theorem 1) on Lipschitz domains. Unknown to us, this formula had already been discovered long ago by Payne and Weinberger and applied to the Dirichlet problem in smooth domains. Moreover, they used a second formula (2), which is a variant of a formula due to F. Rellich [7], to study the Neumann problem in smooth domains. We show here that the same strategy as in [4] applied to the second formula (2) coupled with Dahlberg's theorem yields our main result. Thus integral formulas give appropriate estimates for the solution of not only the Dirichlet problem, but also the Neumann problem on Lipschitz domains. We will present a more general version that applies to variable coefficient operators, systems, and other elliptic problems in a later article. We would like to thank Professor H. F. Weinberger for calling his work to our attention.
The nontangential maximal function $M(u)$ of a function $u$ on $D$ is defined for $Q \in \partial D$ as the supremum of $|u|$ over a cone of nontangential approach to $Q$.

THEOREM 1 (Dahlberg). If $f \in L^2(d\sigma)$, then there is a unique harmonic function $u$ in $D$ such that $\|M(u)\|_{L^2(d\sigma)} \leq C\|f\|_{L^2(d\sigma)}$ and $u(P) \to f(Q)$ as $P \to Q$ nontangentially for almost every $Q$, $d\sigma$. (The constant $C$ depends only on the Lipschitz constant of $D$.)

For simplicity we will only consider star-shaped Lipschitz domains. Let $\psi(\theta)$ be a positive Lipschitz function on the unit sphere $S^{n-1} \subset \mathbf{R}^n$.
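The displayed formula for $M(u)$ is illegible in the scan; the standard definition over nontangential approach cones $\Gamma(Q)$ of aperture $\alpha > 0$ is (a reconstruction, not a verbatim quote of the original):

```latex
M(u)(Q) = \sup_{P \in \Gamma(Q)} |u(P)|, \qquad
\Gamma(Q) = \{\, P \in D : |P - Q| < (1+\alpha)\operatorname{dist}(P, \partial D) \,\},
\qquad Q \in \partial D .
```

Different apertures $\alpha$ give equivalent $L^2(d\sigma)$ norms of $M(u)$, which is why the aperture is suppressed in the statements that follow.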
COROLLARY. $\partial u(P)/\partial x_j$ tends to a limit $\partial u(Q)/\partial x_j \in L^2(d\sigma)$ as $P \to Q$ nontangentially for a.e. $Q$, and $N_Q \cdot \nabla u(Q) = g(Q)$, where $N_Q$ is the normal to $\partial D$ at $Q$.

By Green's theorem,
Suppose that $\Omega$ is star-shaped: $\Omega = \{(r, \theta) : 0 < r < \psi(\theta)\}$ for some $\psi \in C^\infty(S^{n-1})$. The crucial fact is that $\langle Q, N_Q \rangle \geq c > 0$, where $c$ depends only on the Lipschitz norm of $\psi$, $\|\psi\|_{\mathrm{Lip}}$. Therefore, we conclude from (2) an estimate (4) in which the constant $C$ depends only on $\|\psi\|_{\mathrm{Lip}}$. It is well known that if $\int_{\partial\Omega} u(Q)\, d\sigma(Q) = 0$, then $\|u\|_{L^2(d\sigma)} \leq C\|\nabla_t u\|_{L^2(d\sigma)}$ for a constant $C$ depending only on $\|\psi\|_{\mathrm{Lip}}$. This, combined with (4), gives (6). In particular, $\|u_j\|_{H^1(\Omega_j)}$ is uniformly bounded. Therefore, replacing $u_j$ by a subsequence, we can assume that $u_j$ converges weakly in $H^1(D)$ to a function $u$.
(c) The estimates obtained here show that (2) and (3) are actually valid in Lipschitz domains for functions u satisfying Theorem 2 or 3. Moreover, (7) is valid on Lipschitz domains.
(d) The analogous estimates for the Neumann problem in dimension two were proved by Fabes and Kenig. They showed that for each Lipschitz domain $D \subset \mathbf{R}^2$, there is $p_0 > 2$ such that if $p < p_0$, $g \in L^p(d\sigma)$ and $\Delta u = 0$ on $D$, $\partial u/\partial n = g$ on $\partial D$, then $\|M(\nabla u)\|_{L^p} \leq C\|g\|_{L^p(d\sigma)}$. For $p > p_0$, the estimate fails. Also, given $p > 2$, there exists a Lipschitz domain $D$ for which the estimate fails. The situation for $g \in L^p(d\sigma)$, $p < 2$, in higher dimensions remains open.
Charmless B decays: Dalitz
LHCb using proton-proton data, recorded in 2011 and 2012 and corresponding to an integrated luminosity of 3.0 fb−1. The inclusive CP asymmetries and local asymmetries in specific regions of the phase space are measured for the B± → h±h+h− (where h = π,K) decays. In addition the first evidence for CP violation in charmless B+→ pp̄K+ decays is reported. Finally, a more accurate measurement of the branching fraction of the B+→ Λ(1520)p (with Λ(1520) decaying into a K+p final state) is shown.
Introduction
The violation of CP symmetry in the quark sector is well explained in the Standard Model (SM) by the Cabibbo-Kobayashi-Maskawa (CKM) mechanism [1,2], through the presence of a single complex phase. Despite the success of the SM in describing all CP asymmetries observed experimentally, the amount of CP violation within the SM is not enough to explain the matter-antimatter asymmetry present in the universe.
The charmless b-hadron decays offer an interesting opportunity to search for different sources of CP violation. In these decays the violation of CP can be observed as an asymmetry between the decay rate of a particle and that of its CP conjugate. For this asymmetry to occur, at least two amplitudes with different weak and strong phases are necessary. The weak phases are sensitive to physics beyond the Standard Model and can be studied through the interference between tree-level and penguin contributions. The strong phases can originate from different mechanisms. One source is related to short-distance processes [3], while the others are due to long-distance effects occurring in the final state, like KK ↔ ππ rescattering [4], or to interference between intermediate states of the decay [5]. The measurement of CP asymmetries over the Dalitz plane is useful to better understand the generation of such phases and to constrain the parameters describing the hadronic interactions.
In these proceedings we report the latest results obtained using a data sample corresponding to an integrated luminosity of 3.0 fb−1 collected by the LHCb experiment in proton-proton collisions at center-of-mass energies of 7 and 8 TeV.
CP violation in B± → h±h+h− decays
The studies of direct CP violation in B± → h±h+h− decays, performed at the B-factories through amplitude analyses, have shown evidence of CP asymmetries in B+ → ρ0(770)K+ [6,7] and B+ → φ(1020)K+ [8] decays. Moreover, the LHCb collaboration measured non-zero inclusive CP asymmetries and larger local asymmetries in these decays [11] using a data sample corresponding to 1.0 fb−1. The results presented in this document constitute an update of the previous analysis, making use of the entire data set recorded by LHCb. Additional information can be found in [12]. The signal candidates are selected by making use of a multivariate technique and efficient particle identification variables, thus improving the performance.
The observed asymmetry (raw asymmetry) is defined as

A_raw = [N(B−) − N(B+)] / [N(B−) + N(B+)], (2.1)

where N represents the signal yields, extracted with an unbinned maximum likelihood fit to the mass spectra of the selected candidates. By measuring the A_raw asymmetries and correcting for the B± production asymmetry and for other detector effects due to the unpaired hadron h±, using control decays, it is possible to measure the CP violation
PoS(Beauty2014)020, Marianna Fontana

whose results, integrated over the Dalitz plot, read as follows, where the first uncertainty is statistical, the second systematic and the third is due to the error on the CP asymmetry of the B± → J/ψK± decay, used as a control channel.
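The yield-asymmetry arithmetic of Eq. (2.1), together with the usual binomial estimate of its statistical uncertainty, can be sketched as follows; the yields below are hypothetical, not the fitted values of this analysis:

```python
import math

def raw_asymmetry(n_minus, n_plus):
    """A_raw = (N(B-) - N(B+)) / (N(B-) + N(B+)), from fitted signal yields."""
    return (n_minus - n_plus) / (n_minus + n_plus)

def raw_asymmetry_stat_error(n_minus, n_plus):
    """Binomial statistical uncertainty: sigma = sqrt((1 - A^2) / N_total)."""
    n_total = n_minus + n_plus
    a = raw_asymmetry(n_minus, n_plus)
    return math.sqrt((1.0 - a * a) / n_total)

# Hypothetical yields for illustration.
a = raw_asymmetry(10500, 9500)
sigma_a = raw_asymmetry_stat_error(10500, 9500)
```

The production- and detection-asymmetry corrections are then subtracted from A_raw, with their uncertainties propagated from the control channel.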
The asymmetries are studied as well in bins of the Dalitz plot. Figure 1 shows the distributions of the raw asymmetry, calculated according to Eq. 2.1. The plots are obtained after the subtraction of the background and the correction for acceptance effects. The binning is chosen adaptively, in order to have approximately the same number of entries in each bin. The distributions show a very rich structure, with very large asymmetries localised in certain regions. The sign of the asymmetry is positive for the channels which include two pions in the final state and negative for those which include two kaons. The asymmetry calculations, performed in the m(K+K−) or m(π+π−) invariant mass region between 1.0 and 1.5 GeV/c², lead to results with a significance of 5σ. A possible explanation for the generation of the strong phase differences is the role played by hadron rescattering. Another interesting feature can be observed by plotting the yield asymmetry as a function of the m(hh) invariant mass, split according to the sign of the cosine of the angle θp, defined by the momenta of the unpaired hadron and the resonant daughter with the same-sign charge. As an example, Figure 2 shows, for the B± → π±π+π− decay, the distribution of the yield asymmetry in bins of m(π+π−), according to the sign of cos(θp). The charge asymmetry changes sign at a value of m(π+π−) close to the ρ(770) resonance. This can be related to the dominance of the long-distance interference effect in this region of the Dalitz plot. Moreover, the change of sign occurs for both cos(θp) values, indicating the dominance of the real part of the Breit-Wigner propagator. To understand the dynamical origin of these CP-violating sources a full amplitude analysis is needed.
CP violation in B+ → pp̄h+ decays
The large asymmetries measured in B± → h±h+h− decays motivate the studies of the closely related B+ → pp̄h+ decays, where a smaller h+h− ↔ pp̄ rescattering is expected. The results presented here are an update of the studies previously performed by LHCb [13]. They make use of the entire data set, improving the selection with a strategy similar to that adopted for the B± → h±h+h− analysis. More details can be found in [14].
After the selection, the numbers of signal events are extracted using an unbinned maximum likelihood fit to the B+ → pp̄K+ and B+ → pp̄π+ invariant masses, yielding N(pp̄K) = 18721 ± 142 and N(pp̄π) = 1988 ± 74 signal candidates.
The distribution of events in the Dalitz plane, defined by (m²(pp̄), m²(hp)), is shown in Figure 3, after the subtraction of the background and the correction for acceptance effects. In the left plot it is possible to observe hints of the Λ(1520) → K+p band. The measurement of the branching fraction is performed using the B+ → J/ψ(pp̄)K+ decay as a reference mode: B(B+ → Λ(1520)(→ K+p)p̄) = (3.15 ± 0.48 (stat) ± 0.07 (syst) ± 0.26 (BF)) × 10⁻⁷.
The enhancements at low m²(pp̄) values seen in Figure 3 are observed in other B → pp̄X decays. The B+ → pp̄K+ events occupy the region at low m(K+p), while B+ → pp̄π+ candidates rather populate the region at large m(π+p). Figure 4 (left) shows the distribution of the helicity angle θp, defined as the angle between the daughter meson h and the oppositely charged baryon in the rest frame of the pp̄ system, for pp̄K and pp̄π in the region with invariant mass m(pp̄) < 2.85 GeV/c². The two modes show a clear sign-inversion pattern. The forward-backward asymmetry is defined as

A_FB = [N(cos θp > 0) − N(cos θp < 0)] / [N(cos θp > 0) + N(cos θp < 0)].
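This counting asymmetry can be sketched directly; the cos θp values here are toy inputs only, chosen to mimic the opposite-sign pattern of the two modes, not measured data:

```python
def forward_backward_asymmetry(cos_theta_values):
    """A_FB = [N(cos>0) - N(cos<0)] / [N(cos>0) + N(cos<0)]."""
    forward = sum(1 for c in cos_theta_values if c > 0)
    backward = sum(1 for c in cos_theta_values if c < 0)
    return (forward - backward) / (forward + backward)

# Toy samples; the real measurement uses background-subtracted,
# acceptance-corrected yields in bins of the pp̄ invariant mass.
a_fb_mode1 = forward_backward_asymmetry([0.9, 0.5, 0.2, -0.3])
a_fb_mode2 = forward_backward_asymmetry([-0.9, -0.5, -0.2, 0.3])
```

A positive A_FB means the daughter meson is preferentially emitted along the direction of the oppositely charged baryon, and vice versa.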
These asymmetries can be interpreted as being due to the dominance of non-resonant pp̄ scattering [9]. The CP asymmetry variation across the Dalitz plane has been studied for the B+ → pp̄K+ decay only, since for the B+ → pp̄π+ decay the statistics of the data sample was not sufficient. Figure 5 shows on the left the distribution of the raw asymmetry over the Dalitz plot. The sign of the asymmetry is positive for m²(K+p) > 10 GeV²/c⁴ and negative for m²(K+p) < 10 GeV²/c⁴.
Figure 4: Left: background-subtracted and acceptance-corrected normalized distributions of cos(θp) for m(pp̄) < 2.85 GeV/c². Right: forward-backward asymmetry as a function of m(pp̄).
The C-terminal tail of ribosomal protein Rps15 is engaged in cytoplasmic pre-40S maturation
ABSTRACT The small ribosomal subunit protein Rps15/uS19 is involved in early nucleolar ribosome biogenesis and subsequent nuclear export of pre-40S particles to the cytoplasm. In addition, the C-terminal tail of Rps15 was suggested to play a role in mature ribosomes, namely during translation elongation. Here, we show that Rps15 not only functions in nucleolar ribosome assembly but also in cytoplasmic pre-40S maturation, which is indicated by a strong genetic interaction between Rps15 and the 40S assembly factor Ltv1. Specifically, mutations either in the globular or C-terminal domain of Rps15 when combined with the non-essential ltv1 null allele are lethal or display a strong growth defect. However, not only rps15 ltv1 double mutants but also single rps15 C-terminal deletion mutants exhibit an accumulation of the 20S pre-rRNA in the cytoplasm, indicative of a cytoplasmic pre-40S maturation defect. Since in pre-40S particles, the C-terminal tail of Rps15 is positioned between assembly factors Rio2 and Tsr1, we further tested whether Tsr1 is genetically linked to Rps15, which indeed could be demonstrated. Thus, the integrity of the Rps15 C-terminal tail plays an important role during late pre-40S maturation, perhaps in a quality control step to ensure that only 40S ribosomal subunits with functional Rps15 C-terminal tail can efficiently enter translation. As mutations in the C-terminal tail of human RPS15 have been observed in connection with chronic lymphocytic leukaemia, it is possible that apart from defects in translation, an impaired late pre-40S maturation step in the cytoplasm could also be a reason for this disease.
Introduction
Eukaryotic ribosome assembly is a highly complex, hierarchical process that is initiated in the nucleolus upon transcription of a large ribosomal RNA precursor (35S pre-rRNA in yeast) and a pre-5S RNA accompanied by co-transcriptional association of the first ribosome assembly factors and ribosomal proteins (r-proteins). The resulting initial pre-ribosomal particles (90S particles or also called SSU processomes) undergo an intricate maturation pathway, involving more than 200 different assembly factors while separating into precursors for the small 40S and large 60S ribosomal subunits (r-subunits) and transiting to the nucleoplasm and finally to the cytoplasm. Seminal cryo-EM studies of yeast and human pre-ribosomal particles have shed light on the structures of intermediates along this ribosome maturation path, revealing that besides rRNA processing (summarized in Figure S1), also massive rRNA folding and restructuring events take place until rRNAs and r-proteins become compacted into functional mature r-subunits [1][2][3].
Freshly exported pre-40S particles are not engaged immediately in translation as important functional sites are blocked by assembly factors [15]. In particular, Dim1, Rio2 and Tsr1 are positioned at the inter-subunit side, and prevent association of 60S and translation initiation factors [16][17][18]. In the course of cytoplasmic pre-40S maturation, all pre-40S assembly factors are successively released, while, concomitantly, r-proteins which are not yet assembled at their final position are stably accommodated, and in addition, the last missing r-proteins are incorporated into pre-40S particles. For instance, the kinase Hrr25 phosphorylates and releases Ltv1, promoting the stable accommodation of r-protein Rps3/uS3 [19][20][21]. This maturation event is coordinated with the ATP-hydrolysis-triggered self-release of the ATPase Rio2 [22][23][24]. Obviously, a recurring theme in ribosome assembly is quality control, in which assembly factors bind to functionally important sites of the nascent r-subunits and probe for correct structure and/or function. In yeast, these maturation steps are believed to occur in a 'translation-like cycle', in which an immature pre-40S particle is joined via eukaryotic initiation factor 5B (eIF5B) with an apparently mature 60S subunit [25][26][27]. Only when this putative 'translation test-drive' is successful are the final maturation steps triggered. The hallmark event of these final 40S maturation steps is the processing of the 20S pre-rRNA into the mature 18S rRNA by the endonuclease Nob1, involving the release of the Nob1 inhibitor Pno1, and finally Nob1, by the Rio1 ATPase [28][29][30].
A key prerequisite for the progression of ribosome biogenesis is the correct and timely binding of r-proteins, as reflected by the fact that the absence of almost any of the 79 yeast r-proteins lead to a specific maturation defect, either in nucleolar ribosome biogenesis steps, in r-subunit export, or maturation steps after export [31][32][33]. Apart from that, many r-proteins have additional functions that could only be revealed by specific point mutants. The advantage is that such mutant r-protein variants can still assemble onto preribosomal particles and allow maturation to proceed but are disturbed in one of their specific functions. Two wellcharacterized examples are Rps14/uS11 and Rps5/uS7: while their depletion leads to nucleolar pre-rRNA processing defects [31], Rps14 and Rps5 variants carrying mutations or deletions in the C-terminal unstructured extensions of the proteins cause defects in cytoplasmic 20S pre-rRNA processing, indicating an additional function of these proteins in cytoplasmic pre-40S maturation [34,35].
We are interested in understanding the function of Rps15, an r-protein of the small 40S subunit, which is highly conserved among eukaryotes ( Figure S2A). Both in yeast and human cells, depletion of Rps15 was shown to cause retention of pre-40S particles in the nucleus, suggesting that Rps15 assembly onto pre-40S particles is essential for the export of these particles [11,12,31]. Additionally, an earlier, nucleolar function of Rps15 was revealed by a synthetic lethality study with a thermosensitive rps15-1 mutant, uncovering genetic links of RPS15 with NHP2, UTP15, SLX9, and BUD23. Combined mutations of NHP2, UTP15 or SLX9 with rps15-1 led to nucleolar accumulation of pre-40S particles and early pre-rRNA processing defects [36]. In contrast, the rps15-1 Δbud23 double mutant primarily showed nucleoplasmic pre-40S accumulation, and, therefore, defects in nuclear export of pre-40S particles [36].
Besides these functions in 40S subunit biogenesis, Rps15 also fulfils important roles in the mature ribosome. The globular domain of Rps15 is engaged in the B1a inter-subunit bridge by interacting with helix H38 of the 25S rRNA [37,38]. In turn, the largely unstructured C-terminal tail of Rps15 reaches to the decoding site of the 40S subunit, thereby playing an important role in translation elongation [39,40]. Recently, the very C-terminal residues of Rps15 could be visualized in a mature human 80S ribosome, revealing its flexibility during translation [41]. In the post-decoding pre-translocation state of translation elongation, the Rps15 C-terminus interacts with both A-and P-site tRNAs and the mRNA in the decoding site [40]. Based on these structural data, the Rps15-C-terminal domain was suggested to be engaged in the efficient accommodation of tRNAs at the A-site. Notably, mutations in the Rps15 C-terminal tail have been found with a high frequency in patients with relapsing chronic lymphocytic leukaemia (CLL), further substantiating the important function of the Rps15 C-terminus [42][43][44].
In this study, we identified a specific rps15 mutant, carrying an F138S exchange in the C-terminal tail of the Rps15 r-protein, which shows a synthetically enhanced growth defect together with a Δltv1 chromosomal deletion. A second mutation, F102L, positioned in the region from where the C-terminal tail emerges, further enhanced this defect. In pre-40S particles, both F138 and F102 residues are positioned in close proximity to 40S assembly factor Tsr1. Consequently, tsr1 mutants genetically interact with these different rps15 mutants. Further investigation of these as well as other mutations in Rps15 indicated that, in contrast to the Rps15 depletion, nucleo-cytoplasmic export of pre-40S particles is not impaired, but the final cytoplasmic pre-40S maturation is disturbed. We conclude that the Rps15 C-terminal tail is not only needed for correct translation but already required during the final steps of 40S subunit synthesis.
A synthetic lethality screen links Rps15 to 40S assembly factor Ltv1
Previously, we have performed a synthetic lethality (SL) screen with a deletion mutant of the non-essential 40S subunit assembly factor Ltv1, revealing its functional connection to RNA helicase Prp43, and its G-patch cofactor Pfa1 [30]. The synthetically enhanced growth and 20S pre-rRNA processing defects observed upon LTV1 deletion in prp43 or pfa1 mutants were high-copy-suppressed by NOB1 (Figure 1A and [30]).
To identify additional players in late pre-40S maturation functionally connected to Ltv1, we set out to investigate seven of the so far uncharacterized mutants found in this SL screen. Considering the previous observations of a synthetically sick phenotype upon combination of certain rio2 mutants with the null or point-mutant ltv1 alleles [23,24], we transformed the remaining mutants with a RIO2-containing plasmid and scored for complementation of synthetic growth defects, as evaluated by a white or red/white sectoring colony colour and the ability to grow on 5-FOA-containing plates (see Methods section for details). Indeed, six of these seven mutants could be complemented by RIO2 (Fig. 1(A,B); shown exemplarily for mutant #111). This strong genetic link between LTV1 and RIO2 is also in agreement with their joint action during cytoplasmic pre-40S maturation [24].
One remaining mutant (#432) was neither complemented by RIO2, PRP43 or PFA1 nor suppressed by NOB1. To identify the mutation leading to synthetic enhancement in this strain, we transformed a yeast genomic library into the mutant and screened for transformants re-establishing the red/white sectoring phenotype, and hence for the ability to lose the LTV1-containing plasmid.
Using this strategy, we found that the original SL mutant #432 was complemented by the RPS15 gene (Figure 1(A,B)). Sequencing of the chromosomal copy of this gene confirmed a mutation, leading to a phenylalanine to serine exchange (F138S) in the C-terminal unstructured tail of the Rps15 protein (position indicated in Figures S2B and 3A).
To avoid interference by other mutations arising in the strain generated by UV mutagenesis, we constructed a strain in which the chromosomal copy of RPS15 was deleted and complemented by a low-copy plasmid carrying the rps15-F138S allele; then, the growth of this mutant was compared to that of the same strain but carrying the RPS15 wild-type (WT) plasmid. While the rps15-F138S mutant did not display any growth defects by itself at any tested temperature (23°C, 30°C or 37°C), it strongly enhanced the growth defect of a Δltv1 strain, in particular at low temperature, indicating a genetic interaction between LTV1 and the rps15-F138S allele (Fig. 1C).
To exclude that this genetic interaction is caused by a reduced level of Rps15-F138S compared to the wild-type protein, we chromosomally fused the sequence encoding an N-terminal 2xHA-tag to RPS15 and rps15-F138S, in the presence or absence of LTV1. Rps15 protein levels, detected with an α-HA antibody, were similar in the wild-type and the rps15-F138S mutant strain at all temperatures tested (Figure S3A). Based on these results, we conclude that there is a specific functional link between Ltv1 and the C-terminal domain of Rps15. As the chromosomally integrated constructs, however, displayed growth defects due to the N-terminal 2xHA-tag (Figure S3B), we performed all subsequent experiments using untagged Rps15 and variants thereof, encoded from a low-copy plasmid in the background of the knockout of the chromosomal RPS15 copy.
The rps15-F138S mutation enhances the 40S maturation defect of the Δltv1 deletion
Considering the genetic link of the rps15-F138S mutant to the late 40S assembly factor Ltv1, we speculated that the Rps15 C-terminal tail may be important for 40S subunit maturation. Polysome profiling indicated that the rps15-F138S mutant showed a mild 40S subunit shortage, as concluded from the slightly increased free 60S subunit peak (Figure 2A). The Δltv1 mutant showed a decreased free 40S peak and a strongly increased free 60S peak, as observed previously [30]. An increased free 60S peak is characteristic of a 40S synthesis defect, as a deficit of 40S subunits available for subunit joining leads to a consequent accumulation of free 60S subunits. Importantly, the rps15-F138S mutation clearly enhanced the 40S subunit synthesis defect of the Δltv1 mutant, as evident from the absence of a free 40S subunit peak and a massive increase of the combined 60S subunit/80S ribosome peak compared to the Δltv1 mutant alone. Moreover, polysomes were drastically reduced in the rps15-F138S Δltv1 strain at 30°C, and even more so at 23°C, indicating that the 40S subunit synthesis defect results in reduced translation elongation (Figure 2A).

Figure 1. (A) Genes displayed on the left side were identified in a previous study [30], while genes identified in the current study are shown on the right side. (B) Mutants #SL111 and #SL432 isolated in the SL screen were transformed with LEU2-plasmids carrying the indicated genes or with an empty plasmid (-); transformants were spotted in serial 10-fold dilution steps onto SDC-leu plates (-leu) as well as plates containing 5-fluoroorotic acid (5-FOA), and incubated at 30°C for 5 days. Red colony colour on SDC-leu and slow growth on 5-FOA plates indicate a synthetic growth defect. Red/white sectoring on SDC-leu and cell growth on 5-FOA plates indicate complementation of the synthetic enhancement phenotype. (C) Temperature dependence of the observed genetic interactions. Strains carrying the indicated wild-type and mutant RPS15 alleles on LEU2-plasmids plus a HIS3 empty plasmid (Δltv1) or the LTV1 wild-type allele on a HIS3-plasmid were spotted onto SDC-his-leu (-his -leu) plates to select for the transformed plasmids and incubated at 23°C, 30°C, and 37°C for 4 days.
To determine the underlying maturation defects more precisely, we performed northern blotting and detected several intermediates of the pre-rRNA processing pathway (see also the schematic overview of the pathway in Figure S1). While the rps15-F138S mutant alone did not display any apparent defects, this mutation enhanced the defects observed for the Δltv1 strain, especially at 25°C, leading to a slight additional accumulation of 20S pre-rRNA compared to Δltv1 alone and, even more noticeably, a reduction of mature 18S rRNA levels. Moreover, possibly as a secondary effect, the 35S pre-rRNA accumulated in this double mutant at low temperature (Figure 2B).
Both the Δltv1 deletion and Rps15 depletion were previously observed to result in 40S subunit export defects [12,13,31]. To assess whether the cause of the strong growth defect of the rps15-F138S Δltv1 double mutant is a pre-40S export defect or a defect prior to pre-40S export, we examined the localization of the 40S subunit reporter Rps3-GFP (uS3-GFP) in this mutant by fluorescence microscopy (Figure S4). No significant nuclear accumulation of Rps3-GFP was observed in the rps15-F138S Δltv1 double mutant compared to Δltv1 alone, pointing towards a later, mainly cytoplasmic defect of the double mutant. Fluorescence in situ hybridization (FISH) using a probe specific to the D/A2 segment of ITS1 further indicated that 20S pre-rRNA synthesized in the rps15-F138S Δltv1 mutant can be exported into the cytoplasm, where, similar to the Δltv1 mutant, increased amounts of 20S pre-rRNA are detected; thus, its cytoplasmic processing is impaired (Figure 2C). We conclude that, besides the previously reported function of Rps15 in 40S subunit export, Rps15 is also required for cytoplasmic steps of pre-40S maturation.

Figure 2. (A) Δltv1 strains carrying a LEU2-plasmid encoding wild-type RPS15 or rps15-F138S were grown in liquid SDC-leu medium at 23°C or 30°C, respectively. After addition of cycloheximide and inhibition of translation, cells were lysed and polysome profiles recorded. Peaks corresponding to the free 40S and 60S subunits, 80S ribosomes and polysomes are indicated in the profile of the wild-type strain at 23°C. (B) rps15-F138S enhances the 20S pre-rRNA accumulation occurring in a Δltv1 strain. For northern blotting, cells were grown in liquid SDC-leu at the indicated temperatures. For pre- and mature rRNA detection, the indicated probes were used (the yeast pre-rRNA processing pathway is shown in Figure S1). (C) The mutant rps15-F138S shows no pre-40S export defect but an impairment in cytoplasmic 20S pre-rRNA processing. For fluorescence in situ hybridization (FISH), cells were grown in liquid SDC-leu at 30°C. After cell fixation, spheroplasts were incubated with a probe specific to the D/A2 segment of the ITS1 region to detect 20S pre-rRNA, and nuclei were stained with DAPI.
Expanding the genetic network between Rps15 and late 40S maturation factors
Given the link of the rps15-F138S mutation to LTV1, we wanted to further explore the connections of RPS15 to late 40S maturation. In addition to the rps15-F138S mutant, we included a second mutant in these analyses, which was inadvertently generated during PCR amplification of the rps15-F138S allele for cloning. The resulting mutant allele contains a phenylalanine-to-leucine exchange (F102L) in addition to the F138S exchange. Whereas F138 is positioned only five amino acids from the C-terminal end of Rps15, F102 is positioned in the globular domain in the region from which this C-terminal tail emerges (positions indicated in Figure 3(A)). Notably, the rps15-F102L/F138S mutant revealed a slight growth defect at 37°C on its own and was lethal when combined with the Δltv1 null mutant (Figure 3(B,C)).
Previous studies suggested that LTV1 deletion results in a mixed phenotype, showing both pre-40S export and cytoplasmic pre-40S maturation defects (see, for example, [30]). To further validate our hypothesis that Rps15 has a function in cytoplasmic 40S maturation, we examined ltv1 mutants with no export defect but only cytoplasmic defects for genetic interactions with rps15 mutants. We have previously generated several ltv1 mutants in which phosphorylation of the corresponding Ltv1 protein by Hrr25 is reduced, resulting in inhibited release of Ltv1 in the cytoplasm and consequently an exclusively cytoplasmic pre-40S maturation defect; these mutants include the ltv1-S339A/S342A mutant [20]. Indeed, this phosphorylation-deficient ltv1 mutant also showed a genetic interaction with both rps15 alleles of this study, with the rps15-F102L/F138S allele showing an almost lethal phenotype in combination with ltv1-S339A/S342A (abbreviated as ltv1-SS>A; Figures 3(D) and S5A).
Given the genetic links of LTV1 to PFA1 [30] and RIO2 [23] (Figure 1(A)), we next addressed whether RPS15 also genetically interacts with these two genes. However, no genetic interaction was observed between our rps15 mutant variants and a Δpfa1 null allele or the rio2-1 point mutant (Figures S5B and S5C).
We then inspected recent pre-40S cryo-EM structures to identify assembly factors which might function together with Rps15. Interestingly, while the C-terminal tail of Rps15 is positioned between the A- and P-site tRNAs in the post-decoding pre-translocation state in translation (Figure 3(F)) [41], this tail is positioned between the 40S assembly factors Rio2 and Tsr1 in pre-40S particles (Figure 3(E)) [18,45]. Moreover, domain II of Tsr1 reaches close to the region in which the F102L exchange in the Rps15 mutant variant is positioned. Therefore, we tested for a genetic interaction with the essential TSR1 gene. To this end, we generated tsr1 mutants by random PCR mutagenesis. Two thermo-sensitive tsr1 alleles were obtained, termed tsr1-1 and tsr1-2, leading to three (E588D, Y604C and T704A) and eight (F220L, L239F, R473G, E481G, F596Y, F710L, T716A, F759L) exchanges in the amino acid sequence of the Tsr1 protein, respectively.
Interestingly, both tsr1 mutant alleles caused synthetic lethality in combination with the rps15-F102L/F138S mutation and a synthetically enhanced defect in combination with the rps15-F138S mutation (Figures 3(G) and S5D). These results may indicate that the contact between Rps15 and Tsr1 is important for cytoplasmic pre-40S subunit maturation.
The C-terminal tail of Rps15 functions in late 40S subunit maturation
The rps15 mutants obtained so far display no (rps15-F138S) or only mild (rps15-F102L/F138S) defects by themselves, and their connection to late 40S maturation only becomes apparent when they are combined with other mutations (e.g. ltv1 or tsr1). Therefore, we next aimed to generate rps15 mutants displaying late 40S maturation defects on their own.
For this purpose, we created further mutations in proximity to F102 and F138. Residue F102 does not directly contact RNA or other proteins; however, in the mature 40S and pre-40S structures, it is in close proximity to arginine 77 (R77) and histidine 79 (H79), which are in direct contact with 18S rRNA helices h32 and h33 (Figure 4(A)) [37,45]. Therefore, we speculated that the F102L mutation may affect the positioning of these residues, consequently leading to altered interactions of Rps15 with rRNA. To further evaluate the importance of these residues, we mutated R77 and H79, as well as the residue in between, threonine 78 (T78), alone and in combinations (Fig. 4B). However, although most of these mutants (except the R77A mutant) were genetically linked to LTV1 (Figures S6A and S6B), many of them did not display a growth defect on their own, except for the triple mutant rps15-R77A/T78A/H79A (further referred to as rps15-RTH(77-79)>A), which showed a slight growth defect at 37°C (Fig. 4B).
Next, as F138 is very close to the C-terminal end of Rps15, we generated stepwise C-terminal truncations (depicted in Fig. 4C) and compared the growth of the different mutants to that of the F138S mutant. Removal of the last seven amino acids (rps15-1-135) led to a slight growth defect, and deletion of the last eight residues (rps15-1-134) to a strong growth defect, which became even more severe when the last 12 amino acids (rps15-1-130) were removed (Fig. 4D). Notably, all tested C-terminally truncated Rps15 variants, including a variant lacking only the very last amino acid (lysine, K142), led to a synthetically enhanced defect (rps15-1-141) or even a synthetic lethal phenotype (all other C-terminally truncated mutants) in combination with the Δltv1 null allele (Figures S6C and S6D).
To address whether the novel rps15 mutants also display late 40S maturation defects on their own, we investigated the phenotypes of the newly generated rps15-RTH(77-79)>A and rps15-1-134 mutants. Polysome profiles showed a slight (rps15-RTH(77-79)>A) or severe (rps15-1-134) increase in the free 60S peak, together with a reduction of the free 40S peak, indicating a 40S subunit maturation defect in these mutants (Fig. 5A).
To detect potential rRNA processing defects in these new rps15 mutants, we performed northern blotting and included the rps15-F138S and rps15-F102L/F138S mutants in the analysis for comparison.

Figure 3. (A) The amino acid exchange F138S is located at the C-terminus of Rps15, whereas the second mutation F102L is positioned in the globular domain from which the C-terminus emerges ([45], PDB 6Y7C). (B) Genetic interaction between the Δltv1 null allele and the double mutant rps15-F102L/F138S. The Δrps15 and Δrps15 Δltv1 strains carrying the wild-type plasmid URA3-RPS15 as well as a LEU2-plasmid encoding RPS15, rps15-F138S or rps15-F102L/F138S, respectively, were spotted onto SDC-leu plates and plates containing 5-FOA and were incubated at 30°C for 3 and 6 days. (C) In the Δltv1 strain, the RPS15 mutation F138S leads to cold sensitivity. The shuffled Δrps15 and Δrps15 Δltv1 strains, shown in B, containing only a LEU2-plasmid encoding RPS15, rps15-F138S or rps15-F102L/F138S were spotted onto SDC-leu plates and incubated for 3 days at the indicated temperatures. (D) Genetic interaction between the phosphorylation-deficient ltv1 mutant (ltv1-SS>A) and RPS15. The Δrps15 Δltv1 cells carrying URA3-RPS15 and LEU2-LTV1 or LEU2-ltv1-SS>A as well as a TRP1-plasmid with RPS15, rps15-F138S or rps15-F102L/F138S, respectively, were spotted onto SDC-leu-trp plates (-leu -trp) and plates containing 5-FOA and lacking leucine (5-FOA-leu) and were incubated at 30°C for 3 and 6 days. (E) Rps15 is in contact with Rio2 and Tsr1 in the structure of the pre-40S particle ([45], PDB 6Y7C). Rps15 is coloured in teal, Rio2 in blue and Tsr1 in purple. Parts of Tsr1 are more transparent to reveal the C-terminus of Rps15 behind Tsr1. (F) The Rps15 C-terminal tail is positioned between the A- and P-site tRNAs in the post-decoding pre-translocation translating human ribosome ([41], PDB 6Y0G). The indicated position K145 (C-terminal residue of human RPS15) corresponds to yeast K142; the indicated F141 corresponds to yeast F138. (G) Genetic interaction between TSR1 and RPS15. The double knockout strain Δrps15 Δtsr1 carrying URA3-RPS15 and URA3-TSR1 as well as combinations of LEU2-plasmids with TSR1, tsr1-1 or tsr1-2 and HIS3-plasmids with RPS15, rps15-F138S or rps15-F102L/F138S were spotted onto SDC-his-leu plates (-his -leu) and plates containing 5-FOA and were incubated at 23°C or 30°C for 3 days.

Indeed, we observed a slight (rps15-RTH(77-79)>A) or strong (rps15-1-134) accumulation of 20S pre-rRNA, which was, in the case of the rps15-1-134 mutant, also accompanied by a reduction of mature 18S rRNA, especially at low temperatures. In addition, in the rps15-1-134 mutant, a 21S pre-rRNA was also detected at 30°C and, to a higher extent, at 25°C (Fig. 5B). This aberrant precursor arises when cleavage of the 32S pre-rRNA intermediate at site A2 is skipped and the precursor is cleaved at site A3 instead (see Figure S1). In line with this, a mild accumulation of 32S (and also 35S) pre-rRNA and a reduction of 27SA2 pre-rRNA were also observed under the same conditions. These data suggest that the rps15-1-134 mutant also shows a defect in early nucleolar ribosome biogenesis, in addition to the late defect in 20S pre-rRNA processing. As 20S pre-rRNA is cleaved by Nob1, a potential explanation for 20S pre-rRNA accumulation in the rps15 mutants could be a failure to recruit Nob1 to pre-40S particles. However, Nob1 was present in late nuclear and cytoplasmic pre-40S particles purified from the rps15-1-134 and rps15-RTH(77-79)>A strains, indicating that a step after Nob1 recruitment must be affected (Fig. 5C). In addition, to determine in which cellular compartment the 20S pre-rRNA accumulates, we assessed the localization of 20S pre-rRNA in the mutants by FISH (Fig. 5D). While the rps15-RTH(77-79)>A mutant showed only slight cytoplasmic accumulation of 20S pre-rRNA, the C-terminal truncation mutant rps15-1-134 accumulated significant amounts of 20S pre-rRNA in the cytoplasm, indicating a strong cytoplasmic 40S maturation defect before site D is cleaved by Nob1.
Although cytoplasmic 20S pre-rRNA accumulation was the strongest phenotype observed in the two rps15 mutants, the rps15-1-134 mutant also appeared to have an increased nuclear ITS1 signal (Fig. 5D) and showed some accumulation of nuclear pre-rRNAs, i.e. the 21S, 23S, 32S and 35S pre-rRNAs (Fig. 5B). Therefore, we sought further confirmation that cytoplasmic 20S pre-rRNA accumulation is the main defect in these mutants. To this end, we scored for potential pre-40S export defects in these two rps15 mutants by examining the localization of the 40S subunit export reporter Rps3-GFP via fluorescence microscopy (Figure S7A). As a control, the 60S subunit export reporter Rpl25-GFP was also examined in these strains. Importantly, neither the rps15-RTH(77-79)>A nor the rps15-1-134 mutant showed an r-subunit export defect (Figure S7A), further supporting our model that they mainly display cytoplasmic pre-40S maturation defects. In our previous study, we observed that large amounts of the 20S pre-rRNA accumulating in Δltv1 Δpfa1 as well as Δltv1 prp43 double mutants are incorporated into polysomes [30], suggesting that in these mutants, 20S pre-rRNA-containing 40S subunits can escape the quality control exerted by the 'translation-like cycle' and engage in translation. To address whether such a phenotype is also observable in the rps15-1-134 and rps15-RTH(77-79)>A mutants, we analysed fractions collected from polysome profiles of these strains by northern blotting (Figure 5E). In the rps15-RTH(77-79)>A mutant, the 20S pre-rRNA sedimented with the 40S peak, comparable to the wild-type strain. In contrast, the rps15-1-134 mutant showed small amounts of 20S pre-rRNA in polysomes; however, in contrast to the Δltv1 Δpfa1 and Δltv1 prp43 double mutants investigated previously [30], most of the 20S pre-rRNA signal was found in the 80S peak, suggesting that pre-40S particles from the rps15-1-134 mutant can form 80S ribosomes but engage only inefficiently in translation.
Taken together, we conclude that the C-terminal domain of Rps15 is important for the final cytoplasmic pre-40S subunit maturation events prior to 20S pre-rRNA processing.
The C-terminal domain of Rps15 acts together with the N-terminal extension of Rps31 in ensuring translational fidelity
Recent findings suggested an important function of the Rps15 C-terminal domain in translational decoding [39,41], while our data indicate an additional role of Rps15 in late pre-40S maturation. These phenotypes are reminiscent of those described for r-protein Rps31/eS31, for which a dual role in late 40S maturation and decoding was also described [46][47][48], and a ∆rps31 (also known as ∆ubi3) mutant was also found to be synthetically lethal with ∆ltv1 [47].

Figure 5. The C-terminus of Rps15 functions in late pre-40S maturation. (A) Δrps15 cells carrying a LEU2-plasmid encoding wild-type RPS15, rps15-1-134 or rps15-RTH(77-79)>A, respectively, were grown in liquid SDC-leu medium at 30°C. After inhibition of translation by cycloheximide, cells were lysed and polysome profiles were recorded. Peaks corresponding to the 40S and 60S subunits, 80S ribosomes and polysomes are indicated in the profile of the wild-type strain. (B) For northern blotting, rps15 deletion strains carrying LEU2-plasmids with wild-type RPS15 or the indicated rps15 variants were grown in liquid SDC-leu medium at 30°C to an OD600 of 0.1-0.2 and shifted for 3 h to 25°C, 30°C or 37°C, respectively. Blots were probed as indicated in the methods section (the yeast pre-rRNA processing pathway is shown in Figure S1). (C) Eluates from TAP purification of Tsr1 particles from the wild-type RPS15, rps15-1-134 or rps15-RTH(77-79)>A mutant backgrounds were analysed via SDS-PAGE, followed by western blotting. Tsr1-CBP was detected via an α-CBP antibody; Rps3 and Nob1 were detected with antibodies specific to the respective proteins. (D) The rps15-1-134 and rps15-RTH(77-79)>A mutants show cytoplasmic 20S pre-rRNA accumulation. For fluorescence in situ hybridization (FISH), cells were grown in liquid SDC-leu at 30°C. After cell fixation, spheroplasts were incubated with a probe specific to the D/A2 segment of the ITS1 region to detect 20S pre-rRNA, and nuclei were stained with DAPI. (E) After fractionation of polysomes from the RPS15 strain and the rps15-1-134 or rps15-RTH(77-79)>A mutants, respectively, northern blots were performed. Blots were probed as described in the methods section.
Moreover, Rps31 is positioned at the beak of the 40S subunit and has an unstructured N-terminal extension, which reaches into close proximity to the C-terminal domain of Rps15 (Fig. 6A) [37]. Similar to C-terminal deletions of Rps15, deletion of the N-terminal domain of Rps31 (ubi3ΔN allele, herein rps31ΔN) or complete deletion of the non-essential RPS31 gene leads to late pre-40S maturation defects, including a massive 20S pre-rRNA accumulation in the cytoplasm. Moreover, in ubi3Δ cells, 20S pre-rRNA mainly co-sediments with 80S ribosomes, with small amounts of 20S pre-rRNA entering into polysomes [47], similar to what we observed for the rps15-1-134 mutant. In addition, cells lacking Rps31 show an increased rate of amino acid misincorporation during translation, indicating that Rps31 ensures optimal translational fidelity.

Figure 6. (B) The rps15-1-134 rps31ΔN, rps15-F102L/F138S rps31ΔN and rps15-RTH(77-79)>A rps31ΔN mutants were inviable on 5-FOA, indicating synthetic lethality. (C) Measurement of misincorporation frequencies in the indicated mutants. Strains were transformed with URA3-plasmids pDB688 and pDB868 to measure misreading (Arg245(CGC) to His245(CAC)). Transformants were grown in liquid SDC-ura medium to mid-log phase at 30°C, and then Renilla and firefly luciferase activities were measured. Assays were done in quadruplicate, and the data are expressed as the mean ± standard deviation. The percentage of misreading was calculated as the firefly (H245R)/Renilla luciferase activity divided by the firefly (wild-type)/Renilla luciferase activity, multiplied by 100. Significance levels were determined by Student's t-test (*, p < 0.05).
To assess whether these synthetic growth defects go along with an enhancement of the translational error rate upon combination of rps15 and rps31 mutations, we analysed amino acid misincorporation in the viable rps15 rps31ΔN double mutants (Fig. 6C). Additionally, we also analysed the rps15 single mutants F138S, 1-141, 1-134, F102L/F138S and RTH(77-79)>A. To this end, we made use of a plasmid-borne tandem Renilla and firefly luciferase reporter system [49], in which the CAC (His) codon at position 245, essential for the formation of functional firefly luciferase, has been substituted by a CGC (Arg) codon, resulting in only residual firefly luciferase enzymatic activity. An increased firefly luciferase activity is indicative of a misreading event, leading to misincorporation of histidine at this critical position and, consequently, to reconstitution of the enzymatic activity of the protein.
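As a numerical illustration of the quantification described above, the sketch below computes the misreading percentage from dual-luciferase readings, normalizing the mutant-reporter firefly (H245R) activity to its Renilla internal control and dividing by the wild-type firefly/Renilla ratio. All luminescence values are hypothetical and serve only to demonstrate the arithmetic, not to reproduce any measured data.

```python
def misreading_percent(ff_mut, ren_mut, ff_wt, ren_wt):
    """Percent misreading from dual-luciferase readings.

    Normalizes the H245R firefly reporter signal to its Renilla control,
    divides by the wild-type firefly/Renilla ratio, and multiplies by 100,
    following the calculation described in the figure legend.
    """
    return (ff_mut / ren_mut) / (ff_wt / ren_wt) * 100.0


def mean_sd(values):
    """Mean and sample standard deviation of replicate measurements."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return m, var ** 0.5


# Hypothetical quadruplicate readings (arbitrary luminescence units):
# (firefly H245R, Renilla H245R, firefly wild-type, Renilla wild-type)
replicates = [
    misreading_percent(12.0, 1000.0, 4000.0, 1000.0),
    misreading_percent(13.0, 1050.0, 4100.0, 1020.0),
    misreading_percent(11.5, 980.0, 3950.0, 990.0),
    misreading_percent(12.5, 1010.0, 4020.0, 1005.0),
]
m, sd = mean_sd(replicates)
print(f"misreading: {m:.2f}% \u00b1 {sd:.2f}%")
```

With these made-up numbers the misreading rate comes out at roughly 0.3%, in the same order of magnitude as the modest rates discussed in the text; in the actual assay, per-strain quadruplicates would then be compared by Student's t-test.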
Our results indicated that the single rps31∆N and rps15-1-141 mutations lead to a modest but statistically significant increase in the misreading rate (Fig. 6C). Although we observed a tendency for the combined rps31∆N rps15-1-141 mutation to further increase the misincorporation rate compared to the single mutations, and for the combination of rps31∆N and rps15-F138S to reduce the misreading rate compared to rps31∆N alone, these differences did not reach statistical significance. Similarly, the single rps15 mutants F138S, F102L/F138S and RTH(77-79)>A repeatedly showed slightly increased misreading rates in our assays, however without statistical significance. In contrast, no difference compared to the wild-type was observed for the rps15-1-134 mutant. Interestingly, the ∆ltv1 knockout mutation also significantly increased the translation error rate compared to the wild-type strain, but only non-significant tendencies were detected for increased misreading in the double ∆ltv1 rps15-1-141 mutant and for decreased misreading in the double ∆ltv1 rps15-F138S mutant. Altogether, these results suggest that slight alterations in the C-terminal tail of Rps15 can lead to increased misreading. Moreover, our data support a model in which the N-terminal tail of Rps31 and the tip of the C-terminal end of Rps15 (missing in the rps15-1-141 mutant) cooperate in ensuring translational fidelity. Ltv1 is a crucial factor for the final maturation of cytoplasmic pre-40S particles, including the correct positioning of different r-proteins [20,21,24]; thus, it is not surprising that 40S subunits are also more prone to misreading in the absence of Ltv1.
Discussion
Many eukaryotic r-proteins contain, in addition to globular domains, long unstructured N-or C-terminal extensions [50]. Interestingly, in several instances, such extensions were described to fulfil important functions during ribosome biogenesis. For example, the C-terminal tails of Rps14 and Rps5 and the N-terminal tail of Rps31 were shown to be required for 20S pre-rRNA processing in the cytoplasm [34,35,48]. Interestingly however, Rps5, Rps14 and Rps31 are assembled to pre-ribosomes early on, and the absence of the essential Rps5 and Rps14 r-proteins stalls the ribosome biogenesis pathway in the nucleus [31]. Hence, while their physical presence is required early in maturation, these r-proteins fulfil additional important functions in late 40S subunit maturation steps. Here, we show that Rps15 is yet another r-protein with more than one function in ribosome biogenesis, being not only involved in early nucleolar maturation events and required for 40S export, as previously reported [11,12,31,36], but also in cytoplasmic pre-40S maturation.
In particular, the mutant in which the eight C-terminal amino acids of Rps15 are missing, rps15-1-134, accumulated 20S pre-rRNA in the cytoplasm, suggesting a defect in the final 40S maturation steps prior to D-site cleavage. The molecular reason for this phenotype is not yet clear; however, we consider it unlikely that the Rps15 C-terminal tail directly regulates 20S pre-rRNA processing, for several reasons: (1) Nob1 is present in pre-40S particles purified from the rps15-1-134 mutant, excluding a defect in Nob1 recruitment. (2) Our rps15 mutants did not exacerbate the mild slow-growth phenotype of a NOB1-TAP strain (data not shown), in contrast to other mutants more directly connected to D-site cleavage, like rpl3-W255C [27]. (3) In pre-40S structures, the Rps15 C-terminal tail is ~50 Å away from Nob1 and more than 80 Å away from the 3' end of the 18S rRNA [29,45,51], making a direct engagement of the Rps15 C-terminal tail in D-site cleavage unlikely.
Notably, the Rps15 C-terminal tail alters its positioning throughout the translational cycle. In the post-decoding pretranslocation state of translation elongation, it is wedged between the A-and P-site tRNAs [41]. In line with this, deletion of the last 15 residues from the C-terminal tail of human RPS15 leads to defects in translation elongation, as was concluded from the reduction of polysomes and the accumulation of 80S particles [39]. Moreover, some of the mutations in the RPS15 C-terminus linked to CLL showed reduced overall translation, while others showed increased misreading rates or increased stop-codon readthrough ( Figure S2C) [43].
Here, we show that the deletion of the very C-terminal residue of Rps15 (K142) results in increased misreading rates. This result is consistent with structural data suggesting that this terminal lysine residue (K145 in human RPS15) is in direct contact with mRNA during decoding [41]. Moreover, the genetic interaction between rps15 and rps31 mutants, as well as the enhanced misreading rates in the rps31ΔN rps15-1-141 double mutant, suggest that the Rps31 N-terminal extension and the Rps15 C-terminal tail might act in concert in ensuring not only the correct execution of final pre-40S maturation events but also correct decoding. Interestingly, RPS31 is non-essential, and our rps15 mutants are also viable, indicating that, although these mutants display growth defects, they tolerate a certain extent of translational error. Consistently, the misreading rates we detected were quite modest, especially when compared to the values obtained for mutants of other r-proteins (e.g. rps9B-D94N) or the wild-type strain treated with low doses of paromomycin [49]. Moreover, we did not observe increased misreading in the rps15-1-134 mutant, which showed the strongest defects in cytoplasmic pre-40S maturation. Together with the previous results that only few of the human RPS15 C-terminal tail mutants show increased misreading (Figure S2C) [43], these data suggest that mild positional alterations at some critical residues of the tail can alter translational fidelity, potentially due to altered communication with mRNA or tRNAs, while other mutations, or the complete absence of this tail, do not have such an effect. However, the main defect of the C-terminal tail mutants is likely not the altered misreading, as it occurs only in some mutants and the increase in misreading is only subtle.
Strikingly, the strategic positioning of the Rps15 C-terminal tail between the A-and P-site tRNAs resembles the orientation of the Rps15 C-terminal tail in pre-40S particles, which is clamped between assembly factors Tsr1 and Rio2 instead of tRNA (compare PDB 6Y0G [41] and PDB 6Y7C [45], Figure 3F and 3E). Considering this positioning, we speculate that Rio2 and Tsr1 might quality-check for the integrity of the Rps15 C-terminal tail. In the absence of an interaction of the C-terminal tail with Rio2 and/or Tsr1, further maturation (including 20S pre-rRNA processing) and engagement of faulty pre-40S particles lacking the Rps15 C-terminal tail in translation might be prevented, resulting in the accumulation of these immature 40S particles in 80S(like) particles. An alternative possibility would be that the rps15-1-134 mutant might cause an earlier cytoplasmic pre-40S maturation defect (e.g. an altered rRNA structure or protein positioning) and that not the absence of the Rps15 C-terminal tail directly, but the defect caused by its absence is responsible for the blocking of subsequent pre-40S maturation steps. Last but not least, it might be possible that the Rps15 C-terminal tail is itself part of a quality control mechanism to test the functionality of 40S subunits, and that in its absence, a particular quality control step cannot take place, consequently trapping pre-40S particles in an inactive form.
While the functional connection of the Rps15 C-terminal tail to Tsr1 can be explained by the structural data demonstrating a direct interaction, the precise connection of the Rps15 C-terminal tail to Ltv1 is more puzzling. Ltv1 interacts with Enp1 and Rps3 [19,20], which are both positioned in the head domain of the pre-40S particle, as is the globular domain of Rps15. So far, the flexible and poorly structured protein Ltv1 has only been partly resolved on pre-40S particles [17,18,45,52]. However, biochemical interaction data indicate a direct interaction between Rps15 and Ltv1 [53]. Moreover, a human pre-40S particle cryo-EM analysis succeeded in visualizing a C-terminal α-helix of Ltv1 that is in direct contact with the globular domain of Rps15 [29]. In the absence of further structural information, it is not possible to judge whether additional contacts are formed between Ltv1 and Rps15 or whether Ltv1 even interacts with the Rps15 C-terminal domain directly. The defects observed in the absence of Ltv1 might either be a consequence of the missing interaction with the Rps15 globular domain (potentially leading to an altered positioning of Rps15) or might be due to the effect of the missing Ltv1 on other proteins linked to Rps15, like Rio2 or Tsr1. A third possibility we envisage is an RNA folding problem in these mutants. ∆ltv1 mutants are cold-sensitive, and the genetic interaction between LTV1 and RPS15 is also most pronounced at low temperatures. Cold-sensitive phenotypes are frequently observed upon RNA folding problems [54,55]; hence, we speculate that an altered rRNA structure occurring in the absence of Ltv1 might be the reason for the genetic interactions with Rps15.
Evidence for a cytoplasmic pre-40S maturation function of Rps15 also comes from a recent study, in which several different mutants including rps15 mutants were investigated for defects in the cytoplasmic release of Ltv1, Rio2 and Tsr1 from pre-40S particles [56]. Mutants in the C-terminal tail of Rps15 showed defects in the release of Rio2 from pre-40S particles in that study [56]. In our experiments, however, we did not see Rio2 release defects in rps15 C-terminal mutants (data not shown). The reason for these divergent results could be related to the fact that, in the study by Huang et al. [56], the phenotypes were investigated in a strain depleted for Fap7, a condition normally leading to the accumulation of 80S-like particles [25,57]. Hence, Rio2 release defects might only be visible when rps15 mutations are combined with the depletion of Fap7.
In recent years, a strong connection between mutations clustering in the evolutionarily conserved C-terminal tail of human RPS15 (131-PGIGATHSSR-140) and aggressive, chemo-refractory CLL has been uncovered, and the molecular reason has been attributed to translational defects [42-44]. Although a mutation of F141, the residue corresponding to yeast F138, has not yet been observed in this context, F141 is positioned directly after a cluster of amino acids found to be mutated in CLL (Figure S2B). Additionally, the C-terminal truncation mutants tested in our study lack at least one (rps15-1-141), or several (rps15-1-134), of the equivalent CLL-linked residues. In human cells, it was shown that the respective mutated RPS15 variants are incorporated into ribosomes and negatively affect translation fidelity and global protein synthesis (Figure S3C) [43]. Our study revealed that in yeast, Rps15 C-terminal truncation mutants already show defects in the course of 40S subunit maturation, before 40S particles are even joined with 60S subunits to form translation-competent ribosomes. Thus, a plausible alternative explanation for the disease relapse could lie in 40S biogenesis defects, likely at the late cytoplasmic maturation steps, and should therefore be considered in the future.
Yeast strains and genetic methods
The S. cerevisiae strains used in this study are W303 derivatives generated by integration at the genomic locus and are listed in Table S1. Yeast plasmids were constructed using standard recombinant DNA techniques and are listed in Table S2. All DNA fragments amplified by PCR were verified by sequencing. The tsr1-1 and tsr1-2 mutants were generated by random PCR mutagenesis as described previously [58,59].
Identification of mutants from the SL screen
Mutants showing synthetic growth defects or synthetic lethality in combination with Δltv1 were previously generated in a synthetic lethality (SL) screen [30]. The screen is based on a combination of the ade2/ade3 red/white colony-sectoring assay and counter-selection on 5-FOA (5-fluoroorotic acid, Thermo Scientific)-containing plates, and scores for the inability to lose a plasmid carrying an LTV1 wild-type copy, resulting in a red, non-sectoring, 5-FOA-sensitive phenotype (for more details, see [60]). The seven mutants that were not further characterized in the previous study [30] were first transformed with LEU2 plasmids containing genes already known to be genetically linked to LTV1 (i.e. PRP43, PFA1, NOB1 and RIO2). While mutants remained red upon transformation with non-complementing plasmids, the mutants carrying complementing plasmids recovered the ability to lose the LTV1 wild-type plasmid and hence showed red/white sectoring colonies. Using this strategy, we identified six of the seven mutants to be complemented by RIO2. Sequencing of the chromosomal RIO2 copy confirmed mutation of this gene in these six mutants. The remaining SL mutant (#432) was transformed with a genomic LEU2-plasmid-based library. Colonies showing a red/white sectoring phenotype were re-streaked onto plates lacking leucine (SDC-leu) and then onto 5-FOA-containing plates. Complementing plasmids resulting in red/white sectoring colonies on SDC-leu and growth on 5-FOA were isolated, and LTV1-containing plasmids were identified by PCR. All other plasmids were subjected to DNA sequencing, revealing that all of them contained RPS15. Mutation of RPS15 in the SL mutant #432 was subsequently confirmed by DNA sequencing at the genomic locus.
Plasmid shuffle assays
The Δrps15 shuffle strain (Δrps15 [pRS316-RPS15]) was constructed by chromosomal deletion of RPS15 in a diploid yeast strain, followed by transformation with the URA3-plasmid [pRS316-RPS15]. After tetrad dissection, the spores harbouring the gene knockout and the complementing URA3 plasmid were recovered. A similar strategy was used to generate the RPS15 shuffle Δltv1 strain (Δrps15 Δltv1 [pRS316-RPS15]).
To analyse the growth phenotypes conferred by mutant alleles of RPS15, either alone or in combination with Δltv1 and tsr1 or rps31 mutations, we transformed the strains with the respective plasmids. Transformants were then spotted in 10-fold serial dilutions on 5-FOA-containing plates to evaluate the phenotypes caused after loss of the URA3-RPS15 plasmid, or of both the URA3-RPS15 and URA3-TSR1 plasmids. Strains that were viable on 5-FOA-containing plates were re-streaked on plates selecting for the transformed plasmids, spotted in 10-fold serial dilutions onto the respective plates and incubated at different temperatures.
Fluorescence in situ hybridization (FISH), fluorescence microscopy
For fluorescence in situ hybridization, cells were grown in 50 ml SDC-leu medium at 30°C to an OD600 of ~0.5 and fixed with formaldehyde at a final concentration of 4% for 1 h at room temperature. After fixation, cells were washed twice with 0.1 M KPO4 buffer (K2HPO4 and KH2PO4 mixed in the appropriate ratio to obtain a pH of 6.4) and once with washing buffer containing 0.1 M KPO4 and 1.2 M sorbitol (pH 6.4). For cell wall lysis, cells were incubated with 1 ml washing buffer containing 500 µg/ml Zymolyase 100T (Amsbio) for 60 min at room temperature, followed by one washing step with the washing buffer. Finally, the spheroplasts were resuspended in ~1.5-fold pellet volume, applied to adhesive-coated 10-well diagnostic microscope slides (Thermo Scientific, LOT #381613) and incubated for 10 min. For equilibration and to remove non-adhering cells by aspiration, spheroplasts were washed with 2x SSC buffer (pH 7) and afterwards incubated in a humid chamber overnight at 37°C with hybridization buffer containing 50% formamide, 10% dextran sulphate sodium salt from Leuconostoc ssp. (Fluka), 125 µg/ml E. coli MRE600 tRNA (Boehringer Mannheim GmbH), 500 µg/ml salmon sperm DNA sodium salt (AppliChem), 4x SSC, 1x Denhardt's solution (Invitrogen) and approximately 0.8 pmol of a Cy3-labelled ITS1-specific probe (5′-Cy3-ATGCTCTTGCCAAAACAAAAAAATCCATTTTCAAAATTATTAAATTTCTT-3′). After probing, spheroplasts were washed once with 200 ml 2x SSC, once with 1x SSC and finally incubated with 200 ml 0.5x SSC containing 5 µg DAPI. After nuclear staining with DAPI, spheroplasts were washed twice with 0.5x SSC, and the wells were dried and layered with Mowiol before the microscopy slides were covered with coverslips.
Cells were imaged by fluorescence microscopy using either an Imager Z1 microscope (Carl Zeiss) with a Plan-Apochromat 100× oil immersion lens and DIC III, DAPI (4′,6-diamidino-2-phenylindole) and HE Cy3 filters, or a Leica DM6 B microscope equipped with a DFC 9000 GT camera, using the PLAN APO 100x objective, narrow-band TXR and LDA filters and the LasX software.
Sucrose gradient analysis
Cells were grown in 70 ml SDC-leu medium at 30°C to an OD600 of ~0.5-0.7 (log phase). Cycloheximide (CHX) was added to 50 ml of culture at a final concentration of 100 µg/ml, and cells were incubated on ice for 5 min. After harvesting, cells were resuspended in lysis buffer containing 10 mM Tris-HCl (pH 7.5), 100 mM NaCl, 30 mM MgCl2 and 100 µg/ml CHX. After mechanical cell lysis using glass beads, 7 A260 units of the cell extracts were loaded onto 5-35% sucrose gradients containing 50 mM Tris-HCl (pH 7.5), 50 mM NaCl and 10 mM MgCl2, and centrifuged at 38,000 rpm at 4°C for 2 h 45 min using a Beckman Optima™ LE-80K ultracentrifuge. Gradients were analysed using a UA-6 system (Teledyne Isco) with continuous monitoring of absorbance at 254 nm (A254).
Northern blotting
For the analysis of polysome profile fractions by northern blotting, RNA from the fractions (approximately 500 µl) was extracted by mixing three times with phenol-chloroform-isoamyl alcohol (25:24:1) and once with chloroform-isoamyl alcohol (24:1). RNA was precipitated as described below.
Tsr1-TAP purification
The Tsr1-TAP Δrps15 strains carrying the LEU2 plasmids expressing RPS15, rps15-1-134 or rps15-RTH(77-79)>A, respectively, were grown at 30°C in 4 l YPD each to an OD600 of 2. TAP purifications were performed in a lysis buffer containing 50 mM Tris-HCl (pH 7.5), 100 mM NaCl, 1.5 mM MgCl2, 0.075% NP-40 and 1 mM dithiothreitol (DTT). Prior to use, 1× Protease Inhibitor Mix FY (Serva) was added freshly to the lysis buffer. Cells were lysed by mechanical disruption using glass beads, and the lysate was incubated with 300 µl IgG Sepharose™ 6 Fast Flow (GE Healthcare) at 4°C for 60 min. After incubation, beads were transferred into Mobicol columns (MoBiTec) and washed with buffer. Elution from the IgG Sepharose™ beads was performed via TEV protease under rotation at room temperature for 70 min. CaCl2 was added to the TEV eluates to a final concentration of 2 mM, and the eluates were then incubated with 300 µl Calmodulin Sepharose™ 4B (GE Healthcare) at 4°C for 60 min, washed with lysis buffer containing 2 mM CaCl2 and finally eluted with 5 mM EGTA. Protein samples were TCA-precipitated, dissolved in SDS sample buffer, separated on NuPAGE™ 4-12% Bis-Tris gels (Invitrogen) and analysed via western blotting.
Quantification of translation accuracy
To measure the rate of amino acid misincorporation, the appropriate strains were transformed with a dual-luciferase reporter plasmid generously provided by David M. Bedwell (see Table S2). Luciferase activities were measured as previously described [49] using the Dual-Glo® Luciferase Assay System (Promega). Cells from each strain were grown in liquid SDC-ura medium to mid-log phase at 30°C, and firefly and Renilla luciferase luminescence levels were measured at room temperature with a CLARIOstar 1.20 microplate reader (BMG Labtech, Germany) adjusted to endpoint read-type and default settings. Assays were repeated four times (biological replicates), each of which was replicated three times (technical replicates), and the data are expressed as the mean ± standard deviation. Error rates for each strain were calculated as the percentage obtained by dividing the firefly/Renilla luciferase activity ratio of the mutant reporter plasmid by that of the wild-type reporter plasmid.
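The error-rate calculation above reduces to a ratio of Renilla-normalized firefly activities. A minimal sketch with hypothetical luminescence readings (illustrative numbers, not the measured data):

```python
from statistics import mean, stdev

def error_rate(ff_mut, ren_mut, ff_wt, ren_wt):
    """Per-replicate misincorporation rate (%): normalized mutant-reporter
    activity divided by normalized wild-type-reporter activity."""
    return 100.0 * (ff_mut / ren_mut) / (ff_wt / ren_wt)

# Hypothetical (firefly, Renilla) luminescence readings, one pair per replicate.
mut = [(120.0, 9800.0), (130.0, 10100.0), (115.0, 9500.0), (125.0, 9900.0)]
wt = [(52000.0, 10000.0), (51000.0, 9900.0), (53000.0, 10200.0), (50500.0, 9800.0)]

rates = [error_rate(fm, rm, fw, rw) for (fm, rm), (fw, rw) in zip(mut, wt)]
print(f"error rate = {mean(rates):.3f}% ± {stdev(rates):.3f}%")
```

Normalizing each firefly reading to its own Renilla control cancels well-to-well differences in cell number and extraction efficiency before the mutant/wild-type comparison is made.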
Notes
1. Recently, a new nomenclature for ribosomal proteins was introduced (Ban et al., 2014). In this publication, the standard nomenclature is used, and the new nomenclature is additionally indicated upon the first mention of an r-protein.
"year": 2022,
"sha1": "eec9b13d243e53bf625591006fd162ae437e0eb6",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15476286.2022.2064073?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b32853dc8bdc24fbecc584ca772891cb3ae73bad",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Bone daggers were once widespread in New Guinea. Their purpose was both symbolic and utilitarian: they were objects of artistic expression whose primary function was stabbing and killing people at close quarters. Most daggers were shaped from the tibiotarsus of cassowaries, but daggers shaped from the femora of respected men carried greater social prestige. The greater cross-sectional curvature of human bone daggers indicates superior strength, but the material properties of cassowary bone are unknown. It is, therefore, uncertain whether the macrostructure of human bone daggers exists to compensate for inferior material properties of human femora or to preserve the symbolic value of a prestigious object. To explore this question, we used computed tomography to examine the structural mechanics of 11 bone daggers, 10 of which are museum-accessioned objects of art. We found that human and cassowary bones have similar material properties and that the geometry of human bone daggers results in higher moments of inertia and a greater resistance to bending. Data from finite-element models corroborated the superior mechanical performance of human bone daggers, revealing greater resistance to larger loads with fewer failed elements. Taken together, our findings suggest that human bone daggers were engineered to preserve symbolic capital, an outcome that agrees well with the predictions of signalling theory.
Introduction
Signalling theory is a unifying concept in the social and biological sciences [1]. It proposes that social prestige, or symbolic capital [2], is a mechanism for communicating underlying traits with adaptive value. A central tenet of signalling theory is that status-accruing signals are honest (indexical) and, therefore, reliable indicators of the intrinsic qualities of the signaller. An intriguing application of signalling theory involves the decorative arts, a topic that is usually viewed as purely symbolic. Yet virtually every culture devotes effort to the elaboration of utilitarian objects (clothing, pots, tools, dwellings, etc.), as well as their own bodies. Universal behaviours invite an adaptive explanation [3], and signalling theory argues that symbolic expression can serve a functional purpose if it communicates, by proxy, attributes of the signaller, such as fine cognitive and motor skills or time available for non-subsistence behaviours. These qualities, in turn, are expected to attract higher quality allies and reproductive partners, thus enhancing the reproductive success of both the signaller and receiver. The bone daggers of New Guinea were potent objects of artistic expression [4]. They were incised with elaborate designs, both abstract and representational (figure 1), and worn as conspicuous personal adornments (figure 2). It is a signalling tradition that invites study because, as close-combat weapons [4,7], bone daggers were exemplars par excellence of male fighting abilities, and a highly desirable status symbol among men [8,9]. In addition, bone itself was the embodiment of strength, both mechanically and symbolically with powers enmeshed in the supernatural world [4]. This dual concept of strength, together with the dual function of bone daggers as weapons and symbols, is intriguing when one considers the different macrostructures of bone daggers. 
Those made from a human femur were shaped differently and appear to be better engineered for mechanical performance, which raises the possibility of biomechanical trade-offs in a social signal, a topic that intersects the arts with the physical, life and social sciences.
Bone daggers as weapons
In New Guinea, bone daggers were close-combat weapons used to kill outright, or to finish off victims wounded with arrows or spears, by stabbing them in the neck ([8-11]; Bragge LE. n.d. (1970-1974) Interview notes (unpublished). Koetong, Australia: Bragge Archives), a process that Schultze Jena [12] described vividly in 1914: The lethal point at which one aims [the bone dagger] is the neck just above the breastbone end of the collarbone, the area of the subclavia and carotid. The dagger serves not only to stab into the main arteries but at the same time as a lever with which one twists the punctured neck of the enemy in order to tear the throat and, with sufficient power, break the neck (p. 9; German-English translation by P. Roscoe).
Landtman published a similar account based on interactions with the Kiwai from 1910 to 1912 [13]. According to Kiwai respondents, the eastern Gulf tribes used bone daggers for 'stabbing prisoners, taken in a fight, through their hip joints, knees or ankles'. Thus disabled, the prisoners 'could be kept alive until needed for a later cannibal feast' (p. 57).
The veracity of these accounts is difficult to assess; contact-era narratives were often based on the views of informants toward their own adversaries. However, the reports are consistent insofar as they describe stabbing actions in various joints (cervical, hip, knee, ankle). Another similarity is implicit, and it concerns compressive and torsional loads near the tip and the potential for mechanical failure, which is evident in some specimens (figure 3). Such failure would empty the dagger of all symbolic strength and potentially jeopardize the user during hand-to-hand fighting, suggesting that the strength of daggers can hold adaptive value. It is telling that peacemaking ceremonies in the Lower Arafundi require the mutual destruction of spears but the exchange of bone daggers [14], a distinction that highlights the practical value of the latter weapon.
Bone daggers as objects of social prestige
The biological source of a bone dagger (human or cassowary) is readily apparent [4]. Those shaped from human femora have distinct pommels (notched femoral condyles and a steep patellar groove; figure 1a) and greater curvature, both longitudinally and transversely. Human bone daggers were prestigious [4] in part because the most suitable femora were sourced from battle-proven men, usually those of a father, once the corpse was reduced to a skeleton [4], or those of a vanquished enemy [15]. They were weapons filled with substantial strength, i.e. they were the manifestation of spiritual power [16], allowing the owner to lay claim to the powers of the man who surrendered the bone.
Daggers were also shaped from the tibiotarsus of cassowaries (figure 4), and these were widespread (figure 5), especially in the Sepik region [24]. There is reason to surmise that cassowary bone daggers were also symbols of male strength, although to a lesser extent [4]. Cassowaries are described as 'sullen […]' [25], words that speak to their size, agility and aggression when provoked (figure 4). Hunting such an animal was an additional source of male status in New Guinea [26,27]; indeed, some odes to deceased men recounted the number of his cassowary kills [28]. Cassowary bone daggers also featured prominently in local prestige economies [29,30], in part because cassowaries were imbued with deep cultural significance: commonly sexed as female, they were widely metaphorized and mythically and ritually cast as women, wives and sometimes enemies rather than as birds [31-34].
[Figure 4 caption, partially recovered: […] [17]. Massive leg muscles [18] enable running speeds up to 50 km h−1 and standing jumps as high as 1.5 m [19]. An outstanding peculiarity of cassowaries is their medial toe (digit II), which is equipped with a prodigious spike-like claw (photograph by Christian Hütter, reproduced with permission). The claw can be 12 cm long and 3 cm at the base [20] and used to telling effect by quickly extending (kicking) the leg forward or to the side. A review of 221 incidents between 1926 and 1999 found that southern cassowaries (C. casuarius) inflicted serious wounds on domestic animals, with kicks resulting in lacerations, punctures, and ruptures to internal organs [21]. At least one person, a boy aged 16 years, stumbled and fell while assailing a cassowary and later succumbed to an exsanguinating puncture wound to the neck [22]. (c) Articulation of the tibiotarsus and tarsometatarsus at the intratarsal joint. The area that appears to be the knee of a bird is homologous to the human ankle.]
Possession of a cassowary bone dagger was thus a plausible signal of male hunting ability, physical and ritual strength, and status. Another explanation for the prevalence of cassowary bone daggers is more utilitarian: the bone itself might have outstanding mechanical properties. When compared to mammals, the mass of compact (cortical) bone in birds is distributed relatively far from the long axis, leading to higher second and polar moments of area and greater inferred resistance to bending and twisting [35], ideal properties for any tool. It is perhaps unsurprising that the tibiotarsus and tarsometatarsus of cassowaries (figure 4c) were often fashioned into practical tools, e.g. coconut splitters [36,37] and pandanus splitters [38-41]. Indeed, the mechanical strength and dagger-like appearance of these implements have led some authors to suggest that early accounts misidentified bone 'daggers' as weapons rather than tools. However, this is not the case: the tips of these tools are usually blunted (see electronic supplementary material, figure S2), whereas true bone daggers were sharpened to a fine point (cf. figures 1 and 2d).
In sum, cassowary bone appears superficially to have a similar mechanical utility for dagger manufacture to human bone, yet human bone daggers have greater prestige than those of cassowary bone. These differences focus the aims of the present study.
Study aims
Bone daggers were ornaments and armaments, and the retention of greater cross-sectional curvature appears to be a deliberate design feature of all human-derived daggers. It is a difference that, a priori, would suggest better mechanical performance. Yet the material properties of cassowary bone daggers are unknown, and the widespread use of ratite leg bones-e.g. moa bone daggers in prehistoric New Zealand [23] and emu bone daggers in Australia [42,43]-raises the possibility that cassowary bone has ideal material properties. It is, therefore, uncertain whether the superior macrostructure of human bone daggers exists to compensate for inferior material properties or to better preserve an object with greater symbolic capital. Here we test between these competing possibilities.
Specimens and imaging
We examined intact bone daggers accessioned in the Hood Museum of Art, Dartmouth College. This sample includes early- and mid-twentieth-century specimens derived from human femora and cassowary tibiotarsi (n = 5 each; table 1). In addition, we purchased a modern (ca. 1970s) cassowary bone dagger from a private art dealer (see Ethics statement). The human-derived daggers in our sample are readily distinguished by their greater curvature and richer patina. They are also rather rare. An unpublished survey of museum collections (those of the American Museum of Natural History; the Field Museum of Natural History; the Peabody Museum of Archaeology and Ethnology; and others) found that 21 of 499 bone daggers (4.2%) were shaped from human femora (M. Golitko 2017, personal communication).
We scanned each specimen in a 16-slice spiral computed tomography (CT) system (LightSpeed 16, General Electric Medical Systems, Milwaukee, WI, USA) located in the Department of Radiology, Dartmouth-Hitchcock Medical Center. We used a voxel size of 0.2 × 0.2 × 1.25 mm to create three-dimensional reconstructions of each dagger (figure 6), and we measured the length (L) of each dagger on the basis of these images. We estimated dagger penetration in human joints at 20% of the overall length (figure 7), as measured from the tip. We, therefore, used this distance to compare cross-sectional geometries. We used BoneJ [44] to measure geometrical properties, including cross-sectional area (CSA), minimum and maximum moments of inertia (I_min, I_max: measures of bending resistance; see electronic supplementary material, figure S2), minimum and maximum section moduli (Z_min, Z_max: proportional to bending strength) and the polar section modulus (Z_p: proportional to torsional strength).
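BoneJ computes these quantities from the segmented CT slices. As an illustration of what the geometric measures capture, a minimal pure-Python sketch (toy binary masks and the 0.2 mm in-plane pixel size, not actual dagger sections):

```python
import math

def section_properties(mask, pixel=0.2):
    """CSA and principal second moments of area (I_max, I_min) of a
    cross-section given as a binary pixel mask; `pixel` is the edge
    length of one pixel in mm."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    dA = pixel * pixel                      # area of one pixel (mm^2)
    area = len(pts) * dA
    cx = sum(x for x, _ in pts) / len(pts)  # centroid (pixel coordinates)
    cy = sum(y for _, y in pts) / len(pts)
    # Second moments about the centroid (mm^4); pixel**2 converts the
    # squared pixel offsets to mm^2.
    ixx = sum((y - cy) ** 2 for _, y in pts) * pixel ** 2 * dA
    iyy = sum((x - cx) ** 2 for x, _ in pts) * pixel ** 2 * dA
    ixy = sum((x - cx) * (y - cy) for x, y in pts) * pixel ** 2 * dA
    # Principal moments are the eigenvalues of the 2x2 inertia tensor.
    mean_i = (ixx + iyy) / 2.0
    radius = math.hypot((ixx - iyy) / 2.0, ixy)
    return area, mean_i + radius, mean_i - radius  # CSA, I_max, I_min

# A square section resists bending equally in all directions (I_max == I_min);
# a flat section of equal area is far weaker about its long axis (low I_min).
square = [[1] * 10 for _ in range(10)]
flat = [[1] * 20 for _ in range(5)]
print(section_properties(square), section_properties(flat))
```

The flat section is the analogue of a low-curvature dagger cross-section: same amount of bone, but a much lower I_min and hence less resistance to bending about its weak axis.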
Figure 6. Three-dimensional reconstructions of each dagger in the present study. This grouping of (a) human-derived daggers and (b) cassowary-derived daggers is useful for highlighting differences in the pommels, which stem from anatomical differences in the human knee joint and cassowary intratarsal joint (figure 4c). The distal condyles of the human femur are asymmetrical, and the patellar groove is much steeper than the shallow trochlear surface of the distal tibiotarsus of cassowaries. In human bone daggers, much of the medial and lateral epicondyles of the femur were removed to create a steeply notched, V-shaped pommel.
Cassowary bone material properties
To measure bone properties and dagger strength, we used a modern cassowary bone dagger for CT imaging and destructive testing. It was first tested in cantilever bending using a uniaxial mechanical tester with a 500 N load cell (Insight 30, MTS, Eden Prairie, MN, USA; figure 7). To simulate insertion into a human joint, we embedded 20% of the dagger length into urethane casting material (DynaCast, Freeman Manufacturing and Supply, Avon, OH, USA). Then we inserted a small sheet of rubber between the compression platen and the bone in order to apply load evenly to the dagger handle. We loaded the dagger in the anteroposterior direction to failure at a displacement rate of 1 mm s−1. We used the maximum force measured in this test (see electronic supplementary material, figure S3) as a benchmark to establish a failure criterion for the finite-element (FE) modelling of each dagger. We used excess pieces of the dagger to create three dog-bone samples for mechanical testing (figure 8). The samples were taken from the mid-diaphysis of the bone, and were cut and sanded to a uniform cross-section with no curvature. The samples were tested to failure in tension using a 30 kN load cell at a displacement rate of 0.01 mm s−1. We used uniaxial strain gauges (L2A-06-062LW-120, Micro-Measurements, Vishay Measurements Group, Raleigh, NC, USA) to measure strain at the narrow section of each sample. After testing, we measured the CSA at the failure location using a flatbed scanner at a resolution of 0.01 mm pixel−1. We used the resulting area to convert force measurements to equivalent stress. We calculated the Young's modulus (E) of each sample from the slope of the linear portion of the stress-strain curve, and defined the ultimate stress (σ_ult) as the highest stress achieved during a test.
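The modulus calculation is a simple slope fit over the linear-elastic portion of the curve. A minimal sketch with hypothetical stress-strain data (chosen to mimic a ~24 GPa linear region and brittle failure, not the measured curves):

```python
def youngs_modulus(strain, stress, linear_frac=0.5):
    """Least-squares slope of the initial (assumed linear) portion of a
    stress-strain curve; E is returned in the units of stress/strain."""
    n = max(2, int(len(strain) * linear_frac))
    xs, ys = strain[:n], stress[:n]
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical tensile data: strain (dimensionless) and stress (MPa), with a
# linear region of slope 24 000 MPa (24 GPa) and non-linearity before failure.
strain = [0.000, 0.001, 0.002, 0.003, 0.004, 0.005, 0.006]
stress = [0.0, 24.0, 48.0, 72.0, 96.0, 115.0, 125.0]

E = youngs_modulus(strain, stress)  # slope of the early linear region (MPa)
sigma_ult = max(stress)             # ultimate stress: highest stress reached
print(E, sigma_ult)
```

Restricting the fit to the early portion of the curve matters because dry bone deviates from linearity shortly before its brittle failure; fitting the whole curve would underestimate E.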
Finite-element model and analysis
We converted CT scans into FE models using ScanIP + FE software (Simpleware, Synopsys, Mountain View, CA, USA). We used FE modelling because it converts complex structures into simpler, smaller sections for mechanical analysis and because it is non-destructive. FE models have become a powerful tool for evaluating the mechanical performance of osteological and fossil specimens in museum collections [45-50]; however, the focus of these 'osteometric eyes' [50] is seldom turned on objects of art or material culture in the ethnographic or archaeological records (but see Thomas et al. [51] and their FE analysis of fluting in North American Pleistocene weaponry).
We segmented images using a threshold-driven region-growing algorithm, and meshed each model with linear four-node tetrahedral elements (average number of elements = 88 110). The number of elements was determined to be sufficient after a convergence study found no significant increase in the accuracy of the models with more elements. We separated trabecular bone from cortical bone using the grey-scale value of the CT scans for each voxel, so that trabecular bone could be assigned its own material properties. Trabecular bone was confined to the grip and pommel regions of each dagger, i.e. the distal condylar region of the human femur or cassowary tibiotarsus. Our mechanical tests suggest that the Young's moduli of cassowary and human cortical bone [52-57] are practically equivalent (see Results and discussion). Accordingly, we used identical material properties for all models, assigning a Young's modulus of 24.0 GPa to cortical bone (present results), a Young's modulus of 0.4 GPa to trabecular bone [58] and a Poisson's ratio of 0.3 to all bone tissue.
We imported the FE models into ABAQUS (Simulia, Dassault Systèmes, Waltham, MA, USA) for analysis. We fixed each model at 20% of the length measured from the tip, and applied a 225 N force to the handle end. The tested dagger failed at 200 N, but a slightly higher force of 225 N was applied to the FE models to ensure failure. Bending was performed in two perpendicular directions (corresponding to I_max and I_min) in order to estimate the maximum and minimum bending strength values of each dagger. We imported the von Mises stress distribution of each dagger into Matlab (Mathworks, Natick, MA, USA) to determine failure load, which we estimated based on a per cent volume failure criterion. In the model of the cassowary dagger used for testing, the von Mises stress in 3.3% of the overall volume exceeded the σ_ult of bone tissue at the measured failure load. Given these results, we used a 3.3% volume criterion for all models. We used the FE-based stress distribution for each dagger and scaled the applied load until 3.3% of the volume exceeded σ_ult. This method of calculating the failure force allows any force value to be applied to the models while still recovering the correct failure force. The resulting force (F_max) is assumed to be the force required to induce fracture. For additional comparison, we calculated the forces required to induce failure in 1%, 3% and 5% of the volume of each dagger.
Statistical analysis
We used Mann-Whitney U tests to compare the means of all geometric properties and FE-predicted failure loads in our sample of five human bone daggers and six cassowary bone daggers, where the sixth cassowary bone dagger was the purchased specimen used for mechanical testing. Statistical significance was set at p < 0.05.
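In practice one would call a library routine such as scipy.stats.mannwhitneyu for this comparison; as a dependency-free illustration, the U statistic itself (midranks for ties, no p-value) can be computed as:

```python
def mann_whitney_u(a, b):
    """U statistic for two independent samples; the smaller of U_a and U_b
    is the value compared against critical-value tables."""
    pooled = sorted((v, g) for g, grp in enumerate((a, b)) for v in grp)
    rank_sum_a = 0.0
    i = 0
    while i < len(pooled):
        # Find the run of tied values starting at index i.
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        rank_sum_a += midrank * sum(1 for k in range(i, j) if pooled[k][1] == 0)
        i = j
    u_a = rank_sum_a - len(a) * (len(a) + 1) / 2.0
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)

# Complete separation of the two groups gives U = 0, the strongest possible
# evidence of a group difference for these sample sizes (n = 5 and n = 6).
print(mann_whitney_u([1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]))
```

A rank-based test is a sensible choice here because the samples are small (n = 5 and 6) and no normality assumption is needed.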
Results and discussion
Ethnographic accounts of bone daggers report that users targeted the cervical vertebrae or the hip, knee and ankle joints of their victims. We estimated that penetration of these joints would entail 20% of the length of the dagger; therefore, this distance, as measured from the tip, was used to compare the cross-sectional geometries of human- and cassowary-derived daggers. CT images affirmed our subjective impressions of greater cross-sectional curvature among the human bone daggers, a shape that predicts superior mechanical performance (see electronic supplementary material, figure S2). Indeed, I_min differed between the two dagger types (table 2), with the mean value of human bone daggers being 290% greater than that of cassowary bone daggers despite similar cross-sectional areas (table 2). Thus, the retention of greater cross-sectional curvature during the manufacture of human bone daggers appears to be a design feature that results in higher moments of inertia and greater resistance to bending, i.e. a stronger dagger.
To explore whether the superior macrostructure of human bone daggers exists to compensate for the inferior material properties of human bone, we tested the mechanical properties of a cassowary bone dagger to establish baseline properties for FE models. Flexural strength in cantilever bending was 200 N, and corresponded to testing in the I_min direction. Tensile tests revealed a mean (±1 s.d.) elastic modulus (E) of 24.01 ± 1.57 GPa and an ultimate stress (σ_ult) of 153.9 ± 42.3 MPa. This finding compares well with measures from an ostrich (Struthio camelus; E: 13.90 GPa [55]) and emu (Dromaius novaehollandiae; E: 13.05 ± 3.94 GPa, range: 5.62-19.83 GPa; σ_ult: 146 MPa [59]), with the caveat that these authors examined the compact bone of fresh (undried) femora. In our tests, the stress-strain behaviour of dry tibiotarsal compact bone was relatively linear-elastic, showing a brittle response and failure before 1% strain (see electronic supplementary material, figure S4). Crucially, our measures of material properties fall squarely between published values of E for dry compact bone from human femora (18.0 and 27.4 GPa [53,58]) and overlap published values of σ_ult (103-133 MPa [52-57]), which simplified the construction of our FE models.
FE models of the daggers underwent simulated cantilever-bending experiments to analyse the failure loads of each dagger (figure 9). First, an FE model of the experimental bending test was subjected to 200 N (i.e. the flexural strength of the dagger). The simulation showed that 3.3% of the volume of the dagger had a stress greater than the σ ult . Next, FE models of all daggers were subjected to simulated bending tests. The influence of force on the percentage of volume failed in the daggers was compared in bending with respect to I min and I max (figure 10). All daggers followed the same general trends. At low loads (less than 50 N) the stresses in the models remained below the σ ult and thus no volume of the model was considered 'failed'. With increasing load, stresses in the model increased and the volume of failed elements increased in a linear fashion. When testing in the I min direction, each cassowary model demonstrated less force was required to induce failure for a given percentage of volume; however, the cassowary daggers showed more similar behaviour to the human daggers when tested in the I max direction. We compared the failure loads of the daggers at 1%, 3%, 5% and 3.3% of failed volume to the failure behaviour of the dagger tested experimentally (figure 11). When tested in the I min direction, the human bone daggers were significantly stronger than the cassowary bone daggers at all levels of failed volume (table 2). For example, the human bone daggers required 254 N to fail at 3.3% of the total volume of the dagger, which corresponds to 31% more force, on average, compared to the cassowary bone daggers. However, when tested in the I max direction, the human bone daggers required only 27% more force on average.
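Because the FE models are linear-elastic, element stresses scale proportionally with the applied load, so the load at which any given fraction of the volume exceeds σ ult can be read off a single reference solution. The sketch below illustrates this post-processing step on synthetic stress data; the stress distribution, element count and reference load are invented for illustration, and only σ ult ≈ 154 MPa comes from the tensile tests above:

```python
import numpy as np

def load_for_failed_fraction(stresses, volumes, sigma_ult, ref_load, target_fraction):
    """Given element stresses from a linear-elastic FE solution at ref_load,
    find the applied load at which target_fraction of the total volume
    exceeds sigma_ult. Stresses scale linearly with load, so the load that
    fails element i is ref_load * sigma_ult / stresses[i]."""
    fail_loads = ref_load * sigma_ult / stresses
    order = np.argsort(fail_loads)                 # elements in order of failure
    cum_fraction = np.cumsum(volumes[order]) / volumes.sum()
    idx = np.searchsorted(cum_fraction, target_fraction)
    return fail_loads[order][idx]

# Synthetic example: 10,000 equal-volume elements, stress field at 100 N.
rng = np.random.default_rng(0)
stresses = rng.gamma(shape=5.0, scale=20.0, size=10_000)   # MPa at 100 N
volumes = np.ones_like(stresses)
sigma_ult = 154.0                                          # MPa (measured value)

F = load_for_failed_fraction(stresses, volumes, sigma_ult, ref_load=100.0,
                             target_fraction=0.033)
print(f"load at 3.3% failed volume: {F:.0f} N")
```

The same routine, applied at 1%, 3%, 5% and 3.3% failed volume, yields the load comparisons reported in table 2.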
Conclusion
Our results suggest that the mechanical properties of compact bone are similar in the human femur and cassowary tibiotarsus, although our analysis is limited by necessity to a single sample of cassowary bone. Still, this finding suggests that systematic differences in the macrostructure (cross-sectional curvature) of bone daggers will determine differences in mechanical strength. We affirmed this prediction with FE models, finding that human bone daggers can support larger loads with a smaller volume of elements failing. We conclude by suggesting that the retention of greater diaphyseal curvature is a deliberate design feature intended to produce a stronger bone dagger.
It is, therefore, difficult to explain why dagger-makers working with a tibiotarsus would choose to remove so much of the mediolateral wall (figure 2d). A flatter cross-sectional shape is a weaker macrostructure and it is tempting to speculate that the disadvantages of this design are balanced by greater comfort for the owner (when fixed to the upper arm; cf. figure 2) or perhaps reduced weight during fighting or friction during insertion. In the event of breakage, a cassowary bone dagger is easily replaced, whereas a human bone dagger is not. We conclude by suggesting that people in the Sepik region of New Guinea engineered human bone daggers to withstand breakage, and that their prevailing motivation was to preserve intact the embodiment of symbolic strength and social prestige, an outcome that agrees well with the predictions of signalling theory.
Ethics. This project entailed material testing (destruction) of a contemporary (ca. 1970s) cassowary bone dagger. It was purchased from an art dealer based in the USA, and any trade in cultural heritage raises potential ethical concerns. In the present case, a handicraft object produced for sale to tourists or art dealers does not rise to the definition of national cultural property, as regulated by the 1965 National Cultural Property (Preservation) Act or the 1970 UNESCO Convention on the Means of Prohibiting and Preventing the Illicit Import, Export and Transfer of Ownership of Cultural Property. The 1973 Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) is another potential prohibition, but no species of cassowary is presently protected under CITES. On a philosophical level, we considered the recent origin of the bone dagger and the widespread availability of comparable daggers in museum and private collections when weighing our decision to perform destructive material testing. To the best of our knowledge, our study adheres closely to the principles of professional responsibility.
Sodium-Glucose Cotransporter 2 Inhibitors Mechanisms of Action: A Review
Sodium-Glucose Cotransporter 2 inhibitors (SGLT2i), or gliflozins, are a group of antidiabetic drugs that have shown improvement in renal and cardiovascular outcomes in patients with kidney disease, with and without diabetes. In this review, we will describe the different proposed mechanisms of action of SGLT2i. Gliflozins inhibit renal glucose reabsorption by blocking the SGLT2 cotransporters in the proximal tubules and causing glucosuria. This reduces glycemia and lowers HbA1c by ~1.0%. The accompanying sodium excretion reverses the tubuloglomerular feedback and reduces intraglomerular pressure, which is central to the nephroprotective effects of SGLT2i. The caloric loss reduces weight, increases insulin sensitivity, shifts lipid metabolism, and likely reduces lipotoxicity. Metabolism shifts toward gluconeogenesis and ketogenesis, thought to be protective for the heart and kidneys. Additionally, there is evidence of a reduction in tubular cell glucotoxicity through reduced mitochondrial dysfunction and inflammation. SGLT2i likely reduce kidney hypoxia by reducing tubular energy and oxygen demand. SGLT2i improve blood pressure through a negative sodium and water balance and possibly by inhibiting the sympathetic nervous system. These changes contribute to the improvement of cardiovascular function and are thought to be central in the cardiovascular benefits of SGLT2i. Gliflozins also reduce hepcidin levels, improving erythropoiesis and anemia. Finally, other possible mechanisms include a reduction in inflammatory markers, fibrosis, podocyte injury, and other related mechanisms. SGLT2i have shown significant and highly consistent benefits in renal and cardiovascular protection. The complexity and interconnectedness of the primary and secondary mechanisms of action make them a most interesting and exciting pharmacologic group.
INTRODUCTION
Sodium Glucose Cotransporter 2 inhibitors (SGLT2i), also known as gliflozins, are an exciting and highly interesting group of "relatively new drugs" that have shown consistent positive results in renal and cardiovascular protection. They inhibit the action of the Sodium Glucose Cotransporter 2 (SGLT2) in the kidney and cause glucosuria. Initially, they were thought of and developed as glucose lowering therapies, yet large clinical trials in a very short time have demonstrated clinical benefits that far exceeded what was expected. In this short review we describe the physiologic effects of SGLT2 inhibitors and discuss the clinical benefits demonstrated to date.
PHYSIOLOGY OF SGLT2 COTRANSPORTERS AND RENAL GLUCOSE HANDLING
SGLT2 cotransporters are part of a large family of symporters responsible for facilitated transport of different solutes, aided by a positive sodium gradient (1,2). There are two main sodium-glucose cotransporters in the body: SGLT1 and SGLT2. SGLT2 cotransporters are almost exclusively found in renal tissue, whereas SGLT1 are mostly found in the small intestine, heart, and skeletal muscle, aside from the kidney (Table 1) (3-5). In the kidneys, SGLT2 and SGLT1 handle sodium and glucose reabsorption in the proximal tubules of the nephron. Their physiological function is to reabsorb 100% of filtered glucose, avoiding energy loss through glucosuria. SGLT2 cotransporters are found in the brush border of the renal tubular cells in the first segments of the proximal tubules (S1 and S2). They have a high transport capacity but low affinity for glucose and are responsible for the reabsorption of 90 to 97% of filtered glucose. The remaining 3-10% of filtered glucose is absorbed by the high-affinity, low-capacity SGLT1 cotransporter, present in the S3 segment of the proximal tubule. Glucose exits these tubular cells back into the circulation through the GLUT2 (for cells with SGLT2) and GLUT1 (for cells with SGLT1) transporters in the basolateral membrane. This unidirectional transport of glucose and sodium is coupled to, and maintained by, the Na-K-ATPase pump in the basolateral membrane.
The coupled work between the high-capacity/low-affinity SGLT2 and low-capacity/high-affinity SGLT1 cotransporters handles the full load of filtered glucose. This way, in a normal physiological setting, glucose reabsorption by the proximal tubules is adjusted to the variations in serum glucose concentrations. Total glucose reabsorption is directly proportional to the amount of filtered glucose. This reabsorption capacity has a natural limit (TmaxG) that is reached when filtered glucose approximates 350 mg/min/1.73 m², equivalent to between 180 and 200 mg/dL of glycemia. Glucosuria develops when hyperglycemia exceeds this TmaxG. In chronic hyperglycemia of diabetes, the kidney shifts the TmaxG to higher glucose levels, around 240 mg/dL (6). The proximal tubules increase the number of SGLT2 cotransporters to make up for the increase in luminal glucose flow (7). This increase in SGLT2 cotransporters comes at a cost of energy expenditure through the basolateral Na-K-ATPase, and is thought to be central to the pathophysiology of diabetic kidney disease (8).
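The threshold behaviour described above can be captured in a simple arithmetic model: urinary glucose appears only once the filtered load (GFR × plasma glucose) exceeds TmaxG. A minimal sketch follows; it ignores splay, which is why the real glucosuria threshold of 180-200 mg/dL is lower than this idealized model would predict, and the default GFR is an illustrative value:

```python
def urinary_glucose_excretion(plasma_glucose_mg_dl, gfr_ml_min=100.0,
                              tmax_mg_min=350.0):
    """Simple threshold model of renal glucose handling (ignores splay).
    Filtered load = GFR x plasma glucose; reabsorption is capped at TmaxG;
    anything above TmaxG is excreted in the urine (mg/min)."""
    filtered = gfr_ml_min * plasma_glucose_mg_dl / 100.0  # mg/min (mg/dL -> mg/mL)
    reabsorbed = min(filtered, tmax_mg_min)
    return filtered - reabsorbed

# Normoglycemia: filtered load 100 mg/min, well under TmaxG -> no glucosuria.
print(urinary_glucose_excretion(100))   # 0.0
# Marked hyperglycemia: filtered load 400 mg/min -> 50 mg/min excreted.
print(urinary_glucose_excretion(400))   # 50.0
```

Raising `tmax_mg_min` in this model reproduces the diabetic shift of the glucosuria threshold toward higher glycemia described above.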
SGLT2 COTRANSPORTER INHIBITORS MECHANISMS OF ACTION
The first SGLT2 inhibitor was phlorizin, a naturally occurring phenolic glycoside derived from the root bark of the apple tree (9). It was first isolated in the 19th century and was originally thought to have antipyretic properties. Further analysis found that phlorizin causes glucosuria, and it was thought to cause a diabetes-like state when administered to dogs, due to the presence of glucosuria, polyuria and weight loss.
With the characterization of renal glucose reabsorption in the proximal tubule in the 1960's, the cloning of the SGLT2 cotransporter in the 1990's (10), and further understanding of renal handling of glucose and the pharmacological effects of phlorizin, inhibition of renal glucose reabsorption was studied as a target for diabetes control. Preclinical studies in the 1980's showed that phlorizin improved insulin sensitivity in diabetic rat models without affecting insulin action in control rats (11).
Phlorizin has no oral bioavailability, so it can only be administered intravenously. The first orally available SGLT2 inhibitor, T-1095, was developed in the 1990's. It showed some improvement in HbA1c, reduction of microalbuminuria, and weight loss in rats (12,13). Unfortunately, T-1095 was not selective to SGLT2 and its action on intestinal SGLT1 caused significant gastrointestinal adverse effects and intolerance.
Following T-1095, at least seven different orally available SGLT2 inhibitors have been developed (2), three of which have been approved for use by the FDA: dapagliflozin, empagliflozin, and canagliflozin. The three of them are highly selective for SGLT2 inhibition over SGLT1.
Several clinical trials with empagliflozin (14-17), canagliflozin (18,19), and dapagliflozin (20-22) in the past few years have demonstrated impressive benefits from SGLT2 inhibition in high-cardiovascular-risk patients. They have shown significant reduction of cardiovascular and all-cause mortality, hospitalizations for heart failure, adverse cardiovascular events, and progression of albuminuria, when added to standard therapy in diabetic and non-diabetic kidney disease. Particularly interesting is the fact that the use of SGLT2 inhibitors has proven beneficial for kidney disease and heart failure despite the absence of diabetes as a central pathology. Understanding the direct and indirect physiological mechanisms and effects of SGLT2 inhibition is crucial to clarify why they offer a diversity of clinical benefits.
Glucosuria: Improvement in Glucose Control
Inhibition of the SGLT2 cotransporter causes glucosuria. By inhibiting the SGLT2 cotransporter, gliflozins avoid glucose reabsorption in the S1 and S2 segments of the proximal tubule. This causes a reduction in TmaxG to around 40-80 mg/dL (6) and a reduction in the renal threshold for glucosuria. To avoid significant energy loss through glucosuria, SGLT1 cotransporters compensate by increasing reabsorption to ∼40% (23). A preclinical study demonstrated this by showing that double SGLT1/SGLT2-knockout mice have significantly higher glucosuria than single SGLT2-knockout mice (24). Furthermore, glucose control from SGLT2 inhibition is not significantly associated with a higher risk of hypoglycemia (25).
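As a rough illustration of the caloric wasting involved, the toy model below assumes complete SGLT2 blockade with SGLT1 reabsorbing the ~40% compensatory fraction mentioned above. It deliberately ignores residual proximal reabsorption and therefore overestimates real-world glucosuria; the GFR and glycemia values are illustrative, not data from the cited trials:

```python
def excretion_with_sglt2i(plasma_glucose_mg_dl, gfr_ml_min=100.0,
                          sglt1_fraction=0.40):
    """Toy model of glucosuria under full SGLT2 blockade: SGLT1 in the S3
    segment compensates by reabsorbing roughly 40% of the filtered load
    (the ~40% figure is taken from the text); the rest is excreted."""
    filtered = gfr_ml_min * plasma_glucose_mg_dl / 100.0  # mg/min
    return filtered * (1.0 - sglt1_fraction)

# At 150 mg/dL and GFR 100 ml/min: 150 mg/min filtered, 90 mg/min excreted,
# i.e. roughly 130 g of glucose (~520 kcal) lost per day in this upper-bound model.
excreted = excretion_with_sglt2i(150)
print(f"{excreted:.0f} mg/min, ~{excreted * 60 * 24 / 1000:.0f} g/day")
```

Even allowing for the overestimate, the arithmetic makes clear why glucosuria of this order translates into the weight loss and metabolic shift discussed below.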
Natriuresis: Improvement in Blood Pressure and Reversal of Tubuloglomerular Feedback Stimulation
Together with glucosuria, SGLT2 inhibition causes natriuresis that is associated with a negative salt and water balance (32). This reduction in plasma volume is evidenced by a drop of 3-6 mmHg in systolic and 1-1.5 mmHg in diastolic blood pressure (18,19). Increased natriuresis and sodium delivery to the distal nephron is central for renal protection, as it normalizes the tubuloglomerular feedback mechanism. Chronic hyperglycemia of diabetes induces a state of increased reabsorption in the proximal tubule by increasing SGLT2 cotransporter expression [and the TmaxG (6)]. This increased glucose and sodium reabsorption reduces the delivery of sodium to the juxtaglomerular apparatus, stimulating the tubuloglomerular feedback, which in turn causes dilation of the afferent arteriole trying to "normalize" distal sodium delivery. Dilation of the afferent arteriole increases intraglomerular pressure and causes hyperfiltration, characteristic of diabetic kidney disease. SGLT2 inhibition reverses this feedback loop by increasing sodium delivery to the juxtaglomerular apparatus, inhibiting the tubuloglomerular feedback and causing constriction of the afferent arteriole (6,33). The result is a reduction in intraglomerular pressure and improvement of hyperfiltration, which is reflected as an initial drop in glomerular filtration rate (GFR). This drop in GFR is reversible when SGLT2 inhibition is discontinued and is a response to hemodynamic changes. Although this initial GFR drop associated with SGLT2 inhibition may seem significant, its magnitude is limited in most clinical instances to 2-4 ml/min. Studies with long-term follow-up show that it is not continuous and is significantly less than the eGFR decline observed in the placebo groups.
Improvement in Albuminuria
Clinical trials on diabetic and non-diabetic patients with chronic kidney disease (CKD) (22,34) have demonstrated that SGLT2 inhibitors significantly reduce albuminuria. This effect is independent of and additive to the effect of RAAS blockade (15). The improvement in albuminuria is multifactorial, related to the vasoconstriction of the afferent arteriole, the subsequent reduction in intraglomerular pressure and hyperfiltration, as well as the improvement in systemic blood pressure.
Some studies have also suggested that podocytes benefit from SGLT2 inhibition, as they have SGLT2 cotransporters, and the use of dapagliflozin or empagliflozin reduces podocyte dysfunction and effacement (35,36) through normalization of insulin sensitivity and improvement in glucotoxicity. This would lead to an improvement in albuminuria (37).
Weight Loss and Lipid Metabolism Shift
The use of SGLT2 inhibitors induces weight loss of between 2 and 4 kg after 6-12 months of treatment (25, 26, 29, 38-41). Initial weight loss is related to volume contraction; subsequent loss is secondary to caloric wasting through glucosuria. ADA guidelines recommend SGLT2 inhibitors as initial antidiabetic therapy when weight loss is desired as part of the treatment (42).
SGLT2 inhibition and the subsequent glucosuria induce a state of relative glucose "deprivation," shifting energetic substrate use to lipids. This reduces cellular lipotoxicity and improves oxidative stress. It also favors an increase in ketone production; ketones appear to be a better energetic substrate for renal and myocardial cells.
Improvement in Proximal Tubular Work and Oxygen Consumption
As described previously, hyperglycemia of diabetes induces a state of glucose and sodium hyperreabsorption in the proximal tubule, activates tubuloglomerular feedback and causes hyperfiltration. This positive feedback loop increases cellular work due to increased Na-K-ATPase activity and induces proximal tubular hypertrophy. Aside from this, increased intracellular glucose in proximal tubular cells is diverted to nonglycolytic pathways, increasing advanced glycation end-products, affecting mitochondrial activity, and increasing oxidative stress. Inhibition of the tubuloglomerular feedback, hyperfiltration and increased glucose reabsorption reduces energy expenditure and oxygen consumption in proximal cells (8). Reducing serum and intracellular glucose levels reduces cellular glucotoxicity.
Improved Oxygen Delivery and Anemia
The improvement in proximal tubular cell work and reduction in energy expenditure described above reduces oxygen demand and increases cortical oxygen tension (8,43). Glucose subsequently delivered to the latter part of the proximal tubule is reabsorbed by the SGLT1 cotransporters, increasing energy expenditure and oxygen consumption in the renal outer medulla (44-46). The decrease in oxygen availability stimulates hypoxia-inducible factors HIF1 and HIF2 (47) and enhances the release of erythropoietin (48). This, together with a mild volume contraction, increases hemoglobin levels and favors oxygen delivery to different tissues. Clinical trials have shown improvement in hemoglobin levels in patients treated with SGLT2i (49). Dapagliflozin appears to suppress hepcidin and other iron-metabolism related proteins, helping improve erythropoiesis (50).
Other Possible Effects
Gliflozins appear to reduce inflammatory markers such as IL-6, TNF, IFNγ, NF-κβ, TLR-4, and TGF-β (51-54). They also appear to improve mitochondrial function (55) and reduce mesangial expansion and the number of myofibroblasts in myocardial tissue (56). Empagliflozin appears to reduce the IL-β inflammatory pathway in proximal tubular cells (57). These effects would reduce inflammation, fibrosis, and oxidative stress in myocardial and renal tissue. Nevertheless, all these changes appear to be secondary to the metabolic and hemodynamic effects of SGLT2 inhibition.
Adverse Effects Associated with SGLT2 Inhibition
The use of SGLT2 inhibitors is associated with adverse events that are rare and generally mild, yet should be considered. Volume contraction and osmotic diuresis are direct effects of their mechanism of action and may infrequently be of significant magnitude when the drug is initiated in geriatric patients and in those on diuretics. In some clinical trials, a particularly relevant yet infrequent adverse effect was euglycemic diabetic ketoacidosis, yet this was not present in CREDENCE and DAPA-CKD. Finally, genital mycotic infections are up to four times more frequent in patients using SGLT2 inhibitors. They are generally mild and easily treatable. Patients should be counseled to monitor signs and symptoms and to maintain adequate genital hygiene. Other adverse events such as bone fractures and amputations appeared to be associated with SGLT2 inhibition in the CANVAS trial, but these findings have not been replicated in other studies.
CLINICAL BENEFITS OF SGLT2 INHIBITION
Several clinical trials have demonstrated important cardiovascular and renal benefits of SGLT2 inhibition. The EMPA-REG OUTCOME (14) trial was the first study to demonstrate this on a large scale. Published in 2015, this double-blind, multicenter clinical trial was aimed at demonstrating cardiovascular safety of empagliflozin when added to standard therapy in high cardiovascular risk diabetic patients with an estimated glomerular filtration rate (eGFR) ≥30 ml/min/1.73 m². After a 3-year follow-up, empagliflozin not only proved to be safe, but reduced major cardiovascular events (cardiovascular death, myocardial infarction, or stroke) by 14%, all-cause mortality by 32%, cardiovascular death by 37%, and hospitalization due to heart failure by 35%. A subsequent analysis of secondary outcomes (15) confirmed that empagliflozin reduced incident or worsening diabetic nephropathy by 39%. Baseline eGFR declined in subjects on both the placebo and empagliflozin arms, yet this decline in eGFR stabilized after a few weeks in the empagliflozin group and reversed after empagliflozin was discontinued, showing that the change in eGFR is not related to kidney injury but to hemodynamic changes induced by the drug itself. These results were notable considering that renal outcomes were not the primary outcome of the trial and 80% of subjects were already under RAAS blockade as standard therapy for diabetic kidney disease.
Similarly, the CANVAS (18) trial, in 2017, demonstrated that canagliflozin added to standard therapy in high cardiovascular risk patients reduced major cardiovascular events by 14%, cardiovascular death by 13%, myocardial infarction and stroke by 14 and 10% respectively, and hospitalization for heart failure by 33%. Subjects on canagliflozin had a 40% lower risk of adverse renal outcomes (renal function decline, dialysis initiation or death from a renal cause), a 27% lower risk of worsening albuminuria and a 1.7-fold higher likelihood of improvement in albuminuria.
In 2019, the DECLARE-TIMI 58 (20) trial, which included over 17,000 patients followed for 4.2 years, showed that dapagliflozin reduced the risk of hospitalization for heart failure by 27% and of adverse renal outcomes by 27%. The large population of diabetic patients included in this trial represents the widest range of renal function in any of the cardiovascular outcome studies.
An interesting meta-analysis (58) of these trials shows that SGLT2 inhibitors significantly reduced the risk of kidney failure by 29%, end-stage kidney disease by 32% and acute kidney injury by 25%. These benefits were consistent across the different subgroups of GFR and albuminuria. Altogether, these studies demonstrate cardiovascular and renal benefit from SGLT2 inhibition in patients with diabetes and high cardiovascular risk, with and without established diabetic nephropathy. Another meta-analysis (59) of the EMPA-REG OUTCOME, CANVAS and DECLARE-TIMI 58 trials stratified the subjects (n = 34,322) according to eGFR and demonstrated that SGLT2 inhibitors had a better effect in reducing adverse renal outcomes (worsening renal function, ESKD or death from renal cause) when eGFR is between 30 and 60 ml/min/1.73 m².
The CREDENCE (19) trial, published in 2019, was the first designed to focus on a composite renal outcome that included end-stage kidney disease (ESKD) (dialysis requirement, kidney transplantation or eGFR of <15 ml/min/1.73 m²), doubling of serum creatinine, or renal or cardiovascular death. Similar to CANVAS and EMPA-REG OUTCOME, subjects were diabetic, yet patients in this study had eGFR between 30 and 90 ml/min, 60% of whom had to have an eGFR between 30 and 60 ml/min, and all subjects had a urinary albumin-creatinine ratio (UACR) between 300 and 5,000 mg/g and optimal renin angiotensin system (RAAS) inhibition. Canagliflozin was associated with a significantly lower risk of adverse renal and cardiovascular outcomes, and the results were so evident and encouraging that, after interim analysis, the trial was stopped prematurely after 2.6 years of follow-up, showing a 34% reduction in the primary composite outcome in the canagliflozin group. A secondary analysis (60) of patients in the CREDENCE trial demonstrated that subjects receiving canagliflozin had a lower risk of renal and cardiovascular outcomes even when starting treatment with eGFR between 30 and <45 ml/min/1.73 m².
In 2020, the DAPA-CKD (22) study was published. It included over 4,000 CKD patients, 68% with diabetes and 32% with CKD not related to diabetes, with an eGFR of 25-75 ml/min/1.73 m² and UACR of 200-5,000 mg/g, treated with dapagliflozin or placebo. Like CREDENCE, the study was stopped prematurely due to the clear benefit offered by dapagliflozin in both diabetic and non-diabetic patients with CKD. The primary renal outcome (sustained reduction of at least 50% of eGFR, ESKD, or renal or cardiovascular death) was significantly less frequent (HR: 0.61; 95% CI: 0.51-0.72; p < 0.001) in dapagliflozin-treated patients. Benefits were also independent of the baseline presence of cardiovascular disease (61). A prespecified subanalysis (62) of subjects with eGFR <30 ml/min/1.73 m² showed that dapagliflozin is safe and effective even at these lower eGFR levels. Similarly, the subgroup of patients with IgA nephropathy (63) treated with dapagliflozin had a lower risk of kidney disease progression, with a similar safety profile compared to placebo.
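Hazard ratios like these can be translated into an approximate number needed to treat (NNT) once a placebo-arm event rate is assumed. The sketch below uses the DAPA-CKD hazard ratio of 0.61 but an assumed, purely illustrative baseline risk of 14%, which is not a figure reported in the text:

```python
def nnt_from_hazard_ratio(baseline_risk, hazard_ratio):
    """Approximate number needed to treat from a hazard ratio, assuming
    proportional hazards: treated risk = 1 - (1 - baseline_risk)**HR."""
    treated_risk = 1.0 - (1.0 - baseline_risk) ** hazard_ratio
    arr = baseline_risk - treated_risk   # absolute risk reduction
    return 1.0 / arr

# DAPA-CKD primary outcome HR 0.61; the 14% baseline risk is an assumed
# illustrative placebo-arm event rate over the trial's follow-up.
print(f"NNT ≈ {nnt_from_hazard_ratio(0.14, 0.61):.0f}")
```

The point of the exercise is that a relative reduction of ~39% translates into a modest absolute NNT only when baseline risk is high, which is why these drugs were trialled in high-risk populations.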
Important studies have explored the effects of SGLT2 inhibition in patients with high cardiovascular risk, with or without kidney disease and independently of the presence of diabetes. The DAPA-HF (21) and the EMPEROR-Reduced (16) trials demonstrated that the use of either dapagliflozin or empagliflozin reduces cardiovascular death and worsening of heart failure in patients with heart failure and reduced ejection fraction, regardless of the presence of diabetes. More recently, the EMPEROR-Preserved (17) trial also demonstrated similar benefits from empagliflozin in patients with heart failure and preserved ejection fraction.
CONCLUSIONS
The positive results discussed above have been clearly striking and consistent, demonstrating a significant improvement in cardiovascular and renal outcomes by the different SGLT2 inhibitors, when added to optimized standard therapy that includes maximal RAAS inhibition.
The cascade of events induced by the inhibition of SGLT2 cotransporters has proven beneficial to reduce cardiovascular and renal outcomes, and death in patients with and without diabetes. The exact mechanisms of cardiovascular as well as renal benefits are probably related to multiple interplaying factors, but are not completely understood (64). These include a reduction in glycemia with subsequent improvement in insulin resistance, weight loss and reduced visceral fat. Correction of glycemia reduces direct glucotoxicity and has shown improvement in cellular function in proximal renal tubular cells as well as other tissues.
Although SGLT2 inhibition favors an improvement in HbA1c, the extent of this improvement is not enough to explain the significant clinical benefits observed in cardiorenal health. Similarly, hyperglycemia would not be a central pathophysiological issue in patients without diabetes. A possible energetic benefit is the shift to lipid metabolism, with a subsequent reduction in lipotoxicity, as well as an increase in ketone production. Central to the cardiovascular benefit is the diuretic and natriuretic effect of SGLT2 inhibition. The described improvement in volume status, sodium balance and blood pressure seem to be of relevance to both the cardiovascular as well as renal benefits. Reduction in albuminuria, inflammation and oxidative stress have also been implicated. In addition, for the renal component, the intrarenal hemodynamic mechanisms described seem to be key for the long-term improvement in the eGFR slope decline, in diabetic as well as non-diabetic patients.
AUTHOR CONTRIBUTIONS
JF-C and RC-R contributed equally to conception and initial discussions regarding the main subject matter, manuscript focus, and general outline. JF-C performed the initial literature review, drafted the first version of the manuscript, and was in charge of writing the final manuscript. RC-R oversaw the structure and logical sequencing of the manuscript, perfected the drafting, ensured that the information was up to date, and added sections to the initial draft. Both authors read and approved the final submitted version.
Partnership in care: Organic systems framework strategies for patients and care providers
The organic systems framework is a conceptual social sciences theoretical framework developed by renowned author Barry Oshry. Oshry outlines how we are often blind to the context we are in and our reactions to those conditions, which leads to certain experiences. This article emanates from the author’s reflections on bringing organic systems insights to groups and organizations worldwide and how such strategies in relational systems may apply to patients and care providers working together in partnership. As patients and care providers engage in such partnerships, they enter distinctly different contexts, each with unique challenges and opportunities. Written from a first-person perspective, the author moves beyond seeing the patient as a client in the healthcare system and into the possibilities of how patients and providers can work together across contexts to create and sustain meaningful care-based partnerships.
Introduction
This article represents an effort to mobilize knowledge through a structured process of integrated reflective practice. The substance of this article is personal, professional, and academic. In keeping with reflective practice, first-person active voice is used as an authentic means of expression. This article was written after I had facilitated a recent conference workshop for an extensive international organization network in the health and wellness space. During discussions related to a key strategy inherent in Oshry's 1 organic systems framework, I was given pause to consider how Oshry's strategies had affected me personally when a member of my immediate family was diagnosed with an incurable, life-threatening illness. In hindsight, we had unknowingly applied Oshry's systems strategies at first and, after that, intentionally. The outcome was, at least for now, highly successful, and it is with this awareness that practical strategies for health leaders and patients alike are shared.
Gibbs' reflective cycle 2 is an organizing structure for reflecting on events/situations to deepen our knowledge. In Gibbs' own words, "It is not enough just to do, and neither is it enough just to think. Nor is it enough simply to do and think. Learning from experience must involve links between the doing and the thinking." This article is organized using Gibbs' reflective practice framework. Gibbs' structure is that we first describe the event in detail, identify what our thoughts and feelings were in the moment, evaluate and analyze what happened, draw our conclusions, and integrate our emerging awareness into a plan for future action.
Step 1: Description of the event or situation

I was invited to facilitate a half-day social systems leadership workshop called the "Organization Workshop" based on Oshry's 30 years of research in developing the Organic Systems Framework (OSF). The workshop was facilitated in Canada at a national health and wellness industry-based conference with 176 chief executive officers and their executive teams from a geographically dispersed organization.
The workshop explored partnership in organization life and the context-based patterns that emerge with great regularity. 3 I modified an element of program design to create space for discussing system processes inherent in OSF. The discussion centred on four system processes: differentiation, homogenization, individuation, and integration. 4 Participants engaged in practice-based discussions of Oshry's three basic patterns of relationship that fall from the system processes. 4 One pattern in particular, that of "customer-provider," generated a brief but lively discussion of whether or not the pattern held up in the healthcare context and, more to the point, whether the patient could be viewed as a customer in that context.
When discussing "customer empowerment strategies" in systems, 3,5 I became aware that I was having a physiological reaction to the discussion. I was reminded of my family's experience in the health system and the dire consequences that could have ensued had we not employed Oshry's customer strategies. Some workshop participants expressed that the partnership strategies applied, and others did not. During this moment, I became aware of my lack of self-regulation, since I had a deep and personal awareness of Oshry's framework, both academically and experientially.
Stage 2: Feelings and thoughts (self-awareness)
Leading up to the workshop, I felt excited, full of energy and eager to facilitate discussions of system processes, the contexts within which they occur, and the various leadership strategies that would apply. I was well-rested, although weary from travel. I was energized, since this workshop was one of several elements in my Doctor of Social Science portfolio. I was also aware that I was missing my family and might feel vulnerable when considering the patient as a healthcare system customer. My family did not have a pleasant experience in our healthcare journey; the diagnosis was painful, and the care plan was convoluted and fragmented. More dissatisfaction would likely have ensued had we remained aloof from the delivery system. I was not aware of how this discussion might impact me in the moment, but I felt capable of working with whatever emerged.
Workshop participants seemed open-minded and willing to engage in experiential learning. Participants were assigned various roles in a learning activity designed to highlight the organic nature of whole systems. Some were very happy with their role, some were concerned others might judge them, and most were feeling the similarity to daily life.
Three years later, I am still shaken by how the "patient as customer" discussion caused an emotional reaction in me. In retrospect, this was due to a feeling of anger my family held at a time of significant vulnerability: that the care system was not working as it should. The consequences to my family could have been devastating. The system itself was overly differentiated while simultaneously lacking in integration. To this day, my sense of anger and indignation is unresolved. However, I have taken steps to help the local care system align itself with the notion of patient- and family-centred care that it espouses to practice. In the moment, however, I felt a visceral reaction to the discussion; I thought that if I spoke, I would become physically ill. This was my cue to move on.
Stage 3: Evaluation
In this phase, Gibbs 2 asks us to reflect on what was positive about the situation: what went well and what did not. While in the role of facilitator of such workshops, my feelings and personal thought processes are generally not expressed. Evaluations showed that participants greatly appreciated the experiential learning design, the relevant discussions, and the applicability of the leadership strategies inherent in the framework. Only I, as facilitator, was aware of the design changes as we collectively explored robust strategies for staying focused on partnership in care.
I did not blindly enter into the discussion of patient-as-customer strategies. I was placing the notion of "patient" into what Oshry refers to as the "bottom" condition, which is characterized by vulnerability to the direction that "they" provide. 1,6,7 In this instance, the "they" were essentially any element of the system of care that made decisions that impacted my family. What I did not know, however, was the level of emotionality associated with admitting vulnerability as a patient or family member. I thought I could talk about the real-life issue without experiencing it emotionally. I was wrong; my emotions determined what I could or could not say and discuss.
Stage 4: Analysis (what sense can I make of this situation?)
I was aware that the view of the patient as customer is not shared among all professions in the healthcare system, nor is it shared internally within the various disciplines themselves. Numerous clinical blog sites, for example, 8 rail against the notion of "patient as customer," citing seemingly positive rationale. However, the unit of analysis remains at the level of the patient. While not entirely commonplace, many studies describe positive outcomes associated with patient involvement in decision-making, yet "very few publications refer to costs or negative impact of engagement, compared with positive findings." 9 In terms of the unit of analysis, authors such as Edmonstone 10 speak of more extensive, different models of care that involve addressing the whole health and social care system and not the relational social system. The literature focuses predominantly on one-on-one interactions between patients and care providers. It focuses on the interpersonal elements of contact with patients: content, style and care coordination as determinants of productive partnerships. 11 Some research offers hope, finding emerging alternate models of care that "foster less asymmetrical power relationships between caregivers and patients and a greater consideration of patients' lived experiences." 12 Marchand et al. found that patients yielded positive outcomes when "opening [oneself] up, being a part of care, meeting me where I am," 13 which further reinforces a focus on the micro, dyadic relationship but not on the larger social system. Other comprehensive reviews also focus on the individual as the unit of analysis and seek to identify quality indicators at the level of care outcomes, but do not offer comment on social system strategies where the patient resides and receives care. 14 Conversely, authors such as Pomey et al. seem to focus on the legitimacy of involvement when they state, "Patient participation legitimacy is based on the recognition of patients' experiential knowledge."
15 While this statement is on point, it also implies an asymmetrical power relationship. The unit of analysis can be expanded to include team-based care. However, patient comments often pertain to "the perceived purpose of teams, perceptions about the structure of a team, team-based communication, the role of patients, delivery of care." 16 Asymmetrical power seems to be at odds with Oshry's notion of partnership. In Oshry's view, "each part of the system has its unique potential contribution to Total System Power. These potentials are often not realized […]." 7 This view suggests a balanced power dynamic, not an asymmetrical one, is valuable and wise. It opens up a new means to empower patients and providers within virtually all aspects of Canada's health system(s). Rider et al. also concurred that, to make collaboration in larger systems work effectively, interveners should adopt a "whole systems view." 17 Researchers have also found three central challenges to system-level collaboration: defining responsibilities and expectations, negotiating priorities, and establishing and strengthening trust and respect. 18 Three basic relationship patterns emerge in organic social systems: customer-provider, end-middle-end, and top-bottom.
6 When considered hierarchically, the elements of these patterns are top, middle, bottom, and customer. Tops exist in a context of accountability and complexity as they shape the organization in its environment and enact its functions. Middles exist in a context of "tearing," with multiple competing demands, as they seek to integrate strategy and operations. Bottoms exist in a condition of vulnerability, where "others" make decisions that affect them in major and minor ways, as they deliver frontline products and services. Customers exist in a context of neglect: a world of promises made and promises broken, in their role as validators of the organization's existence. What is unique about OSF is that none of the contexts identified in the hierarchical relationships is personal; they are systemic patterns that emerge as a function of our interactions with others.
Oshry's notion of tops, middles, bottoms and customers can also be taken out of a hierarchical context and viewed as sets of "conditions." For example, a physician can be "top" in the care relationship. Yet they can also concurrently be in "bottom" condition, as they are vulnerable to the direction provided by others, whether professionally or organizationally. Senior healthcare leaders can also be "middle" when caught between other people's issues and concerns. Any person can also be in a condition of "customer" whenever the service or product we receive doesn't quite meet our expectations. Oshry 6,18 also says that we are often "blind" to our reflex responses in systems, which can lead us to familiar, disempowering scenarios. Experiences, then, feel like the way things really are, and we do not see our role in contributing to them. This is at the heart of Oshry's "dance of the blind reflex." 6 Oshry suggests we must take a stand about who we are and how we will lead in these conditions. If we take, for example, Oshry's view of customers, the stand customers can take to stay focused on partnership, and not reflexively remain aloof from the system and hold it responsible for delivery, is to be a customer who gets involved in the system and "help the system be more responsive to us." 6 To do so requires a different set of strategies than outlined above: strategies that are systemic and not at all personal.
Seeing patients as customers through the lens of Oshry's framework suggests they should:

• Contract with their provider to build a relationship/partnership.
• Find out how the delivery system works.
• Be clear about standards and expectations.
• Stay close to the provider/producer.
• Get into the process early as a partner and not late as a judge. 1,6

For patients, these strategies are often tricky in asymmetrical power dynamics in which the patient is at the receiving end of treatment, a mental model that still permeates much of our health system and societal views on the physician-patient dynamic. Patients as active partners in care are at the heart of relationship-centred care, 13 making these strategies particularly salient. Participants were discussing customer/patient empowerment strategies when my moment of insight occurred. I was reminded of the possible downstream consequences of not enacting Oshry's strategies as my family engaged the medical system. The emotional impact resulted from the inferences and connections I was making as the lead facilitator, not from something that the participants said or did. I was blind to my indignation and anger, yet alive to the feeling of being helpless and "done to" by the healthcare system. However, it was because we applied Oshry's strategies that we are where we are today. Patients and their families can use these social system strategies to remain focused on partnership.
There are things senior leaders can do to help facilitate partnerships, too. As bona fide "tops" in the system, I recommend they accomplish the following:

• Develop a vision for care that includes patients as partners.
• Invest in their internal relationships with staff.
• Involve care providers and patients in decisions that impact them.
• Invest in the development of staff to learn and stay focused on partnership.
• Develop structures that reinforce partnership.
• Challenge mental models that relegate the patient to a hierarchical relationship.
Mental models refer to models or abstractions we create in our brain that help us understand and approximate what should happen in the real world. Consider for a moment how a patient or family seeking treatment believes our healthcare system should work. There are a variety of lenses through which we conceptualize the healthcare system. 19 If we seek clarity and share our mental models, we can avoid falling out of partnership and becoming frustrated or confused. Mental models help us navigate the world, make sense of it, and take certain actions. 20 In my family's case, if our shared mental model were to "listen to the care providers and follow their direction," we would remain helpless to whatever the system did next. We would have remained in the "bottom" condition, where "others" decided for us. Asserting our expectations was a strategy to help the social system stay focused on partnership.
Stage 6: Action plan
When I find myself in a similar conversation while delivering such workshops, my strategy will be twofold. First, I will consciously prepare for the workshop and consider examples that are less personal to me and less likely to resonate emotionally. However, this is not to say that I will abandon my personal connection to the workshop. Second, I need to process my frustration with what could have happened had we not enacted the strategies. Being more aware of the multiplicity of roles I occupy in social systems, sometimes as top, other times as middle, bottom, and customer, will forever be a work in progress. To operationalize Oshry's strategies, I should heed his words: "Stuff happens; you can take it personally or treat it systemically." 6
Conclusion
Oshry's organic systems framework is explanatory and offers strategies and insights about what care providers and patients may do to remain focused on partnership. Although the examples presented in this article were drawn from personal experiences in the healthcare system at a clinical coordination level, Oshry's insights may also be applied to any scenario where the parties are jointly committed to the success of whatever project, process, or endeavour they are in. 7
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Stage 5: Conclusion (synthesis and recommendations)

So, what happened in the workshop, and what can I do about it?
Counting generalized Schröder paths

A Schröder path is a lattice path from $(0,0)$ to $(2n,0)$ with steps $(1,1)$, $(1,-1)$ and $(2,0)$ that never goes below the $x$-axis. A small Schröder path is a Schröder path with no $(2,0)$ steps on the $x$-axis. In this paper, a 3-variable generating function $R_L(x,y,z)$ is given for Schröder paths and small Schröder paths respectively. As corollaries, we obtain the generating functions for several kinds of generalized Schröder paths counted according to the order in a unified way.
Introduction
In this paper, we will consider the following sets of steps for lattice paths, where $(r, r)$, $(r, -r)$ and $(2r, 0)$ are called up steps, down steps and horizontal steps respectively.
For a given set $S$ of steps, let $L_S(n)$ denote the set of lattice paths from $(0, 0)$ to $(2n, 0)$ with steps in $S$ that never go below the $x$-axis. Let $A_S(n)$ denote the subset of $L_S(n)$ whose member paths have no horizontal steps on the $x$-axis. We write $L_S = \bigcup_{n \geq 1} L_S(n)$ and $A_S = \bigcup_{n \geq 1} A_S(n)$. Then $L_{S_1}(n)$, $L_{S_3}(n)$ and $A_{S_3}(n)$ are the sets of Dyck paths, Schröder paths and small Schröder paths of order $n$ respectively.
It is well known that $|L_{S_1}(n)|$ is the $n$th Catalan number (A000108 in [8]), $|L_{S_3}(n)|$ is the $n$th large Schröder number (A006318), and $|A_{S_3}(n)|$ is the $n$th small Schröder number (A001003). Define a peak in a Dyck path to be a vertex between an up step and a down step. Then the number of Dyck paths of order $n$ with $k$ peaks is the well-known Narayana number (A001263) $N(n, k) = \frac{1}{n}\binom{n}{k}\binom{n}{k-1}$. The $n$th Narayana polynomial is defined as $N_n(y) = \sum_{1 \leq k \leq n} N(n, k) y^k$ for $n \geq 1$, with $N_0(y) = 1$. In [11], Sulanke gave the generating function for the Narayana polynomials. Let $P_{S_i}(x)$ and $Q_{S_i}(x)$ denote the generating functions for $|L_{S_i}(n)|$ and $|A_{S_i}(n)|$ respectively. As one type of generalization of Dyck paths, $L_{S_2}(n)$ has been studied by several authors, and the generating function $P_{S_2}(x)$ is given in [7] and [1] with different methods. Moreover, Coker [1] and Sulanke [11] expressed $|L_{S_2}(n)|$ as a combination of Narayana numbers, and Woan [12] gave a three-term recurrence for $|L_{S_2}(n)|$. For other types of generalization of Dyck paths, readers can refer to [6] and [9]. Compared to the above results on generalizations of Dyck paths, generalizations of Schröder paths were rarely studied until Kung and de Mier [7] gave the generating functions $P_{S_i}(x)$ ($4 \leq i \leq 6$). Later, Huh and Park [5] expressed $|A_{S_4}(n)|$ as a combination of Narayana numbers.
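These counting sequences are easy to check numerically. The following sketch is our own illustration, not code from the paper; it computes Catalan, Narayana and large/small Schröder numbers, using the standard three-term recurrence for A006318.

```python
from math import comb

def catalan(n):
    # |L_{S_1}(n)|: Catalan numbers, OEIS A000108
    return comb(2 * n, n) // (n + 1)

def narayana(n, k):
    # N(n, k): Dyck paths of order n with k peaks, OEIS A001263
    return comb(n, k) * comb(n, k - 1) // n

def large_schroder(n):
    # |L_{S_3}(n)|: large Schroder numbers, OEIS A006318, via the
    # standard recurrence (m+1) r_m = 3(2m-1) r_{m-1} - (m-2) r_{m-2}
    r = [1, 2]
    for m in range(2, n + 1):
        r.append((3 * (2 * m - 1) * r[m - 1] - (m - 2) * r[m - 2]) // (m + 1))
    return r[n]

def small_schroder(n):
    # |A_{S_3}(n)|: small Schroder numbers, OEIS A001003;
    # for n >= 1 they are half the large Schroder numbers
    return 1 if n == 0 else large_schroder(n) // 2
```

For instance, the Narayana numbers $N(n, k)$ summed over $k$ recover the $n$th Catalan number, and the small Schröder numbers are half the large ones for $n \geq 1$.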
Note that we can also obtain Equation (1.2) by considering the number of runs of Dyck paths. Here a run in a lattice path is defined to be a vertex between two consecutive steps of the same kind. Let $R(n, k, S_1)$ denote the number of lattice paths in $L_{S_1}(n)$ with $k$ runs. Since a Dyck path of order $n$ with $k$ peaks has $2n - 2k$ runs, we obtain from Equation (1.1) that
$$1 + \sum_{n,k \geq 1} R(n, k, S_1)\, x^n y^k = 1 + \sum_{n,k \geq 1} N(n, k)\, x^n y^{2n-2k}.$$
Motivated by the above observation, we study the number of runs for Schröder paths according to the following two types: a run is diagonal if it is the joint of two up steps or two down steps, and a run is horizontal if it is the joint of two horizontal steps.
For a Schröder path $P$, let $\mathrm{dr}(P)$, $\mathrm{hr}(P)$ and $\mathrm{order}(P)$ denote the number of diagonal runs, the number of horizontal runs and the order of $P$ respectively. Then the generating function $R_L(x, y, z)$ is defined for $L \subseteq L_{S_3}$ as
$$R_L(x, y, z) = 1 + \sum_{P \in L} x^{\mathrm{order}(P)} y^{\mathrm{dr}(P)} z^{\mathrm{hr}(P)}.$$
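The run statistics just defined can be tabulated by brute force for small orders. The sketch below is again our own illustration, not the paper's code; it enumerates Schröder paths as words in $U$, $D$, $H$ and counts diagonal and horizontal runs.

```python
def schroder_paths(n):
    """Enumerate Schroder paths of order n as strings over U, D, H
    (steps (1,1), (1,-1), (2,0)) that never go below the x-axis."""
    paths = []

    def extend(word, x, y):
        if x == 2 * n:
            if y == 0:
                paths.append(word)
            return
        if y + 1 <= 2 * n - (x + 1):  # enough width left to return to the axis
            extend(word + "U", x + 1, y + 1)
        if y > 0:
            extend(word + "D", x + 1, y - 1)
        if x + 2 <= 2 * n:
            extend(word + "H", x + 2, y)

    extend("", 0, 0)
    return paths

def run_counts(word):
    # diagonal runs: joints of two up or two down steps;
    # horizontal runs: joints of two H steps
    dr = sum(1 for a, b in zip(word, word[1:]) if a == b and a in "UD")
    hr = sum(1 for a, b in zip(word, word[1:]) if a == b == "H")
    return dr, hr

def touches_axis_with_h(word):
    # True if the path has an H step lying on the x-axis
    y = 0
    for s in word:
        if s == "H" and y == 0:
            return True
        y += {"U": 1, "D": -1, "H": 0}[s]
    return False
```

Enumerating order 3, for instance, yields 22 paths, of which the 11 avoiding $H$ steps on the axis are the small Schröder paths.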
In this paper, we give $R_L(x, y, z)$ for $L = L_{S_3}$ and $L = A_{S_3}$. As corollaries, we obtain the generating functions $P_{S_i}(x)$ and $Q_{S_i}(x)$ for $4 \leq i \leq 6$ in a unified way.
The case for Schröder paths
In the following, we use $U$, $D$ and $H$ to denote the steps $(1, 1)$, $(1, -1)$ and $(2, 0)$ respectively. For a lattice path $P$ and a step $s$, the insertion of $s$ at a vertex $v$ of $P$ is defined as follows: decompose $P$ into two parts at $v$ as $P = P_1 P_2$, where $P_i$ may be empty. Then we connect the initial vertex of $s$ to the end vertex of $P_1$, and connect the end vertex of $s$ to the initial vertex of $P_2$. See Figure 1 for an example.
Given $P \in L_{S_1}(n)$ with $k$ peaks, let $V$ denote the set of vertices of $P$ other than runs. We then insert $m$ $H$ steps into $P$ as follows:

(1) We first choose $i$ vertices from $V$, and insert an $H$ step at each chosen vertex. In this step, we have $\binom{2k+1}{i}$ choices, and each insertion has no effect on the number of runs.

(2) For the lattice path obtained after step (1), we choose $j$ vertices from its runs, and insert an $H$ step at each chosen vertex. In this step, we have $\binom{2n-2k}{j}$ choices, and the number of diagonal runs will decrease by $j$ after insertion.

(3) For the lattice path obtained after step (2), we insert the remaining $m - i - j$ $H$ steps immediately after the $i + j$ $H$ steps that have been inserted. In this step, we have $\binom{m-1}{i+j-1}$ choices (distributing $m - i - j$ indistinguishable $H$ steps among the $i + j$ inserted ones), and the number of horizontal runs will increase by $m - i - j$ after insertion.
Let $\mathrm{Ins}_m(P)$ denote the set of all Schröder paths obtained from $P$ by the above insertion. On the other hand, let $HL_{S_3}$ denote the subset of $L_{S_3}$ whose member paths consist of $H$ steps only, and let $UL_{S_3}$ denote the subset of $L_{S_3}$ whose member paths have at least one $U$ step. It is obvious that each path of $UL_{S_3}$ can be obtained uniquely from a Dyck path by inserting some $H$ steps as above, so that $UL_{S_3} = \bigcup_{P} \bigcup_{m \geq 0} \mathrm{Ins}_m(P)$, where the first union runs over all Dyck paths $P$.
Summarizing the above discussion, we then obtain the following result.
Since
$$R_{L_{S_3}}(x, y, z) = 1 + \sum_{P \in HL_{S_3}} x^{\mathrm{order}(P)} y^{\mathrm{dr}(P)} z^{\mathrm{hr}(P)} + \sum_{P \in UL_{S_3}} x^{\mathrm{order}(P)} y^{\mathrm{dr}(P)} z^{\mathrm{hr}(P)},$$
Theorem 2.1 is derived from Equation (1.1).
The generating functions $P_{S_i}(x)$ for $4 \leq i \leq 6$ were derived by Kung and de Mier [7]. Here we can obtain them as a direct corollary of the above result.
Proof. We use a bijection given by Huh and Park [5]. Let $\tilde{L}_{S_3}(n)$ denote the set of Schröder paths of order $n$ whose runs are colored in either black or white, and whose other vertices are colored in black only. For $P \in \tilde{L}_{S_3}(n)$, let $\varphi(P)$ denote the lattice path obtained from $P$ as follows: delete all white vertices of $P$, and then connect adjacent black vertices with line segments. See [5, Figure 8] for an example. It is obvious that $\varphi$ is a bijection from $\tilde{L}_{S_3}(n)$ to $L_{S_4}(n)$, which implies that $P_{S_4}(x) = R_{L_{S_3}}(x, 2, 2)$. Similarly, we can obtain $P_{S_5}(x)$ and $P_{S_6}(x)$ by setting the pair $(y, z)$ to be $(1, 2)$ and $(2, 1)$ in $R_{L_{S_3}}(x, y, z)$ respectively.

Using the techniques in [4, Chapter VI], Kung and de Mier [7] gave the asymptotic formula for $|L_{S_4}(n)|$. The asymptotic formulas for $|L_{S_5}(n)|$ and $|L_{S_6}(n)|$ can be obtained from Corollary 2.2 in a similar way, where the constants $\alpha_i$ and $\beta_i$ are defined as follows: (1) $\alpha_1$ is the root of the equation $f_1(x) = 1 - 12x + 16x^2 = 0$; (2) $\alpha_2 = 0.16243\cdots$ is the root of the equation $f_2(x) = 0$; and (3) $\alpha_3 = 0.09678\cdots$ is the root of the equation $f_3(x) = 0$.

Theorem 2.1 can also be used to study colored Schröder paths. For instance, let $a(n)$ denote the number of Schröder paths of order $n$ with their horizontal runs colored in one of three given colors. Then we obtain from Theorem 2.1 that $1 + \sum_{n \geq 1} a(n) x^n = R_{L_{S_3}}(x, 1, 3)$. The coefficients of this function appear as sequence A186338 in the OEIS, and are related to sequence A091866.
The case for small Schröder paths
A lattice path in $A_{S_3}$ is said to be primitive if it does not intersect the $x$-axis except at $(0, 0)$ and $(2n, 0)$. Let $PA_{S_3}$ denote the set of all primitive paths in $A_{S_3}$. Since every path in $A_{S_3}$ can be decomposed uniquely into a sequence of paths in $PA_{S_3}$, we have
$$R_{A_{S_3}}(x, y, z) = \frac{1}{1 - \overline{R}_{PA_{S_3}}(x, y, z)},$$
where we use $\overline{R}_L(x, y, z)$ to denote the function $R_L(x, y, z) - 1$ for a given set $L$ of lattice paths. We now consider the generating function $\overline{R}_{PA_{S_3}}(x, y, z)$. Note that the set $UL_{S_3}$ can be partitioned as $UL_{S_3} = \bigcup_{i=1}^{4} U_i$, where

(1) $U_1 = \{P \mid P$ starts with $U$ and ends with $D\}$;
(2) $U_2 = \{P \mid P$ starts with $H$ and ends with $D\}$;
(3) $U_3 = \{P \mid P$ starts with $U$ and ends with $H\}$;
(4) $U_4 = \{P \mid P$ starts and ends with $H\}$.
As shown in Section 2, each path $P \in UL_{S_3}$ can be obtained uniquely from a Dyck path $P'$ by inserting some $H$ steps, and we have the following fact: (1) if it is not allowed to insert at either the initial vertex or the end vertex of $P'$, then $P \in U_1$; (2) if it is required to insert at the initial vertex of $P'$, and not allowed to insert at the end vertex, then $P \in U_2$; (3) if it is required to insert at the end vertex of $P'$, and not allowed to insert at the initial vertex, then $P \in U_3$; (4) if it is required to insert at both the initial vertex and the end vertex of $P'$, then $P \in U_4$.
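This four-way split can be checked numerically. In the sketch below (our own code; the class labels follow the definitions above), every Schröder path of order $n$ falls into exactly one of $U_1, \ldots, U_4$ or is the single pure horizontal path of that order.

```python
from collections import Counter

def schroder_words(n):
    # Enumerate Schroder paths of order n as words over U, D, H
    # (no pruning; fine for small n)
    out = []

    def go(w, x, y):
        if x == 2 * n:
            if y == 0:
                out.append(w)
            return
        go(w + "U", x + 1, y + 1)
        if y > 0:
            go(w + "D", x + 1, y - 1)
        if x + 2 <= 2 * n:
            go(w + "H", x + 2, y)

    go("", 0, 0)
    return out

def classify(w):
    # Paths never start with D or end with U (they would leave the
    # half-plane), so first/last steps determine the class.
    if "U" not in w:
        return "H-only"
    return {("U", "D"): "U1", ("H", "D"): "U2",
            ("U", "H"): "U3", ("H", "H"): "U4"}[(w[0], w[-1])]
```

For order 2, for example, the 6 Schröder paths split as three in $U_1$, one each in $U_2$ and $U_3$, none in $U_4$, plus the all-$H$ path.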
Based on the above observation, we can obtain the following result after some calculation.
Proof. The proof of the above result is almost the same as that of Theorem 2.1. Here we take $\overline{R}_{U_2}(x, y, z)$ as an example; the expression follows directly from the definition of $U_2$. Now we can obtain $R_{A_{S_3}}(x, y, z)$ as a direct corollary of the above result.

Expanding the above functions, we find that the coefficients of $Q_{S_4}(x)$ appear as sequence A078009 in the OEIS. The generating functions $Q_{S_5}(x)$ and $Q_{S_6}(x)$, to our knowledge, have not been studied before. From Corollary 3.3, we can obtain asymptotic formulas in which $\alpha_i$ and $f_i$ are the same as those in Section 2, and $\gamma_i$ is defined correspondingly. It is well known (see, for example, [3, 10]) that $|L_{S_3}(n)| = 2|A_{S_3}(n)|$. Comparing the asymptotic formulas of $|L_{S_i}(n)|$ and $|A_{S_i}(n)|$ for $4 \leq i \leq 6$, we have the following analogue. By giving a bijection between 5-colored Dyck paths and $A_{S_4}(n)$, Huh and Park [5] gave an expression for $|A_{S_4}(n)|$, which we can also prove here with generating functions. Then Corollary 3.5 is derived from Corollary 3.3.
Phylodynamics and evolutionary epidemiology of African swine fever p72-CVR genes in Eurasia and Africa
African swine fever (ASF) is a complex infectious disease of swine that has devastating impacts on animal health and the world economy. Here, we investigated the evolutionary epidemiology of ASF virus (ASFV) in Eurasia and Africa using the concatenated gene sequences of the viral protein 72 and the central variable region of isolates collected between 1960 and 2015. We used Bayesian phylodynamic models to reconstruct the evolutionary history of the virus, to identify virus population demographics and to quantify dispersal patterns between host species. Results suggest that ASFV has exhibited a significantly high evolutionary rate and population growth through time since its divergence in the 18th century from East Africa, with no signs of decline until recent years. This increase corresponds to the growing pig trade activities between continents during the 19th century, and may be attributed to an evolutionary drift that resulted from either continuous circulation or maintenance of the virus within Africa and Eurasia. Furthermore, results implicate wild suids as the ancestral host species (root state posterior probability = 0.87) for ASFV in the early 1700s in Africa. Moreover, results indicate that the transmission cycle between wild suids and pigs is an important cycle for ASFV spread and maintenance in pig populations, while ticks are an important natural reservoir that can facilitate ASFV spread and maintenance in wild swine populations. We illustrate the prospects of phylodynamic methods for improving risk-based surveillance, supporting effective animal health policies, and strengthening epidemic preparedness in countries at high risk of ASFV incursion.
Introduction
African swine fever (ASF) is a complex infectious disease of swine classified as a notifiable infection by the World Organisation for Animal Health. Neither vaccine nor treatment is available against this disease. Therefore, ASF control and eradication are based on rapid field recognition, isolation of suspected cases and diagnosis, followed by implementation of strict sanitary measures [1, 2]. Thus, the presence of ASF has devastating impacts on animal health and the world economy due to stamping-out policies and trade restrictions at national and international levels [1].
ASF is caused by a complex, large, enveloped DNA virus currently classified as the only member of the family Asfarviridae [3]. The viral genome consists of a central conserved region and two variable ends. Therefore, the length of the ASFV DNA molecule may range between 170 and 193 kilobases (kb) depending on the isolate [4]. ASFV genotyping is usually based on partial sequence analysis of the B646L gene encoding the viral protein 72 (vp72). Thus far, 24 genotypes have been identified [5-7]. Sequence analysis of tandem repeats in the central variable region (CVR) within the B602L gene [8], the intergenic region between the I73R and I329L genes [9] and the EP402R and MGF505-2R regions [10, 11] permits discrimination between closely related ASFV isolates.
This disease was described for the first time in Kenya in 1921 [12]. However, ASFV has not been confined to the African continent since its discovery. Several introductions from Africa into Europe have been described so far [1]. The first incursion, from Angola to Lisbon, took place in 1957. Three years later, in 1960, ASF reached Lisbon for the second time, and from there it spread to other European, Caribbean and South American countries. The disease was successfully eradicated from all these territories except the Italian island of Sardinia, where it has remained endemic since 1978 [13]. In 2007, a new incursion of ASFV from Southeast Africa into Georgia was observed. Since then, ASF has spread northward and westward, affecting the whole Caucasus region (2007), the Russian Federation (2007), Ukraine (2012), Belarus (2013), the Baltic countries (2014), Poland (2014), Moldova (2016), the Czech Republic and Romania (2017) [14]. To date, ASFV has been described in more than 28 sub-Saharan countries [14, 15] as well as in the previously mentioned European states.
In the current and past affected areas, different transmission models affecting domestic pigs, wild boar, wild African suids (warthogs, bush pigs, and giant forest hogs), and soft ticks of the genus Ornithodoros have been identified [1, 16]. Ornithodoros ticks act as biological vectors and reservoirs of ASFV [17], able to transmit the virus even after years of absence of viraemic hosts [18, 19]. In the southern and eastern parts of the African continent, the most complex transmission cycle involves wild African suids, domestic pigs and O. moubata complex ticks [1, 20]. On the Iberian Peninsula, a similar model was identified, where wild boar, outdoor domestic pigs, and O. erraticus ticks cohabited [17]. In addition to this, a domestic pig-tick cycle without the involvement of wild suids has also been reported in some African areas, such as Mozambique and Malawi, and in the southwest of the Iberian Peninsula [19, 21-23]. In Eastern Europe, Sardinia and West Africa, by contrast, the transmission cycle seems to involve infected domestic pigs and/or wild boar without the involvement of ticks [20, 24].
Despite efforts made to control and eradicate the disease in Europe, ASF continues to affect domestic pigs and wild boar [24-26], which poses a threat to other pig producers and ASF-free wild populations. Since the European Union (EU) Council Decision of 1990 (EU Official Bulletin no. L 116, 1990), ASF surveillance activities have undergone substantial developments within the EU member states. Such surveillance activities have included targeted serological and virological testing of pigs and wild boar in high-risk areas, risk assessments, epidemiological investigations, and pre-movement tests [27]. Furthermore, the EU Council decisions of 2002 and 2003 (Council Directive 2002/60/EC and Commission Decision 2003/422/EC) advocated for the need to assess the tick-wild boar cycle and acknowledged its importance as a key component of ASF surveillance. However, such decisions have further complicated ASF intervention measures, due to the lack of sufficient scientific data about the tick-wild boar interface [28].
Molecular surveillance of ASFV is an integral part of disease intervention activities in Europe and Africa. Most published studies have used molecular characterization of the vp72 and/or CVR gene segments for genotyping, for subgrouping closely related isolates [11, 29-31], and for investigating the molecular epidemiology of the virus using traditional phylogenetic methods, such as neighbour-joining or maximum likelihood techniques [9, 32-34]. Furthermore, such studies have drawn conclusions on the evolutionary origins of isolates by examining the phylogenetic, spatial and temporal characters in entirely separate analytical settings [33-36]. These studies ignored important uncertainties and parameters associated with the estimates of the phylogenetic relationships and spatial and temporal factors [37, 38]. Subsequently, past methodological approaches used to study the virus ignored the fact that the evolutionary and epidemiological characters of pathogens like ASFV occur on approximately the same time-scale. Therefore, they must be considered in an integrated analytical setting to be properly investigated, to prevent biased conclusions, and to improve surveillance-related decision making [39].
In the past decade, the field of phylodynamics has become well established for investigating the evolutionary epidemiology of animal and human pathogens [40][41][42][43][44], as it aims to model the joint evolutionary and epidemiological characteristics of a virus using a unified Bayesian statistical framework [45]. This approach treats evolutionary, host-species, spatial, and temporal parameters as random variables and assigns them prior probability distributions in order to infer their corresponding posterior probability distributions [45]. This property provides a powerful molecular tool that accounts for the uncertainties in the phylogeny, viral population demographics, and spatiotemporal dispersal between geographical regions and host species, allowing long-standing questions to be addressed [38,46].
To our knowledge, only Michaud et al. 2013 [47] have advocated for the use of phylodynamic methods for molecular dating and genotyping of ASFVs. The objective of this study was to investigate the evolutionary epidemiology of ASFV in Eurasia and Africa using combined vp72 and CVR gene sequences collected between 1960 and 2015. We used several Bayesian phylodynamic models to reconstruct the evolutionary history of the virus and to identify its population demographics and other relevant evolutionary parameters. For the first time, we used discrete-trait phylodynamic models to quantify viral population demographics through time and dispersal patterns between and within continents, as well as between host species. Our study illustrates the utility of Bayesian phylodynamic methods in improving ASFV molecular surveillance and decision making related to its intervention measures.
Sequence data
Partial sequences of the B646L gene encoding vp72 and of the tandem repeats in the CVR within the B602L gene were obtained from field isolates circulating between June 1960 and July 2015 in Eurasia and Africa. All sequences used in this study are available on request at the European Union Reference Laboratory for ASF (at http://asf-referencelab.info/asf/en/) (S1 Table). The sequence dataset contained information including isolate name, country of origin, collection date, and affected host. Sequencing was performed according to described procedures [48] at both national and the EU reference laboratories for ASF. The data consisted of a total of 665 sequences collected from 14 Eurasian, 8 East African and 11 West and Central African countries between June 1960 and July 2015; these isolates had available sequences of both segments (S2 Table). The date of collection for each sequence was converted into fractional years to estimate divergence times. However, 12.5% of the sequence dataset only had year-specific information; therefore, the date of collection was specified as the mid-point of the corresponding year.
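The conversion of sampling dates to fractional (decimal) years, with year-only records placed at mid-year, can be sketched as follows. This is an illustrative helper, not the code used in the study:

```python
from datetime import date

def to_fractional_year(d):
    """Convert a calendar date to a fractional (decimal) year."""
    start = date(d.year, 1, 1)
    end = date(d.year + 1, 1, 1)
    return d.year + (d - start).days / (end - start).days

def year_midpoint(year):
    """For records with year-only precision, use the mid-point of that year."""
    return year + 0.5

print(round(to_fractional_year(date(2015, 7, 2)), 3))  # 2015.499
print(year_midpoint(1960))                             # 1960.5
```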
AliView version 1.18 [49] was used to concatenate vp72 and CVR into one gene segment (vp72-CVR) for the subsequent analyses. MUSCLE version 3.8.3 was used to align the concatenated sequences, and the resulting alignment was assessed manually by translating the reading frame into amino acids. The maximum likelihood (ML) phylogeny and tree topology for the sequence dataset were estimated under the GTR+Γ substitution model using RAxML version 8.0 [50], in which node support was estimated using bootstrap searches with 100 ML replicates (S1 Fig). We assessed the coefficient of concordance for the sequence data distance matrices using the 'CADM.global' function implemented in the 'Ape' R statistical package [51] and rejected the null hypothesis that all matrices are incongruent (p-value < 0.05). Our preliminary phylogenetic analyses illustrated that most of the sequences were collected in recent years and that a large fraction (86%) were 100% identical; sequences with 100% nucleotide identity were therefore discarded, and the subsequent analyses proceeded with the remaining sequences (n = 96; S2 and S3 Figs). This made the subsequent analyses less computationally demanding and assured better model convergence, as suggested elsewhere [52][53][54]. Recombination events in the selected sequences were not detected using Recombination Detection Program version 3.0 [55]. Finally, we used TempEst version 1.5.1 [56] to explore the presence of temporal structure in the sequence data and estimated a positive correlation (correlation coefficient = 0.190) between genetic divergence and sampling time. This suggested that the sequence data are suitable for the subsequent phylogenetic molecular clock analyses.
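TempEst's temporal-signal check is essentially a root-to-tip regression: genetic divergence from the root is regressed against sampling time, and a positive correlation supports molecular-clock analyses. A minimal sketch of the correlation step, using hypothetical (time, divergence) pairs rather than the study's data:

```python
# Root-to-tip correlation in the spirit of TempEst's temporal-signal check.
# The (time, divergence) pairs below are hypothetical, for illustration only.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

times = [1960.5, 1978.2, 1995.0, 2007.6, 2015.5]   # sampling dates (decimal years)
divergence = [0.010, 0.013, 0.012, 0.018, 0.020]   # root-to-tip distances
r = pearson_r(times, divergence)
print(round(r, 3))  # positive r supports a molecular-clock signal
```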
Inferring ASFV population demographics and divergence times
Virus population demographics and the time to the most recent common ancestor (TMRCA) were estimated using the relaxed-clock models implemented in BEAST version 1.8.4 [57] within a Bayesian statistical framework. The best partition scheme for the substitution models of the sequence alignment was selected using the Bayesian Information Criterion (BIC) [58] implemented in PartitionFinder version 1.1.1 [59].
Tree tips were calibrated with the isolation dates of the sequences in order to estimate divergence times. The best fitting node-age tree model for the sequence data was selected by evaluating four parametric and one non-parametric coalescent priors to infer the most realistic population growth patterns of the virus through time [60]. The parametric coalescent priors included: (1) the constant population size (CP) [61]; (2) the expansion growth (EGx) [62]; (3) the exponential growth (EG) [62]; and (4) the logistic growth (LG) [62]; while the non-parametric coalescent prior was (5) the Bayesian Skygrid model, which implements a Gaussian Markov random fields (GMRF) prior to smooth the trajectories of the past population dynamics [63]. For each coalescent tree model, two branch-rate priors were further evaluated, namely the uncorrelated lognormal (UCLN) and uncorrelated exponential (UCED) branch-rate priors [64]. The continuous-time Markov chain (CTMC) hyperprior [65] implemented in BEAST was used to infer the parameters of the branch-rate prior distribution. Thus, using the Bayes factor (BF) comparison approach, the fit of ten candidate relaxed-clock models was evaluated, comprising all combinations of: (1) a single mixed-substitution model; (2) five coalescent tree priors (CP, EG, EGx, LG and GMRF); and (3) two branch-rate models (UCLN and UCED). This was achieved by estimating the marginal likelihood of each candidate model using the 'stepping-stone' sampling (SS) [66] and 'path-sampling' (PS) [67] methods implemented in BEAST. The resulting marginal-likelihood estimates were then used to select among the corresponding relaxed-clock models using BF comparisons.
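Model choice here reduces to comparing (log) marginal likelihoods: the log Bayes factor between two models is simply the difference of their log marginal-likelihood estimates. A toy illustration with hypothetical stepping-stone estimates (the model names and values are illustrative, not the study's):

```python
import math

# Hypothetical log marginal-likelihood estimates (e.g., from stepping-stone
# sampling) for two candidate relaxed-clock models; values are illustrative.
log_ml = {"EG+UCED": -2450.1, "CP+UCLN": -2466.0}

log_bf = log_ml["EG+UCED"] - log_ml["CP+UCLN"]  # log Bayes factor
bf = math.exp(log_bf)
print(round(log_bf, 1), bf > 100)  # a large BF favours the first model
```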
Finally, posterior parameters of the phylogeny and divergence times under each candidate relaxed-clock model were estimated using two replicate Bayesian Markov chain Monte Carlo (MCMC) simulations run for 200 million cycles and sampled every 2,000th state. Tracer version 1.6 [68] was used to calculate the effective sample size (ESS) as an evaluation criterion (i.e., ESS > 200) for the proper convergence of each MCMC simulation of every posterior parameter. From each chain, the first 10% of the samples was discarded as burn-in. Then, the resulting marginal posterior probability density was summarised as a maximum clade credibility (MCC) tree, with median node ages, using TreeAnnotator version 1.8.4. A Bayesian Skygrid (BSg) plot was generated using Tracer to provide temporally smoothed estimates of the changes in the effective population size trajectories of the virus between 1960 and 2015. This plot can be used as a proxy for the genetic diversity of the vp72-CVR gene through time [69].
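The ESS > 200 criterion guards against strongly autocorrelated MCMC samples. A rough, illustrative ESS estimator is N divided by one plus twice the summed positive autocorrelations; this is simplified relative to Tracer's implementation and uses synthetic draws:

```python
import random

# Rough effective-sample-size (ESS) sketch: N / (1 + 2 * sum of positive
# autocorrelations). Simplified relative to Tracer's implementation.
def ess(samples, max_lag=50):
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / n
    if var == 0:
        return float(n)
    s = 0.0
    for k in range(1, min(max_lag, n - 1)):
        rho = sum((samples[i] - m) * (samples[i + k] - m)
                  for i in range(n - k)) / (n * var)
        if rho <= 0:   # stop summing once autocorrelation dies out
            break
        s += rho
    return n / (1 + 2 * s)

random.seed(0)
iid = [random.random() for _ in range(2000)]
print(ess(iid) > 200)  # i.i.d. draws easily clear the ESS > 200 bar
```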
Inferring ASFV phylogeographic history and transmission between host species
Phylodynamic histories of ASFV between host species from 1960 to 2015 were inferred using discrete-trait ancestral reconstruction phylodynamic models implemented in BEAST [48]. The selected discrete traits included a total of three host groups, namely domestic pigs, wild suids and ticks (S3 Table). The best fitting coalescent tree model and branch-rate prior combination, described in the above analyses, was used for the subsequent phylodynamic models. The fit of the sequence data was further assessed for two candidate discrete-trait phylodynamic models, namely the symmetric model and the asymmetric model, which constrain the non-zero rates of change between each pair of discrete states to be equal (reversible transitions) or allow them to differ (irreversible transitions), respectively. Furthermore, Bayesian stochastic search variable selection (BSSVS) [38] was used to eliminate non-significant elements of the matrix specifying the non-zero rates of change between each pair of discrete host species. Finally, the mean number of dispersal events between each pair of discrete traits was inferred using the Markov-jump approach [70]. This method uses a robust-counting MCMC approach to infer the intensity of backward and forward transitions within a matrix of discrete traits [70].
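At its core, Markov-jump counting tallies state changes along the branches of the sampled trees. A stripped-down illustration over a hypothetical list of (parent-state, child-state) branch pairs (the real method averages such counts over the posterior tree sample):

```python
from collections import Counter

# Counting host-state transitions ("Markov jumps") along tree branches.
# The (parent_state, child_state) branch list is hypothetical.
branches = [("wild_suid", "wild_suid"), ("wild_suid", "domestic_pig"),
            ("tick", "wild_suid"), ("wild_suid", "domestic_pig"),
            ("domestic_pig", "domestic_pig")]
jumps = Counter((a, b) for a, b in branches if a != b)
print(dict(jumps))  # {('wild_suid', 'domestic_pig'): 2, ('tick', 'wild_suid'): 1}
```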
Finally, the Kullback-Leibler (KL) divergence statistic [71] and the association index (AI) [72] were used to validate the discrete-trait prior and posterior estimates while accommodating phylogenetic uncertainty in the selected models. The KL statistic was calculated using the Razavi function [73] in Matlab version 2016a [74] to measure the departure of the posterior estimates inferred from the selected phylodynamic models from their underlying priors. The AI statistic was calculated using Bayesian Tip-Significance Testing (BaTS) version 1.0 [72] to test for the presence of structure in the evolutionary diffusion of the virus caused by the selected discrete trait.
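The KL statistic measures how far the posterior over discrete host states has moved away from its prior; a value near zero would mean the data added little information. A minimal sketch with hypothetical probability vectors (the uniform prior and the posterior mass shown are illustrative only):

```python
import math

# KL divergence between a posterior and its prior over discrete host states.
# The probability vectors below are hypothetical, for illustration only.
def kl_divergence(p, q):
    """KL(p || q) in nats, for discrete distributions p (posterior), q (prior)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [1 / 3, 1 / 3, 1 / 3]       # uniform over {pig, wild suid, tick}
posterior = [0.08, 0.87, 0.05]      # e.g., root-state posterior mass
kl = kl_divergence(posterior, prior)
print(round(kl, 2))  # 0.63
```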
Demographic history of ASFV in Eurasia and Africa
The vp72-CVR ASFV sequences selected by the ML analysis (S2 Fig) significantly favoured the parametric EG coalescent tree model with the UCED branch-rate prior, based on BF comparisons (BF > 15.9) of the SS and PS marginal-likelihood estimates (S4 Table, S1 File). The inferred posterior estimate of the mean nucleotide substitution rate was 3.31 × 10−4 substitutions/site/year, with a 95% highest posterior density (HPD) ranging from 8.51 × 10−5 to 5.89 × 10−4. The oldest TMRCA for vp72-CVR sequences of viruses isolated from outbreaks in Africa and Europe was approximately 243 years ago, in East Africa (Table 1). Viruses isolated from Eurasia and West Africa were younger by 87 and 68 years, respectively (Table 1). The BSg plot, generated from the BSg coalescent tree model with the UCLN branch-rate prior (S1 File), showed a slow, steady increase in the virus genetic diversity through time, followed by a distinct continuous increase after the 1800s with no sign of decline (Fig 2). The estimated exponential growth rate of the virus isolates was 0.01 year−1 (95% HPD: 0.001 to 0.022 year−1). A higher evolutionary rate was inferred after the 1800s among branches of the virus phylogeny (Fig 3).
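To put the inferred exponential growth rate in perspective, r = 0.01 year−1 corresponds to a population doubling time of ln(2)/r, i.e. roughly 69 years, which is broadly consistent with the slow, steady rise visible in the Skygrid plot:

```python
import math

# The inferred exponential growth rate r = 0.01 / year implies a population
# doubling time of ln(2) / r.
r = 0.01
doubling_time = math.log(2) / r
print(round(doubling_time, 1))  # 69.3 years
```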
Phylodynamic history of ASFV between host species in Eurasia and Africa
BF comparisons indicated that the asymmetric model provided the best fit for the vp72-CVR sequences when using host species as a discrete state (BF > 50). This result suggests that the non-zero rates of change of the virus when it jumps between host species differ, and thus that the directionality of transmission is significant. Wild suids were the most likely ancestral host for ASFV transmission between hosts (Fig 4A and 4B). The KL statistic suggests reasonable statistical power (KL = 0.81) for the selected host-species model, indicating that the selected discrete trait (i.e., host species) generated root state posterior probabilities (RSPPs) that are slightly different from the underlying priors. Furthermore, the borderline statistically significant AI (p-value = 0.07) suggests that the evolutionary diffusion of ASFV between hosts is relatively structured.
Discussion
This study provides new, deeper insights into the evolutionary epidemiology of ASFV in Eurasia and Africa, regions that are important for virus emergence, maintenance, and spread. For the first time, Bayesian phylodynamic analyses of the combined vp72-CVR gene segments revealed that currently circulating ASFV is not only the result of complex evolutionary processes through time in Eurasia and Africa, but also the result of a transmission cycle between host species. This study also identified important viral dispersal and transmission routes between Eurasian and African host species. Demographic reconstruction through time of vp72-CVR gene sequences suggests a high evolutionary rate (i.e., substitution rate = 3.31 × 10−4/site/year and exponential growth of 0.01 year−1) for ASFVs isolated from outbreaks in Eurasia and Africa between 1960 and 2015. Our estimated evolutionary rate is similar to those of rapidly evolving RNA viruses and relatively higher than those of other DNA viruses, as suggested elsewhere [47,75]. Estimates of the divergence time (i.e., TMRCA) confirm the common notion of ASFV being native to East Africa [1,12], where the virus first emerged in the 1700s (Table 1). Our estimates of the evolutionary rate and TMRCA inferred from vp72-CVR sequences are similar to those of Michaud et al. 2013 [47]. However, Michaud et al. made their inferences based on independent analyses of three gene segments, namely B646L (vp72), E183L and CP204L [47]. The inferred divergence times summarised in Table 1 further confirm that common ancestors of ASFV isolated from Eurasia and West Africa were younger than those isolated in East Africa.
The estimated genetic diversity clearly reflects the inferred rapid exponential growth of ASFV through time (Fig 2). Indeed, throughout the centuries, ASFV only showed this rapid growth behaviour after its ancestors emerged in West Africa and Eurasia, which coincided with the time when major trade routes between continents started to flourish and peak in the 18th and 19th centuries. Furthermore, the genetic diversity of ASFV started to increase markedly after the 1800s, with no signs of decline until 2015 (Fig 2). This increase is potentially attributable to the British colonisation of Kenya, during which the swine industry became substantially larger due to the massive importation of domestic pigs [47,76]. This higher growth rate may reflect expanding diversity through time, which corresponds to the growing pig trade activities between continents during the 19th century. This finding may also be attributed to an evolutionary drift resulting from either continuous circulation or maintenance of the virus within Africa and Eurasia. Indeed, many of the recently isolated ASFV lineages exhibited a rapid evolutionary rate among the branches of the inferred posterior phylogeny (Fig 3).
Results of the host-species phylodynamic model strongly implicate wild suids as the ancestral host species (RSPP = 0.87) for ASFV in the early 1700s in Africa (Fig 4A and 4B). The two major branches diverging from the root of the MCC tree (Fig 4A) represent two different transmission cycles for ASFV between host species: first, the Eurasian cycle between wild boar and domestic pigs, which included only European isolates; and second, the more complex African cycle between wild African suids (known reservoirs and carriers of ASFV), ticks, and domestic pigs (Fig 4A). The latter transmission cycle resulted in more significant diversification events of ASFV (i.e., a larger sub-tree) in Africa and Europe than the former (Fig 4A) [1,24]. Also, our results indicate that the virus jumped more frequently from wild suids and ticks than from domestic pigs (Fig 4C), which suggests that wild suids and ticks maintained an old and indefinite transmission cycle, still present in Africa, that later started infecting domestic pigs [47]. This might be explained by the ecology of some wild suids such as warthogs, which live in burrows containing infected ticks [20]. Transmission from wild African suids to domestic pigs would need to be mediated by tick bites or to occur directly, when domestic pigs feed on carcasses of infected wild animals [20]. Fig 4D summarises the complex transmission cycle of ASFV in Eurasia and Africa since its divergence in the 1700s. As expected, the most significant transmission route is from wild suids to domestic pigs (BSSVS BF = 479.3), while the opposite transmission route was substantially less significant (BSSVS BF = 48.6). These results suggest that the transmission cycles between wild suids and pigs, as well as within domestic pigs, are the most important cycles for ASFV spread and maintenance in Eurasia and Africa (Fig 4) [1,77]. However, the inferred significant unidirectional transmission route from ticks to wild suids confirms that ticks are an important natural reservoir that can facilitate ASFV spread and maintenance in wild suid populations (Fig 4D) [17,78]. Furthermore, the results confirm the notion that the transmission cycle between pigs and ticks is rare [47].
One important limitation of our study is that the reconstructed phylodynamic model was based on a biased subset of the vp72 and CVR sequence data. Moreover, we only used 13% of the available sequence data due to the severe lack of phylogenetic structure in the remaining 87%. Hence, we were not able to model the phylogeographic history of ASFV between and within affected countries or geographical regions. Although we tried to run our models using all sequence data, their convergence and uncertainty statistics (i.e., KL and AI) were severely poor. That said, our study is based on all available viruses whose vp72 and CVR gene segments have been sequenced and associated with notable ASFV outbreaks in Eurasia and Africa, and therefore reflects our best understanding of ASFV evolutionary history at the country and regional levels. Due to some limitations in the database (for instance, the small number of sequences coming from ticks and wild African suids), the transmission models between hosts only partially captured the complexity of ASF epidemiology. This might lead to certain biases in the obtained results, especially in areas where wild suids, domestic pigs and ticks cohabit. This situation is clearly manifested with regard to the transmission of ASFV between domestic pigs and ticks. In countries such as Malawi and Mozambique in Africa, and Spain and Portugal in Europe, the tick-domestic pig cycle has been described after ASFV-positive ticks were found in pig shelters [21-23, 79, 80]. In such scenarios, the presence of ticks caused outbreaks without any apparent wild suid involvement, long persistence of ASFV (even for years) in the environment and animal facilities [18], and re-emergence of ASFV in areas considered to be disease-free [81,82]. However, the results obtained in this study were not able to show this transmission cycle, suggesting that there is still room for improvement when further sequences, segments or full genomes become available, as well as information related to swine farm demographics and movements within and between continents.
While Michaud et al. 2013 [47] endorsed the use of Bayesian phylodynamic methods for molecular dating of ASFV sequences as well as for revising its genotyping classification method, here we further recommend the utilization of these analytical methods for guiding risk-based surveillance, control, and prevention efforts. Our results provided plausible biological inferences about the evolutionary history of ASFV within geographical regions and host species in susceptible areas like Eurasia and Africa, using a vp72-CVR sequence dataset and its related epidemiological information. Unfortunately, the analytical methods used in this study have not been fully, or even partially, adopted by global animal health agencies for molecular surveillance of ASFV or for guiding risk-based interventions. Instead, recent studies of ASFV continue to use traditional phylogenetic methods to infer its evolutionary history [9,33,34,83] without quantitatively accounting for time or other epidemiological characteristics of the virus or its host species in the inferred phylogeny. In this study, we demonstrated the prospects of our analytical approach, which provided deeper insights into the evolutionary epidemiology of ASFV. The ability to specify priors for ASFV evolutionary parameters and to select among different model assumptions provides a robust tool to identify new viruses, genotype new viral clades, and reconstruct phylogenetic relationships between isolated strains [47].
Furthermore, molecular surveillance of ASFV evolutionary characteristics can be used to evaluate the effect of intervention measures, such as movement restrictions or stamping out, on the rate of evolution and genetic diversity of the virus. Indeed, including phylodynamic methods in the set of available analytical tools will support the development of effective animal health policies. It will also aid epidemic preparedness in neighbouring ASFV-free countries, especially as the genetic diversity of ASFV continues to increase, as described above.
Conclusions
In this study, we presented a novel attempt to rigorously model the evolutionary epidemiology of ASFV in Eurasia and Africa using several variants of Bayesian phylodynamic models. Results suggest that ASFV vp72-CVR gene sequences isolated from outbreaks in Eurasia and Africa between 1960 and 2015 exhibited a significantly high evolutionary rate since the virus diverged in the 18th century from East Africa, with no sign of decline until 2015. The increase in genetic diversity suggests genetic drift and corresponds to the growing pig trade activities between continents during the 19th century. Furthermore, the results implicate wild suids as the ancestral host species for ASFV in the early 1700s in Africa. Two important transmission routes were inferred between wild suids and domestic pigs, while one unidirectional transmission route was inferred from ticks to wild suids. These results indicate that the transmission cycle between wild suids and pigs is an important cycle for ASFV spread and maintenance in pig populations, while ticks are an important natural reservoir that can facilitate ASFV spread and maintenance in wild suid populations. We illustrated the prospects of phylodynamic methods in improving risk-based surveillance, supporting effective animal health policies and epidemic preparedness in countries at high risk of ASFV incursion.
Fig 1 .
Fig 1. Geographical distribution of ASF sequences isolated in Eurasia and Africa between 1960 and 2015 (N = 665). Red circles indicate locations of ASFV isolates whose CVR and vp72 gene segments were sequenced. Circle size is proportional to the number of isolates. https://doi.org/10.1371/journal.pone.0192565.g001

Most sequences were collected in recent years (S2 Fig) and were 100% identical, clustering with no phylogenetic structure on the upper branch of the ML tree (S1 Fig). Thus, we decided to discard sequences with 100% nucleotide identity (86%) and proceeded with the subsequent analyses with the remaining sequences (n = 96; S3 Fig).
Fig 2. Fig 3.
Fig 2. Bayesian Skygrid plot of temporal variation in the effective population size of ASF vp72-CVR genes in Eurasia and Africa between 1960 and 2015. The posterior median estimate is indicated by the red line; the blue lines correspond to the 95% HPD. The vertical dotted line represents the estimated time at which the population growth transitioned from a slow rate to a fast rate. https://doi.org/10.1371/journal.pone.0192565.g002

Wild suids were inferred as the root host state (Fig 4A), as suggested by the substantially high root state posterior probability (RSPP = 0.87; Fig 4B). Furthermore, wild suids had the highest mean counts of relative forward and reverse transitions between host species (forward = 41.0 vs reverse = 37.2; Fig 4C), while domestic pigs had the lowest (forward = 33.5 vs reverse = 33.5; Fig 4C). Only three significant transmission routes were inferred between host species (BSSVS BF > 13; Fig 4D): two, back and forth, between wild suids and domestic pigs, and one from ticks to wild suids (Fig 4D). The most significant viral transmission route (BSSVS BF = 479.3) was inferred from wild suids to domestic pigs (Fig 4D). Finally, no significant transmission route of ASFV was inferred between ticks and domestic pigs (Fig 4D).
Fig 4 .
Fig 4. Host-species phylodynamics of ASF vp72-CVR genes in Eurasia and Africa between 1960 and 2015. A) MCC phylogeny with branches coloured by the most probable host-species state of their descendent nodes. B) Root state posterior probability distributions, corresponding to the colour coding of (A). C) Mean forward and reverse transitions between hosts estimated by the Markov jump (MJ) approach. D) Inferred transmission routes between host species, with mean MJ counts of forward and reverse transitions and significant connections (BF > 13) with their directions between hosts. https://doi.org/10.1371/journal.pone.0192565.g004
Table 1. Time to the most recent common ancestor (TMRCA) of vp72-CVR genes of ASF in Eurasia and Africa between 1960 and 2015.
* Highest posterior density. | 2018-04-03T06:01:48.912Z | 2018-02-28T00:00:00.000 | {
"year": 2018,
"sha1": "bae0b9ad71b4b5dcbaf138c77f52205d45cd5dbb",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0192565&type=printable",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ce8d3429b5bdeda41c799ca6f6cb261a0bdfaec5",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
246566890 | pes2o/s2orc | v3-fos-license | Axillary lymphadenopathy in a liver transplant recipient: Initial manifestation of disseminated cryptococcosis
Immunocompromised patients, especially organ transplant recipients, are at risk for opportunistic infections. Cryptococcus, a ubiquitous environmental fungus, can cause potentially fatal infection in such hosts. While it can involve any organ in the human body, respiratory and central nervous systems are commonly affected. We present a case of disseminated cryptococcal infection in a liver transplant recipient in whom the initial presentation was bilateral axillary lymphadenopathy, a relatively rare clinical manifestation. Rapid diagnosis and targeted antimicrobial therapy are paramount for favorable clinical outcomes, particularly in this patient population.
Introduction
Cryptococcus is a ubiquitous environmental yeast [1]. The incidence of cryptococcosis has risen recently, given a relative increase in the immunocompromised patient population, such as organ transplant recipients [1]. Clinical manifestations are diverse, though common sites of infection are the respiratory and central nervous systems [1]. Rapid diagnosis and initiation of appropriate antimicrobial therapy are important for favorable outcomes [1].
Case
A 74-year-old retired man with a history of orthotopic liver transplantation 4 months prior presented with malaise, a 20-pound weight loss, and 1-2 months of diminished appetite. He denied fevers, cough, chest pain, shortness of breath, rash, headache, or abdominal symptoms. The patient had resided in Vietnam for a year in the remote past and had worked in construction for about 50 years. His past medical history included coronary artery disease, hypertension, and chronic renal failure. Surgical history was significant for a cholecystectomy that was unfortunately complicated by injury to the common bile duct and hepatic artery, warranting liver transplantation. There were no episodes of hepatic graft rejection. He denied sick contacts or recent travel. Immunosuppressive medications were tacrolimus and mycophenolate mofetil; opportunistic infection prophylaxis included trimethoprim-sulfamethoxazole and valganciclovir.
On examination, the patient was afebrile and hemodynamically stable. His oxygen saturation was 98% on room air; he was awake, alert, and oriented. Physical examination revealed bilateral nontender, non-matted axillary lymph nodes and a well-healed abdominal surgical scar. The remainder of the examination was essentially unremarkable. Laboratory evaluation was notable for a white blood cell count of 1560 / μL (4400-11,300 / μL) with an absolute neutrophil count of 420 / μL (2000-9300 / μL). Atypical lymphocytes were elevated at 16% (0-10%). Serum creatinine was 2.67 mg/dL, and AST, ALT, and serum bilirubin were within normal limits. Non-contrast CT of the chest, abdomen, and pelvis revealed extensive lymphadenopathy, including enlarged bilateral axillary lymph nodes, measuring 1.5 × 1.2 cm on the right and 1.2 × 0.5 cm on the left, and multiple enlarged mediastinal lymph nodes, the largest in the pre-carinal region measuring approximately 1.7 × 2.3 cm. Two pulmonary nodules were seen in the right upper lobe with clustered tree-in-bud opacities. Additionally, a 6.2 × 5 cm fluid collection around the hepatorenal fossa was evident.
A right axillary lymph node excisional biopsy was performed. Microscopic images of the surgical pathology specimen are shown under different staining techniques (Figs. 1-5). H&E-stained sections demonstrated cleared-out spaces containing pleomorphic refractile round to ovoid cells, occasionally demonstrating narrow-based budding (Figs. 1 and 2). The cells were positive with Periodic acid-Schiff (PAS) and Grocott's methenamine silver (GMS) stains, which stained the organisms magenta and black, respectively (Figs. 3 and 4). Mucicarmine highlighted the thick mucopolysaccharide capsule red (Fig. 5). This stain is specific for Cryptococcus species and helps differentiate it from other nonencapsulated yeast-like fungal organisms [2]. Flow cytometry did not reveal immunophenotypic findings indicative of lymphoma.
The findings above established the diagnosis of cryptococcal infection in our patient. Additionally, serum cryptococcal antigen was positive, 1:8192 (normal < 1:1) and blood cultures grew Cryptococcus neoformans. Lumbar puncture was done that showed 3 nucleated cells per μL, normal glucose, but elevated protein to 58 mg/dL. CSF cryptococcal antigen was positive to 1:32 (normal < 1:1). The abdominal collection was drained percutaneously; fluid cultures also grew Cryptococcus, suggestive of peritoneal abscess. The diagnosis of disseminated cryptococcosis was made.
An initial 2-week induction therapy with liposomal amphotericin B and flucytosine was initiated, which the patient tolerated without any adverse effects. This was then transitioned to a consolidation phase for 8 weeks and subsequently to a maintenance phase with oral fluconazole. A repeat CT scan done 8 months after the initial presentation revealed an overall decrease in mediastinal and axillary lymphadenopathy as well as a reduction in the size of the intraabdominal fluid collection. The patient remains on suppressive antifungal therapy and is doing relatively well one year after the initial diagnosis of disseminated cryptococcosis.
Discussion
The differential diagnosis for lymphadenopathy in this patient was broad and included fungal, bacterial, and mycobacterial infections, as well as hematological malignancies such as post-transplant lymphoproliferative disorder (PTLD). The patient's occupational history of several years in construction raised concern for mycoses such as blastomycosis, cryptococcosis, coccidioidomycosis, and histoplasmosis, particularly in this immunocompromised host. Prior time spent in Vietnam also raised the possibility of melioidosis. Mycobacterial infections such as tuberculosis were also considered.
Lymphadenopathy was a prominent clinical and radiological sign in our patient. Although Cryptococcus can infect any organ, brain and lung are the most commonly involved [1]. Rarely, the initial manifestation may be lymphadenopathy alone; this presentation has been reported in both immunocompromised and immunocompetent hosts [3,4]. While fine needle aspiration cytology (FNAC) may be the initial diagnostic modality in some patients, excisional biopsy may be preferred if lymphoma is suspected [5].
Different histopathological features have been described with cryptococcal lymphadenitis, including the presence of granulomas, epithelioid cells, and necrosis [6]. In advanced immunosuppressive states, such as in AIDS patients, granulomas may be minimal. This has been attributed to low CD4 counts [6]. Tuberculous lymphadenitis, also characterized by necrotizing granulomas, should be considered in such clinical scenarios, especially in areas where TB is prevalent. Tissue diagnosis with appropriate staining and culture techniques is needed to differentiate these entities [6].
Chronic fungal infections may manifest with unusual and atypical clinical presentations. A high degree of suspicion is important in immunocompromised hosts such as organ transplant recipients.
Early diagnosis, including obtaining tissue for rapid identification, is paramount to instituting pathogen-directed therapy in this patient population and achieving favorable outcomes.
CRediT authorship contribution statement
All the authors have contributed to the writing of the manuscript of the case report.
Funding
No funding applicable to this article.
Consent
Not applicable. We have ensured to not report any potential identifying information in the manuscript.
A controlled trial of a dissonance-based eating disorders prevention program with Brazilian girls
Background Given that most young women with eating disorders do not receive treatment, implementing effective prevention programs is a public health priority. The Body Project is a group-based eating disorder prevention program with evidence of both efficacy and effectiveness. This trial evaluated the efficacy of this prevention program with Brazilian girls, as no published study has tested whether this intervention is culturally sensitive and efficacious with Latin-American adolescents. Methods Female students were allocated to a dissonance-based intervention (n = 40) or assessment-only (n = 22) condition. The intervention was a dissonance-based program consisting of four group sessions aimed at reducing thin-ideal internalization. The sessions included verbal, written, and behavioral exercises. The intervention group was evaluated at pretest and posttest; assessment-only controls completed measures at parallel times. Results Compared to assessment-only controls, intervention participants showed significantly greater reductions in body dissatisfaction, sociocultural influence of the media, depressive symptoms, and negative affect, as well as significantly greater increases in body appreciation. There were no significant effects for disordered eating attitudes and eating disorder symptoms. Conclusions These results suggest that this dissonance-based eating disorder prevention program was culturally sensitive, or at least culturally adaptive, and efficacious with Brazilian female adolescents. Indeed, the average effect size was slightly larger than that observed in the large efficacy trial of this prevention program and in recent meta-analytic reviews. Trial registration RBR-7prdf2. Registered 13 August 2018 (retrospectively registered).
Background
Eating disorders (ED) affect 15% of females and are marked by chronicity, relapse, distress, functional impairment, and increased risk for future obesity, depression, suicide attempts, and mortality (Allen, Byrne, Oddy, & Crosby, 2013). Although there are no prevalence data for ED in Brazil, it has been established that their incidence has increased in recent years (Nunes, 2006). Researchers have argued that this increase could reflect greater understanding of the subject and more accurate diagnosis of these disorders (Nunes, 2006; Prisco, Araujo, Almeida, & Santos, 2013). According to Smink, van Hoeken, and Hoek (2012), there has been an increase in the high-risk group of 15-19-year-old girls who meet diagnostic criteria for subclinical ED.
As 80-90% of those with eating disorders do not receive treatment (Swanson, Crow, Le Grange, Swendsen, & Merikangas, 2011), a public health priority is to broadly implement effective eating disorder prevention programs focused on ED risk factors, such as body dissatisfaction.
One eating disorder prevention program with a broad evidence base is the Body Project (BP; Stice, Mazotti, Weibel, & Agras, 2000). This intervention is based on cognitive-dissonance theory (Festinger, 1957), which suggests that people are motivated to maintain consistency between their behaviors and attitudes; when individuals engage in a behavior that is inconsistent with an attitude, they experience psychological discomfort that prompts them to align their attitudes with their behavior. In this group-based eating disorder prevention program, adolescent girls voluntarily critique the thin beauty ideal in verbal, written, and behavioral exercises, which theoretically generates cognitive dissonance that prompts them to reduce their subscription to this unrealistic ideal, because people are motivated to align their attitudes with their publicly displayed behaviors.
BP is a four-session intervention wherein participants verbally generate costs associated with pursuing the thin ideal in response to Socratic questions, complete role-plays in which they talk facilitators out of pursuing this ideal, write a letter to a younger self on how to avoid body image concerns, and engage in acts of body activism that challenge this ideal (the intervention script can be found at www.bodyprojectsupport.org). This intervention is based on the Dual Pathway Model, which hypothesizes that thin-ideal internalization increases risk for body dissatisfaction, which in turn increases risk for subsequent dieting and negative affect, which increase risk for ED onset (Stice, 2001). Stice, Marti, Rohde, and Shaw (2011) confirmed that a decrease in thin-ideal internalization mediated the effects of the BP on body dissatisfaction, and further that a reduction in body dissatisfaction mediated the decline in eating disorder symptoms.
The BP is one of the few prevention programs to significantly decrease onset of eating disorders over follow-up in multiple trials, to outperform active alternative prevention programs, and to produce effects in trials conducted by independent research teams in North America and Europe (e.g., Becker, Smith, & Ciao, 2005; Halliwell, Jarman, McNamara, Risdon, & Jankowski, 2015; Matusek, Wendt, & Wiseman, 2004; Serdar et al., 2014; Stice, Marti, Spoor, Presnell, & Shaw, 2008; Stice, Rohde, Shaw, & Gau, 2011). These trials have demonstrated that this cognitive-dissonance intervention produces greater reductions in eating disorder risk factors (such as thin-ideal internalization, body dissatisfaction, dieting, and negative affect) and eating disorder symptoms in adolescent girls and young women with body image concerns relative to assessment-only control conditions, and often relative to alternative interventions.
The meta-analytic review conducted by Le, Barendregt, Hay, and Mihalopoulos (2017) noted that cognitive-dissonance interventions were effective in reducing risk factors (e.g., body dissatisfaction, thin-ideal internalization, negative affect, and dieting) and symptoms of eating disorders for late adolescents and young women. Moreover, the BP has produced effects when implemented selectively with women who have body image concerns, when implemented universally with women and adolescents who were not screened for body image concerns, and when implemented in an indicated fashion with women who have subclinical eating pathology (Stice, Shaw, & Marti, 2007).
It is essential to evaluate the efficacy of eating disorder prevention among different ethnic groups in order to determine whether prevention programs need to be modified to fit the particular needs of different groups. Although there is some evidence that the BP is similarly effective for Asian American, African American, Hispanic, and European American females (Rodriguez, Marchand, Ng, & Stice, 2008; Stice, Marti, & Cheng, 2014), no randomized trials have evaluated the efficacy of the BP among young women from other cultures. Thus, the objective of the present study was to evaluate the efficacy of the BP for Latin-American girls in Brazil. We hypothesized that the BP would produce significantly greater reductions in body dissatisfaction, sociocultural influence of the media, disordered eating attitudes and behaviors, eating disorder symptoms, and negative affect, and greater increases in body appreciation, in the intervention group than in assessment-only controls.
Participants and procedure
Initially, the sample size was calculated using as parameters the average effect size from the large efficacy trial of the BP comparing the intervention and assessment-only conditions (d = 0.56; Stice, Shaw, Burton, & Wade, 2006), the expected power (0.80), the significance level (p < .05), and the statistical test (mixed repeated measures ANOVA). The minimum sample size indicated was 80 participants.
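As a rough illustration of how such an a priori sample-size figure is obtained, the sketch below applies the normal-approximation formula for a two-sample comparison with the stated parameters (d = 0.56, power = 0.80, α = .05). The two-sample t-test approximation is our simplification for illustration, not the authors' actual mixed-ANOVA computation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n for a two-sample comparison:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value (~1.96)
    z_b = z.inv_cdf(power)          # power quantile (~0.84)
    return ceil(2 * ((z_a + z_b) / d) ** 2)

print(n_per_group(0.56))  # ~51 per group under this approximation
```

A repeated-measures design requires fewer participants than this two-sample figure because correlated pre/post scores reduce error variance, which is consistent with the smaller total of 80 indicated above.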
Recruitment was conducted via flyers directed to adolescent girls from the technical education program integrated with the high school of the "Instituto Federal do Sudeste de Minas Gerais". This report covers girls recruited between August 2015 and March 2017. Girls with body image concerns were included in the study (assessed by the direct question "Do you have body image concerns?"). They also should not meet criteria for an eating disorder (assessed by the direct question "Have you ever been diagnosed with some kind of eating disorder (e.g., anorexia or bulimia)?"). Therefore, this represents a selective prevention program.
One hundred forty-one adolescent females (M age = 16.25, SD = 1.4) accepted the invitation to enroll in the trial. Girls who voluntarily accepted to take part in the study were randomized to the intervention or assessment-only condition through the website www.randomization.com. In total, 79 participants were assigned to the BP condition and 62 participants were assigned to the assessment-only condition.
The final sample at posttest was composed of 40 participants in the intervention group, who completed the four-session protocol, and 22 assessment-only controls, who answered the questionnaire at both pre- and posttest (see Fig. 1 for a participant flowchart).
Intervention
The BP consisted of four weekly 1-h group sessions, and each intervention group included five to eight participants. Facilitators delivered the intervention using a scripted manual. Participants completed assessments at pretest and posttest, and the assessment-only controls completed measures at parallel times. All measures were self-reported, with no time limit to answer the questionnaires.
The scripted intervention manual was translated into Portuguese, aiming to be as faithful as possible to the original (see Table 1 for the script of each session and visit www.bodyprojectsupport.org for the Portuguese translation of the intervention script). The sessions took place on the school's premises, after school hours.
Primary outcomes
Body dissatisfaction
Body weight and shape concerns were assessed with the 34-item Body Shape Questionnaire (BSQ). Scores ranged from 34 to 204; the higher the score, the greater the body dissatisfaction. This scale has shown internal consistency (α = .96) and test-retest reliability (r = .91) for Brazilian adolescents (Conti, Cordás, & Latorre, 2009). According to the final scores, the girls were classified as having no dissatisfaction (scores of less than 80), slight dissatisfaction (scores between 80 and 110), moderate dissatisfaction (scores between 111 and 140), or serious dissatisfaction (scores higher than 140). The internal consistency of the BSQ for the present study sample, as evaluated using Cronbach's alpha, was .95 at pretest and .93 at posttest.
Sociocultural influences
The 30-item Sociocultural Attitudes Towards Appearance Questionnaire-3 (SATAQ-3) assessed the influence of the media on body image, including thin-ideal internalization, pressures to be thin, and the media as source of information about appearance. The final score was calculated by the sum of the responses, ranged from 30 to 150, and the score proportionally represented the influence of sociocultural aspects on body image. This scale has shown internal consistency (α > .91), 2-week test-retest reliability (r = .86), and factorial structure among Brazilian adolescents (Amaral, Conti, Filgueiras, & Ferreira, 2015). Cronbach's alpha in the present study sample was .94 at pretest and .90 at posttest.
Disordered eating attitudes and behaviors
The 26-item Eating Attitudes Test (EAT-26), in its version validated for Brazilian girls (α = .82; Bighetti, Santos, Santos, & Ribeiro, 2004), assessed disordered eating attitudes and behaviors. The total score ranges from zero to 78 points; the higher the score, the higher the risk of developing an eating disorder. Scores higher than 21 indicate risky eating behavior (Garner, Olmsted, Bohr, & Garfinkel, 1982). The internal consistency of this scale for the present study sample was .79 at pretest and .71 at posttest.
Eating disorder symptoms
The Eating Disorder Diagnostic Scale (EDDS; Stice, Fisher, & Martinez, 2004) was used to evaluate symptoms of eating disorders. The EDDS is composed of 23 items, and the final score is calculated by summing the responses. Cronbach's alpha for the present study sample was .83 at pretest and .79 at posttest, and its association with the EAT-26 was .60 (p < .001).
Secondary outcomes
Depressive symptoms
The 20-item version of the Children's Depression Inventory (CDI) was used to evaluate the presence and severity of depressive symptoms. The total score ranges from zero to 54 points; the higher the score, the greater the presence of depressive symptoms. This scale has shown internal consistency (α = .81) and a 1-factor structure among Brazilian adolescents between 7 and 17 years old (Wathier, Dell'Aglio, & Bandeira, 2008). The internal consistency of this scale for the present study sample was .90 at pretest and .71 at posttest.
Body appreciation
Body appreciation was evaluated through the 13-item Body Appreciation Scale (BAS). The final score was calculated by summing the responses and ranged from 13 to 65; the higher the total score, the higher one's body appreciation. The version used in the present study was the one translated into Portuguese, which has shown internal consistency (α = .90; Caetano, 2011). Recently, the psychometric properties of this scale were provided for young Brazilian adolescents (Moreira, Lorenzato, Neufeld, & Almeida, 2018). The internal consistency of this scale for the present study sample was .90 at pretest and .91 at posttest.

Table 1 Script of each session
1. In session 1, participants collectively define the thin ideal, discuss costs of pursuing this ideal, and are assigned home exercises (e.g., write an essay about the costs associated with pursuing the thin ideal).
2. In session 2, participants discuss each home exercise, dissuade facilitators from pursuing the thin ideal in role-plays, and are assigned more exercises (e.g., generate a top 10 list of things young women can do to challenge the thin ideal).
3. In session 3, participants discuss home exercises, conduct role-plays challenging thin-ideal statements, discuss personal body image concerns, and are assigned home exercises (e.g., engage in a behavior that challenges their body image concerns).
4. In session 4, participants discuss home exercises, plan for future pressures to be thin, discuss perceived benefits of the group, and are assigned exit home exercises (e.g., engage in group body activism).
Negative affect
The 15 items related to negative emotional states (e.g., sad, ashamed, angry, and nervous) of the Positive Affect and Negative Affect Scale (PANAS; Laurent et al., 1999) were used to evaluate negative affect. The final score was the average of the item responses. This scale showed adequate internal consistency (.95 at pretest and .93 at posttest) and was significantly associated with the CDI scores (r = .62; p < .001).
Both the EDDS and the PANAS are part of the Body Project protocol. These scales are not validated for Brazilian adolescent girls; thus, they were translated and back-translated, and showed adequate internal consistency and significant correlations with measures of similar constructs, as described above. Since there are no validated measures to evaluate these outcomes, we included them in order to provide results comparable with other trials.
Data analysis
Initially, a descriptive analysis of all variables was carried out, including means and standard deviations. Due to non-normal distributions and the sample size, nonparametric statistics were used. The preliminary analysis aimed to evaluate differences in the variables between the intervention group and the assessment-only controls at pretest, using the independent-samples Mann-Whitney U test (attrition analysis).
In order to verify whether participants in the intervention condition showed significantly greater reductions than assessment-only participants, a mixed repeated measures ANOVA was carried out for each outcome. We considered allocation (intervention vs. assessment-only) as the between-subjects factor and time (pre- and posttest) as the within-subjects factor. For the outcomes with significant group × time interactions, we performed post-hoc analyses, using the independent-samples Mann-Whitney U test to verify differences between the groups at posttest and the related-samples Wilcoxon signed rank test to evaluate changes in the outcome from pre- to posttest. In addition, the effect size (Cohen's d) was calculated and classified according to Cohen (1988). The analyses were performed in IBM SPSS (Statistical Package for the Social Sciences) for Windows, version 21.0, adopting a significance level of p < .05.
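For reference, Cohen's d as used here is the standardized mean difference with a pooled standard deviation; a minimal sketch (with hypothetical change scores, not the trial's data) is:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d = (m1 - m2) / pooled SD (Cohen, 1988).
    Conventional benchmarks: |d| ~ 0.2 small, 0.5 medium, 0.8 large."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = sqrt(((n1 - 1) * stdev(group1) ** 2 +
                      (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical pre-to-post change scores, for illustration only
intervention = [-20, -15, -25, -10, -18, -22]
controls = [-2, 1, -5, 3, 0, -1]
print(cohens_d(intervention, controls))  # large negative d: greater reduction
```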
Moreover, post-hoc tests of achieved power were calculated for each outcome, as well as for the average effect size, using the observed effect size and the final sample size as parameters. This analysis was performed in the G*Power software, with α = .05.
Preliminary analysis
With regard to attendance, 50.6% of intervention participants attended all four sessions (and completed the posttest assessment), 12.7% attended three sessions, 19% attended two sessions, and 17.7% attended one session. Among the assessment-only controls, 35.5% completed the posttest assessment.
Descriptive statistics (means and standard deviations) for each outcome are presented in Table 2. Girls in the intervention group were classified as not dissatisfied (scores smaller than 80) and the assessment-only controls showed slight dissatisfaction (scores between 80 and 110), as evaluated by the BSQ. Moreover, participants in both groups did not show disordered eating attitudes and behaviors (scores smaller than 21), according to the EAT-26. There were no differences between participants in the two conditions in the outcomes at pretest (see Table 2).
Comparison and differences between groups on pre-post scores in primary outcomes
There was a significant group × time interaction for pre- to post change for two of the primary outcomes (see Table 3): body dissatisfaction (BSQ: F[1,50] = 13.99, p < .001) and sociocultural influence of the media (SATAQ-3: F[1,49] = 14.40, p < .001). In both cases, the interaction reflected a greater pre- to post reduction in the BP group relative to the assessment-only controls, whose scores remained stable or increased from pre- to posttest. The group × time interaction did not reach significance for eating attitudes and behaviors (EAT-26: p = .08) or eating disorder symptoms (EDDS: p = .06).
Comparison and differences between groups on pre-post scores in secondary outcomes
There was a significant group × time interaction for pre- to post change for all the secondary outcomes: depressive symptoms (CDI: F[1,48] = 8.24, p = .006), negative affect (PANAS: F[1,48] = 6.64, p = .013), and body appreciation (BAS: F[1,56] = 9.07, p = .004), with BP participants showing greater reductions in depressive symptoms and negative affect and greater increases in body appreciation than assessment-only participants.
The observed power for each of the outcomes evaluated, taking into account the final sample size (n = 62), is presented in Table 3. Considering the average effect size (d = 0.74), the power of the study was 0.99.
Discussion
This trial evaluated the efficacy of a dissonance-based eating disorder prevention program among Latin-American adolescent females in Brazil. This study makes a novel contribution because it is important to determine whether interventions created in different cultural backgrounds are effective in different cultures.
Moreover, this is the first trial conducted in Brazil to evaluate an eating disorder prevention program with a strong evidence base, as has been shown by recent meta-analytic reviews (Le et al., 2017; Stice et al., 2007).
One explanation for why the BP is similarly effective across cultural backgrounds, including among the participants of this study, is that this intervention is participant-driven, which may make it naturally culturally adaptive. For example, the Socratic questions used in the sessions allowed Brazilian girls to describe and criticize the body ideal promoted in Brazil. Furthermore, research suggests that cultural pressure for thinness also influences body image in young women in Brazil and that the prevalence of eating disorders is similar to that noted in other countries (Fortes et al., 2013; Zordão et al., 2015).
The BP reduced body dissatisfaction, replicating effects from the large efficacy trial conducted in North America (Stice et al., 2006), as well as confirming the findings of recent meta-analytic reviews on the effects of dissonance-based interventions on body dissatisfaction (Le et al., 2017; Stice et al., 2007; Watson et al., 2016). Perez, Becker, and Ramirez (2010), using the BSQ as a measure of body dissatisfaction, also reported a reduction in this outcome among young women in a cognitive-dissonance intervention group. As body dissatisfaction is one of the most robust ED risk factors and is significantly more common than clinical ED, interventions that are able to reduce body dissatisfaction should be encouraged as part of public health efforts (Becker & Stice, 2017).
The BP also significantly reduced the influence of the media among the adolescent girls in the intervention group relative to the assessment-only controls, which corresponded to a large effect (d = .96). This may be considered one of the main outcomes when evaluating the efficacy of this program, since the reduction of thin-ideal internalization, also assessed by the SATAQ-3, was the main mediator of the effects of this intervention (Stice, Marti et al., 2011). Several efficacy trials have found that the BP produced greater reductions in thin-ideal internalization in adolescent girls and young women relative to assessment-only control conditions as well as relative to alternative interventions (Stice, Chase, Stormer, & Appel, 2001; Stice et al., 2000, 2006; Stice, Marti et al., 2008; Stice, Trost, & Chase, 2003). These results are confirmed in the present study. The hypothesis that the BP would reduce ED attitudes and symptoms was partially supported. Although the effects for eating disorder attitudes and behaviors (EAT-26) and eating disorder symptoms (EDDS) did not reach significance, the effect sizes were medium (d = .46 and .56, respectively) and the power for these outcomes was high (0.99). Also, the effect size for eating disorder symptoms is similar to or larger than the effect sizes found in other trials (e.g., Stice et al., 2006; Stice, Butryn, Rohde, Shaw, & Marti, 2013). A potential explanation for these limited effects is that participants in the intervention already had low mean disordered eating attitudes and symptoms at pretest. Divergent results were found by Becker et al. (2005) in the universal version of the BP: using the EAT-26 as a measure of disordered eating attitudes and behaviors, the results indicated the efficacy of the program on this outcome, with greater reductions in scores in the intervention group compared to controls.
McMillan, Stice, and Rohde (2011) did observe a significant reduction in ED symptoms at posttest, but this effect was not significant by the 3-month follow-up. The authors argued that self-reported measures may not be sensitive enough to optimally capture change in ED symptoms.
Additionally, the BP significantly reduced depressive symptoms and negative affect among participants (d = .80 and .75, respectively). Stice, Rohde, Durant, Shaw, and Wade (2013), using a CDI-like measure to evaluate this outcome, observed a reduction in depressive symptoms in the intervention group. In general, most trials have found that the BP reduces negative affect (e.g., Stice, Marti et al., 2008), though effect sizes are typically smaller for this outcome.
Further, the BP increased body appreciation, with girls who participated in the intervention having significantly higher scores than those in the assessment-only condition (d = .71). This result replicates the effects observed in a trial from the UK (Halliwell et al., 2015), in which body appreciation increased among the adolescent girls (14 and 15 years old) in the intervention group, with effects ranging from small to medium (d = 0.51). This finding is important because few eating disorder prevention trials have measured positive body image (Halliwell et al., 2015; Jankowski et al., 2017). Thus, it provides evidence that dissonance-based programs, in addition to reducing pathological aspects, also promote positive attitudes toward one's body.
The interpretation of effect sizes is especially useful when comparing to other effects in the literature (Lakens, 2013). In this sense, the average effect size for the group × time interaction was d = 0.74, which reflects a greater effect size than those observed in similar trials (Halliwell & Diedrichs, 2014;Stice et al., 2006). For instance, in the efficacy trial of the BP (Stice et al., 2006), the average effect size was d = 0.59.
It is important to highlight that most of the trials that have evaluated the efficacy of the BP are selective (directed to young women with body image concerns), and results from selective prevention trials may not generalize to other sampling frames, such as girls and young women without body image concerns (Stice, Marti, et al., 2011). However, the BP has also shown efficacy when implemented universally with young women who were not screened for body image concerns (Becker et al., 2005). Participants in this study were concerned with their body image. Indeed, adolescent girls are considered a high-risk group for body dissatisfaction and have demonstrated normative body dissatisfaction (Duarte, Ferreira, Trindade, & Pinto-Gouveia, 2016; Littleton, 2008), which justifies preventive efforts to reduce this outcome.
Despite these results, some limitations should be highlighted. First, we had a large dropout rate, which can be explained by the voluntary nature of participation. Despite this, post-hoc tests pointed to a power of 0.99, considering the final sample size (n = 62), indicating that this study was able to identify real effects. Second, the assessment-only control condition was not a rigorous comparison condition because it did not control for demand characteristics inherent to randomized trials; however, this seemed reasonable because the BP has significantly outperformed five alternative interventions in past trials, producing larger reductions in the outcomes. Third, we did not collect demographic data. However, it is important to highlight that body dissatisfaction and sociocultural influences have been observed across different socioeconomic levels and diverse demographic characteristics in Brazil (e.g., Laus et al., 2012). Fourth, we did not collect follow-up data. We argue that the pre- to posttest effects have been consistently reported in the literature. Also, it is difficult to maintain voluntary adherence in long-term follow-ups in the Brazilian context, since it is not permitted to reimburse participants for completing assessments. However, the fact that the BP has produced significant effects through 3-year follow-up in multiple trials mitigates this concern somewhat. Finally, we did not use a more conservative p value to reduce the odds of chance findings because we were worried about missing true effects due to our relatively small sample size. We believe that replication is the most critical test of whether effects are reliable, and the literature suggests that the BP does produce reliable effects across trials.
A review of the literature indicates that 59 out of the 62 tests of the intervention effects for the core outcomes (thin-ideal internalization, body dissatisfaction, dieting, negative affect, and eating disorder symptoms) from pretest to posttest were significant in the 11 trials that Stice and his colleagues have conducted before the present trial (95%) which is reassuring because one would have expected only 3.1 out of these 62 effects to have emerged by chance (5%). Further, 34 out of the 47 tests of the intervention effects for the core outcomes were significant in 11 trials of the BP conducted by independent teams (72%), which is likewise much higher than the 2.4 out of 47 effects (5%) that would be expected based on chance. Thus, it seems highly unlikely that the effects reported herein are chance findings.
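The arithmetic behind this chance-findings argument is straightforward binomial reasoning. The sketch below reproduces the expected chance counts and adds, as an extra check, a binomial tail probability for observing so many significant tests by chance; treating the tests as independent is our simplifying assumption, not a claim from the trial.

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

alpha = 0.05
print(62 * alpha)                 # ~3.1 significant tests expected by chance
print(47 * alpha)                 # ~2.35, i.e., roughly 2.4
print(binom_tail(62, 59, alpha))  # vanishingly small under independence
```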
Conclusions
In conclusion, the current findings support the usefulness of cognitive-dissonance-based programs in reducing risk factors related to body dissatisfaction and eating disorders, and suggest that the participant-driven nature of the group discussions makes the program naturally culturally adaptive. Advances such as the inclusion of measures of body appreciation are noteworthy. The novel evidence that the BP was efficacious for Latin-American adolescent females in Brazil extends evidence that this prevention program was similarly efficacious for various ethnic groups in North America and Europe (Halliwell et al., 2015; Rodriguez et al., 2008; Stice et al., 2014).
Results from the present study also provide directions for future research on the BP. It is fundamental to test the efficacy of the program with a larger sample in the Brazilian context and in other cultures, using both an assessment-only control group and an active alternative comparison intervention, with longer follow-ups. After confirming its efficacy, it will be important to conduct studies evaluating its effects under real-world conditions, to confirm the effectiveness of this program. Last, it will be vital to evaluate how best to implement this prevention program on a broad-scale basis, with the hope of reducing the incidence of eating disorders worldwide.
Effect of Rosemary (Rosmarinus officinalis L.) Leaves Extract on Quality Attributes of Chicken Powder Incorporated Fried Chicken Snacks
The present study was conducted to develop chicken meat powder (CMP) incorporated ready-to-eat shelf-stable fried chicken snacks and to evaluate the effect of rosemary leaves extract (RE) incorporation on the physico-chemical, microbiological, and sensory properties of the developed product during ambient storage for up to 60 days. Two groups were prepared: a control (without RE) and a second group treated with RE (3% level). Among the physico-chemical properties, results showed that RE incorporation had a highly significant (p<0.01) effect on thiobarbituric acid reactive substances (TBARs), free fatty acid (FFA) content, and tyrosine value. Similarly, for the microbiological parameters, the RE-treated product had a significantly (p<0.05) lower total plate count (TPC) and Staphylococcus count (SC) and a significantly (p<0.01) lower yeast and mold count than the control. RE incorporation also had a highly significant (p<0.01) effect on the sensory scores (texture, flavour, and overall acceptability, but not appearance) of the product during the storage period. Therefore, it is concluded that RE incorporation into fried chicken snacks improved the physico-chemical (TBARs, free fatty acid content, and tyrosine value), microbiological (total plate count, Staphylococcus count, and yeast and mold count), and sensory (flavour, texture, and overall acceptability) parameters of the chicken snacks during 60 days of storage.
Indian cooking and lifestyle have undergone tremendous changes in the last decades. In the rising era of fast and convenience foods, ready-to-eat products are widely popular. In the Indian context, culture, traditions, customs and taboos determine meat consumption to a great extent, particularly in rural societies (Devi et al., 2014). High mutton prices, the limited availability of fish outside coastal regions and a relatively low cost in comparison to other meats have all helped make poultry the preferred and most consumed meat in India.
The perishability of meat products is a serious problem, particularly in tropical countries like India, where household refrigeration is sparse. Given the energy demand of food preservation and the need to improve the safety and convenience of preserved foods, the development of shelf-stable products is highly desirable. There are many reports on the development of shelf-stable intermediate moisture (IM) products (Kanatt, 2006). Preparation of chicken meat powder is an efficient way to cope with the problem of perishability. Different meat products can be made using chicken meat powder; for example, a chicken-powder-incorporated idli mix has been prepared to improve the nutritional quality of the product (Bishnoi et al., 2015).
Cereal snacks are usually deficient in essential amino acids such as threonine, lysine and tryptophan (Jean et al., 1996), but incorporation of animal protein such as fish, pork, beef or chicken significantly enhances their nutritive value, especially with respect to amino acids, flavour and taste. Within the food matrix, meat-based snack foods are convenient, easy to carry, highly crispy, attractive, nutritionally sound and shelf-stable (Singh et al., 2013). A variety of meat snacks exist in the global market, such as jerky, popped pork rind, kilishi, meat biscuits, meat cookies, meat noodles, meat chips and meat sticks. Likewise, chicken snacks are deep-fried, gram-flour-based, chicken meat powder incorporated, shelf-stable ready-to-eat meat products.
The most common form of deterioration in dry meat products is oxidative rancidity, which leads to extensive flavour changes and structural damage to proteins, causing a loss of freshness that discourages repeat purchases by consumers. The most effective approach to avoid oxidative deterioration of meat products is to integrate antioxidants into formulations. Antioxidants, whether synthetic or natural, have become an indispensable group of food additives, mainly because of their distinctive property of enhancing the shelf-life of food products without any harm to sensory or nutritional qualities (Nanditha and Prabhasankar, 2008). In industrial processing, mainly synthetic antioxidants such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are used to prolong the storage stability of meat products. However, increasing concerns over the safety of synthetic food additives have resulted in a trend toward natural products. Natural antioxidants extracted from herbs and spices demonstrate a variety of efficacies when used in different food applications (Bowser et al., 2014). Plants are a rich source of valuable bioactive substances (Tayel and El-Tras, 2012), and thus different plant products are being evaluated as natural antioxidants to protect and enhance the overall quality of meat and meat products. These natural antioxidants, in the form of extracts obtained from different plant sources, have been investigated for their ability to decrease lipid oxidation (Huang et al., 2011; Wojciak et al., 2011; Das et al., 2012) and thus provide a good alternative to synthetic antioxidants. Among natural antioxidant sources, rosemary (Rosmarinus officinalis L.) is a highly potent, shrubby herb with a unique aromatic odour. The antioxidant activity of rosemary is likely due to its high content of phenolic compounds, such as carnosic acid, rosmarinic acid, carnosol, rosmanol, rosmariquinone and rosmaridiphenol (Riznar et al., 2006).
There is ample information on the use of rosemary extract as a dietary antioxidant in different food and feed formulations as well as in different meat products, such as chicken frankfurters (Riznar et al., 2006), fermented lamb meat sausage (Bowser et al., 2014), chicken nuggets (Teruel et al., 2015), lamb patties (Baker et al., 2013), turkey sausage (Jridi et al., 2015), beef burgers (Georgantelis et al., 2007), pork sausage (Sebranek et al., 2005) and chicken meat patties (Al-Hijazeen and Al-Rawashdeh, 2019). However, reports on the use of rosemary extract in shelf-stable meat products are very scanty. Therefore, the present study was undertaken to develop a ready-to-eat, shelf-stable meat product and to explore the effect of rosemary leaves extract in chicken meat powder incorporated shelf-stable fried chicken snacks.
Extract preparation
The rosemary leaves were oven-dried at 50 °C for 12 h, then ground and sieved. Pre-weighed powdered leaves were extracted with 70% ethanol for 24 h at 40 °C. The extract was collected and concentrated under reduced pressure in a rotary vacuum evaporator (Labconco Corporation, USA) to a semi-solid consistency. The semi-solid mass was oven-dried at 50 °C overnight to obtain the dried extract. The extract was reconstituted with the same solvent as used for extraction to obtain a 5% solution and stored at 4 °C.
Formulation of fried chicken snacks
Chicken meat powder (CMP) was prepared by mincing spent hen meat in a meat mincer (Nova Pvt. Ltd.). The minced meat was pressure-cooked for 15 min, air-dried at 80 °C for 9 h and pulverized in a mixer (Maharaja Whiteline, India). The dough was prepared by mixing gram flour, spice mix, table salt and rosemary extract (RE) with CMP. Subsequently, chicken broth was added to the mix, at approximately 40 percent of the formulation, to make the dough, which was kept for 10-15 min for conditioning. The dough was then filled into a vermicelli-maker machine for shaping of the product and deep-fried at 190 °C for 45 s to prepare the fried chicken snacks. Since RE incorporation did not produce any significant effect on the sensory properties of the product, the product with the highest (3%) level of RE was selected for further analysis. For the storage study, RE-treated and control samples of fried chicken snacks were stored at ambient temperature in aluminium/polyethylene laminate bags and subjected to physico-chemical, microbiological and sensory evaluation at 15-day intervals for up to 60 days.
Analytical methods
Samples from both groups were taken in triplicate for physico-chemical, microbiological and sensory analyses.
Physico-chemical analysis
The thiobarbituric acid reactive substances (TBARS), free fatty acid (FFA) content and tyrosine value of the CMP-incorporated fried chicken snacks were analyzed by the methods of Witte et al. (1970), Koniecko (1979) and Strange et al. (1977), respectively.
Microbiological analysis
Total plate count (TPC), Staphylococcus count (SC), coliform count and yeast and mold count in the samples were determined following the methods described by APHA (1984).
Sensory analysis
The sensory properties (appearance, texture, flavour and overall acceptability) of both the control and RE-treated products stored at ambient temperature were evaluated on a 9-point hedonic scale according to the method of Wichchukit and O'Mahony (2014) by panelists consisting of faculty members and postgraduate students of the department, at 15-day intervals for up to 60 days.
STATISTICAL ANALYSIS
The experiments were replicated three times and the data obtained were analyzed using a statistical software package (SPSS 16.0) following the procedures of Snedecor and Cochran (1994). P-values below 0.05 and 0.01 were considered statistically significant and highly significant, respectively. The data from the storage study were subjected to analysis of variance (ANOVA).
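The significance testing described above (the study itself used SPSS) can be sketched in Python with SciPy. The TBARS triplicates below are hypothetical illustrative values, not the study's raw data; a one-way ANOVA on two groups at a single storage point is shown only to make the significance thresholds concrete.

```python
from scipy.stats import f_oneway

# Hypothetical TBARS triplicates (mg MDA/kg) at one storage point --
# illustrative values only, not taken from the paper's tables.
control = [0.81, 0.83, 0.82]
treated = [0.72, 0.74, 0.73]

# One-way ANOVA comparing the two groups
f_stat, p_value = f_oneway(control, treated)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")

if p_value < 0.01:
    print("highly significant (p < 0.01)")
elif p_value < 0.05:
    print("significant (p < 0.05)")
```

A full replication of the study's analysis would be a two-factor ANOVA (treatment × storage day) over all 15-day sampling points.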
Physico-chemical parameter
The mean values of the physico-chemical parameters of the control and treatment groups are presented in Table 1. RE incorporation significantly (p<0.01) reduced the TBARS number of the treated product compared to the control during the entire storage period. The increase in TBARS value was much slower in the RE-treated product, which remained lowest (0.73 mg malonaldehyde (MDA) kg⁻¹ sample) at 60 days compared with the control (0.82 mg MDA kg⁻¹ sample). Rosemary extract contains phenolic antioxidants which react with lipid or hydroxyl radicals and convert them into stable products (Trindade et al., 2007). This result is similar to the studies of Baker et al. (2013) and Al-Hijazeen and Al-Rawashdeh (2019) on Karadi lamb patties and chicken meat patties, respectively, treated with rosemary extract. Sebranek et al. (2005) reported that 1000 mg/kg of RE was as effective as BHA/BHT on TBARS values in precooked-frozen sausage. However, a consistent increase in TBARS value was observed in both groups during the storage period. This might be due to auto-oxidation of lipids over time and an increased microbial population. Similar results have been found by other workers (Modi et al., 2004; Talukder et al., 2016) in chicken nuggets and mutton snacks, respectively.
Rosemary extract had a highly significant (p<0.01) effect on the FFA content of the product. The initial FFA value was 0.14 (% oleic acid) for both groups, but at the end of the storage period (Table 1) the RE-treated sample (0.27) had a significantly lower FFA value than the control (0.31). Similar significant observations were reported by Kenar et al. (2010) and Ucak et al. (2011) in mackerel fish burgers and sardine fillets, respectively. Guran et al. (2015) also reported similar results in fish patties treated with rosemary extract. Although a consistent increase in FFA value was observed in both groups during the storage period, the RE-treated group showed a slower increase than the control. This may be because natural antioxidants prevent lipid oxidation (Indumathi and Reddy, 2015). The findings are in accordance with Kashyap et al. (2012) in chicken meat patties incorporated with natural antioxidants. Similarly, Indumathi and Reddy (2015) reported lower FFA values than the control in chicken meat nuggets treated with green tea, guava leaves and curry leaves during storage. An increase in FFA value with storage time was also observed by Modi et al. (2007) and Idowu et al. (2010) in kebab mix and kilishi, respectively.
The results of the study revealed that RE had a highly significant (p<0.01) effect on the tyrosine value of the product. The initial tyrosine values of the two groups were 2.11 and 1.98 mg/100 g and reached 7.65 and 7.03 mg/100 g of sample for the control and RE-treated groups, respectively (Table 1). The increase in tyrosine value during storage might be due to an increase in microbial load and enhanced production of proteolytic enzymes in the late logarithmic phase of microbial growth causing autolysis (Thomas et al., 2010). Throughout the storage period, the RE-treated group had a significantly lower tyrosine value than the control.
The lower values of the treated sample are attributed to the antioxidant activity of RE. A similar result was observed by Khare et al. (2016) in chicken cut-up parts treated with natural antioxidants.
Microbiological analysis
The microbial attributes of the control and RE-treated products are presented in Table 2. The results indicated that RE had a significant (p<0.05) effect on the TPC and SC of the products over the entire storage period. The initial TPC of the product was 2.24 log cfu g⁻¹ for both groups and reached 3.74 and 3.13 log cfu g⁻¹ at the end of the study for the control and RE-treated products, respectively. Riznar et al. (2006) reported similar findings in chicken frankfurters.
Similarly, the initial SC of the product was 1.55 log cfu g⁻¹ for both groups and reached 2.51 and 2.33 log cfu g⁻¹ for the control and RE-treated products, respectively, at the end of the storage period. Although TPC and SC increased with storage, the increase was slower in the RE-treated product than in the control. The antimicrobial activity of rosemary might be due to carnosic acid, a major bioactive compound of the rosemary extract (Tavassoli and Djomeh, 2011). Coliforms were not detected in either group during the storage period, which might be due to the absence of post-processing contamination during handling of the product.
Yeasts and molds were not detected on days 0 and 15 in either group, but appeared on the 30th day of storage in both the control and RE-treated products. RE incorporation had a significant (p<0.01) effect on the yeast and mold count of the product. At the end of the storage period the RE-treated product (1.83 log cfu g⁻¹) had a significantly lower yeast and mold count than the control (2.07 log cfu g⁻¹) (Table 2). Similar results were reported by Singburaudom (2015) for plant pathogenic fungi. The lower yeast and mold count in the RE-treated product might be due to the high phenolic content of the rosemary extract (Moghtader et al., 2011).
Sensory evaluation
Data relating to the various sensory attributes of the control and RE-treated products are presented in Table 3. Among the sensory parameters, except appearance, RE incorporation had a significant (p<0.01) effect on the flavour, texture and overall acceptability scores of the product. Although a consistent decrease in sensory scores was observed in both groups, the RE-treated product received comparatively higher sensory scores than the control. The flavour score of the RE-treated group on the 60th day (6.08) was higher than that of the control on the 45th day (5.98). The decrease in overall acceptability during storage is due to increased lipid oxidation and degradation of proteins; the decline in overall acceptability scores indirectly reflects the scores for flavour, appearance, texture and other sensory attributes (Singh et al., 2011). The progressive decrease in sensory scores can be correlated with succeeding storage days, which favour oxidative rancidity, thereby worsening the physico-chemical and microbiological parameters and leading to lower sensory scores. Mishra et al. (2015) also reported similar results in meat rings. In general, the RE-treated product was highly favoured by panelists because of its desirable sensory scores (flavour, texture and overall acceptability).
CONCLUSION
From this study it can be concluded that a ready-to-eat, shelf-stable meat product can be made by incorporating chicken meat powder into gram flour. Rosemary leaves extract incorporation improved the physico-chemical (TBARS, free fatty acid content and tyrosine value), microbiological (total plate count, Staphylococcus count and yeast and mold count) and sensory (flavour, texture and overall acceptability) attributes of the fried chicken snacks.
New Insights into X-ray Binaries
X-ray binaries are excellent laboratories to study collapsed objects. Transient X-ray binaries contain the best examples of stellar-mass black holes, while persistent X-ray binaries mostly harbour accreting neutron stars. The determination of stellar masses in persistent X-ray binaries is usually hampered by the overwhelming luminosity of the X-ray heated accretion disc. However, the discovery of high-excitation emission lines from the irradiated companion star has opened new routes in the study of compact objects. This paper presents novel techniques which exploit these irradiated lines and summarises the dynamical masses obtained for the two populations of collapsed stars: neutron stars and black holes.
Introduction
X-ray binaries (XRBs hereafter) are interacting binaries where X-rays arise from the accretion of matter onto a neutron star (NS) or a black hole (BH). Accretion processes are found in other astrophysical environments, such as cataclysmic variables (i.e. interacting binaries with accreting white dwarfs), T Tauri stars, protoplanetary discs, etc., but the unique property of XRBs is the presence of central compact objects that are the remnants of collapsed, massive stars. Therefore, they provide the best laboratories to study their properties in detail, such as masses, spin or the NS equation of state. This paper is not meant to give a thorough review of XRBs; instead it focuses on three selected topics with implications for our knowledge of the mass spectrum of collapsed stars: (1) evidence for BHs in XRBs, with a summary of dynamical masses; (2) NS mass constraints in persistent XRBs from the Bowen fluorescence lines; and (3) echo tomography of XRBs. Excellent reviews on other aspects of XRBs can be found in [9] and [24].
Black Holes in X-ray Binaries
The mass distribution of Black Holes (BHs hereafter) has a strong impact on several areas of astrophysics, in particular SNe models, the evolution of massive stars, the chemical enrichment of the Galaxy, jet formation, etc. Stellar evolution theories predict ∼10⁹ BH remnants in the Galaxy [2], but only BHs in interacting binaries can be easily detected, through the X-ray radiation triggered by accretion. This is the reason why the history of BH discoveries has run in parallel with the development of X-ray astronomy. In particular, Soft X-ray transients (SXTs hereafter) provide the best systems to find stellar-mass BHs, since ∼75% of these transients likely harbour a BH. SXTs are a subclass of X-ray binaries with low-mass donor stars (typically of K-M spectral types) which exhibit episodic outbursts due to mass transfer instabilities in the accretion disc [23]. During outburst, the X-ray luminosity increases by factors of up to 10⁶-10⁷ and, therefore, they are easily spotted by X-ray satellites. Unfortunately, the companion star is overwhelmed by the X-ray heated disc at all wavelengths, precluding its detection. However, after a few months of activity, the X-rays switch off, the reprocessed light drops several magnitudes into quiescence and the companion starts to dominate the optical flux. This provides a special opportunity to detect it spectroscopically, perform dynamical studies and derive stellar masses. Figure 1 shows the cumulative histogram of BH SXTs discovered since 1966, when Cen X-2 was first detected during a rocket flight. A linear increase at a rate of ∼1.7 yr⁻¹ is apparent since the late 80's, when a new fleet of X-ray satellites with higher sensitivity and All-Sky-Monitor capabilities became operative.
The best evidence for the presence of BHs is dynamical, i.e. a compact object whose mass exceeds the maximum allowed for stable neutron stars, ∼3 M⊙ [36]. And this is relatively easy to prove through the observed radial velocity curve of the companion star during quiescence. The orbital period P_orb and velocity semiamplitude K_C combine in the standard mass function equation f(M) = P_orb K_C³ / (2πG) = M_X sin³i / (1+q)², which relates the mass of the compact object M_X and the companion star M_C through the orbital inclination i and mass ratio q = M_C/M_X. It is easy to show that f(M) yields a secure lower limit to the BH mass M_X. This experiment typically requires resolving powers ≥1500 and can be performed with targets brighter than R ∼23 using current instruments on 10m-class telescopes.
But getting actual BH masses rather than lower limits is not so straightforward. The reason is that, by their very nature, BHs do not burst nor pulse, and hence one cannot trace their orbital motion. We are facing a single-lined spectroscopic binary where the extra observables (q and i) must be extracted from the optical star. This can be accomplished with two further experiments:
• Resolving the rotational broadening of the donor's photospheric lines. Because the donor star fills its Roche lobe and is tidally synchronized, the rotational broadening V sin i scales with K_C as a function of q [42]. Therefore, by measuring V sin i one can determine q directly.
• Fitting synthetic models to the ellipsoidal modulation. The changing visibility of the tidally distorted companion star generates a double-humped light curve (the so-called ellipsoidal modulation) whose amplitude is a strong function of i and q. For the extreme mass ratios q < 0.2 typical of BH binaries, the shape of the light curve is weakly sensitive to q and hence i can be easily determined [38].
By combining the mass function with constraints on q and i one gets a full dynamical solution and hence the BH mass with minimum assumptions. Further details on this prescription and possible systematics involved can be found in several reviews such as [3].
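The arithmetic of this dynamical solution can be sketched in a few lines. The orbital parameters below are hypothetical, chosen only to resemble a typical short-period BH SXT; they are not the published values of any particular system.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def mass_function(P_orb_days, K_kms):
    """f(M) = P_orb K_C^3 / (2 pi G): a hard lower limit on the
    compact-object mass, returned in solar masses."""
    P = P_orb_days * 86400.0
    K = K_kms * 1e3
    return P * K**3 / (2 * math.pi * G) / M_SUN

def compact_mass(f_M, q, incl_deg):
    """Invert f(M) = M_X sin^3 i / (1 + q)^2 for M_X (solar masses)."""
    return f_M * (1 + q) ** 2 / math.sin(math.radians(incl_deg)) ** 3

# Hypothetical quiescent SXT: P = 0.30 d, K_C = 500 km/s, q = 0.1, i = 60 deg
f_M = mass_function(0.30, 500.0)
M_X = compact_mass(f_M, q=0.1, incl_deg=60.0)
print(f"f(M) = {f_M:.1f} Msun  ->  M_X = {M_X:.1f} Msun")
# Here f(M) alone already exceeds ~3 Msun, so even before constraining
# q and i the compact object must be a black hole.
```

Note how f(M), which needs only the radial velocity curve, is the robust quantity; q and i only scale it upward.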
BHs have also been found in a handful of High Mass X-ray Binaries (HMXBs hereafter), i.e. XRBs with early-type massive donor stars. However, here we find several limitations which complicate the analysis. A key factor is M_C, which for a HMXB is large and has a wide range of uncertainty. The optical star is likely to be undermassive for its spectral type as a result of mass transfer and binary evolution [35]. Furthermore, mass transfer usually proceeds through winds rather than Roche lobe overflow, and this has a two-fold effect. On one side, wind emission can contaminate the radial velocities of the donor star. On the other, since the optical star does not fill its Roche lobe, q and i values derived through V sin i and ellipsoidal fits may be overestimated. These caveats can only be side-stepped in eclipsing binaries such as X-7 in M33, which so far is the only case where this has been possible. In addition to the eclipse duration, the distance provides an extra restriction which leads to tight constraints in the parameter space. In particular, the radius of the donor, the Roche lobe filling factor and the inclination are accurately determined and yield a very precise BH mass [33]. Table 1 presents an updated list of confirmed BHs based on dynamical arguments, with their best mass estimates, compiled after [9] and [32]. (Notes to Table 1: a. New photometric period of 30.8±0.2 days reported by [30]. b. Updated after [34]. c. Updated after [7]. d. Updated after [27]. e. Period is uncertain; see [43].) We currently have 21 BHs, with orbital periods between 33.5 days and 4.1 hours. The great majority are SXTs (17) while 4 are persistent HMXBs: Cyg X-1 plus the 3 extragalactic binaries LMC X-1, LMC X-3 and M33 X-7. The case of GX 339-4 deserves special mention because it is the only SXT where the presence of a BH was proven during the outburst phase. This was possible thanks to the detection of fluorescent lines arising from the irradiated companion (see Sect. 3).
GRS 1915+105 is also noteworthy, not only because of its long orbital period and large mass function but also because IR spectroscopy was essential to overcome the >25 magnitudes of optical extinction and reveal the radial velocity curve of the companion star [20]. However, it should be noted that recent photometry reports a slightly shorter orbital period and evidence for irradiated light curves [30]. The combination of these two effects will likely decrease the mass function and BH mass. XTE J1859+226 also needs revisiting because its orbital period is uncertain [43]. In summary, we have 16 BH masses ranging between 4 and 16 M⊙ with ∼5-30% errors. These can be compared with theoretical distributions of stellar remnants such as [19]. The model includes binary interaction under Case C mass transfer (i.e. Common Envelope evolution after core helium ignition), wind mass loss in the Wolf-Rayet phase and SN Ib explosion. The computation predicts a continuous distribution of remnants with a mass cut at 12 M⊙, which is difficult to reconcile with some of the observed masses. However, the model entails many theoretical uncertainties which dominate the final mass spectrum, such as the Common Envelope efficiency, the wind mass-loss rate or the progenitor's mass cut. Clearly, more SXT discoveries and lower uncertainties in BH masses are required before these issues can be addressed and the form of the distribution used to constrain BH formation models and XRB evolution. In addition to dynamical BHs, there are 27 other SXTs with similar X-ray spectral and timing properties during outburst¹. Unfortunately, these BH candidates become too faint in quiescence for dynamical studies or even lack accurate astrometry. This is illustrated in Fig. 2, which shows the magnitude distribution of the 44 currently known BH transients. Dynamical studies are only possible with the current largest telescopes for sources brighter than R ≤ 23.
Not shown in the figure is the heavily reddened GRS 1915+105, which was studied in the NIR. The figure depicts the bright tail of a dormant population of galactic BH SXTs, which several works have estimated at a few thousand systems ([37] and included references). Improving the statistics of dynamical BHs requires not only a new generation of ELT telescopes to tackle fainter targets but also new strategies aimed at unveiling new "hibernating" SXTs before they go into outburst. Quiescent BH SXTs typically have EW(Hα) ∼ 20-50 Å and hence they should show up in deep Hα surveys such as IPHAS [17]. However, clever diagnostics need to be defined to weed out other populations of Hα emitters such as cataclysmic variables or T Tauri stars (see Corral-Santana et al., these proceedings).
The Bowen Project
Aside from transient XRBs, there are ∼150 persistent XRBs in the Galaxy, the great majority hosting neutron stars (NS hereafter) accreting at the Eddington limit. They are considered the progenitors of Binary Millisecond Pulsars (BMPs hereafter) because it is the sustained accretion during their long active lives that spins the NS up to millisecond periods. The discovery of millisecond pulses in 8 transient XRBs and coherent oscillations during X-ray bursts in 13 persistent XRBs gave strong support to this "recycled pulsar" scenario. Burst oscillations were detected in addition to persistent pulses, with identical frequencies, in the transient XRBs SAX J1808-3658 [10] and XTE J1814-338 [40]. This confirmed that burst oscillations are indeed modulated with the spin of the NS. The interest of these discoveries lies in the fact that one can use the orbital Doppler shift of pulses/oscillations to trace the NS orbit and obtain the X-ray mass function.
Optical emission in persistent XRBs is triggered by the reprocessing of the intense X-ray radiation at different binary sites, mainly the accretion disc. The companion star is ∼1000 times fainter than the irradiated disc at optical-IR wavelengths and hence completely undetectable. This has systematically plagued attempts to determine system parameters and, in most cases, only the orbital period is known. Fortunately, there are methods which can exploit the effects of irradiation and X-ray variability. New prospects were opened by the discovery of sharp high-excitation emission lines arising from the irradiated face of the companion star in Sco X-1 [39]. The most prominent are found in the core of the Bowen feature, a blend of CIII/NIII lines which are mainly powered by fluorescence. These lines trace the motion of the companion star and provided the first dynamical information on this prototypical LMXB (see Fig. 3). Since then, sharp Bowen lines from companion stars have been discovered in 7 other persistent LMXBs and in 4 transients during outburst: Aql X-1, GX 339-4 and the BMPs XTE J1814-338 and SAX J1808.4-3658. These transient studies beautifully demonstrate the power of this technique in systems which otherwise cannot be studied in quiescence, either because they are too faint (the case of GX 339-4 and the BMPs) or because they are contaminated by a bright interloper (Aql X-1). In particular, the case of GX 339-4 is remarkable because the Bowen study provides the first solid evidence for the presence of a BH in this classic transient.
The radial velocity curves of the Bowen lines are biased because they arise from the irradiated face of the star instead of its center of mass. Therefore, a K-correction needs to be applied in order to obtain the true velocity semi-amplitude K_C from the observed velocity K_em. The K-correction parametrizes the displacement of the center of light with respect to the donor's center of mass through the mass ratio and the disc flaring angle α. The latter dictates the size of the disc shadow projected over the irradiated donor [25]. Extra information on q and α is thus required to recover the real K_C. Furthermore, useful limits to the NS mass can be set if the binary inclination is well constrained through eclipses. Table 2 summarises the NS masses obtained through the Bowen technique during several campaigns at the WHT, AAT and VLT. The list of persistent systems is almost a complete sample of Galactic LMXBs brighter than B ≃ 19. In the cases of Aql X-1 and X1822-371 the evidence for NSs more massive than canonical is very persuasive. The latter is a particularly favourable binary because it is eclipsing and the NS is a pulsar, so its radial velocity curve is known through the study of orbital pulse delays. Good constraints on the NS velocity are also available for V801 Ara through the detection of pulse oscillations during a superburst [6]. Tight limits to the inclination and mass ratio are also available for the eclipsing EXO 0748-676 [29] and the dipper GR Mus [1]. In the remaining cases the NS mass is not well constrained due to large uncertainties in the inclination and/or mass ratio. However, it is important to stress that these are the first dynamical constraints in persistent LMXBs since their discovery 40 years ago. Other techniques (such as echo tomography) need to be exploited to further refine these limits and derive more accurate NS masses. Previous reviews presenting results of the Bowen project can be found in [5] and [14].
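The geometric sense of the K-correction can be sketched with a deliberately simplified model. This is not the published parametrization of [25]: here the emission centroid is assumed to sit a fixed fraction of the Roche-lobe radius from the donor's center toward L1 (a stand-in for the disc-shadow dependence on α), with the Roche lobe given by Eggleton's approximation. All numerical values are illustrative.

```python
import math

def roche_lobe_radius(q):
    """Eggleton (1983) volume-equivalent Roche-lobe radius R_L / a,
    with q = M_donor / M_accretor."""
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q13**2 / (0.6 * q13**2 + math.log(1.0 + q13))

def k_correction(K_em_kms, q, disp_frac):
    """Convert the observed K_em (irradiated face) to the donor's
    center-of-mass K_C.  disp_frac is the ASSUMED displacement of the
    emission centroid toward L1, as a fraction of R_L; in the real
    treatment this depends on the disc flaring angle alpha."""
    d_over_a = disp_frac * roche_lobe_radius(q)
    # Donor center of mass orbits at a_2 = a/(1+q); the emission site
    # orbits at a_2 - d, so K_em = K_C * (1 - d (1+q) / a).
    return K_em_kms / (1.0 - d_over_a * (1.0 + q))

# Hypothetical numbers: K_em = 250 km/s, q = 0.3, centroid at 0.5 R_L
K_C = k_correction(250.0, q=0.3, disp_frac=0.5)
print(f"K_C ≈ {K_C:.0f} km/s (always >= K_em)")
```

The key point survives the simplification: K_em always underestimates K_C, and the size of the correction grows with q and with how deep into the Roche lobe the emission centroid sits.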
Echo Tomography
Echo tomography uses time delays between X-ray and UV/optical variability as a function of orbital phase to map the reprocessing sites in a binary [31]. The optical variability can be modelled as the convolution of the X-ray light curve with a transfer function which depends on the binary geometry. The transfer function encodes information on the most fundamental parameters, such as the binary inclination, stellar separation and mass ratio. In particular, the component associated with the companion star is most sensitive to these parameters, so detecting echoed emission from the donor offers the best opportunity to constrain them. There have been several attempts at detecting correlated optical and X-ray variability using white light or broad-band filters (e.g. [41], [22]). These works have detected delays which are mostly consistent with reprocessing in the outer disc, implying that the disc is the dominant source of continuum reprocessed light.
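The measurement at the core of echo tomography — recovering the lag of the reprocessed light curve — can be illustrated with a toy NumPy sketch. A single sharp lag stands in for the full transfer function, and the light curves are simulated white noise, not real data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 1000, 15           # number of time bins; injected delay in bins

xray = rng.normal(size=n)        # toy X-ray light curve
optical = np.zeros(n)            # "echo": the same signal delayed by true_lag
optical[true_lag:] = xray[:-true_lag]
optical += 0.3 * rng.normal(size=n)   # reprocessing-independent optical noise

# Full cross-correlation; output index (n - 1) corresponds to zero lag
ccf = np.correlate(optical - optical.mean(), xray - xray.mean(), "full")
recovered = ccf.argmax() - (n - 1)
print(f"recovered lag = {recovered} bins")
```

A real analysis fits the CCF peak (or the full transfer function) rather than taking a bare argmax, and repeats the measurement as a function of orbital phase.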
Exploiting emission-line reprocessing rather than broad-band photometry has two potential benefits: (a) it amplifies the response of the donor's contribution by suppressing most of the background continuum light (dominated by the disc); (b) since the emission-line reprocessing time is instantaneous, the response is sharper (i.e. only smeared by geometry). Through the Bowen project we know that high-energy radiation is very efficiently reprocessed in the donor's atmosphere into Bowen fluorescence lines. Therefore, we decided to search for optical echoes of X-ray variability using ULTRACAM [16] equipped with a special set of narrow-band filters, centered at the Bowen blend and a red continuum. The latter is essential to subtract the continuum light and hence amplify the reprocessed signal from the companion. During an RXTE/WHT campaign on Sco X-1, correlated variability was detected at phase ≃ 0.5, i.e. superior conjunction of the companion star, when the heated face presents its maximum visibility [26]. Time delays of 14-16 s are measured after the continuum light is subtracted from the Bowen light curves (see Fig. 4). [Fig. 4: Echo tomography experiment in Sco X-1. Left: large-amplitude X-ray variability and the correlated optical light curve observed at orbital phase 0.5, with Sco X-1 in the flaring branch state. Right: cross-correlation functions between the X-ray and optical light curves observed in the continuum (top), the Bowen+HeII window (middle) and Bowen+HeII after continuum subtraction (bottom). After [26].] These delays are consistent with the light travel time between the NS and the companion star and hence provide the first evidence of reprocessing in the companion of Sco X-1. However, one needs to detect several optical echoes as a function of orbital phase in order to constrain i and q and derive masses.
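An order-of-magnitude check shows why delays of this size point at the companion. The system parameters below (total mass ∼1.9 M⊙, i ≈ 44°) are assumptions for illustration, not values from this paper, and the companion is treated as a point at the center-of-mass separation, which slightly overestimates the delay since reprocessing happens on the hemisphere facing the NS.

```python
import math

G, C_LIGHT, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def separation(P_days, M_total_msun):
    """Kepler's third law: a = (G M P^2 / 4 pi^2)^(1/3), in metres."""
    P = P_days * 86400.0
    return (G * M_total_msun * M_SUN * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

def echo_delay(P_days, M_total_msun, incl_deg, phase):
    """Light-travel delay of a companion echo, with phase 0.5 taken as
    superior conjunction of the companion (point-source sketch)."""
    a = separation(P_days, M_total_msun)
    i = math.radians(incl_deg)
    return a / C_LIGHT * (1.0 - math.sin(i) * math.cos(2 * math.pi * phase))

# Assumed Sco X-1-like numbers: P = 0.787 d, M_total ~ 1.9 Msun, i ~ 44 deg
a_over_c = separation(0.787, 1.9) / C_LIGHT
tau = echo_delay(0.787, 1.9, 44.0, phase=0.5)
print(f"a/c ≈ {a_over_c:.1f} s, delay at phase 0.5 ≈ {tau:.0f} s")
```

With a/c ≈ 10 s, the maximum echo delay (1 + sin i)·a/c lands in the mid-teens of seconds, of the same order as the measured 14-16 s.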
In a second campaign we observed the burster X1636-536 simultaneously with RXTE and VLT+ULTRACAM. Three X-ray bursts and their corresponding optical echoes were recorded, at orbital phases 0.55, 0.20 and 0.83; these are shown in the left panel of Fig. 5. The optical bursts clearly lag the X-ray bursts and are also smeared, indicating an extended reprocessing site. Delay times are in the range 2-3 s, showing little evidence for orbital variability. However, these delays drift when different amounts of continuum light (parametrised by the factor cf) are subtracted from the Bowen+HeII light. For cf ≃ 0.8-0.95 the three delays become consistent with reprocessing in the companion for M_NS = 1.4 M_⊙, q = 0.3, α = 12° and i = 36-60°, as derived through radial velocities of the Bowen lines [6]. This is illustrated in the right panel of Fig. 5. Note that, in particular, delays observed at phase ∼ 0.5 are especially sensitive to the inclination angle. The main difficulty which hinders us from constraining the inclination is the unknown amount of continuum subtraction. In principle, there must be an optimum cf factor which results in a perfect subtraction. However, this is not easy to find because the continuum filter is placed ∼1500 Å away from the Bowen lines due to the optical layout of ULTRACAM. New high-speed spectrophotometry devices such as ULTRASPEC will provide pure emission-line light curves for echo-mapping experiments. These are likely to yield accurate inclinations and, when combined with dynamical information from the Bowen lines and X-ray mass functions, the first accurate NS masses in persistent XRBs.
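The dependence of echo delays on orbital phase and inclination can be sketched with a simple point-companion model. The delay expression below is the standard first-order light-travel formula for a point-like reprocessor (ignoring the finite size of the donor and disc shadowing), and the Sco X-1-like period, total mass and inclination are illustrative assumptions rather than fitted values:

```python
import numpy as np

C = 2.998e8       # speed of light, m/s
G = 6.674e-11     # gravitational constant, SI
M_SUN = 1.989e30  # solar mass, kg

def separation(m_total_msun, period_s):
    """Orbital separation from Kepler's third law."""
    return (G * m_total_msun * M_SUN * period_s**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)

def echo_delay(phase, a, inc):
    """Light-travel delay (s) of an echo from a point-like companion at
    separation a (m) and inclination inc (rad). Phase 0.5 is superior
    conjunction of the companion, where the delay is maximal."""
    return (a / C) * (1.0 - np.sin(inc) * np.cos(2.0 * np.pi * phase))

# Illustrative Sco X-1-like numbers: P ~ 18.9 h, total mass ~ 1.9 Msun
a = separation(1.9, 18.9 * 3600.0)        # ~3e9 m, i.e. a/c ~ 10 light-seconds
tau_max = echo_delay(0.5, a, np.radians(40.0))
```

With these assumed parameters the phase-0.5 delay comes out at roughly 15-17 s, of the same order as the 14-16 s measured for Sco X-1, which is the consistency check underlying the interpretation above; the strong sin i dependence at phase 0.5 is also why those delays are the most sensitive to inclination.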
Conclusions
In the past 20 years the field of X-ray binaries has experienced significant progress with the discovery of 17 new BHs and 8 transient BMPs in LMXBs. Dynamical masses are available for 16 BHs but better statistics and improved errors are required before using the observed distribution to constrain XRB evolution and supernova models. Exploiting deep Hα surveys of the Galactic plane, such as IPHAS, may unveil a significant fraction of a large expected population of quiescent XRBs.
The discovery of fluorescence emission from the companion star has opened the door to deriving NS masses in persistent and new transient XRBs. This is possible thanks to: i) dynamical information from irradiated donors through high-resolution spectroscopy of the Bowen blend; ii) echo mapping of reprocessing sites through simultaneous Bowen-line/X-ray light curves. These techniques, together with results from burst oscillations and transient BMPs, will likely provide the first accurate NS masses in XRBs in the near future and perhaps confirm the existence of massive NS. Thanks to these new techniques, which have proven their worth, the future is bright, as new instruments and telescopes will allow us to enlarge our sample of BH and NS masses. High-speed and high-resolution instruments, such as OSIRIS at GTC, RSS at SALT and ULTRASPEC, will play a crucial role in this goal.
Frailty in Primary Care: Validation of the simplified Zulfiqar Frailty Scale (sZFS)
Introduction: Frailty scales are used very rarely by general practitioners because they are time consuming and not well adapted to current needs. We therefore designed, together with general practitioners, a new scale for the early and rapid detection of frailty syndrome, called the simplified Zulfiqar Frailty Scale (sZFS). Patients and methods: This scale was tested in two general medicine practices in Normandy (France) over a total of six months and compared to the Gerontopole Frailty Screening Tool (GFST). Only patients who were over 65 years old with an ADL ≥ 4/6 were included. Results: 107 patients were included across the general medicine practices, with an average age of 74 years. The sZFS questionnaire has a shorter administration time than the GFST questionnaire (p < 0.001). Its sensitivity is 93%, and its specificity is 58%. Its positive predictive value is 57%, and its negative predictive value is 93%. The area under the curve of the sZFS scale is 0.83 [0.76; 0.91] (95% CI). Conclusion: Our frailty screening scale is simple, relevant, and quick.
Introduction
Preventing dependency is a public health objective. Frailty can be used to predict the risk of dependency, falling, hospitalization, and death. General practitioners would be the best choice of health care professional for identifying frailty, but it is hard to do this in current practice with validated tools.
There is no consensus regarding frailty diagnostic criteria, and the measured prevalence of frailty depends on the tool used. In the European SHARE study, the prevalence of frailty varied from 6% to 43% depending on which of the eight tools was used [1]. These tools were validated in international cohort studies for diagnosing frailty, but appear difficult to use in general medical practice. Therefore, we have developed a tool for identifying frailty in general medicine in independent subjects over 65 years old that is intended to be quick and easy to use. It takes into account various factors related to frailty risk (social, cognitive, nutritional, falls, and iatrogenic). The Fried scale is widely known [2], but it is not routinely used for patient assessment. The Frailty Index (FI) of Cumulative Deficits (FI-CD) was proposed by Rockwood and Mitnitski. It is well validated and has a higher predictive ability for adverse clinical events than other frailty measurements in both hospital and community settings [3,4], but it has some limitations and is time consuming. There is also a Frailty Index derived from the Comprehensive Geriatric Assessment (CGA). It is used as a clinical standard for frailty assessment and has been found to be highly correlated with the FI-CD [5]. It is also time consuming.
The Gerontopole Frailty Screening Tool (GFST) consists of two parts: a questionnaire performed first, and the clinician's judgement of frailty status [6,7]. A limitation of this scale is that it does not provide specific guidance for clinicians regarding the identification of frailty. Moreover, most of the items are subjective.
We therefore developed a frailty screening tool for use in primary care, referred to as the Zulfiqar Frailty Scale (ZFS). This scale was tested in a general practitioner's office for six months in Plancoët, France. This first study was published in Medicines (MDPI) [8]. The difference from the original ZFS scale [8] is that the simplified scale has five questions, with only one social question instead of two. Indeed, the item "presence of caregivers" was not retained in the simplified scale because autonomy is already assessed with the ADL scale, on which a score ≥ 4/6 indicates that the included subject retains a degree of autonomy.
The main purpose of this second study was to evaluate the ability of the fast-acting "simplified Zulfiqar frailty scale" (sZFS) tool to detect frailty among a group of elderly patients who are monitored by a general practitioner, in comparison with The Gerontopole Frailty Screening Tool (GFST).
Study Type
Prospective and observational study conducted in two general practices in the Normandy region of France.
Patients were selected to participate in the study over a period of six months, between November 2017 and April 2018.
Study Population
Our study population was made up of patients aged 65 or older who were monitored by a general practitioner and had an ADL (Activities of Daily Living) score of 4/6 or higher. Patients who did not provide their verbal consent during the introductory phase of the study, were under 65 years of age, had an ADL score of less than 4/6, or lived in nursing homes were excluded from the study.
Characteristics of the Population
The data collected were gender, age, the Activities of Daily Living score (Katz ADL index [9]), the Instrumental Activities of Daily Living score (Lawton IADL index [10]), the medical comorbidities, the Charlson comorbidity index [11], and the weight.
Frailty Screening with the "simplified Zulfiqar Frailty Scale" (sZFS) Tool
The score was calculated from five indicators that measure the main functions of an elderly person [2,12-14], selected for their geriatric relevance as defined by the scientific literature. A point was assigned for each positive indicator (maximum score = 5).
Each item was selected based on its quick completion time and simplicity, so that prior training is not needed. The aim of our tool is to identify five elements considered to be significant according to the literature. See Table 1 for the questionnaire of the simplified ZFS tool.
These variables are significantly and independently associated with an increased risk of negative events in terms of morbidity and mortality [15,16]. See Table 1 for the description of the sZFS.
Each item, if present, accounts for one point (maximum score: 5). An elderly subject is considered "not frail" with a score of 0/5, "pre-frail" with a score of 1/5 or 2/5, and "frail" with a score ≥ 3/5.
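The scoring rule above can be written as a short helper. This is a minimal sketch of the thresholds exactly as stated; the function name is ours:

```python
def szfs_category(score: int) -> str:
    """Map an sZFS score (one point per positive item, maximum 5)
    to the frailty category defined by the scale."""
    if not 0 <= score <= 5:
        raise ValueError("sZFS score must lie between 0 and 5")
    if score == 0:
        return "not frail"
    if score <= 2:
        return "pre-frail"
    return "frail"  # score >= 3/5
```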
Frailty Screening with the GFST
The Gerontopole Frailty Screening Tool (GFST) comprises two parts: a questionnaire is performed first, followed by a clinician's judgement of frailty status [6,7,17].
Statistics
The sZFS score was assessed in terms of sensitivity, specificity, positive and negative predictive values, and the area under the ROC curve, using the GFST scale as the reference. A Pearson correlation matrix was used to evaluate discrepancies between the total scores and the items of each score. A paired two-sample t-test was used to compare the time it took to administer the two questionnaires. All the analyses were performed with R 3.6.1 software with an alpha risk set at 5%.
The study has been registered with the CNIL "National Commission on Informatics and Liberty".
Ethical considerations: Written consent was obtained from all included patients. The Internal Department Ethics Committee approved this paper for publication (N° 15-01-18).
Description of the Population
107 patients over 65 years old were included. No refusals were noted. The characteristics of the included population are detailed in Table 2. Table 3 presents the results by item of the GFST frailty score: 34.6% of the included population were considered frail according to the GFST scale. Table 4 presents the results by item of the sZFS frailty score: 60.7% of the included population were considered frail according to the sZFS scale.
Study of Sensitivity, Specificity, Positive Predictive Value and Negative Predictive Value
The sensitivity was 93%, while the specificity was 58%. The positive predictive value (PPV) was 57% and the negative predictive value (NPV) was 93%. See Table 6. Table 6. Contingency table - Zulfiqar frailty scale vs. GFST criteria, with "pre-frail" and "robust" patients making up the "non-frail" group.
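All four metrics follow directly from the 2x2 contingency table of sZFS against the GFST reference, which can be sketched as below. The cell counts are purely illustrative assumptions (the study's actual table is not reproduced in the text), so the computed values only approximate the reported 93%/58%/57%/93%:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table in which
    rows are the index test (sZFS) and columns the reference (GFST)."""
    return {
        "sensitivity": tp / (tp + fn),  # frail by GFST correctly flagged
        "specificity": tn / (tn + fp),  # non-frail by GFST correctly cleared
        "ppv": tp / (tp + fp),          # probability frail given sZFS-positive
        "npv": tn / (tn + fn),          # probability non-frail given sZFS-negative
    }

# Illustrative counts only (summing to n = 107), not the study's data
m = diagnostic_metrics(tp=34, fp=28, fn=3, tn=42)
```

A high NPV is the property that matters most for a screening tool: a negative sZFS should reliably rule frailty out, with positives then confirmed by fuller assessment.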
Scales Administration Time
The mean difference in administration of the scales is 9.5 s, CI [7.2; 11.8]. (See Ta 7) The mean time of administration of the sZFS questionnaire was statistically differe from the mean time of administration of the GFST questionnaire (p < 0.001). The sZ questionnaire has a shorter administration time than the GFST questionnaire.
Its use in primary care seems possible.
Discussion
The issue of screening for frailty among the elderly people is growing with the d mographic changes we are experiencing today and is set to increase in the coming ye and decades. One of the major roles in screening for frailty is played by the general pr titioner, who is at the crossroads of the latter, due to the frequent and repeated cont that he or she maintains with the elderly patient in his or her monitoring role, and t influence that he or she can have on the future of the patient in his or her role of coor nating care and management with the various other health, medical, paramedical, a social players. The psycho-medico-social reflection that comes from this frailty has giv rise in recent years to different scales or different screening scores, with the aim of prov ing optimum care for these elderly people, and particularly frail elderly people. Howev very few frailty scales are used by general practitioners as they are time consuming a are not well-adapted. We have therefore created this rapid screening scale, taking in account the clinical, psychological, and social dimension of the patient, trying to adap as well as possible to general medicine. This meant that it had to be simple, efficient, qu to implement, sensitive, and with a high negative predictive value.
Our first study published in Medicines MDPI [8] concerned only older subjects ov 75 years old. With this work, we decided to lower the age of inclusion to 65 years in ord
Scales Administration Time
The mean difference in administration time between the scales is 9.5 s, CI [7.2; 11.8] (see Table 7). The mean administration time of the sZFS questionnaire was statistically different from that of the GFST questionnaire (p < 0.001): the sZFS questionnaire has a shorter administration time than the GFST questionnaire.
Its use in primary care seems possible.
Discussion
The issue of screening for frailty among elderly people is growing with the demographic changes we are experiencing today and is set to increase in the coming years and decades. The general practitioner plays a major role in screening for frailty, owing to the frequent and repeated contact that he or she maintains with the elderly patient in a monitoring role, and to the influence that he or she can have on the patient's future when coordinating care and management with the other health, medical, paramedical, and social players. The psycho-medico-social reflection on frailty has given rise in recent years to various screening scales and scores, with the aim of providing optimum care for elderly people, and particularly frail elderly people. However, very few frailty scales are used by general practitioners, as they are time consuming and not well adapted. We have therefore created this rapid screening scale, taking into account the clinical, psychological, and social dimensions of the patient and trying to adapt it as well as possible to general medicine. This meant that it had to be simple, efficient, quick to implement, sensitive, and have a high negative predictive value.
Our first study, published in Medicines (MDPI) [8], concerned only subjects over 75 years old. With this work, we decided to lower the age of inclusion to 65 years in order to obtain a heterogeneity of frailty profiles, ranging from non-frail and pre-frail subjects, present in a not-insignificant proportion, to the frail subjects seen more frequently among the oldest patients; the proportions of frail subjects found in the two studies confirm this.
Our scale is intended to be very simple for general practitioners to administer. A screening tool must be simple and quick, with good sensitivity and a good negative predictive value, which is the case for our frailty screening scale. Our frailty scale has several advantages. Indeed, it does not require prior training of the medical staff, nor does it take long to administer, which matters because a medical consultation dedicated to elderly subjects is already quite long. In France, the usual duration of a consultation in general medicine is 15-16 min [18]. With our frailty detection scale, the time taken to complete the procedure is less than 2 min.
Unlike the Fried scale [2], our scale does not require any additional equipment such as a dynamometer for measuring isometric contraction. This is a real advantage in the context of large-scale screening. The advantage of our scale compared to the GFST (Gerontopole Frailty Screening Tool) [6,7,17], for example, also lies in the greater objectivity of the items selected. Indeed, the GFST scale contains more subjective questions, whereas our scale has the advantage of being more objective while remaining as simple to administer. In addition, we propose a rating, which guides the general practitioner.
Our goal was to create a rapid frailty screening scale that would be useful for general practitioners. The purpose of our scale is the early detection of frail elderly people, helping to delay the loss of autonomy. The value of systematic screening for frailty in the general practice requires large-scale prospective studies. Adapted physical activity, nutritional management, and diagnosis of underlying pathologies are the main axes of interventions.
We recognize weaknesses in our work, particularly the limited number of subjects included. In addition, we recognize a high rate of false positives. It would be useful to continue this work with a larger sample, across several general medicine practices, and to study the real agreement between our rapid frailty screening scale and a comprehensive geriatric assessment (CGA) performed by geriatricians.
Conclusions
To be validated, our scale must be tested further in other general practices by recruiting a wider range of participants. Furthermore, the reproducibility of the scale and its ability to predict potentially dangerous situations (morbidity-mortality, hospitalizations, and emergency room visits) must be tested on elderly patients, which will take place in the upcoming weeks and months. A study is underway in the Poitiers region, France, comparing our scale with the Fried scale in two general medicine offices, and another in the Champagne-Ardenne region in 12 general medicine offices, also comparing our scale with the Fried scale. These results will be communicated soon.
Funding: This research received no external funding.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the Internal Department of the University Hospital of Rouen, which approved this paper for publication (N° 15-01-18).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement:
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
The Four Fixation Points of the Axis: Technique and Case Report
ABSTRACT Background: Instrumentation of the axis can be accomplished through a variety of techniques including transarticular screw fixation, pars and pedicle screw fixation, translaminar screw fixation, and posterior wiring. We report on the evolution of the axial 4-screw technique. Methods: Retrospective case review. After exposure of posterior spinal elements, the medial and superior walls of the C2 pedicle were identified from within the spinal canal. A high-speed drill was then advanced under lateral fluoroscopy, which guided craniocaudal angulation. Medial angulation was based on anatomic landmarks and preoperative imaging. This was followed by placement of translaminar screws according to the technique described by Wright. When extending the construct into the subaxial spine or the occiput, lateral connectors are placed in translaminar screws, which are usually more offset. The rod is directly connected to the pedicle screws, which are usually more in alignment with the subaxial/occipital instrumentation. Results: Two male patients ages 56 and 58 underwent posterior instrumentation of the axis employing a combination of pedicle and laminar polyaxial screws. Indications included multilevel spinal cord compression and deformity in a patient with Down syndrome and cervical meningioma, respectively. Follow-up was 1 year and 5 years, respectively. Medical complications (N = 2) occurred in the patient with Down syndrome resulting in prolonged intubation with tracheostomy placement. Reduction was maintained in both patients at last follow-up. There were no neurologic, vascular, or instrumentation related complications.
Local anatomy will dictate the necessity and ability to place instrumentation and detailed preoperative planning is of paramount importance.
INTRODUCTION
Instrumentation of the axis can be accomplished through a variety of techniques, including transarticular screw fixation, pars and pedicle screw fixation, translaminar screw fixation, hook fixation, and posterior wiring. [1-5] Developments in instrumentation, image guidance/navigation, and surgical techniques have led to the placement of C2 pars, pedicle, and laminar screws. [3,5-7] In order to maximize the potential of the axis as a fixation point, we report on the evolution of our 4-screw technique consisting of 2 C2 translaminar screws and 2 C2 pedicle screws.
MATERIALS AND METHODS
This is a retrospective review of patients who had instrumentation at C2. All patients underwent pre-and postoperative cervical x-rays, magnetic resonance imaging, and thin-cut computed tomography (CT) scans with sagittal and coronal reconstructions. All patients had a minimum of 1 year of follow-up.
Surgical Technique
After application of Gardner-Wells tongs, the patient is positioned prone on a radiolucent Jackson table. A lateral x-ray is taken to assess cervical spine alignment. After exposure, the polyaxial pedicle screws are placed first. 5 This ensures that there will be no interference from the distal tips of the laminar screws. The medial and superior walls of the C2 pedicle are first identified from within the canal with a #1 Penfield dissector. A high-speed drill is then advanced under lateral fluoroscopy, which guides craniocaudal angulation. Medial angulation is based on anatomic landmarks and preoperative imaging. This is followed by placement of polyaxial translaminar screws according to the technique described by Wright. 8 It is recommended to leave all of the screw heads slightly proud to allow for maximal polyaxial rotation. When extending the construct into the subaxial spine, we use lateral connectors in the translaminar screws, which are usually more offset, and directly connect the rod to the pedicle screws, which are usually more in alignment with the subaxial instrumentation. 5 Finally, the construct obtained at the C2 level consists of a main rod passing through a C2 pedicle screw head, with additional fixation from a translaminar screw linked by a lateral connector connected perpendicularly to the main rod (in a similar manner to typical transverse connectors) (Figure 1).
Case 1
The patient was a 56-year-old male with Down syndrome, cervical spinal stenosis, and spinal cord compression and myelopathy. In addition to subluxation of C5 on C6, the patient had cervical deformity with multiple dysplastic vertebrae ( Figure 2). Preoperative imaging demonstrated good C2 pedicle, pars, and lamina ( Figure 3). Due to limited bony anchor points in the subaxial cervical spine (lateral mass hypoplasia) and a planned large suboccipital decompression limiting anchor point options in the skull, we decided to maximize the amount of bony anchor points in the axis with 4point fixation at the C2 level. He was placed in preoperative halo traction, which was followed by posterior cervical decompression at C5-C7 and instrumented (DePuy/Synthes, Raynham, Massachusetts) fusion from C2 to T2. Two pedicle and 2 translaminar screws were placed in the axis ( Figure 4). This was connected with the rest of the construct via lateral connectors ( Figure 5). The patient tolerated the procedure well. Postoperative CT scan demonstrated good hardware positioning (Figure
Case 2
The patient was a 60-year-old male who presented with neck pain and restricted cervical range of motion. Advanced imaging demonstrated a lesion in the C2 vertebral body with advanced bony destruction and fracture. Biopsy revealed meningioma. Due to changes in bone structure caused by the lesion and poor intraoperative purchase, we decided to augment the C2 pedicle screws with translaminar fixation. The patient underwent an instrumented occiput-to-C3 posterior spinal fusion via a unilateral construct consisting of a C2 pedicle screw and a translaminar screw. There was destruction of the left C2 pedicle; therefore, no pedicle screw was placed on that side. At 5-year follow-up, he had a solid posterior arthrodesis with no evidence of hardware failure (Figure 8).
DISCUSSION
Instrumentation of C2 can be accomplished via multiple possible constructs. 5 Both the C2 lamina and the pedicle have been shown to have sufficient dimensions to accept bone anchors. 9,10 This case report demonstrates the feasibility of safely placing 4 polyaxial screws into the axis. It is important to note that constructs with unilateral C2 pedicle and lamina fixation are an option in cases precluding bilateral fixation. We feel that this technique may reduce the chance of construct failure, especially in the presence of osteoporotic bone and poor lateral mass screw purchase.
The construct requires the use of lateral connectors. Although there is no rule regarding which C2 screw (pedicle vs lamina) will be best suited for a lateral connector, we usually find that the pedicle screws are in line with the distal lateral mass screws, while the laminar screws are more offset and require a cross connector. We prefer to leave the screws 2 to 4 mm proud to allow for screw head rotation as well as to close the distance between the tulip and the rod. The main issue is to place pedicular screws first to avoid interference from the distal tips of the laminar screws.
Although a specific biomechanical evaluation of 4-point C2 fixation has not been described, reports concerning stabilization of the operated spinal segment show the superiority of pedicular fixation over translaminar screw fixation, especially in lateral bending. 11 Evaluation of pullout strength demonstrated that pedicle screws provide the strongest fixation for both initial and salvage applications; however, in salvage applications, translaminar screws provide stronger fixation than pars screws. 12 The main theoretical advantage of the 4-point fixation technique is that adding translaminar screws to the pedicular fixation should increase both the strength and the endurance of the construct. Four distinct points of fixation at the same level, with screws placed at different angles and connected to the same rod, should provide high resistance to pullout forces.
The main disadvantage of placing 4 points of fixation at C2 is that it uses translaminar stabilization in the index surgery, whereas it is often used as a salvage fixation technique in case of revision surgery. Although we believe that 4 bony anchors at C2 may reduce instrument failures, in cases where revision surgery is necessary, other techniques, such as wiring, hook placement, or transarticular screws, are still possible. 5 The implantation technique is based on welldescribed standardized instrumentation methods and does not introduce any new complications; however, increasing the amount of bony work at any spinal level increases the risk proportionately, and complications typical for each of the techniques may occur. Possible technical problems with screw arrangement and connection of the implants may be avoided with careful preoperative planning.
The 4-point C2 fixation method is not a routine procedure and should be considered an option for selected cases rather than a standard method. C2 pedicle screws provide sufficient stabilization in most cases. However, in cases of structural defects and congenital malformations, combined with poor bone quality, strong and stable fixation is indicated. The decision to apply 4-point fixation points in C2 needs to be carefully chosen by the surgeon. It depends on many factors and is individual in each case; however, we feel that this is another option for surgeons to augment the stability of the construct. The cases we present in this report had posterior
CONCLUSION
The axis serves as a versatile anchor point and offers 4 potential points of fixation. Lateral connectors play a crucial role and allow for incorporation of the C2 screws into the rest of the construct. Local anatomy will dictate the necessity and the ability to place instrumentation, and detailed preoperative planning is of paramount importance. Indications for 4-point fixation are individual, and each case must be carefully evaluated.
Within-Host Dynamics of Multi-Species Infections: Facilitation, Competition and Virulence
Host individuals are often infected with more than one parasite species (parasites defined broadly, to include viruses and bacteria). Yet, research in infection biology is dominated by studies on single-parasite infections. A focus on single-parasite infections is justified if the interactions among parasites are additive; however, increasing evidence points to non-additive interactions being the norm. Here we review this evidence and theoretically explore the implications of non-additive interactions between co-infecting parasites. We use classic Lotka-Volterra two-species competition equations to investigate the within-host dynamical consequences of various mixes of competition and facilitation between a pair of co-infecting species. We then consider the implications of these dynamics for the virulence (damage to host) of co-infections and consequent evolution of parasite strategies of exploitation. We find that whereas one-way facilitation poses some increased virulence risk, reciprocal facilitation presents a qualitatively distinct destabilization of within-host dynamics and the greatest risk of severe disease.
Introduction
Parasitism is ubiquitous: all cellular organisms are potential hosts to damaging infectious agents, from viruses to worms. Parasites (organisms that live on or in a host and obtain their food from, or at the expense of, their host) are now recognized as dominant components of diverse biological communities, in terms of both diversity and even total biomass [1]. Given the incredible prevalence and diversity of parasites within host populations, it is unsurprising that host individuals are often found to be co-infected with multiple parasite species [2]. However, research into host-parasite interactions remains dominated by the study of single infections in isolation, with only occasional consideration of the mechanistic interactions between parasites and their ecological and evolutionary implications [3,4,5,6,7,8]. Pedersen and Fenton categorized a range of mechanisms that can cause parasite interactions, ranging from reciprocal competition (e.g. species A and species B compete for a shared resource, thus A inhibits the growth of B and vice versa) to reciprocal facilitation (e.g. species A and species B cross-feed on the by-products of their partner, thus A enhances the growth of B and vice versa) [3].
Studying multi-species infections is of particular biomedical importance as several infectious diseases are complicated by secondary or opportunistic infections, for example, HIV and associated infections (such as tuberculosis) [9,10], and Lyme disease and its associated tick-borne infections [11]. Besides impeding host recovery [12], co-infections can create confusion and delay in diagnosis and treatment.
Over the past few years there has been an increasing interest in studying multispecies co-infections [13,14]. A recent study in wild rodent populations demonstrated that host susceptibility to a microparasite infection was significantly affected by secondary infections [14]. Their results also highlighted the possibility of different microparasite species associations leading to different types of interactions (one-way or reciprocal, positive or negative). For example, while infection with the bacterium Anaplasma phagocytophilum decreased host susceptibility to Bartonella spp. infection, Anaplasma phagocytophilum increased susceptibility to cowpox virus. According to another study, host susceptibility to Streptococcus pneumoniae transmission and disease is increased by prior influenza infection [15]. The negative influence of co-infection on mortality was highlighted in a study on rainbow trout, showing that co-infection with an ectoparasite and a bacterial pathogen significantly decreased fish survival. Although mono-infection with the ectoparasite did not affect fish survival, it enhanced susceptibility to the bacterial pathogen [16].
Here we use basic ecological theory to investigate the within-host dynamical consequences of various mixes of competition and facilitation between a pair of co-infecting species. We then consider the implications of these dynamics for the virulence of co-infections and the consequent evolution of parasite strategies of exploitation. We find that whereas one-way facilitation poses some increased virulence risk, reciprocal facilitation presents a qualitatively distinct destabilization of within-host dynamics and the greatest risk of severe disease.
Materials and Methods
To describe the growth of two parasite species (A and B) within a single host, we begin with the classic Lotka-Volterra two-species competition equations [17]:

dA/dt = A (1 − A − x B),   dB/dt = B (1 − y A − z B).   (Model 1)
Here, A and B represent the densities of species A and B, respectively, scaled to the carrying capacity of A (for details on the rescaling of Model 1, see [17] and Text S1). The parameters x and y are interspecific competition coefficients, measuring the relative competitive (inhibitory) weight of an interspecific individual relative to a conspecific individual. Finally, z is a measure of intraspecific competition within the B population (implying that the carrying capacity of B is 1/z times that of A, where z > 0). These assumptions imply that single-species infections will tend to stable equilibrium densities (A* = 1 for species A, B* = 1/z for species B), i.e., we describe the dynamics of chronic infections [18]. This assumption of single-species chronicity can be viewed as a statement that on the timescale of superinfection (expected time from first to second infection), the initial infection dynamics are relatively stable. As infection dynamics become more acute, the incidence of superinfection will correspondingly decrease (in other words, the multiple-infection issues addressed in this paper are generally a property of parasites that are relatively chronic).
In the classic implementation of Model 1, all parameters are constrained to be positive (i.e., we have both intraspecific and interspecific competition). However, if we allow x and/or y to turn negative, we can consider the potential for reciprocal or one-way facilitation [17]. Specifically, if x < 0, parasite B will facilitate the growth of A, and if y < 0, parasite A will facilitate the growth of B.
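These sign conventions are easy to explore numerically. The sketch below assumes the fully rescaled form dA/dt = A(1 − A − xB), dB/dt = B(1 − yA − zB), which reproduces the single-species equilibria A* = 1 and B* = 1/z stated above; the function name and parameter values are ours, chosen for illustration.

```python
def simulate(x, y, z, A0=0.05, B0=0.05, t_max=400.0, dt=0.01):
    """Forward-Euler integration of the rescaled two-species model:
       dA/dt = A(1 - A - x*B),  dB/dt = B(1 - y*A - z*B).
    Negative x means B facilitates A; negative y means A facilitates B."""
    A, B = A0, B0
    for _ in range(int(t_max / dt)):
        A, B = (A + dt * A * (1.0 - A - x * B),
                B + dt * B * (1.0 - y * A - z * B))
    return A, B

z = 0.8                                   # B alone equilibrates at 1/z = 1.25
A1, B1 = simulate(x=0.3, y=0.4, z=z)      # mutual competition: B1 ≈ 0.88 < 1/z
A2, B2 = simulate(x=0.3, y=-0.4, z=z)     # A facilitates B:    B2 ≈ 1.52 > 1/z
```

With y < 0, species B settles above the density it could reach in a single-species infection, which is the signature of one-way facilitation.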
To link the within-host dynamics of A and B to virulence (additional mortality and/or morbidity), we assume that virulence V(t) is proportional to the densities of the two parasites [19], hence V(t) = a A(t) + b B(t).
Results and Discussion
To characterize the dynamics of Model 1, we first note that the system has 3 non-zero equilibria: either we find A alone (at A* = 1), B alone (at B* = 1/z) or A plus B coexistence (at A* = (z − x)/(z − xy) and B* = (1 − y)/(z − xy)). A stability analysis of these equilibria [17,20] reveals that when y > 1 (i.e., if A is strongly inhibitory to B) then A alone is stable. Similarly, if x > z (i.e., if B is strongly inhibitory to A), then B alone is stable. If both of these inequalities hold (and therefore interspecific competition dominates) then both A alone and B alone are locally stable (bistable dynamics), with A dominating whenever the proportion of A exceeds (z − x)/(z − x − y + 1) (i.e., A*/(A* + B*)). Bistable dynamics describe a simple resident advantage: whichever parasite establishes first is likely to resist colonization and replacement by a novel intruder (Figure S1).
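The bistability threshold quoted above can be checked directly; a minimal sketch (the function name is ours), using the parameter values of Figure S1, for which the repellor sits at 0.5:

```python
def repellor_fraction(x, y, z):
    """Unstable equilibrium fraction p* = A*/(A* + B*) = (z - x)/(z - x - y + 1)
    separating the basins of attraction in the bistable regime (y > 1, x > z)."""
    return (z - x) / (z - x - y + 1.0)

# Figure S1 parameters: initial fractions of A above p* lead to A excluding B,
# fractions below p* lead to B excluding A.
print(repellor_fraction(x=0.9, y=1.2, z=0.7))   # ≈ 0.5
```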
In the absence of strong interspecific competition (i.e., when x < z and y < 1), neither species can exclude the other, and so we observe coexistence. In addition, in the absence of facilitation (such that 0 < x < z and 0 < y < 1) we find that the coexistence equilibrium is stable. Figure 1 illustrates the behaviour of the coexistence equilibrium over the range of parameter values where coexistence is guaranteed. This coexistence remains stable if we introduce one-way facilitation (either x or y turning negative) and even for weak reciprocal facilitation. However, for sufficiently strong reciprocal facilitation (x < 0, y < 0 and xy > z) all equilibria are destabilized and the within-host dynamics enter into a runaway process, characterized by uncontrolled growth of both parasite lineages (red region in Figure 1).
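The case analysis above can be collected into a small classifier; a sketch under the same rescaled model, with regime labels of our own choosing:

```python
def classify(x, y, z):
    """Dynamical regime of the rescaled two-species model as a function of
    the interaction coefficients (z > 0 assumed)."""
    if x < 0 and y < 0 and x * y > z:
        return "runaway"            # strong reciprocal facilitation: unbounded growth
    if y > 1 and x > z:
        return "bistable"           # resident advantage: first colonizer persists
    if y > 1:
        return "A excludes B"
    if x > z:
        return "B excludes A"
    return "stable coexistence"     # includes weak one-way/reciprocal facilitation

print(classify(0.3, 0.5, 0.8))      # stable coexistence
print(classify(0.9, 1.2, 0.7))      # bistable
print(classify(-0.5, -0.5, 0.2))    # runaway (x*y = 0.25 > z)
```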
Broadly, virulence will tend to increase as x and y decrease and become negative (as facilitation dominates) (Figure 1C and Table S1). However, if virulence is determined primarily by one or the other of the species (and the other is relatively cryptic with respect to the host) then increasing one-way facilitation can in some cases decrease virulence (Figure 2). For example, if virulence is largely defined by A (a ≫ b) and B inhibits A (x > 0), then increasing facilitation of B (increasing −y) can reduce virulence (see dashed white line on Figure 2G for an example). Figure 2 illustrates that virulence may decrease under one of the following scenarios: (a) increasing reciprocal competition, if the virulence of the two parasites is symmetric (a = b, see Figure 2A, E, I); (b) increasing competition imposed by the less virulent species on the more virulent species, if there is reciprocal competition (x > 0 and y > 0, see Figure 2C, G); (c) increasing facilitation by the more virulent species of the less virulent species, under one-way facilitation (either x or y negative, e.g. in Figure 2C, G). These results follow from the simple effect that giving aid to (or harming) a competitor acts to increase (or decrease) competitive costs. For illustrative purposes we used a linear mapping between virulence and parasite densities in Figures 1C and 2 (i.e. V = aA* + bB*). Relaxing this assumption will change the contour spacings represented in these figures; however, the primary prediction of a qualitative shift in virulence given reciprocal facilitation holds for any case where V is a monotonically increasing function of A and of B. Under this more general condition, any runaway in A and B densities will translate to an unbounded increase in virulence.
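Assuming the linear mapping V* = aA* + bB* at stable coexistence, the broad trend that virulence rises as x and y turn negative can be checked directly; a minimal sketch:

```python
def virulence(x, y, z, a=1.0, b=1.0):
    """V* = a*A* + b*B* at the coexistence equilibrium
    (requires x < z, y < 1 and z - x*y > 0)."""
    d = z - x * y
    return a * (z - x) / d + b * (1.0 - y) / d

z = 1.0
print(virulence(0.5, 0.5, z))    # mutual competition:          ≈ 1.33
print(virulence(-0.5, 0.5, z))   # one-way facilitation (of A): ≈ 1.60
print(virulence(-0.5, -0.5, z))  # reciprocal facilitation:     ≈ 4.00
```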
The various mixes of net facilitation and competition outlined in Figures 1 and 2 provide a simple sketch of more complex within-host interactions, including indirect interactions via inducible (immune-mediated) defences [21] or shared phages [22]. For example, if parasite A suppresses host immunity, that may favour infection by parasite B, resulting in a net indirect facilitation of B by A. Similarly, a parasite which induces a generalised host immune response can indirectly harm other co-infecting parasites. Note that a more mechanistic predator-prey model has been applied to understand immune-mediated within-host interspecies parasite interactions [21]. While that model focuses explicitly on parasite interactions that are mediated by the host's immune response (an indirect interaction), our model is more general in allowing both direct and indirect interactions of any net sign.
Reciprocal competition (x > 0 and y > 0) can be considered the default net interaction: co-infecting parasites are competing for the limited resource of a single host. However, many examples of facilitation can be found in the literature. HIV and oral candidiasis is potentially a good example of one-way facilitation [23]. Candida albicans, the fungus that causes oral candidiasis, is a commensal in the normal human oral mucosa. During HIV infection, immunosuppression promotes the proliferation of this fungus beyond normal limits, leading to oral candidiasis; thus HIV facilitates the growth of the fungus (if HIV = parasite A, then y < 0). In return, there is no evidence that the enhanced proliferation of C. albicans has any marked impact on HIV proliferation, indicating that C. albicans remains a commensal towards HIV (x close to zero), even as it turns pathogenic towards the shared host (increasing B).
On the other hand, if the facilitation is two-sided (i.e., x < 0 and y < 0) the equilibrium densities of both parasites will be higher given co-infection (Figure 1A and B). Of particular concern is the case where the reciprocal facilitation is sufficiently strong to destabilize the coexistence state (i.e., xy > z, red region in Figure 1). When this condition is met, the infection is predicted to grow without bounds, demanding immediate and rigorous management. Co-infection with HIV and Mycobacterium tuberculosis is a potential example of such a dangerous collaboration. HIV not only helps reactivation of dormant Mycobacterium bacilli, but also promotes fresh infection and reinfection [24]. Specifically, HIV aids the survival and proliferation of Mycobacterium by decreasing the number of CD4 T cells, inactivating macrophage functions and affecting the Mycobacterium-specific T cell response [25]. Mycobacterium, on the other hand, boosts the replication of HIV by some unclear mechanism [26]. It has been demonstrated that Mycobacterium can increase HIV transcription in transiently transfected T and monocytic cell lines and that Mycobacterium increases HIV production in chronically or acutely infected monocytic cell lines. A correlation between Mycobacterium-induced HIV production and secretion of certain inflammatory cytokines has also been observed [27,28,29].
Facilitatory interactions involving HIV are relatively well documented due to the immunosuppressive impact of HIV and the extent of research effort into this disease. However, other examples exist; for instance, co-infections of Salmonella and Plasmodium are suggestive of reciprocal facilitation. Leucopenia during typhoid fever [30] caused by Salmonella can facilitate the entry and survival of Plasmodium in blood. On the other hand, iron released during RBC lysis in malaria caused by Plasmodium can boost the growth of intracellular Salmonella [31,32,33]. Thus, the combination of typhoid fever and malaria in the same host is a dangerous condition demanding rigorous management. In fact, co-infection of Salmonella and Plasmodium has been reported in several places across the globe [34,35,36,37,38]. It is likely that increasing knowledge of the pathobiology of combination infections will lead to the discovery of many more potentially dangerous collaborations among pathogenic microbes.
Our dynamical analysis of the two-species interaction highlights that the dynamics of a focal species can be significantly modulated as a result of mechanistic interactions with a second, co-infecting species (Figures 1 and 2): the equilibrium density of the focal species can be increased, decreased or entirely destabilised as a result of the interaction. These effects raise an important evolutionary question: does selection favour facilitatory or inhibitory (competitive) interactions with co-infecting species (i.e., changes in parasite traits underlying the interspecific interaction parameters x and y)? An important ingredient in any answer to this question is an understanding of the frequency of co-infection between focal and partner species. If co-infection (with any partner) is a relatively rare occurrence, then standard virulence evolution theory predicts selection will favour intermediate levels of 'prudent' exploitation that efficiently balance the advantages of exploitation (transmission) with the costs (host death) [18,39,40,41]. The addition of a second, co-infecting partner would then induce a non-adaptive perturbation, no matter whether the direction of the effect was towards higher or lower rates of within-host growth (facilitation or competition).
If, in contrast, co-infection is a common and predictable occurrence, then selection could act to modify the single-species exploitation strategy given the expected sign of interaction with the partner species. The impact of within-host competition (positive x and y) on the evolution of virulence has been the subject of a diverse range of models and empirical tests, offering contrasting explanations for either an increase or a decrease in virulence as within-host diversity increases [42]. The different virulence outcomes result from selection of different mechanisms of winning a greater share of the limited host resource: increased within-host replication [39,43]; decreased contribution to collective exploitation [44,45]; increased investment in interference competition [22,46,47].
The literature on virulence evolution in mixed infections has focused almost entirely on single-species interactions among strains that compete largely symmetrically for shared limited resources. What happens when we move away from this single-species paradigm? A few studies have considered multi-species competitive interactions and the greater competitive asymmetries that result [48,49,50]; however, to our knowledge there has been no consideration of virulence evolution given facilitatory within-host interactions, despite the existence of numerous empirical examples, as detailed above. We propose that repeated facilitatory interactions will select for strategies that maximize a focal species' yield in the context of the predictable facilitatory perturbation from the partner species. Specifically, this may take the form of a reduced growth rate, given the expectation of facilitation restoring growth towards the prudent optimum. Under this scenario, facilitatory interactions could form part of a truly mutualistic partnership, in so far as they restore the partner dynamics towards their optima. However, a dependence on a corrective input from a partner species would leave open the possibility of even greater perturbations in the event of the establishment of an inappropriate partnership. For species facing significant uncertainty over the sign of interaction with partner species, a possible solution is to adopt plastic responses, modulating behaviours in response to changes in co-infection status [51,52].
In addition to the evolutionary context, a further and marked simplification of our model is our limitation to a two-species context. In practice, within-host parasite community structure can be vastly more complex and multi-dimensional, featuring networks of facilitatory and inhibitory interactions. The exploration of appropriately multi-dimensional community models represents an important challenge for future research. Our results hint that networks characterized by reciprocal facilitation will be significantly more prone to extinction (via host death), therefore biasing observed networks towards more robust inhibitory interactions, where the sum of parasite effects is significantly less than their effects alone.

Figure S1 Bistable dynamics of the co-infection (either parasite A alone or parasite B alone at equilibrium). A, Temporal dynamics of the proportion of parasite A (p = A/(A + B)) for different initial p values ranging from 0.1 to 0.9; y = 1.2, x = 0.9, and z = 0.7. The repellor value is at p* = A*/(A* + B*) = (z − x)/(z − x − y + 1) = 0.5 (dashed line). B, The threshold of invasion by parasite A (i.e. the minimum p value for which A invades) increases with x and decreases with y (z = 1). C, The threshold of invasion by parasite A increases with x and decreases with z (y = 1.2).
Table S1 Effect of increasing x, y, and z on the densities A* and B*, and on total virulence (V*) at stable coexistence (A* ≠ 0 and B* ≠ 0).
The multifunctional process of resonance scattering and generation of oscillations by nonlinear layered structures
Abstract The paper focuses on the development of a mathematical model, an effective algorithm and a self-consistent numerical analysis of the multifunctional properties of resonant scattering and generation of oscillations by nonlinear, cubically polarizable layered structures. The multifunctionality of such layered media is caused by the nonlinear mechanism between interacting oscillations—the incident oscillations (exciting the nonlinear layer from the upper and lower half-spaces) as well as the scattered and generated oscillations at the frequencies of excitation/scattering and generation. The study of the resonance properties of scattering and generation of oscillations by a nonlinear structure with a controllable permittivity in dependence on the variation of the intensities of the components of the exciting wave package is of particular interest. In the present paper, we extend our former results, and furthermore we analyze the realizability of multifunctional properties of nonlinear electromagnetic objects with a controllable permittivity. The results of our investigations (i) demonstrate the possibility to control the scattering and generation properties of the nonlinear structure via the intensity of the incident field, (ii) indicate the possibility of increasing the multifunctionality of electronic devices, of designing frequency multipliers, and other electrodynamic devices containing nonlinear dielectrics with controllable permittivity.
ABOUT THE AUTHORS
The authors perform joint research about scattering and generation of electromagnetic waves on nonlinear structures since more than 10 years. Lutz Angermann is a professor of Numerical Mathematics at the Department of Mathematics of the Clausthal University of Technology since 2001. His research is concerned with the mathematical analysis of numerical algorithms for partial differential equations with special interests in finite-volume and finite-element methods and their application to problems in Physics and Engineering. He is the author of more than 100 research papers.
Vasyl V. Yatsyk is a senior scientist at the O.Ya. Usikov Institute for Radiophysics and Electronics of the National Academy of Sciences of Ukraine (O.Ya. Usikov IRE NASU), Kharkiv, Ukraine, since 1998. He has authored more than 100 papers. His research interests include scattering and generation effects in nonlinear materials, numerical-analytical methods of electromagnetic theory, resonant interaction, and dispersion of waves.
PUBLIC INTEREST STATEMENT
Nonlinear dielectrics with controllable permittivity are intensively investigated and begin to find broad applications in device technology. The development of new types of dielectrics and the introduction and production of modern functional electronic devices require comprehensive knowledge of the properties of these materials. In this context, those solid and liquid nonlinear materials are of particular importance which enable the conversion of energy or information: modulation, detection, amplification, recording, storing, displaying, and other types of conversion of electrical, magnetic, and optical signals carrying information. The paper is devoted to the mathematical and computational investigation of a model of resonance scattering and generation of waves by an isotropic, nonmagnetic, nonlinear, layered, dielectric structure which is excited by two-sided acting packets of plane waves in the resonance frequency range. In particular, an algorithm for the numerical determination of the eigenfrequencies (resonance frequencies) and eigenfields is developed.
Introduction
Nonlinear dielectrics with controllable permittivity are the subject of intense studies and begin to find broad applications in device technology and electronics, where both the radio and optical (Akhmediev & Ankevich, 1997; Chernogor, 2004; Kivshar & Agrawal, 2003; Miloslavsky, 2008; Shen, 1984) frequency ranges are of interest. We present a model of resonance scattering and generation of waves by an isotropic, nonmagnetic, nonlinear, layered, dielectric structure which is excited by packets of plane waves in the resonance frequency range in a self-consistent formulation (Angermann, Shestopalov, & Yatsyk, 2013; Angermann & Yatsyk, 2011a, 2011b; Yatsyk, 2011, 2012, 2013). We consider two-sided acting wave packets consisting of both strong electromagnetic fields at the excitation frequency of the nonlinear structure, leading to the generation of waves, and of weak fields at multiple frequencies, which do not lead to the generation of harmonics but influence the process of scattering and generation of waves by the nonlinear structure. A self-consistent numerical algorithm is developed. Based on the linearization of nonlinear problems of scattering and generation of waves by cubically polarizable, layered structures, we provide suitable spectral problems and formulate an algorithm for the numerical determination of the eigenfrequencies and eigenfields. We restrict our considerations to dispersionless nonlinear dielectrics; however, this is not essential but only simplifies the explanations.
We discuss numerical results for the problem of third-harmonic generation by resonant scattering of the wave packet by single nonlinear layers having either decanalizing or canalizing properties, as well as by a three-layer structure consisting of layers with canalizing-decanalizing-canalizing properties of energy dissipation. Within the framework of a self-consistent formulation of the problem we see that the induced imaginary part of the permittivity of the layer is determined by the nonlinear part of the polarization and characterizes the loss of energy in the nonlinear medium which is spent for the generation of the electromagnetic field of the third harmonic (Angermann & Yatsyk, 2011a, 2011b; Yatsyk, 2012, 2013). The consideration of weak fields at multiple frequencies leads only to an increase of the portion of generated energy (Angermann, Kravchenko, Pustovoit, & Yatsyk, 2013a; Angermann & Yatsyk, 2012). In particular, the investigation of a nonlinear single-layered decanalizing structure disclosed the effect of type-conversion of the generated oscillations in the case of an increasing amplitude of the incident field at the excitation frequency. In the range of third-harmonic generation, this effect is also observed in the case of an increasing amplitude of the weak field at the double frequency (Angermann & Yatsyk, 2012; Angermann, Kravchenko, et al., 2013a). In this paper, for the first time, two-sided acting fields at the scattering frequency are investigated, and type-conversions by variation of the amplitude of the two-sided acting excitation fields were found.
The numerical computations of the eigenfrequencies and eigenfields of the linearized problems show that the resonant scattering and generation properties of a nonlinear structure are determined by the proximity of the excitation frequencies of the nonlinear structure to the complex eigenfrequencies of the corresponding homogeneous linear spectral problems with an induced nonlinear dielectric permittivity of the medium. In this paper, we propose an effective method to describe the processes of generation of oscillations via the variation of the relative magnitude of the Q-factor of the eigenoscillations corresponding to the eigenfrequencies of the scattering and generating structure when the intensity of the excitation field changes.
Formulation of the boundary-value problems of scattering and third harmonic generation of oscillations
In the framework of a self-consistent formulation, we investigate the problem of resonant scattering and generation of waves by a nonlinear, nonmagnetic, isotropic, cubically polarizable, linearly E-polarized (E = (E_1, 0, 0)^T, H = (0, H_2, H_3)^T) layered, dielectric structure (see Figure 1), which is excited by packets of plane stationary electromagnetic waves, where the time dependency of the fields is of the form exp(−inωt), n ∈ N, and the polarization of the layer is cubic in the electric field. Here, the variables x, y, z, t denote dimensionless spatial-temporal coordinates such that the thickness of the layer is equal to 4πδ; nω = nκc are the dimensionless circular frequencies and nκ are dimensionless frequency parameters such that nκ = nω/c = 2π/λ_nκ. These parameters characterize the ratios of the true thickness h of the layer to the lengths λ_nκ of the incident waves, i.e. h/λ_nκ = 2nκδ, where c = (ε_0 μ_0)^(−1/2) denotes a dimensionless parameter, equal to the absolute value of the speed of light in the medium containing the layer, Im c = 0. ε_0 and μ_0 are the material parameters of the medium. The absolute values of the true variables x′, y′, z′, t′ are obtained from the dimensionless ones by the corresponding scalings. We consider packets of plane waves consisting of strong fields at the frequency κ (which generate a field at the triple frequency 3κ) and of weak fields at the frequencies 2κ and 3κ (having an impact on the process of third-harmonic generation due to the contribution of weak electromagnetic fields), with δ > 0, amplitudes {a_nκ^inc, b_nκ^inc}, n = 1, 2, 3, angles of incidence φ_nκ (cf. Figure 1) and frequencies nκ, n = 1, 2, 3. Here Φ_nκ = nκ sin(φ_nκ) are the longitudinal propagation constants and Γ_nκ = ((nκ)² − Φ_nκ²)^(1/2) are the transverse propagation constants, where φ_nκ is the given angle of incidence of the exciting field at the frequency nκ (cf. Figure 1). The upper/lower excitation fields of the nonlinear layer are denoted by overlined/underlined symbols.
Here χ^(3)_1111 denotes the components of the susceptibility tensors of the nonlinear medium.
The scattered and generated field in a transversely inhomogeneous, non-linear dielectric layer excited by a plane wave is quasi-homogeneous along the coordinate y; hence, it can be represented as follows: Condition 1. E_1(nκ; y, z) = U(nκ; z) exp(iΦ_nκ y), n = 1, 2, 3.
Here U(nκ; z) and Φ_nκ = nκ sin(φ_nκ) denote the complex-valued transverse component of the Fourier amplitude of the electric field and the value of the longitudinal propagation constant (longitudinal wavenumber) at the frequency nκ, respectively.
The dielectric permittivities of the layered structure at the multiple frequencies nκ are determined by the values of the transverse components of the Fourier amplitudes of the scattered and generated fields, i.e. by the redistribution of energy of the electric fields at the multiple frequencies, where the angles of incidence are given and the nonlinear structure under consideration is transversely inhomogeneous. The condition of the longitudinal homogeneity (along the coordinate y) of the nonlinear layered structure (2) requires that the induced permittivity does not depend on y. Having used the representation (2) for ε_nκ^(NL) and Condition 1, we obtain the following physically consistent requirement, which we call the condition of the phase synchronism of waves: Condition 2. Φ_nκ = nΦ_κ, n = 1, 2, 3. It has been shown in detail in Angermann and Yatsyk (2011) and Yatsyk (2011) that Condition 2 is a formal consequence of Condition 1 and Equation (2) and not an independent assumption. We note that in view of Condition 2 the nonlinear layered structure remains longitudinally homogeneous. In this case, the quasi-homogeneous plane waves exciting the nonlinear layer at the set of multiple frequencies {nκ}, n = 1, 2, 3, impinge on the nonlinear layer at the angles φ_nκ (from above) and π − φ_nκ (from below), while the amplitudes of these waves may be arbitrary (cf. Condition 2 and Figure 1).
In addition, we pose the following conditions: Condition 3. The tangential components E_tg(nκ; y, z) and H_tg(nκ; y, z) of the intensity vectors of the full electromagnetic fields E and H are continuous at the boundaries of the layered structure.
Condition 4 (radiation condition w.r.t. the scattered and generated fields). E_1^scat/gen(nκ; y, z) = a_nκ^scat/gen exp(i(Φ_nκ y + Γ_nκ(z − 2πδ))) for z > 2πδ, and E_1^scat/gen(nκ; y, z) = b_nκ^scat/gen exp(i(Φ_nκ y − Γ_nκ(z + 2πδ))) for z < −2πδ, with Im Γ_nκ ≡ 0 and Re Γ_nκ > 0.

Here the permittivity of the structure is ε_nκ = 1 for |z| > 2πδ and ε_nκ = ε^(L) + ε_nκ^(NL) for |z| ≤ 2πδ, cf. (2). The sought complex Fourier amplitudes of the total scattered and generated fields in the problem (1) incl. Conditions 1-4 at the multiple frequencies {nκ}, n = 1, 2, 3, can be represented in the form (3). Taking into consideration (3), the nonlinear system (1) incl. Conditions 1-4 is equivalent to a system (see Angermann & Yatsyk, 2011a, 2011b) of nonlinear boundary-value problems of Sturm-Liouville type and also to a system of one-dimensional nonlinear integral equations w.r.t. the unknown functions U(nκ; ·), n = 1, 2, 3. The solution of the problem (1) incl. Conditions 1-4, represented in (3), can be obtained from (4) or (5) using the formulas U(nκ; 2πδ) = a_nκ^inc + a_nκ^scat/gen, U(nκ; −2πδ) = b_nκ^inc + b_nκ^scat/gen, n = 1, 2, 3.
Self-consistent analysis of the system of nonlinear equations and eigenoscillations
According to Angermann and Yatsyk (2011a, 2011b, 2012), Angermann, Kravchenko, et al. (2013a) and Yatsyk (2012, 2013), the application of suitable quadrature rules to the system (5) leads to a system of complex-valued nonlinear algebraic equations of the second kind, (E − B_nκ(U_κ, U_2κ, U_3κ)) U_nκ = F_nκ, n = 1, 2, 3 (6), where E denotes the identity matrix, U_nκ = {U_l(nκ)} are the vectors of the unknown values of the fields at the quadrature nodes, and F_nκ are the vectors induced by the incident wave packets. A solution of (6) can be found iteratively by the help of a block Jacobi method, where at each step a system of linearized algebraic equations is solved.
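The iteration just described can be sketched in generic form: the nonlinear matrix is frozen at the previous iterate and a linear second-kind system is solved at each step. The toy kernel and all names below are hypothetical illustrations, not the actual discretized kernels of the paper:

```python
import numpy as np

def solve_second_kind(F, build_B, tol=1e-10, max_iter=200):
    """Iteratively solve the nonlinear second-kind system U = F + B(U) @ U
    by solving the linearized system (I - B(U_prev)) @ U = F at each step."""
    n = F.shape[0]
    U = np.zeros(n, dtype=complex)
    I = np.eye(n)
    for _ in range(max_iter):
        U_new = np.linalg.solve(I - build_B(U), F)
        if np.linalg.norm(U_new - U) <= tol * (1.0 + np.linalg.norm(U_new)):
            return U_new
        U = U_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy cubic ('Kerr-like') dependence of the kernel on the field (illustrative):
rng = np.random.default_rng(0)
F = rng.standard_normal(5) + 1j * rng.standard_normal(5)
K = 0.05 * rng.standard_normal((5, 5))
build_B = lambda U: K * (1.0 + 0.1 * np.abs(U) ** 2)[np.newaxis, :]
U = solve_second_kind(F, build_B)
residual = np.linalg.norm(U - F - build_B(U) @ U)   # ≈ 0 at the fixed point
```

For weak nonlinearity (small kernel norm) this frozen-matrix iteration is a contraction and converges to the self-consistent field.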
The analytic continuation of the linearized nonlinear problems into the region of complex values of the frequency parameter allows us to switch to the analysis of spectral problems (Angermann & Yatsyk, 2011, 2012; Shestopalov & Sirenko, 1989; Shestopalov & Yatsik, 1997; Yatsyk, 2000, 2001, 2013). The problem of finding the eigenfrequencies κ_n and the eigenfields U_κn reads as follows (cf. (6)): (E − B_nκ(κ_n)) U_κn = 0 (7), where κ_n ∈ Ω_nκ ⊂ H_nκ, at κ ≡ κ^inc, n = 1, 2, 3; Ω_nκ are the sets of eigenfrequencies and H_nκ denote two-sheeted Riemann surfaces (cf. Figure 2); U_κn is the vector of unknown values of the nontrivial solution at the nodes in the layer corresponding to the eigenfrequency κ_n, and B_nκ(κ_n) is the matrix with the given vectors U_nκ (cf. (6)).
We mention that the radiation condition for the eigenfield (cf. Condition 4) for real values of the parameters κ_n and Φ_nκ is consistent with the physically justified requirement of the absence of waves coming from infinity (z = ±∞) in the radiation field. The nontrivial solutions of the spectral problem (7) allow us to write the electric components of the eigenfield in the form (9), subject to Im Γ_nκ(κ_n, Φ_nκ) ≥ 0 and Re Γ_nκ(κ_n, Φ_nκ) · Re κ_n ≥ 0, for Im Φ_nκ = 0 and Im κ_n = 0, n = 1, 2, 3. Here κ ≡ κ^inc is a given constant value equal to the excitation frequency of the nonlinear structure; a_κn = U(κ_n; 2πδ) and b_κn = U(κ_n; −2πδ) are the radiation coefficients of the eigenfield; Γ_nκ(κ_n, Φ_nκ) are the functions of the transverse propagation (depending on the complex spectral frequency parameters κ_n); and Φ_nκ = nκ sin(φ_nκ) are the given real values of the longitudinal propagation constants.
The range of variation of the spectral frequency parameters is completely determined by the boundaries of the possible analytic continuation of the canonical Green's functions (i.e. the Green's functions of the unperturbed quasi-homogeneous problems with ε_n ≡ 1, n = 1, 2, 3) into the complex spaces of the spectral frequency parameters κ_n (Angermann & Yatsyk, 2011, 2012; Shestopalov & Sirenko, 1989; Shestopalov & Yatsik, 1997; Yatsyk, 2000, 2001, 2013). In the region π < arg κ_n < 3π/2 the situation is similar to the previous one up to the change of the sign of Re Γ_n. The second, improper (or unphysical) sheets of the surfaces H_n, n = 1, 2, 3, differ from the proper ones in that, for each κ_n, the signs of both Re Γ_n and Im Γ_n are reversed.
The eigenfrequencies κ_n ∈ Ω_n ⊂ H_n, n = 1, 2, 3, i.e. the characteristic numbers of the dispersion equations of problem (7), are obtained by solving the corresponding dispersion equations f_n(κ_n) = det(I − B_n(κ_n)) = 0 using Newton's method or a modification of it. The nontrivial solutions of the homogeneous systems (I − B_n(κ_n)) · U_n = 0 of linear algebraic equations (7) corresponding to these characteristic numbers are the eigenfields (9) of the linearized nonlinear layered structures with the induced dielectric permittivity (2). Obviously, these solutions are determined only up to an arbitrary multiplicative constant. Therefore, we require that a_n ≡ 1, n = 1, 2, 3, in the representation (9).
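The local root search for a dispersion equation reduces to Newton's method for a complex-valued scalar equation. Below is a minimal sketch: the derivative is approximated by a finite difference (valid for holomorphic functions), and the function `f` is an illustrative placeholder with a known complex root, not the actual dispersion determinant.

```python
import numpy as np

def newton_complex(f, kappa0, tol=1e-12, max_iter=50, h=1e-6):
    """Newton's method in the complex plane for f(kappa) = 0, with the
    derivative approximated by a central difference along the real
    direction (sufficient for a holomorphic f)."""
    kappa = complex(kappa0)
    for _ in range(max_iter):
        fk = f(kappa)
        dfk = (f(kappa + h) - f(kappa - h)) / (2 * h)
        step = fk / dfk
        kappa -= step
        if abs(step) < tol:
            break
    return kappa

# Toy dispersion function with a known complex root (illustrative only).
f = lambda k: (k - (0.37 - 0.02j)) * (k + 1.5)
root = newton_complex(f, 0.4)
```

Starting the iteration from a real frequency near the resonance pulls the iterate onto the complex root, mimicking how an eigenfrequency with a small negative imaginary part is located on the Riemann surface.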
Finally, we mention that the classification of scattered, generated, or eigenfields of the dielectric layer by the H_{m,l,p}-type adopted in our paper is identical to that given in (Angermann & Yatsyk, 2012; Shestopalov & Sirenko, 1989; Shestopalov & Yatsik, 1997; Yatsyk, 2000, 2001, 2011). In the case of E-polarization, the indices of the type H_{m,l,p} (or TE_{m,l,p}) denote the number of local maxima of the field in the dielectric layer along the coordinate axes x, y and z, respectively (see Figure 1). Since the considered waves are homogeneous along the x-axis and quasi-homogeneous along the y-axis, we actually study fields of the type H_{0,0,p} (or TE_{0,0,p}), where the subscript p is equal to the number of local maxima of the function |U| of the argument z within the layer.
Numerical results
In order to describe the scattering and generation properties of the nonlinear structure, we introduce the following notation. The quantities R+_n, R−_n are called scattering/generation (or radiation) coefficients of the waves with respect to the total intensity of the incident packet. (Note that, alternatively, the radiation coefficients can be chosen as (R±_n)^{1/2}.)
We denote by W the total energy of the scattered and generated fields at the frequencies nκ and consider the quantity W_3/W, which characterizes the portion of energy generated in the third harmonic in comparison to the energy scattered in the first harmonic.
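These quantities can be sketched numerically. The normalization by the total incident intensity, and the reading of W_3/W as the ratio of third-harmonic radiated energy to first-harmonic scattered energy, are assumptions based on the surrounding text, not the paper's exact formulas.

```python
def radiation_coefficients(a_scat, b_scat, a_inc):
    """Assumed form: R+_n and R-_n are |a_n|^2 and |b_n|^2 normalized
    by the total intensity of the incident wave packet."""
    P_inc = sum(abs(a) ** 2 for a in a_inc)
    R_plus = [abs(a) ** 2 / P_inc for a in a_scat]
    R_minus = [abs(b) ** 2 / P_inc for b in b_scat]
    return R_plus, R_minus

def generated_fraction(R1_plus, R1_minus, R3_plus, R3_minus):
    """W3/W: third-harmonic radiated energy relative to the
    first-harmonic scattered energy (assumed reading of the text)."""
    return (R3_plus + R3_minus) / (R1_plus + R1_minus)

# Example: unit incident amplitude at the basic frequency; made-up
# scattered/generated amplitudes at n = 1 and n = 3.
R_plus, R_minus = radiation_coefficients([0.6, 0.1], [0.8, 0.05], [1.0])
W3_over_W = generated_fraction(R_plus[0], R_minus[0], R_plus[1], R_minus[1])
```

With these made-up amplitudes the scattered energy at the basic frequency sums to one, and the generated portion is on the order of a percent, comparable to the weakest regimes reported in the text.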
In the case of problem (1) including Conditions 1-4, for nonabsorbing media with Im ε^(L)(z) ≡ 0, the validity of the energy balance law has been verified numerically. Computational experiments for the processes of scattering and generation of oscillations without any impact of weak fields (a^inc_2 = a^inc_3 = 0) have shown that the error of the energy balance law does not exceed |W(Error)| < 10^−8. The consideration of weak fields a^inc_n ≠ 0, n = 2, 3, in the investigation of the same scattering and generation processes can lead to errors of a few percent in the balance equation (Angermann & Yatsyk, 2012; Angermann, Krevchenko, et al., 2013a). This indicates that the amplitudes of the weak fields a^inc_n ≠ 0, n = 2, 3, are sufficiently large and that these fields can themselves serve as a source of generation of oscillations. In such situations the presented mathematical model (1) including Conditions 1-4 (cf. also (6)) and the linearized spectral problems (7) should take into account the complex Fourier amplitudes of oscillations at frequencies nκ with numbers n larger than three.
The study of the scattering and generation properties of the nonlinear layers is carried out by means of consideration of the eigenmodes. The computational results are shown in Figures 3-8 pairwise, for media with a value of the cubic susceptibility α = −0.01 (left column) and α = +0.01 (right column).
In the case of decanalizing media in Figure 3 (left column), the maximal portion of generated energy W_3/W is observed for a^inc = 24 and normal excitation φ_κ = 0° of the nonlinear layer. In the investigated range of amplitudes and incident angles, a^inc ∈ [1, 24], φ_κ ∈ [0°, 90°], an increase of W_3/W is observed for parameters corresponding to the closest values of the scattering coefficients, R+ ≈ R−. The maximal portion of generated energy, W_3/W = 0.039, does not exceed 4%.
In the case of canalizing media in Figure 3 (right column), in the range a^inc ∈ [1, 19], φ_κ ∈ [0°, 60°], the maximal value W_3/W = 0.2505, i.e. about 25%, is reached for a^inc = 14 and φ_κ = 60°. The increase of the portion of generated energy W_3/W is achieved by increasing the amplitude a^inc at incident angles φ_κ which lie slightly above the canalizing angle. The latter corresponds to the greatest possible transparency of the scattering at the frequency κ, where the reflection coefficient R+ is minimal and the transmission coefficient R− is maximal.
We can state that in the case of canalizing layers the portion of generated energy W_3/W is maximal in the region of higher transparency of the nonlinear structure, see Figure 3. The nonlinear components ε^(NL)_n of the dielectric permittivities ε_n at each of the frequencies κ and 3κ are determined by the magnitudes of the fields U(κ; z) and U(3κ; z). For nonabsorbing media with Im ε^(L)(z) ≡ 0, taking into account the cubic susceptibility α(z), the equality Im ε_n(z) = Im ε^(NL)_n(z) holds, see (2). The increase of the amplitude a^inc of the incident field at the frequency κ leads to the generation of the third harmonic field U(3κ; z). In the case under study the quantity Im ε^(NL)(z) (or Im ε(z) if Im ε^(L)(z) ≡ 0) takes positive as well as negative values along the height of the nonlinear layer. The generated field U_3κ of a canalizing layer, observed in the range a^inc ∈ [5, 22], is of the type H_{0,0,10}, Figure 5 (right). In the case of a decanalizing layer, the generated field U_3κ changes its type with increasing amplitude a^inc. The generation of a third harmonic field U_3κ is observed in the range a^inc ∈ [4, 24], Figure 5 (left). Here, it is of the type H_{0,0,10} for a^inc ∈ [4, 23] and of the type H_{0,0,9} for a^inc ∈ [23, 24]. The type-conversion of the generated oscillations from H_{0,0,10} to H_{0,0,9} with increasing a^inc is due to the loss of one maximum point of the function |U_3κ| at a^inc = 23, see the point with coordinates a^inc = 23, z = 1.15 in Figure 5 (left).
The increase in the intensity of the excitation field leads to critical points of the function |U| (the absolute value of the amplitude of the scattered/generated field) that identify the type of oscillation. If at these points the local maximum of the function along the characteristic spatial coordinate of the investigated structure (the transverse coordinate along the height of the nonlinear layer) is lost, the effect of type-conversion of the radiation field occurs. The amplitudes of the incident field for which this effect is observed can be called the thresholds of the considered types of oscillations.
The violation of symmetry in the excitation of the nonlinear structure, a^inc ≠ b^inc (a^inc = const ≠ 0, b^inc = 0), leads to a violation of symmetry of the radiation coefficients R±(a^inc, φ_κ) at the scattering frequency κ and R±_3κ(a^inc, φ_κ) at the generated frequency 3κ, see Figure 3.
In the case of a decanalizing layer and under the condition of symmetry of the scattered energy, there is a significant difference in the portion of generated energy in the half-spaces above and below the layer, see Figure 3 (left).
This can lead to a type-conversion effect in the oscillations of the radiation field U_3κ. In the case of normal excitation φ_κ = 0° of a decanalizing layer, as described above, the effect of type-conversion of the generated field U_3κ is detected at the threshold amplitude a^inc = 23, where the condition of equality of the scattering coefficients R+(a^inc, φ_κ) = R−(a^inc, φ_κ) is satisfied, see Figure 5 (left) and the intersection of the surfaces in Figure 3 (top left). The portion of generated energy W_3/W increases with increasing a^inc for normal excitation φ_κ = 0°, see Figure 3 (left).
For a canalizing structure, at the scattering frequency the portion of reflected energy is less than the portion of transmitted energy, R+(a^inc, φ_κ) < R−(a^inc, φ_κ), and at the generation frequency the portion of radiated energy in the transmission zone slightly dominates that in the reflection zone, see Figure 3 (right). The maximal generation W_3/W is achieved if the amplitude a^inc increases at incident angles φ_κ slightly above the canalizing angle (the angle of the greatest possible transparency of the structure at the scattering frequency κ), see Figure 3 (right).
Qualitative analysis of the generation properties of nonlinear layers
We discuss a possible mathematical model for the qualitative analysis of the generation properties of nonlinear decanalizing and canalizing layers. We consider the surfaces R + n , R − n , n = 1, 3 and W 3 ∕W described previously in Section 4.1 as well as the characteristic properties of the scattering and generation of oscillations by nonlinear layers, see Figure 3. In Figure 6 (left and right) we depict the cross-sections of these surfaces with the planes φ κ = 0° for a decanalizing layer and φ κ = 60° for a canalizing layer.
The particular features of the dynamics of the scattering and generation characteristics of oscillations by the nonlinear layer are caused by the proximity of the eigenfrequencies κ_n of the linearized problems (7) to the scattering and generation frequencies. In the case of a canalizing/decanalizing layer, the increase of the excitation amplitude a^inc leads to an increase/decrease of Re ε^(NL)_1(a^inc) and Re ε^(NL)_3(a^inc) (graphs no. 5.1, 6.1), a decrease/increase of Im ε^(NL)_3(a^inc) (graphs no. 6.2), and an increase/decrease followed by a decrease/increase of Im ε^(NL)_1(a^inc) (graphs no. 5.2), Figure 7 (left/right). The interval of monotonic decrease of graph no. 5.2 is localized in a range of amplitudes a^inc which is determined by a closeness condition of the eigenfrequencies to the frequencies of scattering and generation, see the amplitudes corresponding to the intersection of graphs no. 5.1 with no. 1 and no. 6.1 with no. 2 in Figure 7 (right). In this range of amplitudes, an outburst of the generation of energy in the third harmonic is observable, see graph no. 7 in Figure 6 (right).
In order to describe the branches of the eigenfrequencies of the linearized problems, we use the concept of the Q-factor (Reed & Simon, 1978; Shestopalov & Sirenko, 1989; Vainstein, 1966; Voitovich, Katsenelenbaum, & Sivov, 1977). It is convenient to perform the analysis of coupled regimes of the scattered and generated fields (3), induced by the dielectric permittivity (2) of nonlinear electrodynamic structures, within the framework of a self-consistent process of energy exchange, with the help of the concept of the relative magnitude of the radiated energy, see e.g. (11). We note that the proposed approach of describing the outburst of energy of oscillations by means of the relative variation of the Q-factor (14) is quite effective. It can be successfully applied both for a sufficiently weak and for a strong generation of energy, in ranges from a few percent (Figure 3, bottom left) to tens of percent (Figure 3, bottom right) of generated energy, respectively.
A three-layer nonlinear dielectric structure
Consider a nonlinear structure with the given parameters. The excitation takes place from above and below by electromagnetic fields at the basic frequency, at incidence angles φ_κ and 180° − φ_κ, for amplitudes a^inc and b^inc, respectively. Figures 9 and 10 show the properties of the nonlinear layered structure for these parameters. A three-layer structure consisting of a decanalizing layer located between two canalizing layers possesses novel properties of scattering and generation of oscillations. They partially resemble the properties inherent to decanalizing and canalizing layers. Thus, in the case of a one-sided excitation a^inc ≠ 0, b^inc = 0, investigated in the range of amplitudes a^inc ∈ [1, 38] and incident angles φ_κ ∈ [0°, 90°] of the layered structure, an increase of the portion of generated energy W_3/W with increasing amplitude a^inc is observed at normal excitation φ_κ = 0°, see Figure 9 (top). This is also typical for decanalizing structures, see Figure 3 (left). Moreover, in the case under consideration, the increase of W_3/W is accompanied by an increase in the transparency of the layered structure. A canalization of energy is observed at the minimum value of the reflection coefficient R+ = 0.0172 for a^inc = 38 at normal excitation φ_κ = 0°, Figure 9 (top left). This is typical for canalizing structures. The dependence of the nonlinear dielectric structure on the amplitude characteristics of the scattered and generated fields, together with a spectral approach to the analysis of the linearized problems near the critical points of the branches of the amplitude-phase dispersion, can be used as the basis of numerical and analytical methods for the synthesis and analysis of nonlinear structures with anomalous scattering and generation properties.
The numerical results for the scattering and generation of a wave packet by a nonlinear cubically polarizable layer are obtained by means of the solution of the system of integral equations (4). Applying Simpson's quadrature rule, the system (4) is reduced to a system of nonlinear algebraic equations (6). The numerical solution of (6) is carried out using a self-consistent iterative algorithm based on a block Jacobi method (Angermann & Yatsyk, 2011a, 2011b, 2012; Angermann, Krevchenko, et al., 2013a; Yatsyk, 2012, 2013). The spectral problems (7) are solved with the help of Newton's method. In the investigated range of problem parameters, the dimension of the algebraic systems was 301 and 501 in the case of single-layered and three-layered structures, respectively. The relative error of the calculations did not exceed 10^−7.
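The reduction of the integral equations to an algebraic system rests on composite Simpson weights. A minimal sketch of the weight construction follows (the assembly of the kernel matrices themselves is omitted):

```python
import numpy as np

def simpson_weights(n, h):
    """Composite Simpson weights on n equidistant nodes (n odd) with
    spacing h: the 1-4-2-...-2-4-1 pattern scaled by h/3."""
    if n % 2 == 0:
        raise ValueError("composite Simpson's rule needs an odd node count")
    w = np.ones(n)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return w * (h / 3.0)

# Illustrative check with the 301-node resolution mentioned for the
# single-layered structure: integrate sin over [0, pi].
x = np.linspace(0.0, np.pi, 301)
w = simpson_weights(301, x[1] - x[0])
```

At this resolution the quadrature error for a smooth integrand is far below the 10^−7 relative error budget quoted above.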
Conclusion
The problem of scattering and generation of waves by an isotropic, nonmagnetic, linearly polarized, nonlinear dielectric structure consisting of a cubically polarizable medium is investigated in the range of resonance frequencies, where the excitation is induced by wave packets consisting of plane waves at multiple frequencies. In extension of our previous work, here the case of two-sided acting fields is treated. The mathematical model of the boundary value problem is transformed into a system of one-dimensional nonlinear integral equations. The numerical solution of the problem is performed with the help of quadrature formulas in conjunction with an iterative method, where at each step a linear system of equations is solved. The analytic continuation of the linearized nonlinear problems into the region of complex values of the frequency parameter allows us to switch to the analysis of spectral problems. That is, the eigenfrequencies and the corresponding eigenfields of homogeneous linear problems with an induced nonlinear dielectric permittivity are to be determined. Single-layered structures with both negative and positive values, as well as three-layer structures with piecewise constant positive-negative-positive values, of the coefficient of the cubic susceptibility of the nonlinear medium are investigated. The layers under consideration have different properties. In particular, nonlinear layers with a negative value of the cubic susceptibility show decanalizing properties, while layers with a positive value show canalizing properties. The investigations were restricted to third harmonic generation. The paper presents the results of the numerical analysis characterizing the scattering/generation and spectral properties of the considered structures.
An effective way to describe the processes of generation of oscillations, via the variation of the relative Q-factor of the eigenoscillations corresponding to the eigenfrequencies of the scattering and generating structures when the intensity of the excitation field changes, is given. Moreover, the proposed approach applies equally well to sufficiently weak and strong energy generation, in ranges from a few percent to tens of percent of generated energy. For the first time, two-sided acting fields at the scattering frequency were taken into account, and a type-conversion of the oscillations could be observed. The latter effect was observed at a symmetry violation of the nonlinear problem caused by different amplitudes of the excitation fields. This effect may serve as a basis for numerical and analytical methods for the synthesis and analysis of nonlinear structures in the vicinity of critical points of the amplitude-phase dispersion, similar to the approach developed in the papers (Shestopalov & Yatsik, 1997; Yatsyk, 2000, 2001). That is, mathematical models for the control of anomalous scattering and generation properties of nonlinear structures, via the variation of amplitudes in a two-sided excitation at scattering and generation frequencies near the resonance frequencies of the linearized spectral problems, can be created.
"year": 2016,
"sha1": "f53719a3c325419bdb1a892aa4c933790643095c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311940.2016.1158342",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "f53719a3c325419bdb1a892aa4c933790643095c",
"s2fieldsofstudy": [
"Physics",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Variational Data Assimilation Method Using Parallel Dual Populations Particle Swarm Optimization Algorithm
In recent years, numerical weather forecasting has been increasingly emphasized. Variational data assimilation furnishes precise initial values for numerical forecasting models, constituting an inherently nonlinear optimization challenge. The enormity of the dataset under consideration gives rise to substantial computational burdens, complex modeling, and high hardware requirements. This paper employs the Dual-Population Particle Swarm Optimization (DPSO) algorithm in variational data assimilation to enhance assimilation accuracy. By harnessing parallel computing principles, the paper introduces the Parallel Dual-Population Particle Swarm Optimization (PDPSO) Algorithm to reduce the algorithm processing time. Simulations were carried out using partial differential equations, and comparisons in terms of time and accuracy were made against DPSO, the Dynamic Weight Particle Swarm Algorithm (PSOCIWAC), and the Time-Varying Double Compression Factor Particle Swarm Algorithm (PSOTVCF). Experimental results indicate that the proposed PDPSO outperforms PSOCIWAC and PSOTVCF in convergence accuracy and is comparable to DPSO. Regarding processing time, PDPSO is 40% faster than PSOCIWAC and PSOTVCF and 70% faster than DPSO.
Introduction
Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years. Initial conditions play a pivotal role in numerical weather prediction. By incorporating the influence of observational data from various time points, constrained by the forecast model, variational data assimilation can furnish improved initial values for numerical models, consequently enhancing the predictive accuracy of the model [1][2][3][4]. This delineates a nonlinear optimization problem. Due to the substantial computational demand, model intricacy, and the influence of the choice of initial conditions on the assimilation outcome, stringent criteria exist for algorithmic convergence and accuracy [5].
Researchers have done extensive research on this subject. Tian et al. [6] introduced the NLS-4DVar method, a unique ensemble-variational data assimilation approach leveraging "big data", showing significant performance enhancement over the traditional NLS-4DVar method without adding computational burden.
Tamang et al. [7] presented a new variational data assimilation (VDA) approach for the formal treatment of bias in model outputs and observations. This methodology leverages the Wasserstein metric, derived from the principles of optimal mass transport, to penalize the separation between the probability histograms characterizing the analysis state and a preceding reference dataset; this reference dataset is expected to possess greater uncertainty but lesser bias than the model and the observations. Fabelet et al. [8] developed an end-to-end learning approach using automatic differentiation embedded in a deep learning framework. The key novelty of the proposed physics-informed approach is to allow the joint training of the representation of the dynamical process of interest and of the solver of the data assimilation problem, where this joint training may use supervised and unsupervised strategies. Numerical experiments on the Lorenz-63 and Lorenz-96 systems demonstrated advantages over a conventional gradient-based minimization of the variational cost, in terms of both reconstruction performance and optimization complexity.
In Ref. [9], the primary field update method in the optimization process allowed the nonlinearity of the observation operator and the numerical weather prediction model to be incorporated into the solution of the optimization problem in the incremental four-dimensional variational (4D-Var) method. The outer/inner models used in the incremental 4D-Var method are based on the Unified Concept for Atmosphere (ASUCA), with suitable configurations for each resolution and applied linearization. Observation operators are implemented for various observations, with unified interfaces encapsulating external simulators. Variational quality control and variational bias correction are also introduced for advanced observation handling within the variational system. Parallelization is introduced to enhance computational efficiency, including the adjoint calculations.
To address observational data from remote sensing instruments, Dennis et al. [10] scrutinized the code that operationalizes the widely used Spline Analysis at Mesoscale Utilizing Radar and Aircraft Instrumentation (SAMURAI) technique for estimating atmospheric conditions from a designated collection of observations. They deployed several strategies to substantially enhance the code's efficiency, encompassing adapting it for operation on typical high-performance computing (HPC) clusters, evaluating and refining its single-node performance, introducing a more efficient nonlinear optimization approach, and facilitating Graphics Processing Unit (GPU) utilization through Open Accelerators (OpenACC).
To enhance the accuracy and convergence efficiency of variational data assimilation, classic swarm intelligence algorithms such as Particle Swarm Optimization (PSO) [11] and the Genetic Algorithm (GA) have been incorporated into data assimilation practice. However, the precision and speed achieved in specific assimilation processes remain suboptimal. In research on intelligent optimization algorithms within data assimilation, this aspect remains a focal point.
Liu et al. [12] introduced a "prematurity" judgment mechanism, applying chaotic perturbations to the updated positions of particles to avoid falling into local optima. However, the algorithm's speed still requires improvement.
Wang et al. [13] proposed a weight-optimizing particle swarm algorithm that dynamically updates the inertia weight, enhancing the algorithm's precision but not its computational time. Addressing the low accuracy of PSO in specific assimilation processes and its poor robustness under noise interference, Liu et al. [14] applied the time-varying dual compression factor PSO algorithm to variational data assimilation with discontinuous "switching" processes, establishing a novel assimilation model. Li et al. [15] designed a parallel molecular motion theory particle swarm optimization algorithm (PMPSO). The fundamental idea is to divide the particle swarm into N subsets (with N not exceeding the number of CPU cores). Each subset undergoes particle iteration simultaneously to enhance the algorithm's processing speed. After each iteration, the elite particle data from each subset are transferred to the common section before the next iteration. In this manner, the algorithm enhances processing speed without compromising evolutionary accuracy.
Inspired by the above, this article addresses the low accuracy observed in PSO during specific assimilation processes. To enhance assimilation precision, we apply a dual-population particle swarm optimization algorithm based on diffusion mechanisms (DPSO) in variational data assimilation. Additionally, considering the extensive computational demand, prolonged processing time, and high equipment requirements encountered when handling massive datasets, we have improved the DPSO by employing the principles of parallel computing. Consequently, we have designed a Parallel Diffusion Dual Population Algorithm (PDPSO) aimed at reducing the algorithm's processing time. This approach enhances processing speed while retaining evolutionary accuracy. The PDPSO was applied to the data assimilation process and compared in terms of time and accuracy with the Diffusion Dual-Population Particle Swarm Optimization (DPSO), the Time-Varying Dual Compression Factor Particle Swarm Algorithm (PSOTVCF) [16], and the Dynamic Weight Particle Swarm Algorithm (PSOCIWAC) [17]. Experimental results indicate that PDPSO is notably superior in convergence precision and time compared with PSOCIWAC and PSOTVCF. Moreover, while retaining accuracy, its processing time significantly outperforms that of DPSO.
Governing Equation
Without loss of generality, this paper adopts the partial differential equation used in Ref. [17], which evolves only in time, as the governing equation in data assimilation: where q(t, l) > 0 denotes the specific humidity; q_c is the saturation specific humidity, called the threshold; l_i represents the horizontal direction x, y or the vertical direction z; t is the time variable; F(t) is the source term for other physical processes and g is the source term for parameterized processes; a = a(t, l) is the velocity in the l-direction, a given continuous function with a continuous first-order partial derivative; q_0(l) is the initial specific humidity, which is continuously differentiable on the interval [0, L] and satisfies dq_0/dl < 0; H(q − q_c) is a unit step function: an "on-off" switch during parameterization. The numerical scheme corresponding to (1) is: where Δl is the space step size and i the spatial grid point; Δt is the time step size and t_k = kΔt, in which k is the time level; N = T/Δt is the total number of time levels in the integration process; M + 1 = (L/Δl) + 1 is the total number of spatial discrete points.
Particle Swarm Algorithm
Let the search space have dimensionality D and let a population of N particles be established. The position and the velocity of the i-th particle in the population are X_i = (x_i1, x_i2, ..., x_iD) and V_i = (v_i1, v_i2, ..., v_iD), respectively. The current optimal position found by particle i is denoted p_i = (p_i1, p_i2, ..., p_iD), and the optimal position found by the whole population is denoted P_g = (p_g1, p_g2, ..., p_gD). The positions and velocities of the particles are updated iteratively according to (4), where i = 1, 2, ..., N is the particle index; d = 1, 2, ..., D is the dimension of the particle; A_1, A_2 are the learning factors; and Y_1, Y_2 are random numbers in [0, 1] [11]. Drawing inspiration from the diffusion phenomenon, Xu [18] proposed a methodology that employs particle communication within a dual-population framework to replicate diffusion mechanics, which culminated in the DPSO algorithm. The DPSO algorithm incorporates key concepts including population temperature, particle diffusion energy, and particle diffusion probability. Based on these ideas, the basic principles of the DPSO algorithm and the step-by-step algorithmic procedure are described below.
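The update rule (4) can be sketched as follows. The inertia weight w is an added assumption; the text lists only the learning factors A1, A2 and the random numbers Y1, Y2.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(X, V, P_best, g_best, w=0.7, A1=1.5, A2=1.5):
    """One iteration of the PSO update: Y1, Y2 are uniform random
    numbers in [0, 1] drawn per particle and per dimension;
    the velocity is pulled toward the personal best P_best and the
    global best g_best, then added to the position."""
    Y1, Y2 = rng.random(X.shape), rng.random(X.shape)
    V_new = w * V + A1 * Y1 * (P_best - X) + A2 * Y2 * (g_best - X)
    return X + V_new, V_new

# Example: if every particle already sits at both bests, the velocity
# simply decays by the inertia weight.
X2, V2 = pso_step(np.zeros((3, 2)), np.ones((3, 2)),
                  np.zeros((3, 2)), np.zeros(2))
```

Arrays are shaped (N, D), so `g_best` broadcasts across the swarm.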
Diffusion energy Q
The energy an object possesses due to its mechanical motion is called kinetic energy [18]. By analogy, the energy a particle expends to overcome the work of gravitational potential energy in moving from its initial position to any other position is called diffusion energy. Assuming all particles have unit mass, the magnitude of the diffusion energy of each particle is defined by summing the squares of the components of its velocity vector and taking the square root.
Here v_i^d is the d-th component of the velocity vector of particle i, i is the index of the particle in the population, and Dim is the dimension of the search space.
Temperature T
Temperature is a scalar physical quantity used to quantify the heat content of an object [18]. From a microscopic viewpoint, it signifies the intensity of molecular thermal motion. Molecular motion theory postulates that the temperature of an object reflects the mean kinetic energy of the molecules it contains. When molecular motion is sluggish and molecular kinetic energy is low, the object's temperature is correspondingly low; heightened molecular movement translates into higher molecular kinetic energy and hence a higher temperature. Temperature thus serves as a macroscopic manifestation of the thermal motion of the molecules comprising the object. Accordingly, we define the temperature of the population as follows, where M is the population size.
Diffusion probability P
The diffusion probability [18] of a particle is defined as follows, where T represents the temperature of the particle population, Q_i is the diffusion energy of particle i, and R = 1 denotes the gas constant. The temperature of the current particle population and the diffusion energy of the particles determine the probability of random diffusion of each particle. If the temperature of the population is greater than the diffusion energy of a particle, that particle has a smaller diffusion probability; otherwise, it has a larger diffusion probability.
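The three quantities can be sketched together. Note the hedges in the comments: the text's wording for Q is ambiguous, and the explicit formulas for T and P are not reproduced above, so the Euclidean-norm energy, the mean-energy temperature, and the exponential form of P (chosen to match the stated monotonicity) are all assumptions.

```python
import numpy as np

def diffusion_energy(V):
    """Q_i: Euclidean norm of each particle's velocity vector
    (assumed reading of the definition; unit particle mass)."""
    return np.sqrt((V ** 2).sum(axis=1))

def population_temperature(Q):
    """T: mean diffusion energy over the M particles (assumed form)."""
    return Q.mean()

def diffusion_probability(Q, T, R=1.0):
    """P_i = exp(-T / (R * Q_i)): an assumed form consistent with the
    stated behavior (T > Q_i gives a small probability, T < Q_i a
    larger one); R = 1 is the gas constant."""
    return np.exp(-T / (R * Q))

V = np.array([[3.0, 4.0], [0.0, 1.0]])  # two particles, 2-D velocities
Q = diffusion_energy(V)                  # energies 5 and 1
T = population_temperature(Q)            # temperature 3
P = diffusion_probability(Q, T)
```

With these numbers the fast particle (Q = 5 > T) is far more likely to enter the diffusion pool than the slow one (Q = 1 < T), matching the qualitative description above.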
Diffusion-Based PSO Algorithm
The DPSO algorithm employs two populations, designated A and B, on which identical operations are executed. During each iteration of the algorithm, the diffusion energy of every particle in population A (B) is computed from its velocity vector. The temperature of population A (B) for the current iteration is then determined from the diffusion energies of all particles. Furthermore, the diffusion probability of each particle is calculated using formula (6), and a uniformly distributed random number is generated; if this random number is less than the particle's diffusion probability, the particle is placed into the diffusion pool of population A (B). Two particles are then randomly selected from the diffusion pool to generate a difference vector, which perturbs the global extremum within population A (B); a replacement occurs if the perturbed vector outperforms the global extremum of the other population B (A). Specifically, when the diffusion pool of population A holds two or more particles, two of them (m and n) are randomly selected as diffusion agents. These particles generate a difference vector that randomly perturbs the global extremum, yielding a provisional vector; if this provisional vector proves superior to the global extremum of population B, a replacement is executed, otherwise it remains unaltered. In parallel, when the diffusion pool of population B holds two or more particles, two of them (a and b) are randomly selected as diffusion agents and generate a difference vector that similarly perturbs the global extremum, resulting in a provisional vector; if this provisional vector outperforms population A's global extremum, a replacement occurs, otherwise it remains unchanged. Through these sequential steps, the mechanism facilitates the exchange of information and diffusion between the two populations.
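The difference-vector exchange between the two populations can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pool representation and the perturbation scale are assumptions.

```python
import random

def diffuse_between(pool, other_gbest, cost, scale=0.5):
    """Perturb the other population's global best with a difference
    vector built from two particles randomly chosen from this
    population's diffusion pool; keep the perturbed (provisional)
    vector only if it improves the cost (lower is better)."""
    if len(pool) < 2:
        return other_gbest  # need at least two diffusion agents
    m, n = random.sample(pool, 2)
    trial = [g + scale * (xm - xn) for g, xm, xn in zip(other_gbest, m, n)]
    return trial if cost(trial) < cost(other_gbest) else other_gbest

# Toy usage: diffusion pool of population A perturbs population B's best
cost = lambda x: sum(v * v for v in x)
pool_A = [[0.2, -0.1], [0.5, 0.3], [-0.4, 0.1]]
gbest_B = diffuse_between(pool_A, [1.0, 1.0], cost)
```

By construction the receiving population's global extremum can only improve or stay unchanged, which is what makes the exchange safe to run in both directions.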
Calculation of Fitness Function
In PSO, individuals are evaluated according to their fitness. Individuals with higher fitness are closer to the optimal solution of the objective function, and individuals with lower fitness are farther from it; that is, a state with high fitness should correspond to a better state of the objective function. Since variational assimilation is a minimization problem, we apply the above molecular-kinetic-theory-based particle swarm optimization algorithm to the variational assimilation problem to find the minimum of the variational assimilation cost function, so the objective function value and the fitness are inversely related. The variational assimilation cost function is defined as follows: where q_0 belongs to the solution space S_0 = {q_0(l) | q_0(l) ∈ C_L[0, L], q_0(l) < 0, 0 < l < L; q_0(0) = 0}, which satisfies the physical constraints and compatibility conditions; q is the solution obtained by substituting q_0 into model (1), and the discrete form of the cost function in formula (9) is: where q_i^k is the numerical solution of model (3), and (q_obs)_i^k is the observation at time level t_k = kΔt and spatial grid point l_i = iΔl.
The fitness function [18] is defined as: where
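Because the cost J must be minimized while PSO selects for high fitness, any monotone-decreasing mapping from cost to fitness realizes the inverse relationship described above. The 1/(1 + J) form below is an illustrative assumption, not necessarily the exact definition used in [18]:

```python
def fitness(cost_value):
    """Map a non-negative cost J to a fitness in (0, 1]: lower cost
    gives higher fitness. Illustrative 1/(1 + J) choice only."""
    return 1.0 / (1.0 + cost_value)
```

Any such mapping preserves the ranking of particles, so the PSO machinery that favors high-fitness individuals automatically favors low-cost assimilation states.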
Basic Process
The calculation process of applying the dual-population particle swarm optimization algorithm based on the parallel diffusion mechanism to variational data assimilation is as follows.
a) Initialize the particles and parameters in populations A and B, including velocity, acceleration, and position, and set the maximum number of iterations to 400.
b) Evaluate the data assimilation cost function of each particle in populations A and B.
c) Update the global optimal values of particles in populations A and B and the historical optimal values of particles in populations A and B, denoted by P_g^A, P_g^B, p_i^kA, and p_i^kB, respectively.
d) Calculate the diffusion energy of all particles in populations A and B according to formula (6), where v_i^d is the particle velocity and M is the total number of particles.
e) Calculate the temperature of populations A and B according to formula (7).
f) Calculate the diffusion probability of all particles in populations A and B according to formula (8).
g) Determine whether or not each particle in populations A and B is put into the diffusion pool.
h) Exchange difference vectors between the populations according to the rules of the DPSO algorithm, enabling communication and transmission of information between the different populations.
i) Output the better of the global extrema of populations A and B.
j) Adjust the speed and position of particles in populations A and B according to formulas (4) and (5), where Y_1 and Y_2 are two random numbers in [0, 1], and increase the number of iterations in the group by one.
k) If the number of iterations does not reach 400, go to step b); otherwise, the DPSO algorithm ends.
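Steps d)–h) above can be sketched as a single diffusion step for one population. Since formulas (6)–(8) are not reproduced in this extract, the energy, temperature, and probability expressions below are illustrative placeholders loosely modeled on molecular kinetic theory, not the paper's exact formulas:

```python
import math
import random

def dpso_iteration(pop, gbest_other, cost, k_B=1.0, scale=0.5):
    """One diffusion step for one population. pop is a list of dicts
    with 'x' (position) and 'v' (velocity). Energy/temperature/
    probability are placeholders for the paper's formulas (6)-(8)."""
    # d) diffusion energy of each particle from its velocity vector
    energies = [sum(vi * vi for vi in p['v']) for p in pop]
    # e) population temperature from the mean diffusion energy
    temperature = sum(energies) / (len(pop) * k_B)
    # f) diffusion probability per particle (Boltzmann-like placeholder)
    probs = [math.exp(-e / max(temperature, 1e-12)) for e in energies]
    # g) fill the diffusion pool: uniform draw vs. diffusion probability
    pool = [p['x'] for p, pr in zip(pop, probs) if random.random() < pr]
    # h) perturb the other population's global best with a difference vector
    if len(pool) >= 2:
        m, n = random.sample(pool, 2)
        trial = [g + scale * (a - b) for g, a, b in zip(gbest_other, m, n)]
        if cost(trial) < cost(gbest_other):
            gbest_other = trial
    return gbest_other

# Toy usage on a sum-of-squares cost
random.seed(1)
cost = lambda x: sum(v * v for v in x)
pop = [{'x': [random.uniform(-1, 1), random.uniform(-1, 1)],
        'v': [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]}
       for _ in range(10)]
g_new = dpso_iteration(pop, [1.0, 1.0], cost)
```

Running this step once per population per iteration, in both directions, reproduces the information-exchange pattern of steps b)–k).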
PDPSO Algorithm for Optimizing Variational Data Assimilation
Although the dual-population algorithm based on the diffusion mechanism described in the previous section improves the accuracy of variational data assimilation, it does not reduce the assimilation time or increase the speed of the algorithm, because of the large amount of data to be processed in variational data assimilation. To address this problem, DPSO is improved into a parallel-diffusion dual-population algorithm (PDPSO). The basic idea of the algorithm is to divide the particle swarm into n subsets (n not greater than the number of CPU cores), each controlled by a thread, so that particle iterations run simultaneously and the processing speed of the algorithm increases [8]. After each iteration, the data of the leading elite particle in each subset is passed to a shared public part before the next iteration is performed, allowing information exchange between the subsets. This approach adds diversity, because parallel computing in the form of asynchronous communication enables information exchange while effectively avoiding accuracy degradation; in addition, since the essence of parallel algorithms is to maximize the utilization of hardware, the algorithm results can still maintain good accuracy.
PDPSO uses the dual-population particle swarm algorithm based on the diffusion mechanism to search for feasible solutions. The detailed implementation of the PDPSO algorithm is as follows. Initialize the various algorithm parameters, such as the number of groups and the maximum number of iterations (400).
a) Initialize the read-write synchronization lock and create the number of threads according to the number of groups.
b) Each thread randomly initializes the particles in its group and divides the population within the thread randomly and equally into A and B.
c) Calculate the data assimilation cost function of each particle. If the individual optimal solution of the particle is better than the current global optimal solution in the group, replace the group's global optimal solution with the individual optimal solution and go to d); otherwise, go to g).
d) The grouping thread acquires the read-write synchronization lock.
e) If the current global optimal solution is inferior to the optimal solution within the group, replace the global optimal solution within the group with the individual optimal solution.
f) The grouping thread releases the read-write synchronization lock.
g) The grouping thread calculates the relevant parameters according to formulas (8), (9), and (10) and determines whether each exchanged difference vector can replace the optimal particle of the other population.
h) The grouping thread updates the particle speed according to formula (4) and the particle position according to formula (5) and adds one to the number of iterations in the group.
i) The grouping thread judges whether it has reached the maximum iteration number (400) set for the thread. If it has reached 400 generations, end the thread; otherwise, go to d).
j) If all grouping threads are finished, output the final result.
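The threaded structure of steps a)–j) can be sketched with Python's threading module. This is a simplified stand-in: an ordinary Lock replaces the read-write synchronization lock, and the per-group DPSO update is reduced to a toy random local search on a sum-of-squares cost.

```python
import random
import threading

def run_group(group_id, shared, lock, iters=100, dim=2):
    """One grouping thread: iterate a toy local search and merge the
    group's best into the shared global best under the lock (mimicking
    steps d)-f)). A real implementation would run the DPSO update here."""
    rng = random.Random(group_id)  # per-thread RNG, deterministic per group
    cost = lambda x: sum(v * v for v in x)
    best = [rng.uniform(-1, 1) for _ in range(dim)]
    for _ in range(iters):
        cand = [b + rng.gauss(0, 0.1) for b in best]  # stand-in for PSO update
        if cost(cand) < cost(best):
            best = cand
        with lock:  # acquire/release the synchronization lock
            if cost(best) < shared['cost']:
                shared['cost'] = cost(best)
                shared['best'] = best[:]

shared = {'cost': float('inf'), 'best': None}
lock = threading.Lock()
threads = [threading.Thread(target=run_group, args=(g, shared, lock))
           for g in range(4)]  # n groups, n not greater than the CPU core count
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The shared dictionary plays the role of the "public part" that receives each subset's elite particle after every iteration; the lock ensures the asynchronous merges never interleave.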
Simulation Environment
Using the experimental data and experimental analysis methods in Ref. [17], the time-varying dual compression factor particle swarm optimization algorithm (PSOTVCF), the dynamic weight particle swarm optimization algorithm (PSOCIWAC), and the dual-population parallel particle swarm optimization algorithm based on the diffusion mechanism (PDPSO) are compared in terms of convergence accuracy and time; a comparison with the algorithm before parallelization is also made with respect to processing time. The inertia weight in the latter two particle swarm optimization algorithms decreases from 0.7 to 0.1 at a constant rate. The acceleration factors of the time-varying dual compression factor particle swarm optimization algorithm are as follows: the first compression factor is constant, A_1 = 2.6, A_2 = 1.2; the second compression factor is time-varying, A_1N = 2.88, A_1M = 2.68, A_2N = 2.45, A_2M = 1.25; the scaling factor is set to 0.5. Two hundred particles were initialized and run for 1 000 iterations, with the assimilation recorded every 125 generations; in this paper, random initialization combined with empirical knowledge was used. In the 200 assimilation trials, the initial guess value of the adjoint method is taken as: where j is both a parameter and the sequence number of an instance, and the initial particles are generated by random perturbation of each component of q_01(l). Specifically, three random numbers r_1, r_2, and r_3 are generated in [0, 1], d_0 = r_1 − r_2, and the initial guess value is q_02(l) = d_0 r_3 q_01(l). Since the focus is on the effectiveness of different optimization algorithms in data assimilation involving switching processes, perfect observations are generated from the initial observations: q_0^obs(l) = 0.28 − 0.26 sin(πl/2). In the numerical experiment, the relevant parameters in the control equation are: a = (1 + t)(1 − l), F(t) = A − Bt, A = 8, B = 11, q_c = 0.58, g = 7.0, M = 20, N = 100, Δt = 0.01, Δl = 0.05. All the experiments were implemented in a MATLAB R2017a programming environment on a PC with an Intel Core i5 CPU.
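The particle initialization described above can be transcribed directly: with r_1, r_2, r_3 uniform in [0, 1] and d_0 = r_1 − r_2, each initial particle is q_02(l) = d_0 r_3 q_01(l). In the sketch below, the stated observation profile q_obs(l) = 0.28 − 0.26 sin(πl/2) is used as a stand-in for q_01, whose exact formula is not reproduced in this extract:

```python
import math
import random

def perturbed_initial_guess(q01, rng=random):
    """Build one initial particle q02(l) = d0 * r3 * q01(l), with
    d0 = r1 - r2 and r1, r2, r3 uniform in [0, 1], as in the text."""
    r1, r2, r3 = rng.random(), rng.random(), rng.random()
    d0 = r1 - r2
    return [d0 * r3 * q for q in q01]

# Grid and "perfect observation" profile from the text (N = 100, dl = 0.05)
dl, N = 0.05, 100
grid = [i * dl for i in range(N + 1)]
q_obs = [0.28 - 0.26 * math.sin(math.pi * l / 2) for l in grid]
particle = perturbed_initial_guess(q_obs)  # one member of the initial swarm
```

Since |d_0 r_3| ≤ 1, every generated particle stays within the envelope of the base profile, which keeps the initial swarm in a physically plausible region.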
Convergence Accuracy
Figure 1 shows the comparison results after 1 000 assimilation trials when the number of iterations is 400. The x-axis represents the number of algorithm iterations, and the y-axis represents the logarithm of the convergence accuracy; the smaller the value, the closer the assimilated initial value is to the observed value. The figure shows that the accuracy of the PDPSO algorithm is significantly higher than that of either the PSOCIWAC algorithm or the PSOTVCF algorithm.
Figure 2 shows the trend of the convergence accuracy of the three algorithms when the number of iterations is 50, 100, 150, 200, 250, 300, 350, and 400, respectively. As can be seen from Fig. 2, in the early stage of assimilation (before 50 generations), the results of the three methods are approximately the same. In the middle stage (after 100 generations), PSOTVCF takes the lead, with assimilation quality much higher than that of PSOCIWAC and PDPSO, and some particles have not yet converged. In the late stage (after 200 generations), both PSOTVCF and PSOCIWAC have essentially converged, while PDPSO has not yet fully converged, yet its accuracy already far exceeds that of the other two methods. When the number of iterations reaches 400, PSOCIWAC converges to −11.2 on average, PSOTVCF to −13, and PDPSO to −14. This shows that the quality of the PDPSO assimilation results is much higher than that of the dynamic weight and the time-varying dual compression factor particle swarm optimizations.
Assimilation Time
Table 1 shows the assimilation time when the number of iterations is 100, 200, 300, and 400, respectively. To avoid anomalous data, four groups of assimilation tests were carried out for each assimilation window; the values in Table 1 are in seconds.
From Table 1, we can see that PDPSO is always about 40% faster than PSOCIWAC and PSOTVCF, and 70% faster than DPSO during the whole iteration.
Conclusion
In this paper, a parallel dual-population particle swarm optimization algorithm based on the diffusion mechanism (PDPSO) is applied to variational data assimilation, improving the assimilation accuracy. To address the problem of slow processing of extensive data, the algorithm is improved with parallel computing so as to make full use of the computer hardware resources. In terms of the population update strategy, the data of the top-performing elite particle within each subset is shared with the collective after each iteration concludes, and the next iteration is then executed, ensuring the integration of diversity.
The proposed algorithm shows a considerable improvement in convergence accuracy and assimilation time compared with the dynamic weight PSO algorithm and the dual time-varying compression factor PSO algorithm. However, this paper applies the algorithm only to variational data assimilation featuring a "switch" process. In the future, extending its application to include spatial evolution would increase the complexity of the control equation.
"year": 2024,
"sha1": "58bef1d6457d159d29f0e50cebfd5ea0e4aa340a",
"oa_license": "CCBY",
"oa_url": "https://wujns.edpsciences.org/articles/wujns/pdf/2024/01/wujns-1007-1202-2024-01-0059-08.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c019407aa1618fd4edd793a8d1a7e85928fa41f2",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Vector Borne Infections in Italy: Results of the Integrated Surveillance System for West Nile Disease in 2013
The epidemiology of West Nile disease (WND) is influenced by multiple ecological factors, and integrated surveillance systems are therefore needed to detect the infection early and activate the consequent control actions. As different animal species have different importance in the maintenance and spread of the infection, a multispecies surveillance approach is required. An integrated and comprehensive surveillance system is in place in Italy, aiming at early detection of virus introduction, monitoring of possible infection spread, and implementation of preventive measures for human health. This paper describes the integrated surveillance system for WND in Italy, which incorporates data from the veterinary and human sides in order to evaluate the burden of infection in animals and humans and provide the public health authorities at regional and national levels with the information needed for a fine-tuned response.
Introduction
The epidemiology of arboviral zoonoses is influenced by multiple ecological factors and, therefore, integrated and comprehensive surveillance systems are needed for early detecting the infection and activating consequent control actions. West Nile virus (WNV) is a Flavivirus belonging to the Japanese encephalitis antigenic complex of the family Flaviviridae. The genus Flavivirus also includes other arboviruses, such as St Louis encephalitis virus, Japanese encephalitis virus, Murray Valley virus, Usutu virus, and Kunjin virus. WNV is maintained in nature by birds and is transmitted primarily by the bite of infected mosquitoes acquiring the virus by feeding on infected birds [1]. Mosquitoes can also acquire the virus by transovarial transmission or by mating [2]. Migratory birds are strongly suspected to play a significant role in the introduction of WNV from endemic areas into new regions [3]. Humans, horses and other mammals are considered incidental dead-end hosts [1].
West Nile disease (WND) is a zoonosis. The infection in humans mainly occurs asymptomatically or with mild febrile illness [4]. Less than 1% of patients show severe neurological symptoms classifiable in three main syndromes: meningitis, encephalitis, and poliomyelitis (acute flaccid paralysis) [5]. In horses the disease is usually subclinical, although sometimes they may show neurological symptoms [1].
In the Western Hemisphere WNV was first detected in New York City in 1999 [6]. After that episode, the virus spread dramatically westward across the United States of America, southward into Central America and the Caribbean, and northward into Canada, resulting in the largest human epidemic of a neuroinvasive disease ever reported [7].
The first large human outbreak of WND in Europe was recorded in 1996 in Romania with 393 confirmed cases [8].
After that episode, the number of WND reported cases in horses and humans increased significantly. This apparent rise of reported cases is partly due to the improvement of both surveillance systems and diagnostic methods [3,9,10], and also to the introduction and rapid spread of WNV lineage 2 in Europe [11,12]. In the last recent years, virus circulation was observed in different Mediterranean countries with an increasing number of human cases in the Eastern Europe [13]. Some countries (i.e., Greece, Spain, Russia, Israel, Hungary, and Romania) were affected by the virus circulation for several consecutive years, supporting the hypothesis of a possible local endemisation of the infection.
As various animal host species take part, with different epidemiological roles, in the transmission of WNV, a multispecies surveillance approach is required. Some European countries focused their surveillance program only on the human population (i.e., Albania, Kosovo, Montenegro) [14], whereas others integrated these activities with general and/or targeted surveillance in equines (such as Croatia, Spain, Greece, Portugal, France, Romania, Cyprus, and Morocco) [14][15][16]. Moreover, in some European nations (i.e., Italy, Greece, United Kingdom, Spain, Germany, Hungary, and Serbia) surveillance of mosquito populations is added to that addressing humans and equines, with the aim of detecting WNV early during the season [11,12,17]. In this regard, the highest percentage of mosquito pools tested positive for WNV was reached in Serbia in 2013 (5.5%) [12].
In Italy, the first outbreak of WND was identified in horses in Tuscany region during the late summer of 1998. Fourteen animals showed neurological disorders [18], but no cases of human encephalitis were reported [9]. Following this epidemic, a national veterinary surveillance plan was put in place in 2001 to identify the geographical areas at risk for reintroduction of the WNV infection. This surveillance plan was coordinated by the National Reference Centre for the study of Exotic Animal Diseases (CESME) and carried out in 15 Italian wetlands. It was based on entomological monitoring and periodical serological testing of sentinel chickens and equines [19,20]. The surveillance system did not detect any relevant circulation of WNV in animals until 2008, when the virus was detected in mosquitoes, birds, equines, and humans in the area surrounding the Po river delta, involving eight provinces in three northern regions: Emilia Romagna, Veneto, and Lombardy [20][21][22].
Based on what happened during 2008, the national WNV veterinary surveillance system was revised and new activities were added, aiming at identifying as early as possible the virus circulation all over the country and implementing measures for the prevention and control of human infections [20].
In light of the increasing number of confirmed cases in animals and the circulation of the virus in a wider geographical area, a human surveillance plan for West Nile neuroinvasive disease (WNND) was put in place in 2008 in the Emilia Romagna and Veneto regions, which made it possible to detect the first eight indigenous cases of WNND in humans [9].
In 2009, the results of the veterinary surveillance system showed the resurgence of infection mostly in the same geographical areas as the previous year, but new foci were reported in central Italy, in Tuscany and Latium, rather far from the areas infected in 2008 [22]. Cases of human WNND increased to 18 in 2009 (nine cases in Emilia Romagna, seven in Veneto, and two in Lombardy), occurring in the same geographical areas where WNV circulation was detected in mosquitoes and animals (chickens and equines) [9].
In August 2010, new foci of infection were observed in Sicily and Molise regions, respectively, in southern and central Italy. These outbreaks confirmed the WNV ability of spreading to new areas, affecting new host populations [20,23]. In 2011, WND outbreaks were confirmed in six regions: Sardinia, Sicily, Friuli Venetia Giulia, Veneto, Basilicata, and Calabria, where clinical cases in horses and neurological signs in birds were observed.
Between 2010 and 2011 seventeen new human cases of WNND were reported from three regions (Veneto, Friuli Venetia Giulia, and Sardinia) with a 23.5% case-fatality rate [9].
Following the geographical spread of WNV westward, the Directorate General for Prevention of the Italian Ministry of Health (MoH) issued on the spring of 2010, a national plan for WNND human surveillance that integrated human and veterinary surveillance [24]. Since then, the national surveillance plan on imported and autochthonous human vector-borne disease (chikungunya, dengue and West Nile disease) was issued and revised annually [25][26][27].
The main objective of this paper is to describe the components of the integrated surveillance system for WNV established in Italy, with the aim of evaluating the burden of the disease in animals and humans and providing the local and national public health authorities with the information needed to fine-tune the response. To this aim, the existing data exchange flows between veterinary and human systems and the results of the 2013 surveillance are presented. Target species of the veterinary surveillance activities include migratory and resident birds, horses, and poultry. The entomological surveillance is based on a certain number of mosquito collection sites placed in the three above-mentioned areas for identifying possible WNV vector species and determining their abundance and spatiotemporal distributions [30]. Active bird surveillance is focussed on the following species: Magpie (Pica pica), Hooded Crow (Corvus corone cornix), and Jay (Garrulus glandarius), which are sampled and virologically tested in AVC. Serological testing of sentinel chickens and backyard poultry is foreseen as a possible alternative in case the planned activities on resident birds cannot be carried out. Passive surveillance on bird mortality is carried out throughout the country, and target species include Blackbird (Turdus merula), Starling (Sturnus vulgaris), Jackdaw (Corvus monedula), Magpie (Pica pica), Jay (Garrulus glandarius), Hooded Crow (Corvus corone cornix), and Collared dove (Streptopelia decaocto). In addition, any episode of abnormal or increased mortality in other wild birds must be reported to veterinary authorities.
WNV Circulation Surveillance Plan in
Entomological surveillance aims at identifying the mosquito fauna, defining the composition of vector populations and the species responsible for WNV transmission in the enzootic and epizootic cycles of the disease, investigating their ability to overwinter.
Moreover, countrywide passive surveillance on neurological cases observed in equines is coupled with the serological survey performed in sentinel horses three times per year (in May, August, and September) in AR. In addition, when the viral circulation is detected in zones not previously affected by the infection, further activities are put in place to better identify the extent of the infection.
Veterinary WND Surveillance Information System.
In 2008, the Department of Veterinary Public Health Nutrition and Food safety (VPH Department) of the MoH appointed the CESME to develop an information system collecting data on animal disease outbreaks, using standard procedures and templates for data input and output [31]. The new information system, called SIMAN, was developed to provide a tool for the management of epidemic emergencies, to collect and communicate outbreak data to the MoH, the European Commission and the World Organization of Animal Health (OIE) in compliance with current national and international legislation [32,33]. SIMAN was firstly used by the veterinary services during the large WND epidemic occurred in 2008. The data reported to SIMAN allowed the veterinary services to have the full picture of outbreaks distribution and to plan further investigations [34].
Given the complex epidemiology of the disease and the multidisciplinary approach needed for its surveillance, an integrated and comprehensive system for the management of WND outbreaks and surveillance activities was established.
In particular, new tools were developed in SIMAN for (i) the registration of sentinel chickens and equines into the National Database of livestock and holdings (BDN); (ii) recording and managing the laboratory results; (iii) publishing weekly and daily reports describing the outcomes of the surveillance activities.
A web-based geographic information system (WebGIS) was also developed for displaying thematic maps and to help the veterinary services to explore the area surrounding the outbreak, to create buffers around the reported cases, and to download the list of equine farms placed within the buffers. An automatic procedure extracting every night all data inconsistencies and errors assured the necessary quality checks. In case of errors, an automatic alert email is sent to the veterinary services asking for data verification and correction.
WNND Surveillance Plan in Humans.
The national plan for human surveillance defines as "affected areas" all the provinces (secondary administrative units) where laboratory-confirmed WNV infections in animals, vectors, or humans have been notified in the previous year or during the current surveillance period (between 15 June and 30 November, considered the period of highest vector activity). The identification of an affected area immediately triggers the "surveillance area", which corresponds to the regional territory of the affected area [27].
In the affected area, local health authorities have to implement an active surveillance system for WNND in workers employed in the farms where equine cases have been identified and in individuals living or working in the surrounding area (province). Moreover, the measures for vector control have to be implemented immediately. At the same time, passive surveillance on human neurological cases has to be set up in the surveillance area, requesting physicians to report all probable and confirmed WNND cases using a modified European case definition [35]: a patient with fever ≥ 38.5 °C and neurological symptoms (encephalitis, meningitis, Guillain-Barré syndrome, or acute flaccid paralysis) and at least one of the following laboratory criteria: (i) for probable case: anti-WNV specific antibody response in blood; Polymerase Chain Reaction (PCR) positive in urine; (ii) for confirmed case: viral isolation in blood or cerebrospinal fluid (CSF); anti-WN IgM positive in CSF; PCR positive in blood or CSF; confirmed presence of anti-WN antibodies in blood by neutralization test.
The list of regional reference laboratories is also provided in the WNND national surveillance plan. When a probable case is reported, the regional reference laboratory has to proceed with confirmation using one of the above reported laboratory methods. In case the neutralisation test is not available at the regional laboratory, the patient's sera are sent to the National Reference Laboratory at Istituto Superiore di Sanità (ISS) for further confirmatory tests. When WNND human cases are confirmed, immediate WNV nucleic acid amplification test (NAAT) screening of all blood and haematopoietic stem cell donations must be ordered in the affected areas. Additional screening of solid organ donations in the surveillance areas (regions) is also introduced [36]. At the national level, all blood, tissue, and solid organ donors who travelled to an affected area have to be temporarily deferred for 28 days starting from the day they left the affected area [37].
Case Reporting System of Human WNND.
All human cases are notified by regional authorities to the MoH and to the Italian National Centre for Epidemiology, Surveillance and Health Promotion (CNESPS-ISS) using a specific password-protected web-based system (http://www.simi.iss.it/inserimento dati.htm), which permits reporting of probable and confirmed cases, adding available epidemiological, clinical, and laboratory information. The database is accessible also to the National Blood Centre and to the National Transplant Network, which implement precautionary measures on blood donation and transplant activities on the basis of the data on WNND human cases.
In addition to the activities foreseen by the national WNND surveillance, an enhanced regional surveillance for WNV fever (WNF) was established in the Veneto, Emilia Romagna, and Lombardy regions. The case definition for WNF was the following: a person showing fever ≥ 38.5 °C (or history of fever in the last 24 h) for a period no longer than seven days, from 15 July to 30 November, with no recent history of travel to tropical countries and absence of other comorbidities accounting for the febrile illness.
All WNF confirmed cases are notified by the regional authorities to the MoH and to the CNESPS-ISS through the web-based system.
The Surveillance Systems Integration.
During the vector activity period, a data exchange protocol is in place between SIMAN and the CNESPS-ISS to jointly define and update the map of the affected areas (provinces). The identification of new ACV and AE following the veterinary activities immediately triggers the establishment of the "affected areas" and the "surveillance areas" as foreseen by the human WNND surveillance plan. When an outbreak is confirmed in SIMAN or a laboratory result confirms WNV circulation in a given territory, measures for the prevention and control of the infection in humans are immediately applied in the affected areas. Veterinary and human surveillance are, therefore, linked to each other and work as a chain reaction.
Together with the animal and entomological monitoring, the surveillance of WNND human cases makes it possible to detect virus circulation in a given geographical area and to estimate its magnitude through the systematic detection of emerging clinical cases.
The data flow in the web-based integrated surveillance system is shown in Figure 2.
WNND Human Surveillance.
From 15 June to 30 November 2013, 44 autochthonous human cases of WNND were confirmed. The majority of patients were male (61.3%) with a median age of 73 years (range: 42-89 years). The onset of cases ranged between 21 July and 21 September: 75% had the symptoms onset in August, which represented the peak month in 2013 (Figure 3). None of the cases travelled abroad during the incubation period.
The distribution of WNND confirmed cases by age and region/province of exposure is shown in Table 1: the majority of WNND cases in 2013 were reported from Emilia Romagna (20 cases), followed by Veneto (13 cases), Lombardy (10 cases), and Apulia (1 case).
The majority of cases reported symptoms of encephalitis (70.5%), followed by meningitis (38.6%), polyradiculoneuritis (9.1%), and other neurological symptoms (18.2%). None of the patients had history of vaccination against other arboviruses. Seven cases died, corresponding to a 16.3% case fatality rate.
From 15 June to 30 November, 34 confirmed cases of WNF were also reported to the MoH and the CNESPS-ISS by the three regions with enhanced WNF surveillance. During the surveillance period the CNESPS-ISS published a weekly bulletin, available in electronic format on the website of the ISS (http://www.epicentro.iss.it).
Outcomes from the Integrated Surveillance System.
Data collected by both the veterinary and human surveillance systems in the previous year (2012) made it possible to identify nine regions (surveillance areas) in which WNV human surveillance had to be performed in 2013 (Basilicata, Latium, Friuli Venetia Giulia, Sardinia, Veneto, Emilia Romagna, Lombardy, Calabria, and Sicily). Moreover, considering the geographical characteristics of Basilicata, its neighbouring region, Apulia, although not directly affected, was also included in the surveillance areas. Figure 3 shows the ten Italian Regions under surveillance.
During 2013 the veterinary surveillance activities confirmed the circulation of WNV in six regions (Sardinia, Veneto, Emilia Romagna, Lombardy, Calabria, and Sicily), of which three reported human cases (Veneto, Emilia Romagna, and Lombardy). One human case was reported also in Apulia, although no animals and vectors tested positive for WNV.
Discussion
The existence of surveillance systems able to serve as an early warning tool is pivotal for preventing the spread of infectious diseases. The rapid application of appropriate control measures after the detection of an emerging infectious disease is crucial for the success of any intervention. Between the first occurrence of WND in Italy in 1998 and its reemergence in 2008, WND was considered an exotic disease for the Italian territory, and the main objective of the surveillance activities in place at that time was to evaluate the possible reintroduction of the virus. In this context, the data collected by the national system for the notification of animal diseases (SIMAN) were useful for a rapid epidemiological evaluation, to define areas at risk for human transmission, and to facilitate the implementation of effective and prompt control measures.
In Italy the first WNND human cases were detected in 2008, when a human surveillance system was implemented in the areas where WNV circulation had been demonstrated among animals and vectors. Since then, human cases of WNND have been reported every year in Italy, with a peak of incidence in 2013 (44 confirmed cases). As shown in Figure 4, the number of reported WNND human cases has recently increased, due to the greater attention paid to the disease by the national authorities and to a better integration of human and veterinary surveillance.
In fact, an early warning system for WNV detection, based on animal and entomological surveillance, can provide the basis for targeted public health interventions and risk communication activities aimed at reducing the risk of human infection. Since the first occurrence of the virus, the multispecies surveillance plan in place in Italy has been capable of confirming the ability of WNV to spread to new geographical areas and to infect different host populations. Veterinary surveillance activities, therefore, were particularly useful to assess and monitor the evolution of the epidemiological situation, providing the public health authorities with precious and timely information on where and when WND prevention and control actions in humans had to be put in place. The existence of an effective two-way communication system between veterinary and human health authorities ensured a prompt implementation of preventive measures, a more accurate assessment of the epidemiological situation of the disease, and a more precise estimation of the extent of the infection. The well-conducted veterinary surveillance programme made it possible to identify the territories at greatest risk of WNV circulation and to set up human surveillance activities in these geographical areas. It is noteworthy that some regions, where WNV circulation was demonstrated in animals and vectors, did not report any human cases. On the contrary, in the Apulia region, one autochthonous human case of WNND was confirmed in 2013, even though no WND cases in horses or in other animal species were reported; however, the detection three years earlier of sporadic virus circulation in poultry farms in a bordering territory had highlighted the suitability of that area for virus transmission. These findings confirm the crucial role of integrated human, animal and vector surveillance for the timely setting up of preventive measures.
In this context, the veterinary surveillance carried out in sentinel animal, bird, and vector populations can play a crucial role in foreseeing human transmission, even considering the limits of sensitivity of a surveillance system for vector-borne diseases.
Entomological surveillance is also a central aspect, allowing early detection of the circulation of the virus [38][39][40]. In some provinces of northern Italy, WNV circulation was detected through entomological surveillance as early as July, well in advance of the occurrence of human cases [38].
With regard to circulating viruses, in the last three years WNV lineage 2 was detected in several Italian foci, apparently showing an extension of its spread and a greater contribution of this lineage to the overall epidemiological situation in Italy. In addition, the co-circulation of lineages 1 and 2 in the same area [22] may create favourable conditions for possible changes in the virulence of the viral strains, potentially leading to unexpected and adverse consequences.
Conclusions
The integrated human and veterinary information systems provide the Competent Authority with a large amount of data and information on WNV circulation, thus allowing the evaluation of planned actions and, if needed, their improvement or revision. In 2013, the integrated human, entomological, and animal surveillance system was able to monitor the spread of WNV and supported the application of control measures for blood transfusions and organ donations, preventing the transmission of the disease among the human population.
In conclusion, the Italian experience represents a good example of collaboration among different sectors of public health (human, veterinary, entomological, and blood and organ donation authorities) in a "one health" perspective [41]. Vector-borne diseases, in fact, require a multidisciplinary and integrated approach, which is more effective in ensuring animal and human health, as well as environmental protection. In the case of zoonoses such as WND, this approach is of paramount importance for a better and holistic understanding of disease prevention and the maintenance of both human and animal health.
"year": 2015,
"sha1": "19e47034e8c471d89522aa6a927908b1e9442391",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2015/643439.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9598d3537c228c8bac4e818ef8ae84741de26ea3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Towards intelligent design assistants for planar multibody mechanisms
General‐purpose mechanisms can perform a broad range of tasks but are usually rather heavy and expensive. If only particular movements need to be executed, more efficient special‐purpose mechanisms can be employed. However, they typically require an expert to design the system based on manual inspection of simulations and experimental results. This procedure is not only time‐consuming, but the outcome also depends on the expert's experience. Hence, the design process stems from subjective criteria while only a limited number of structurally different mechanisms can be considered. In contrast, a design assistant can consider a broad range of mechanisms and leverage multi‐objective optimization to retrieve optimal designs for the given task. Due to the systems being synthesized based on mathematical functions rather than individual experience, the assistant allows a more transparent development of optimal problem‐specific mechanisms compared to the conventional process. Experts can then fine‐tune and analyze the proposed designs to compose the final system. In recent years, neural networks have been utilized to directly learn the inverse mapping from a trajectory to a mechanism design. This requires some parameterization of the trajectory to be fed into the network. In this work, we evaluate various preprocessing methods for the trajectory on a simple mechanism design model problem. We assess multiple configurations such as different neural network sizes, applying input‐output normalization, and varying the number of features. Consequently, we investigate and compare the trends and robustness of the implemented methods.
Yet, a lot of tasks only require the execution of a certain trajectory. Here, special-purpose robots can be tailored to the specific problem. Hence, they can be designed to withstand particular vibrations and be made lighter and cheaper. Due to a specialized design, these robots may also be easier to control. However, nowadays, this requires a domain expert to design the system, which makes the process expensive, and the outcome depends on the expert's experience. Automated design assistants could bridge this gap and consider a vastly broader range of mechanisms while being based on objective and transparent performance criteria. Ideally, an intelligent design assistant could already incorporate control properties in the design phase.
A core challenge when designing a special-purpose robot is the design synthesis. This includes how many joints should be used, what types of joints, the number of degrees of freedom (DOFs), and much more. Finding optimal designs constitutes a discrete optimization problem that exhibits a non-smooth objective function and does not offer gradient information.
In recent years, neural networks have been utilized to directly learn the inverse task of proposing a mechanism design given a specified trajectory. The first work to simultaneously optimize the mechanism design and the dimensional synthesis using neural networks is by Yim et al. [1]. In their work, a neural network predicts the parameters of a spring-block model given a preprocessed trajectory. As input features, they use the coefficients of a Fourier series which is applied to a centroid distance function of the trajectory. The same procedure is also used in the literature [2][3][4][5][6]. In general, data preprocessing (i.e., the way the data is presented to a neural network) is a crucial aspect of accurate and robust neural network performance. While their work produced satisfactory results, there are several other potential procedures for extracting features from a trajectory. However, there exists no extensive evaluation comparing different preprocessing methods in the context of automated design assistants for mechanism design using neural networks.
In this work, we introduce various approaches for extracting features from a trajectory which can then be used as inputs for a neural network. To this end, the neural networks learn the inverse problem of mapping a (preprocessed) trajectory to mechanism parameters. We investigate the different procedures and evaluate their performance on a simple model problem. Additionally, different network sizes, the effect of normalization, and varying the feature vector length are investigated. The idea is to outline methods that exhibit decent and robust performance without the need for expert-guided hyperparameter tuning.
This work is structured as follows: In Section 2, the model problem is introduced alongside ways to parameterize different trajectory representations. Additionally, details about the neural network setup are given. In Section 3, the results of the analyses are presented. Section 4 discusses these results and Section 5 concludes the work.
METHODS
In this section, the model problem is introduced that we use to evaluate the different preprocessing methods. We then detail various representations of a trajectory and ways to parameterize them in order to feed them into a neural network. Lastly, details about the neural network setup are presented.
Model problem
For the comparison of preprocessing methods, we use the four-bar linkage as a model problem, as shown in Figure 1 (left). We keep all parameters fixed and only vary the link length ℓ₃ (blue) of the mechanism. The revolute joint is actuated with a constant velocity, and for every degree the location of the coupler point is measured. Hence, 360 discrete points are sampled from the trajectory. Examples of trajectories for different values of ℓ₃ can be seen in Figure 1 (right). The task of the neural networks is the inverse problem of predicting the value ℓ₃ given a (preprocessed) trajectory of the coupler point. To be able to feed a trajectory into a neural network, we have to extract features from it. In the following, we will look at different representations of a trajectory and ways to parameterize them.
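As a concrete illustration (not taken from the paper, whose exact link parameters are not reproduced here), the sampling of a coupler-point trajectory can be sketched in Python; the lengths `ground`, `crank`, `coupler`, `rocker` and the coupler-point `offset` are placeholder values chosen so that the linkage assembles for every crank angle:

```python
import numpy as np

def fourbar_coupler_trajectory(ground=3.0, crank=1.0, coupler=2.5,
                               rocker=2.5, offset=0.8, n=360):
    """Sample the coupler-point path of a planar four-bar linkage.

    Ground joints sit at A=(0,0) and D=(ground,0). The crank AB rotates
    with constant angular velocity; joint C is found as the intersection
    of a circle of radius `coupler` around B and a circle of radius
    `rocker` around D. The coupler point P is the midpoint of BC shifted
    by `offset` perpendicular to the coupler link.
    """
    pts = []
    D = np.array([ground, 0.0])
    for theta in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        B = np.array([crank * np.cos(theta), crank * np.sin(theta)])
        d = np.linalg.norm(D - B)              # distance between circle centres
        # classic two-circle intersection (take the upper branch)
        a = (d**2 + coupler**2 - rocker**2) / (2.0 * d)
        h = np.sqrt(coupler**2 - a**2)
        M = B + a * (D - B) / d                # foot point on the centre line
        perp = np.array([-(D - B)[1], (D - B)[0]]) / d
        C = M + h * perp
        mid = 0.5 * (B + C)
        u = (C - B) / np.linalg.norm(C - B)
        P = mid + offset * np.array([-u[1], u[0]])
        pts.append(P)
    return np.array(pts)                       # shape (n, 2)
```

One call produces the 360 discrete coupler-point samples used as raw input for the preprocessing methods below.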
Representation of trajectories
Subsequently, the different approaches examined in this paper for representing the coupler-point trajectories are introduced. Later, when comparing all approaches, we use the same number of features f for all approaches, which specifies the length of the input vector to the neural networks.
Coordinate function (CF)
One possibility is to represent each coordinate dimension as a separate function, which we will subsequently refer to as coordinate functions (CFs). Thus, we have a periodic function for x(t) and one for y(t), where t ∈ [0, 1] describes one mechanism revolution of the actuated joint. However, when we parameterize the CFs, we may only use f∕2 features to represent each CF.
Centroid/origin distance functions (CDF, ODF)
The distance functions compute the distance of each two-dimensional point on the trajectory to a reference point. The periodic distance function is given by d(t) = √((x(t) − x_r)² + (y(t) − y_r)²), where for the centroid distance function (CDF) we set x_r and y_r to the centroid of the contour, that is, the mean of the CFs. For the origin distance function (ODF), we set x_r = y_r = 0. A visualization for different values of ℓ₃ can be seen in Figure 2. Both of these distance functions condense the two-dimensional paths into one-dimensional periodic functions. Hence, they essentially reduce the dimensionality of the trajectory information. Compared to CF, we can use twice as many features to approximate the function. However, there is a loss of information due to the distance computation. In conclusion, there is a trade-off between the accuracy to which one can recover the original function and the information contained within that signal.
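A minimal sketch of the two distance functions; the function name and argument layout are our own choices, not the paper's code:

```python
import numpy as np

def distance_function(traj, reference="centroid"):
    """Reduce an (n, 2) trajectory to a 1-D periodic distance signal.

    reference="centroid" gives the centroid distance function (CDF),
    reference="origin" the origin distance function (ODF).
    """
    traj = np.asarray(traj, dtype=float)
    if reference == "centroid":
        ref = traj.mean(axis=0)
    elif reference == "origin":
        ref = np.zeros(2)
    else:
        raise ValueError("reference must be 'centroid' or 'origin'")
    return np.linalg.norm(traj - ref, axis=1)
```

For a circle of radius 1 centred at (2, 0), for instance, the CDF is constant at 1 while the ODF oscillates between 1 and 3, illustrating how the two references encode different information.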
Parameterization of trajectories
Next, we describe various approaches for extracting features from the trajectory representations such that they can be used as input vectors for a neural network.
Sparse data points (Coarse)
This method extracts a set of points from the trajectory such that they are equally spaced in the rotation angle of one revolution. This implicitly contains information about the speed of the end effector, as points further away from each other correspond to regions of higher velocity, while points closer together indicate regions of slower movement. However, this approach does not include any further preprocessing, as it uses raw data points from the trajectory.
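Assuming one sampled point per degree of crank rotation (as in the model problem), the Coarse feature extraction reduces to strided indexing; this sketch is illustrative, not the original implementation:

```python
import numpy as np

def coarse_features(traj, n_features):
    """Pick points equally spaced in rotation angle and flatten them.

    `traj` holds one point per degree of crank rotation, so selecting
    equally spaced row indices is equivalent to equal spacing in the
    rotation angle. n_features must be even: each kept point contributes
    an x and a y value to the feature vector.
    """
    traj = np.asarray(traj, dtype=float)
    n_points = n_features // 2
    idx = np.linspace(0, len(traj), n_points, endpoint=False).astype(int)
    return traj[idx].ravel()                   # shape (n_features,)
```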
Fourier coefficients (FC)
A Fourier series can approximate periodic functions by a summation of sine and cosine functions of increasing frequency. The series is given by
f(t) ≈ a₀ + ∑_{k=1}^{n} (a_k cos(2πkt∕P) + b_k sin(2πkt∕P)),
where P is the period of the function. The fitted coefficients a₀, a_k, and b_k for k ∈ {1, …, n} can then be used to describe the approximated function, where k is the k-th harmonic. We apply a Fourier series to the centroid distance function (CDF-FC), the origin distance function (ODF-FC), and the coordinate functions (CF-FC). The CDF-FC representation has previously been used in the literature [1][2][3][4][5][6] for the description of closed-loop trajectories.
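For uniformly sampled periodic signals the coefficients can be read off a real FFT; the sketch below fixes the period to 1 and is our own illustration of the fitting step, not the paper's code:

```python
import numpy as np

def fourier_features(signal, n_harmonics):
    """Truncated Fourier coefficients of a uniformly sampled periodic signal.

    Returns [a0, a1, b1, ..., an, bn] such that
    f(t) ~ a0 + sum_k (a_k cos(2*pi*k*t) + b_k sin(2*pi*k*t)), t in [0, 1).
    """
    signal = np.asarray(signal, dtype=float)
    N = len(signal)
    F = np.fft.rfft(signal)                    # F_k = sum_n s_n e^{-2πikn/N}
    feats = [F[0].real / N]                    # a_0 (mean of the signal)
    for k in range(1, n_harmonics + 1):
        feats.append(2.0 * F[k].real / N)      # a_k
        feats.append(-2.0 * F[k].imag / N)     # b_k
    return np.array(feats)
```

The sign and scaling follow from the DFT convention F_k = (N/2)(a_k − i b_k) for a real cosine/sine mode of frequency k.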
Elliptical Fourier descriptors (EFD)
This method is based on the elliptical Fourier analysis method; see [7]. It is an extension of Fourier series to two-dimensional closed contour lines. Hence, this method is closely related to CF-FC, and the major differences lie in the implementation details. Additionally, the elliptical Fourier descriptors (EFD) can be normalized in the calculation and adapted such that they parameterize the shape in a manner invariant to orientation, scaling, and translation. The implementation used in this work is provided in the literature [8].
Polynomial regression coefficients (Poly)
We fit a polynomial with monomial basis of degree d to the data points from the (preprocessed) trajectory. The polynomial is given by p(t) = c₀ + ∑_{k=1}^{d} c_k t^k, where the regression coefficients c_k are obtained by least-squares optimization. The regression is applied to the centroid distance function (CDF-Poly), the origin distance function (ODF-Poly), and the coordinate functions (CF-Poly). In the case of the coordinate functions, one polynomial is fitted for each coordinate function with degree d∕2.
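The regression step can be sketched as a least-squares fit against a Vandermonde matrix; again an illustration under our own conventions (uniform sampling on t ∈ [0, 1)), not the original implementation:

```python
import numpy as np

def polynomial_features(signal, degree):
    """Least-squares polynomial regression coefficients for a sampled signal.

    The signal is assumed to be sampled uniformly on t in [0, 1);
    returns the coefficients c_0, ..., c_degree of sum_k c_k * t**k.
    """
    signal = np.asarray(signal, dtype=float)
    t = np.linspace(0.0, 1.0, len(signal), endpoint=False)
    V = np.vander(t, degree + 1, increasing=True)   # columns [1, t, t^2, ...]
    coeffs, *_ = np.linalg.lstsq(V, signal, rcond=None)
    return coeffs
```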
Training setup
For the data set, 201 trajectories are generated with ℓ₃ ∈ [2.5, 5.5] equally spaced in the interval, using a train-validation-test split of [0.7, 0.2, 0.1]. Each trajectory is preprocessed by one of the presented methods while varying the number of features (number of extracted points, number of Fourier coefficients, number of regression coefficients) that are fed into the neural network. For a Fourier series this equates to adding more sine and cosine terms, and for polynomials it constitutes using monomials of higher order. We use fully connected feed-forward neural networks with ReLU activation in all hidden layers. Different network sizes are examined (e.g., one hidden layer, two hidden layers, and three hidden layers), where each hidden layer contains 40 neurons, and the analysis is conducted both without normalization and with unit normalization. When applying unit normalization, each input feature is normalized individually to the [−1, 1] range using the entire data set. The same procedure is applied to the output ℓ₃ values. For each configuration (e.g., preprocessing method, number of input features, neural network size, normalization type), 20 independent neural networks are trained using randomly sampled training data sets. Overall, this amounts to training 12360 individual neural networks. The Adam optimizer is used with a learning rate of 0.001. The mean performance of the networks is reported in the following.
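The per-feature unit normalization described above can be sketched as follows; the guard against constant features is our own addition, not stated in the paper:

```python
import numpy as np

def fit_unit_normalizer(data):
    """Per-feature affine map onto [-1, 1], fitted on `data` (n_samples, n_features).

    Returns a (transform, inverse) pair; the inverse is needed to map
    normalized network outputs back to physical parameter values.
    """
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)     # guard constant features

    def transform(x):
        return 2.0 * (x - lo) / span - 1.0

    def inverse(y):
        return (y + 1.0) * span / 2.0 + lo

    return transform, inverse
```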
RESULTS
For a fixed number of features f, one of the described preprocessing methods was run, and 20 individual neural networks of the same size were trained on randomly sampled data. These networks were tested on unseen test data and had to predict the ℓ₃ value given preprocessed trajectories. Figure 3 shows the mean squared error (MSE) between the ℓ₃ prediction and the ground-truth value, averaged across the 20 trials. The left column of the figure refers to neural networks with one, two, and three hidden layers. The plots in the right column show the same setup but with unit normalization applied to the inputs and outputs. Notice that the preprocessing methods which used polynomials did not work robustly across every evaluated feature-vector length (i.e., the brown, pink and grey lines). The plots also show that unit normalization has a significant impact on the prediction accuracy of the networks. It improves the prediction error by almost two orders of magnitude, as can be seen on the vertical scale. We also see that no method consistently outperforms all others. However, Coarse, EFD, and CF-FC perform robustly across all feature lengths and neural network configurations. CDF-FC performs robustly over all input vector sizes, but when applying unit normalization it does not benefit as much as the previously mentioned methods. The same plots have been generated for the standard deviation across the 20 trials for each configuration. They show a similar behaviour and are thus omitted here.
DISCUSSION
In general, training neural networks involves a large number of different hyperparameters such as network architecture, optimizer choice, learning rate, initialization strategy, activation function, and more. For this evaluation, no hyperparameters have been extensively tuned. This means that for all methods there is possibly margin for improvement. Hence, the error plots should not be taken as absolute performance measures but only as an indication of the robustness and relative performance of the preprocessing methods. Generally, for use in automated design assistants, preprocessing methods with decent out-of-the-box performance should be preferred over methods whose performance is highly sensitive to the hyperparameters. After all, expert-guided hyperparameter tuning conflicts with the idea of automated design assistants.
In the plots of Figure 3, the number of features f is varied along the x-axis. This corresponds to varying the length of the input vectors and, as such, the number of features used to describe the trajectories. A suitable preprocessing method should exhibit low MSEs independent of the specific choice of f. Hence, preprocessing methods that consistently achieve a low MSE across the entire x-axis in all plots can be regarded as robust. These methods are preferred for use in an automated design assistant.
The Coarse method performed robustly across different numbers of extracted points and led to good results. This is surprising since it does not compress information from the entire trajectory. Especially for very few extracted points, the complete trajectory information is rather scarce. One possible explanation could be that, since we only vary one parameter, the corresponding trajectories change rather smoothly, as can be seen in Figure 1 (right). Thus, we have a bijective mapping from extracted points to the value of ℓ₃, which seems to give a very clear learning signal. However, this only holds due to the simplicity of the problem. For more complex problems, like varying multiple parameters at once, different trajectories could have very similar points extracted. In these instances, the neural networks could fail to recover the ground-truth parameters and the performance of Coarse could worsen.
On the other hand, EFD seems particularly suitable for representing closed-loop trajectories due to the fact that the coefficients are invariant to translation, rotation, and dilation of the contour. This is a useful property, as these variations do not correspond to structurally different mechanisms but only to a different mounting and sizing of the same mechanism. Hence, it makes the inverse learning task easier, as it removes unnecessary complexity.
The evaluation showed that the input-output normalization to the [−1, 1] range had a significant impact on the prediction accuracy. The overall MSE was improved by almost two orders of magnitude. However, to be able to normalize the inputs and outputs, the minimal and maximal values have to be computed using the entire preprocessed data set. This might not always be possible, especially in scenarios where the test set is not previously known. In these cases, lower and upper boundaries can be estimated and used for normalization. Another approach would be to only use the training set for normalization, meaning that the networks would potentially have to extrapolate on some of the unseen test data.
It is important to note that the studied model problem is rather simple, since it consists of just predicting a single mechanism parameter given a (preprocessed) trajectory. Future work will investigate how the preprocessing methods compare when applied to more complex problems. This can be the prediction of multiple parameters at once for a given mechanism, or predicting coefficients of a meta-model that can capture different mechanism designs simultaneously. In these instances, a good representation of the trajectory will be crucial, as the design space is significantly larger and data points may only be sparsely distributed. It will be interesting to see if the differences between the preprocessing methods become more apparent.
CONCLUSION
In this work, an evaluation of different preprocessing methods for the inverse learning task of predicting a mechanism parameter given data from its trajectory is presented. Different representations and parameterizations for the trajectories are shown. Furthermore, different neural network sizes and the influence of normalization are examined. We showed that no method significantly outperforms all other methods across all configurations. The CDF-FC method, which has been used widely in the literature, performs robustly across all feature lengths, but methods like EFD and Coarse robustly lead to better results. The analysis indicated that polynomials are not suitable for parameterizing trajectories in the context of an automated design assistant, as they are sensitive to the choice of the regression order, where the optimal regression order for a given trajectory is generally not known a priori. Normalizing the inputs and outputs to the [−1, 1] range significantly improves the prediction accuracy on the test data by almost two orders of magnitude.
FIGURE 1: Left: Four-bar linkage where all parameters are fixed apart from ℓ₃ (blue). Trajectories of the coupler point are computed for one mechanism revolution. Right: Trajectories of the coupler point for different values of ℓ₃.
FIGURE 2: Distance functions for different values of ℓ₃. Left: centroid distance function. Right: origin distance function.
FIGURE 3: Mean squared errors for the prediction of ℓ₃ values on unseen test data for different neural network sizes. Left column: neural networks without normalization. Right column: applying unit normalization to the inputs and outputs.
"year": 2023,
"sha1": "56c5b391c2d2736a496a6bbceb9a0edafda18afd",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pamm.202300060",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "d0672eb213478756b330963fe5c96ee109a9f7bd",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
Optimal linear response for expanding circle maps
We consider the problem of optimal linear response for deterministic expanding maps of the circle. To each infinitesimal perturbation $\dot{T}$ of a circle map $T$ we consider (i) the response of the expectation of an observation function and (ii) the response of isolated spectral points of the transfer operator of $T$. In each case, under mild conditions on the set of feasible perturbations $\dot{T}$ we show there is a unique optimal feasible infinitesimal perturbation $\dot{T}_{\rm optimal}$, maximising the increase of the expectation of the given observation function or maximising the increase of the spectral gap of the transfer operator associated to the system. We derive expressions for the unique maximiser $\dot{T}_{\rm optimal}$ in terms of its Fourier coefficients. We also devise a Fourier-based computational scheme and apply it to illustrate our theory.
Introduction
A C^3 uniformly expanding map of the circle T : S^1 → S^1 is well known to display a linear response of its unique invariant density f_0. That is, differentiable changes to the map T lead to differentiable changes in f_0 (see [4] for a survey of the subject). Classically, linear response is often phrased as the differentiability of the expectation ∫_{S^1} c(x) f_0(x) dx of an observable c : S^1 → R. Each infinitesimal perturbation Ṫ ∈ C^3 of T leads to an infinitesimal perturbation R(Ṫ) of f_0. If Ṫ is constrained in some meaningful way, for a particular observation c, it is natural to ask whether there is a perturbation Ṫ that maximises ∫_{S^1} c(x) R(Ṫ)(x) dx, and if so, whether such a Ṫ is unique. For c ∈ L^∞ we show that there is in fact a unique maximiser under mild conditions on the feasible set of perturbations. When the perturbations Ṫ are norm-constrained, e.g. by a Sobolev norm ∥Ṫ∥_{H^4}, we derive a relatively explicit formula for the unique maximiser Ṫ in terms of Fourier coefficients.
We pose a similar question for the effect of perturbations on the isolated spectrum of the transfer operator L of T. Perturbations Ṫ of T lead to perturbations of L, which in turn lead to perturbations of the isolated spectrum and the associated eigenprojections. If Ṫ is constrained in a meaningful way, it is natural to ask whether there is a Ṫ that maximises the rate of change of the magnitude of an isolated spectral point λ_0. If the isolated spectral point λ_0 is the largest-magnitude spectral point inside the unit circle, it controls the exponential rate of mixing of the system. Therefore one can phrase this spectral optimisation question as "does there exist a perturbation Ṫ that maximises the infinitesimal change λ̇ in the mixing rate?". In order to answer such quantitative questions, we derive an expression for λ̇ (the derivative of λ with respect to the perturbation Ṫ) in terms of Ṫ. Under mild conditions on the feasible set of perturbations we show that there is a unique maximiser Ṫ, and when this feasible set is norm-constrained, e.g. in H^4, we construct an explicit formula for the optimal Ṫ in terms of its Fourier coefficients.
To numerically estimate the unique perturbation Ṫ that maximises the expected response of a given observable c, we devise a Fourier-based numerical scheme. This scheme estimates the transfer operator L, the action of the resolvent (I − L)^{-1}, and all other derivatives and integrals involved in computing the Fourier coefficients of the unique maximiser Ṫ. To numerically estimate the unique perturbation maximally affecting the mixing rate of the dynamics, we use a related Fourier scheme. In addition to estimating the transfer operator L_0 and its outer spectrum when acting on W^{1,1}(S^1), we also numerically approximate the eigenvector v_0 corresponding to the selected isolated spectral value λ_0, and a representative ϕ_0 ∈ H^1 of the corresponding adjoint eigenfunctional φ_0 ∈ (W^{1,1}(S^1))^* of L_0^* acting on (W^{1,1}(S^1))^*. Each of the above terms is a crucial piece of the quantitative expression for the objective function we optimise.
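As an illustration of such a Fourier scheme (a simplified sketch of the general idea, not the authors' implementation), the matrix of L in the Fourier basis can be estimated by trapezoid quadrature of M[j, k] = ∫ exp(−2πi j T(y)) exp(2πi k y) dy; for the doubling map T(y) = 2y mod 1 the leading eigenvalue 1 is recovered:

```python
import numpy as np

def transfer_matrix(T, K=8, N=1024):
    """Fourier-basis estimate of the transfer operator of a circle map T.

    Rows and columns run over Fourier modes j, k in {-K, ..., K};
    the integral defining each entry is approximated by the trapezoid
    rule on N equispaced points (spectrally accurate for smooth T).
    """
    y = np.arange(N) / N
    modes = np.arange(-K, K + 1)
    E_out = np.exp(-2j * np.pi * np.outer(modes, T(y)))  # rows indexed by j
    E_in = np.exp(2j * np.pi * np.outer(y, modes))       # columns indexed by k
    return (E_out @ E_in) / N

M = transfer_matrix(lambda y: (2.0 * y) % 1.0)           # doubling map
eigs = np.linalg.eigvals(M)
```

For the doubling map the matrix sends mode k = 2j to mode j, so its only nonzero eigenvalue is 1 (from the constant mode), matching the invariance of Lebesgue measure.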
Our theory and numerics are illustrated in two examples.In the first example we consider a circle map T with a slightly "sticky" (derivative near to 1) fixed point at x = 0. We show that if the observation c takes large values at x = 0, the perturbation Ṫ retains the fixed point at x = 0 and further reduces the derivative, making it more sticky.This increases the proportion of time that orbits spend near x = 0 and increases the expectation of the observable.We then show that if the observation function c takes on its maximal value away from x = 0, the optimal perturbation Ṫ sharply moves the fixed point from x = 0 in an attempt to weight the invariant density toward larger values of c.
In the second example we construct a circle map with a positive isolated spectral value larger than 1/inf_x |T′(x)|. The presence of the relatively large isolated spectral value is due to almost-invariant intervals to the right of x = 0 and to the left of x = 1. We show that the perturbation Ṫ that maximally slows the mixing rate (maximises λ̇) appears to try to strengthen this almost-invariance.
Although the questions posed in this paper are inspired by [1] and [2], which considered related linear response optimisation for finite-state Markov chains and Hilbert-Schmidt operators, respectively, the deterministic case treated here required a redevelopment of the optimisation approach, and an entirely distinct and more challenging perturbation theory.In particular, a certain amount of technical work was needed to get explicit formulas for the response of the isolated spectrum, see Appendix A. The question of optimising the outer spectrum (and therefore the mixing rate) has been addressed for flows in the presence of small noise, when the underlying vector field is periodically [13] and aperiodically [12] driven.
Other related works consider the problem of finding the infinitesimal perturbation achieving a prescribed desired response direction (in the case of many infinitesimal perturbations achieving the same response one again looks for an optimal one). This problem, also called the "linear request problem", was studied in [24,16,15] from a theoretical point of view for some classes of deterministic and random maps, and in applications to cellular automata [29] and climate [8]. In particular, the work [16] considers the case of expanding maps and the problem of finding a minimum-norm infinitesimal perturbation resulting in a given response of the invariant density of the system.
An outline of the paper is as follows.In Section 2 we formalise the class of dynamical systems and perturbations we consider.Proposition 3 summarises the fundamental theory concerning differentiability of invariant densities and spectral points for expanding maps.Section 3 recaps relevant results from convex optimisation.Section 4 formally sets up the optimisation problem to maximise the rate of change of the expectation of an observation function c. Proposition 7 states that there is a unique maximiser Ṫ and Theorem 8 provides explicit expressions for the Fourier coefficients of the optimal Ṫ .Section 5 sets up the response maximisation problem for isolated spectral values λ.Proposition 9 verifies there is a unique optimal perturbation Ṫ and Theorem 11 states explicit formulae for the Fourier coefficients of the optimal Ṫ and the optimal value of the corresponding response of λ.Section 6 describes our computational approach to estimate all of the relevant objects required to numerically solve the two main optimisation problems, and Section 7 illustrates our theory and numerics through two examples.Finally, in the Appendix, we recall some known results about linear response of invariant measures and resolvent operators, and from these facts we derive general response formulas for invariant measures and isolated eigenvalues we use in the paper.
Linear response of invariant densities and isolated eigenvalues
In this section we consider the response of the physical measure of an expanding circle map, and of the leading eigenvalues of the associated transfer operator, to deterministic perturbations of the map. We begin by setting up the class of dynamical systems we will consider. Proposition 2 then recalls known continuity properties of the physical measure and isolated spectrum under deterministic perturbations of the map, and Proposition 3 states the corresponding linear response results. Proposition 3 also contains explicit formulas for the response of these objects under suitable deterministic perturbations of the system. These explicit formulas will be used in the computation of the optimal (response-maximising) perturbation.
Several linear response results in the literature treat perturbations that compose the dynamics with a diffeomorphism near the identity, e.g. [4]. In this paper we consider additive perturbations applied directly to the map, as we believe this is natural for applications.
Let us consider some δ > 0 and a family of C^3 maps {T_δ : S^1 → S^1}, δ ∈ [0, δ), satisfying the following assumptions: the dependence of the family T_δ on δ is differentiable at δ = 0 in the following strong sense:

(1)

where Ṫ ∈ C^3(S^1, S^1). We study the statistical properties of these map perturbations through their associated transfer operators. It is well known that if we consider the action of L_δ on a suitable Sobolev space, this operator is quasi-compact (see e.g. [28]). We denote by W^{k,p} the Sobolev space of functions having weak kth derivatives in L^p. We now recall the definition of the transfer operator associated to a map of the circle, and of the derivative operator associated to perturbations as in (A2).
Definition 1. The transfer operator L_δ : W^{1,1}(S^1, C) → W^{1,1}(S^1, C) associated to an expanding map T_δ : S^1 → S^1 is defined by

and the derivative operator L̇ associated to perturbations as in (A2) is defined as

For the moment, we simply call L̇ the "derivative operator"; in Appendix A.4.3 we show that L̇ indeed arises as a derivative of the family of operators L_δ at δ = 0. We denote by f_δ an invariant probability density for T_δ. Such densities are fixed points of the operators L_δ; that is, L_δ f_δ = f_δ. It is well known that an expanding map has a unique invariant probability density which is absolutely continuous with respect to the Lebesgue measure on S^1 (see e.g. [27, 28]).
We now recall a series of facts about stability (Proposition 2) and linear response (Proposition 3) of the invariant density and isolated spectrum of expanding maps when subjected to deterministic perturbations. The following proposition is a well-known fact about the stability of the spectral picture of expanding maps, which can be obtained from the results of [21] (see also [28] for details).
Proposition 2.
(I) There is a unique invariant probability density f_δ, and the map δ → f_δ is continuous.
(II) L_δ has a simple isolated eigenvalue λ_δ, and the map δ → λ_δ is continuous.
Linear response of the invariant measure of expanding maps under deterministic perturbations is a classical result (see [4]). A response formula for deterministic additive perturbations (see (1)) similar to (4) was given in [16]. Differentiability of the isolated eigenvalues of the transfer operator associated to expanding maps under deterministic perturbations is due to [18] (see also [5]). Since we need an explicit formula for this derivative, in Proposition 3 we also provide such a formula (see (5)). To the best of our knowledge, the formula (5) is new in the deterministic setting. It mirrors the expression in Corollary III.11 of [19], which in our notation applies to a family of quasi-compact operators L_δ that are continuously differentiable with respect to δ in operator norm. The operator perturbations induced by a differentiable family T_δ of maps satisfying (A0)-(A2) do not fit into this framework and require a more careful treatment; this is done using theory developed in [19] and [18], as laid out in the Appendix.

Proposition 3. Consider a family of maps T_δ : S^1 → S^1 for δ ∈ (0, δ) satisfying (A0), (A1) and (A2) as above.
Recap of convex optimisation
We will consider a set P ⊂ C^3(S^1, R) of allowed infinitesimal perturbations Ṫ to the map T_0. We are interested in selecting an optimal perturbation Ṫ in terms of (i) maximising the rate of change of the expectation of a chosen observable, and (ii) maximising the rate of change in the magnitude of an isolated eigenvalue. Because we wish to perform an optimisation, we consider P inside some Hilbert space H. We will also assume that P is bounded and convex, which we believe are natural hypotheses; convexity because if two different perturbations of a system are possible, then their convex combination (applying the two perturbations with different intensities) should also be possible.

Definition 4. We say that a convex closed set P ⊆ H is strictly convex if for each pair x, y ∈ P with x ≠ y and for all 0 < γ < 1, the point γx + (1 − γ)y ∈ int(P), where the relative interior is meant.
We briefly recall some relevant results from convex optimisation. Suppose H is a separable Hilbert space and P ⊂ H. Let J : H → R be a continuous linear function. Consider the abstract problem of finding p* ∈ P such that

(6)

The existence and uniqueness of an optimal solution follow from properties of P.
Proposition 5 (Existence of the optimal solution). Let P be bounded, convex, and closed in H. Then problem (6) has at least one solution.
Upgrading convexity of the feasible set P to strict convexity provides uniqueness of the optimum.
Proposition 6 (Uniqueness of the optimal solution). Suppose P is a closed, bounded, and strictly convex subset of H, and that P contains the zero vector in its relative interior. If J is not uniformly vanishing on P, then the optimal solution to (6) is unique.
Note that in the case that J is uniformly vanishing, all the elements of P are solutions of problem (6). See Lemma 6.2 of [12], Proposition 4.3 of [2], or Corollary 3.23 of [10] for the proofs of Propositions 5 and 6 and more details.
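In the simplest Hilbert-space setting the optimiser of this abstract problem can be written in closed form: when P is the unit ball and J(p) = −⟨j, p⟩, the unique minimiser is p* = j/∥j∥. A minimal finite-dimensional sketch (our naming, a stand-in for the infinite-dimensional problem (6)):

```python
import numpy as np

# Minimise J(p) = -<j, p> over the unit ball P = {p : ||p|| <= 1} in R^d,
# a finite-dimensional stand-in for the Hilbert-space problem (6).
# The unit ball is closed, bounded and strictly convex, so Propositions 5
# and 6 give existence and uniqueness; here the optimum is p* = j / ||j||.
def maximise_linear_over_ball(j):
    norm = np.linalg.norm(j)
    if norm == 0.0:            # J uniformly vanishing: every p in P is optimal
        return np.zeros_like(j)
    return j / norm

j = np.array([3.0, -4.0])
p_star = maximise_linear_over_ball(j)
print(p_star, np.dot(j, p_star))   # optimiser lies on the sphere, value ||j||
```

Note how both alternatives of Proposition 6 appear: for j ≠ 0 the optimum is the unique boundary point aligned with j, while for j = 0 every feasible point is optimal.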
Optimisation of the expectation of an observable.
Let c ∈ L^∞(S^1, R) be an observable. We consider the problem of finding an infinitesimal perturbation Ṫ of our map T_0 that maximises the rate of change of the expectation of c. If c were an indicator function, for example, one could use this optimisation to push invariant mass toward the support of c. Given a family of maps T_δ satisfying (A0), (A1), (A2) with invariant densities f_δ, we denote the response of the system to Ṫ by

This limit converges in L^1, as proved in Proposition 3. Under our assumptions we easily get

Hence the rate of change of the expectation of c with respect to δ is given by the linear response of the system under the given perturbation. To take advantage of the general results of the previous section, we perform the optimisation of Ṫ over a closed, bounded, convex subset of a suitable Hilbert space H containing the zero perturbation. Because we require Ṫ ∈ C^3 in Proposition 3, we select H = H^4(S^1, R) and consider a convex closed set P ⊆ H. To maximise the RHS of (7) we set J(Ṫ) := −∫_{S^1} c(x) R(Ṫ)(x) dx and take P to be the unit ball in H^4(S^1, R), which is bounded, convex, and contains the zero vector. We hence consider the problem

min

Proposition 7. If J is not uniformly vanishing, there is a unique optimal map perturbation Ṫ for Problem (8).
Proof. The result follows directly from Propositions 5 and 6, once we verify that J : H^4 → R is continuous. The first two terms in the product are well known to be bounded in our case (see Proposition 19 and Lemma 18). Because f_0 ∈ W^{3,1} and T_0' ∈ C^3 is uniformly bounded below, there exists a constant C_2 bounding the remaining term. □

4.1. An explicit formula for the optimal perturbation. We now state a theorem identifying this optimal perturbation for problem (8).
Theorem 8. Let c ∈ L^∞(S^1, R) be an observation function and let the family of maps T_δ satisfy (A0), (A1) and (A2). The perturbation Ṫ ∈ H^4(S^1, R) that maximises the expected linear response ∫_{S^1} c(x)R(Ṫ)(x) dx (i.e. solves Problem (8)-(9)) is given by Ṫ = Σ_{n=−∞}^{∞} a_n e_n, where e_n(x) = exp(2πinx) and the coefficients a_n are given by

(10)

Proof. We write the continuous (by the proof of Proposition 7) linear functional J : H^4 → R as an L^2 inner product

Define a second continuous linear functional g : H^4 → R using the norm constraint:

For a general continuous linear functional F : H^4 → R we denote by ∆F(Ṫ, T̃) the Gâteaux variation of F at Ṫ in the direction T̃, i.e. ∆F(Ṫ, T̃) = lim_{h→0} (F(Ṫ + hT̃) − F(Ṫ))/h. Given Ṫ ∈ H^4, ∆J(Ṫ, T̃) exists for all T̃ ∈ H^4 by linearity of J; indeed

(11) ∆J(Ṫ, T̃) = J(T̃) = ⟨c, …⟩

The variation of the functional g is defined similarly, and it is straightforward to show that

A variation ∆F is called weakly continuous if lim_{Ṡ→Ṫ} ∆F(Ṡ, T̃) = ∆F(Ṫ, T̃) for each T̃. It is clear from the explicit expressions given above that both ∆J and ∆g are weakly continuous variations.
We form the Lagrangian L(Ṫ, ν) = J(Ṫ) − νg(Ṫ), with Lagrange multiplier ν ∈ R. Having established weak continuity of the variations of J and g, the Euler-Lagrange Multiplier Theorem [33] (Section 3.3) guarantees that a necessary condition for Ṫ to be a local extremum of the constrained Problem (8) is that

(13) ∆L(Ṫ, T̃) = 0 for all T̃ ∈ H^4, and g(Ṫ) = 0.

We express Ṫ ∈ H^4 as

By continuity of ∆J(Ṫ, ·) : H^4 → R and density of smooth functions in H^4, we may equivalently insist that (13) holds for each T̃ = e_n = exp(2πinx). That is, for each n ∈ Z one has

where the final equality uses ⟨e_m, e_n⟩ = δ_{m,n}. Thus, for n ∈ Z, the coefficients a_n are given by

(14)

Notice that each b_n has zero mean, as the derivative of any periodic function integrates to zero. Further note that the integral in (14) makes sense because e_n ∈ C^∞, f_0 ∈ C^3 (since T_0 ∈ C^4) and T_0' ∈ C^3; therefore the function b_n is in C^2 (in particular uniformly bounded, with |b_n|_∞ = O(n) due to the derivative acting on e_n) and in W^{1,1}.
We now verify that the above a_n define a Ṫ ∈ H^4. Because we divide by n^8 in (14), the growth estimate |b_n|_∞ = O(n) leads to a decay rate |a_n| ≤ C/n^7. It is a classical fact that if the Fourier coefficients a_n(f) of a function f satisfy a_n(f) = O(1/n^{k+1+γ}) for some γ > 0, then f ∈ C^k. Thus the optimal perturbation Ṫ with Fourier coefficients a_n given by (14) lies in C^5, and in particular Ṫ ∈ H^4(S^1, R). Finally, we show that ν > 0. Setting T̃ = Ṫ in (13) and using (11) and (12), we have

Because J(Ṫ) is maximal, we have J(Ṫ) > 0. The above display equation then implies that ν > 0. □
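For completeness, the H^4 membership follows from the decay |a_n| ≤ C/n^7 by a direct comparison (our notation, using the Fourier characterisation of the H^4 norm):

```latex
\|\dot T\|_{H^4}^2 \;\asymp\; \sum_{n \in \mathbb{Z}} \bigl(1 + n^8\bigr)\,|a_n|^2
\;\le\; |a_0|^2 \;+\; 2C^2 \sum_{n \ge 1} \frac{1 + n^8}{n^{14}} \;<\; \infty ,
```

while the C^k criterion quoted above, with k + 1 + γ = 7 (so k = 5, γ = 1), gives Ṫ ∈ C^5 ⊂ C^3, matching the regularity required by Proposition 3.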
Optimisation of the spectrum
We again consider a family of maps {T_δ} satisfying (A0), (A1) and (A2), and assume that L_0 has a simple eigenvalue λ_0 satisfying |λ_0| > α. By Proposition 3 the corresponding eigenfunction v_0 lies in W^{3,1}. Proposition 2 guarantees the existence of a continuous family of simple eigenvalues λ_δ for the perturbed operators L_δ : W^{1,1} → W^{1,1}. Proposition 3 then shows the differentiability of this family, providing a formula for its derivative at δ = 0, namely λ̇(Ṫ) = φ_0(L̇(Ṫ)v_0). We wish to optimise λ̇(Ṫ) as a function of the map perturbation Ṫ. This optimisation will be performed on the separable Hilbert space H^4(S^1) ⊂ C^3(S^1). We therefore define a linear functional J :

To simplify the explicit formulae appearing later in Section 5.1 for the optimal perturbation Ṫ, we assume that λ_0 is real and positive. We therefore wish to select a perturbation Ṫ of T_0 so as to maximise the rate of increase of λ_0 under the perturbation Ṫ (in other words, maximise λ̇):

max

Proposition 9. If J is not uniformly vanishing, there is a unique optimal map perturbation Ṫ for Problem (16).
Proof. Because H^4(S^1) is a separable Hilbert space and the unit ball in H^4(S^1) is a strictly convex, bounded, closed set containing the zero element, in order to apply Propositions 5 and 6 it is sufficient to check that J : H^4 → R is continuous. We have to verify that, for fixed

and so it is sufficient to show that

by Lemma 18. Thus we can conclude that there is a constant C > 0 such that

□

5.1. Explicit formula for the optimal solution. We wish to minimise J. From Proposition 9 we know there is a unique optimum. In order to write an explicit formula for the optimal Ṫ we require a representative of φ_0 in H^1 (Lemma 10).

Proof. Since H^1(S^1) ⊆ W^{1,1}(S^1) we have (W^{1,1}(S^1))* ⊆ (H^1(S^1))*, and therefore φ_0 ∈ (H^1(S^1))*. The result follows from the Riesz representation theorem. □

We now state a theorem identifying this optimal perturbation.
Theorem 11. Let the family of maps T_δ satisfy (A0), (A1) and (A2) and consider a family of isolated eigenvalues λ_δ. The perturbation Ṫ ∈ H^4(S^1, R) that maximises the expected linear response λ̇(Ṫ) of λ_0 (i.e. solves Problem (16)-(17)) is given by Ṫ = Σ_{n=−∞}^{∞} a_n e_n, where e_n(x) = exp(2πinx) and the coefficients a_n are given by

(19)

Moreover, the maximal linear response is given by

Proof. We follow a similar strategy to the proof of Theorem 8. Given Ṫ ∈ H^4, we first need to show that ∆J(Ṫ, ·) exists. Thus by Lemma 10, there is a

which is finite for each T̃ ∈ H^4, and so by linearity of J we see that, for each Ṫ,
The variation of the functional g is handled identically to the proof of Theorem 8; one obtains

Weak continuity of the variations ∆J and ∆g follows as in the proof of Theorem 8. We form the Lagrangian L(Ṫ, ν) = J(Ṫ) − νg(Ṫ), with Lagrange multiplier ν ∈ R. Having established weak continuity of the variations of J and g, the Euler-Lagrange Multiplier Theorem [33] (Section 3.3) guarantees that a necessary condition for Ṫ to be a local extremum of the constrained Problem (16)-(17) is that

where the final equality uses ⟨e_m, e_n⟩ = δ_{m,n}. Thus, for n ∈ Z, the coefficients a_n are given by

(21)

Notice that each b_n has zero mean, as the derivative of any periodic function integrates to zero. The integrals in (21) make sense because e_n ∈ C^∞, v_0 ∈ C^3, and T' ∈ C^3 is bounded uniformly below by 1. Therefore the function b_n is in C^1 (in particular uniformly bounded, with |b_n|_∞ = O(n) due to the derivative acting on e_n) and in H^1. We verify that the above a_n define a Ṫ ∈ H^4 in exactly the same way as in Theorem 8. The fact that ν > 0 follows exactly as at the end of the proof of Theorem 8. The final claim of the theorem follows from (15), using Lemma 10 to represent φ_0(L̇(Ṫ)v_0). □
Numerical approach
The computations revolve around evaluating the integrals in (10) and (19). We begin by discussing the common elements of these computations and then discuss specific elements in the subsequent subsections. In this numerical section we denote L_0 simply by L, to avoid confusion with the approximate transfer operator at resolution N, which we denote by L_N.
We first build a projection of L (the action of L in frequency space) onto the complex exponentials e_n(x) = exp(2πinx). A finite spatial grid {0, 1/N, 2/N, ..., (N − 1)/N} ⊂ S^1 corresponds to the N Fourier modes {e_{−N/2+1}, ..., e_{N/2}} via the discrete Fourier transform. A matrix L̂_N is constructed from estimates of the integrals

L̂_{N,nm} :=

with the latter expression estimated using a fast Fourier transform of e_n ∘ T on a grid eight times finer than the N-grid. Matrix multiplication by L̂_N updates the Fourier coefficients of a function, corresponding to applying L to the function itself; in particular L(e_n) ≈ Σ_m L̂_{N,mn} e_m. See [11] for details.

6.1. Numerical computation of the optimal response of the expectation of an observable. Referring to (10):
(1) We estimate f_0 as the inverse transform of the leading eigenvector f̂_{N,0} of L̂_N.
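A minimal numpy sketch of this construction and of step (1) (function names are ours, not from [11]; we use fftfreq mode ordering, and obtain the matrix entries via the L²-duality of the transfer operator with the Koopman operator f ↦ f ∘ T, so that the entry in row m, column n is the conjugate of the n-th Fourier coefficient of e_m ∘ T):

```python
import numpy as np

def transfer_matrix(T, N, refine=8):
    """Galerkin matrix of the transfer operator L in the Fourier basis
    e_n(x) = exp(2*pi*1j*n*x), with modes in numpy fftfreq ordering.
    By L^2-duality with the Koopman operator, the (row m, column n)
    entry is conj( n-th Fourier coefficient of e_m o T ), estimated
    with an FFT on a grid `refine` times finer than the N-grid."""
    M = refine * N
    x = np.arange(M) / M
    Tx = np.mod(T(x), 1.0)
    modes = np.fft.fftfreq(N, d=1.0 / N).astype(int)  # 0,...,N/2-1,-N/2,...,-1
    L = np.empty((N, N), dtype=complex)
    for row, m in enumerate(modes):
        coeffs = np.fft.fft(np.exp(2j * np.pi * m * Tx)) / M
        L[row, :] = np.conj(coeffs[np.mod(modes, M)])
    return L, modes

def invariant_density_coeffs(L, modes):
    """Step (1): Fourier coefficients of f_0, taken as the eigenvector for
    the eigenvalue closest to 1, scaled to unit mass (zeroth coefficient 1)."""
    w, V = np.linalg.eig(L)
    v = V[:, np.argmin(np.abs(w - 1.0))]
    return v / v[modes == 0][0]
```

For the exactly linear doubling map T(x) = 2x the entries are computed exactly: L sends e_{2m} to e_m, annihilates odd modes, and its invariant density is Lebesgue, which makes a convenient correctness check.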
(5) To numerically estimate (I − L)^{−1} L̇((e_n f_0/T')'), we solve the linear system ŷ_N = , with ŷ_{N,1} = 0. The latter condition ensures that the first (constant) Fourier mode is zero, corresponding to mean-zero functions, which is the appropriate subspace for the resolvent (I − L)^{−1} to act on.
(6) The integral with c is performed by taking a dot product of the discrete Fourier transform ĉ of the observation and ŷ_N.
(7) The result is scaled by the appropriate denominator in (10) to produce a_n up to a scaling factor controlled by ν.
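Step (5) can be sketched as follows (our naming; `rhs_hat` holds the Fourier coefficients of the function to which the resolvent is applied, and the constant mode is deleted because (I − L) is invertible only on mean-zero functions):

```python
import numpy as np

def solve_resolvent(L, modes, rhs_hat):
    """Approximate y = (I - L)^{-1} rhs on the mean-zero subspace: delete
    the constant Fourier mode, solve the remaining dense linear system,
    and pin the zeroth coefficient of y to 0 (the condition y_hat_1 = 0)."""
    keep = modes != 0
    A = np.eye(int(keep.sum()), dtype=complex) - L[np.ix_(keep, keep)]
    y = np.zeros(len(rhs_hat), dtype=complex)
    y[keep] = np.linalg.solve(A, np.asarray(rhs_hat, dtype=complex)[keep])
    return y
```

As a check, for the doubling map (whose Fourier-space matrix has a 1 exactly where n = 2m) one has (I − L)^{-1} e_1 = e_1, since L annihilates odd modes.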
In the examples we show results obtained by constraining the perturbation Ṫ according to the standard Sobolev norm ∥·∥_{H^4} and γ-weighted Sobolev norms ∥f∥²_γ given by

Increasing γ penalises high derivatives less and allows for an optimal Ṫ with greater irregularity.
6.2. Numerical computation for optimising an isolated spectral value. The isolated spectrum may be estimated by the outer spectrum of L̂_N; see [34] for formal statements. We compute L̂_N as described above and consider one of its eigenvalues λ_0 satisfying λ_0 > 1/inf|T_0'|. Associated with λ_0 is an eigenvector v_0, which we estimate as the inverse Fourier transform of the eigenvector v̂_0 of L̂_N corresponding to the eigenvalue λ_0. Referring to (19), we see that we must estimate L̇((e_n v_0/T')') and the representative ϕ_0 of φ_0 in H^1. The former object is calculated according to steps 1-4 in Section 6.1, followed by an inverse FFT. The calculation of the representative of φ_0 is described in the next subsection.

6.2.1. Computing a representative ϕ_0 of φ_0 in H^1. By the adjoint eigenproperty of φ_0 we have

(22)

We seek a representative of φ_0 in H^1 and use the ansatz ϕ_0 = Σ_{m=−N/2+1}^{N/2} a_m e_m. We ask that (22) holds for f = e_n, n = −N/2+1, ..., N/2. Thus, using (18), we wish to find a_m, m = −N/2+1, ..., N/2, such that

That is, a satisfies

In other words, the conjugated coefficients ā form a left eigenvector of L̂_N, suitably scaled. Since λ_0 and L̂_N are known numerically, we may easily solve for the a_n, n = −N/2+1, ..., N/2.
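The left-eigenvector computation can be sketched as follows (our naming; as a simplification we use the plain ℓ² pairing of Fourier coefficients as a stand-in for the H¹ pairing (18), with the scaling φ_0(1) = 1 fixing the free constant):

```python
import numpy as np

def adjoint_eigvec(L, modes, lam):
    """Coefficients u with u^T L = lam * u^T (a left eigenvector of L),
    representing the eigenfunctional via phi(f) = sum_n u_n * fhat_n.
    Computed as a right eigenvector of L^H for conj(lam), then scaled so
    that the functional sends the constant density 1 to 1 (phi(1) = 1)."""
    w, V = np.linalg.eig(L.conj().T)       # right eigvecs of L^H = left of L
    u = np.conj(V[:, np.argmin(np.abs(w - np.conj(lam)))])
    return u / u[modes == 0][0]            # coefficient of 1 is the zeroth entry
```

For eigenvalue 1 of a transfer operator this recovers (the coefficients of) the Lebesgue-integration functional, which again gives a doubling-map correctness check.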
Examples
We illustrate Theorems 8 and 11 in the following two subsections. In both cases we set N = 512.

7.1. Optimising the expectation of observables. We define the expanding circle map T(x) = 2x − (0.9/2π) sin(2πx), computed modulo 1. The lower expansivity of T at the fixed point x = 0 leads to greater values of the invariant density near x = 0. A graph of T and its invariant density are shown in Figure 1.
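As a quick sanity check on this example (ours, independent of the paper's Fourier pipeline): the derivative of the lift is T'(x) = 2 − 0.9 cos(2πx) ≥ 1.1, with the minimum attained at the fixed point x = 0, and the histogram of a long generic orbit already shows the mass concentration near 0:

```python
import numpy as np

T = lambda x: np.mod(2.0 * x - (0.9 / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x), 1.0)
Tprime = lambda x: 2.0 - 0.9 * np.cos(2.0 * np.pi * x)   # derivative of the lift

xs = np.linspace(0.0, 1.0, 10001)
min_slope = Tprime(xs).min()        # 1.1 at x = 0: the map is uniformly expanding

# The histogram of a long generic orbit approximates the invariant density
# f_0, which is largest near the weakly expanding fixed point x = 0.
x = 0.123456789
orbit = np.empty(200000)
for k in range(orbit.size):
    orbit[k] = x
    x = float(T(x))
hist, _ = np.histogram(orbit, bins=10, range=(0.0, 1.0), density=True)
print(min_slope, hist[0], hist[5])  # more mass near 0 than near 0.5
```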
Figure 2 illustrates the optimal perturbations Ṫ for c(x) = cos(2πx) and various weights γ used in the H^4 norm; each perturbation has unit norm in its corresponding γ-weighted norm. As γ increases, the optimal map perturbation may become more irregular, as a smaller penalty is paid for the first- to fourth-order derivatives. The map perturbations seek to increase the expectation of c. Because the maximal value of c occurs at x = 0, it is advantageous for the perturbations to retain the fixed point at x = 0 while simultaneously reducing the expansivity of the map at the fixed point. Such perturbations make the fixed point more "sticky" and lead to invariant densities with even greater values at x = 0, increasing the expectation of c.
Figure 3 carries out the same experiments, replacing the observation function with c(x) = sin(2πx). Now there is an imperative to remove the sticky fixed point, to move invariant mass away from x = 0 and toward x = 0.25, where c takes its maximum. As shown in Figure 3, the strategy is to displace the fixed point by moving it to the right of x = 0.

7.2. Optimising the spectrum. To optimise the spectrum, we require an isolated eigenvalue λ_0 of the transfer operator L satisfying λ_0 > 1/inf|T_0'|. We construct a piecewise-linear Markov map of S^1 by linearly connecting the points (rounded to 4 decimal places) x ∈ {0.0, 0.1197, 0.2045, 0.2453, 0.3369, 0.3874, 0.49, 0.5875, 0.6336, 0.7343, 0.7695, 0.8523, 1.0} to their respective images (to be taken mod 1) T_0(x) ∈ {0.0, 0.1976, 0.3032, 0.4505, 0.5885, 0.7312, 1.0, 1.1976, 1.3032, 1.4505, 1.5885, 1.7312, 2.0}, as shown in Figure 4 (left). This Markov map has an isolated spectral value λ_0 ≈ 0.8231, while 1/inf|T_0'| ≈ 0.6579; see Figure 4 (right). We are not aware of another two-branch circle map in the literature whose Perron-Frobenius operator has a positive isolated eigenvalue strictly inside the unit circle, but larger than the reciprocal of the magnitude of the minimal slope.
We then smooth this map by convolving with a bump function κ_ε, using ε = 1/40; κ_ε is normalised so that ∫_{S^1} κ_ε = 1. The smoothed map satisfies 1/inf|T_0'| ≈ 0.6579, and the second-largest-magnitude eigenvalue of L̂_N for this smoothed map T_0 is λ_0 ≈ 0.6992; as we no longer discuss the original piecewise-linear map, we reuse the notation T_0 and λ_0. The graph of the smoothed map T_0, its numerical spectrum, and estimates of the eigenvector v_0 and the representative ϕ_0 are displayed in Figure 5. Figure 6 illustrates the optimal perturbations Ṫ that maximally increase the isolated spectral value. Using Theorem 11 we may also compute the linear responses of λ_0 with respect to the optimal map perturbations Ṫ shown in Figure 6 for various γ-weighted H^4 norms. For γ = 1, 25, 50, and 200 we find that λ̇(Ṫ) is approximately 0.5758, 3.1427, 11.1590, and 125.51, respectively. These values indicate that as we reduce the penalty on the irregularity of the perturbations Ṫ by increasing γ, we may increase the corresponding linear response of λ_0 without limit. To put these numbers in perspective, we remark that even a movement of the spectral value λ_0 by an amount 0.1 would be dramatic, and with γ = 200, making a macroscopic perturbation of T_0 by Ṫ/1000 would exceed such a movement (up to linear approximation).
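The smoothing step can be sketched as a circular convolution of the periodic displacement T_0(x) − 2x with a normalised bump kernel (our implementation; the paper does not specify the exact bump profile, so we assume the standard exp(−1/(1−u²)) bump, and the grid size is ours):

```python
import numpy as np

M, eps = 4096, 1.0 / 40.0
x = np.arange(M) / M

# Piecewise-linear lift through the prescribed (point, image) pairs of Section 7.2.
pts = [0.0, 0.1197, 0.2045, 0.2453, 0.3369, 0.3874, 0.49,
       0.5875, 0.6336, 0.7343, 0.7695, 0.8523, 1.0]
img = [0.0, 0.1976, 0.3032, 0.4505, 0.5885, 0.7312, 1.0,
       1.1976, 1.3032, 1.4505, 1.5885, 1.7312, 2.0]
lift = np.interp(x, pts, img)

# C^infinity bump supported on (-eps, eps), normalised to unit mass on S^1.
u = (np.mod(x + 0.5, 1.0) - 0.5) / eps          # signed circle distance from 0
inside = np.abs(u) < 1.0
bump = np.where(inside, np.exp(-1.0 / np.maximum(1.0 - u * u, 1e-12)), 0.0)
bump *= M / bump.sum()                           # Riemann sum (1/M)*sum = 1

# Mollify the degree-0 (periodic) part only, then restore the degree-2 lift 2x.
periodic = lift - 2.0 * x
smooth = np.real(np.fft.ifft(np.fft.fft(periodic) * np.fft.fft(bump))) / M + 2.0 * x
print(np.abs(smooth - lift).max())               # O(eps) sup-norm change
```

Mollifying only the periodic part keeps the degree of the circle map equal to 2, while the sup-norm change is bounded by the Lipschitz constant of the displacement times ε.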
Appendix A. General Linear Response formulas for eigenvalues and eigenvectors and application to expanding maps
In this section we recall general results on the linear response of fixed points, eigenvalues and eigenvectors of Markov operators under suitable perturbations. We then develop the estimates that are necessary to apply these results to expanding maps and deterministic perturbations. Let X be a compact Riemannian manifold and let m be its normalised volume measure. Denote by L^1(X, m), or simply L^1, the space of m-integrable functions. We consider a sequence of Banach spaces with B_w ⊆ L^1(X, m) and the norms satisfying

Let us suppose that B_ss contains the constant functions. For i ∈ {ss, s, w} define the (closed) zero-mean function spaces V_ss ⊆ V_s ⊆ V_w by V_i := {f ∈ B_i : ∫_X f dm = 0}. We will consider Markov operators acting on these spaces. If A, B are two normed vector spaces and T : A → B, we denote by ∥T∥_{A→B} the corresponding mixed (operator) norm.

A.1. Abstract linear response of invariant densities. Under the general assumptions and notations above, we now state an abstract result on the linear response of invariant densities. Similar results and constructions appear in the literature, applied to specific classes of examples; we include a proof of the statement for completeness.
Theorem 12. Let us consider δ > 0, δ ∈ [0, δ), and a family of Markov operators L_δ : B_w → B_w. Suppose that for δ ∈ [0, δ) there is a probability density v_δ ∈ B_s such that

and that there is L̇v_0 ∈ B_s such that

Suppose moreover that for δ ∈ [0, δ) the resolvent operators (Id − L_δ)^{−1} of L_δ are defined and bounded from V_s to V_w. Then

Proof. For each δ ∈ [0, δ), v_δ is a fixed point of L_δ. Using this we get

We remark that for each δ, L_δ preserves V_s. Since for all δ > 0 we have (L_δ − L_0)/δ · v_0 ∈ V_s and, by the assumptions (25), (26), for δ small enough (Id − L_δ)^{−1} : V_s → V_w is a uniformly bounded operator, we can apply the resolvent to both sides of the expression above to get

(27) implies that in the B_w topology

□

A.2. Abstract linear response of the resolvent and linear response of eigenvalues.
In this section we provide general statements about the response of eigenvalues and resolvent operators. The results presented follow from classical statements, which we adapt to our purposes. Corollary 14 gives the abstract linear response of the resolvent, and Proposition 15 provides differentiability of an isolated eigenvalue and of its corresponding eigenvector in B_w. We recall the abstract linear response result for the resolvent operator proved in [18], and we apply it to obtain linear response formulas for simple eigenvalues and eigenvectors of transfer operators. Recalling the Banach spaces B_ss, B_s, B_w from the last subsection, let us suppose that all are separable and that B_ss is dense in B_s. Let us consider δ > 0, δ ∈ [0, δ), a family of Markov operators L_δ, and an operator L̇ : B_s → B_w satisfying the following assumptions: there are C ≥ 0 and 0 ≤ α < 1 such that for each n ∈ N and δ ∈ [0, δ):

Under these assumptions one has:

Theorem 13 ([18]). Consider a family of operators L_δ satisfying the assumptions (GL1),...,(GL6). Further consider δ' > 0, α' > α and the set

where σ(L_0) = σ_s(L_0) ∪ σ_ss(L_0), and σ_s(L_0), σ_ss(L_0) denote the spectrum of L_0 acting on B_s and B_ss respectively.
We remark that (29) gives the first-order change of the resolvent when the operator is perturbed; in fact from (29) one has the following immediate rearrangement

(30)

and the following corollary:

Corollary 14. Under the above assumptions and with the same notations,

We will also make the following assumption on the spaces involved, which is well known to imply, together with the assumptions (GL1) and (GL2), the quasi-compactness of the transfer operators L_δ when acting on B_s and on B_ss.

(GL7) B_s is compactly immersed in B_w, and B_ss is compactly immersed in B_s.
Let 1 be the density representing the indicator function of S^1. Let λ_0 be a simple isolated eigenvalue of L_0 acting on B_s. Lemma III.3 of [19] then ensures the existence of a unique eigenfunction φ_0 ∈ B_s* of the adjoint operator L_0* : B_s* → B_s* corresponding to λ_0 (L_0* φ_0 = λ_0 φ_0), scaled so that φ_0(1) = 1. Suppose λ_δ is an isolated, simple eigenvalue of L_δ acting on B_s, and suppose v_δ is an eigenvector of L_δ associated to λ_δ. To quantitatively address the differentiability of v_δ and λ_δ we need to scale v_δ consistently: we rescale v_δ in such a way that φ_0(v_δ) = 1 for all δ ∈ [0, δ]. Let us consider a simple eigenvalue λ_δ ∈ σ(L_δ), and θ > 0 such that

It is well known that this is a projection (Π²_{λ_δ} = Π_{λ_δ}) and that it does not depend on θ or on the circle {|z − λ_δ| = θ}, which can be replaced by any smooth simple curve containing only the simple eigenvalue λ_δ. Furthermore,

We now consider the dependence of the isolated eigenvalue λ_δ on δ. First we consider the associated eigenvector v_δ and state a formula for its derivative with respect to δ.

Proposition 15. Suppose the family of transfer operators L_δ satisfies the assumptions (GL1),...,(GL7). Suppose |λ_0| > α (see (GL2)) is a simple isolated eigenvalue of L_0 acting on B_s. Then:
(I) λ_0 is also an eigenvalue of the operator acting on B_ss. Furthermore, for δ small enough, L_δ has a family of simple isolated eigenvalues (both for the operator acting on B_s and on B_ss) λ_δ with λ_δ → λ_0 as δ → 0.
(II) Each λ_δ has an eigenvector v_δ ∈ B_s, rescaled by φ_0(v_δ) = 1 as described before, and as δ → 0

(32) lim

Further, one has

with convergence in B_w. Moreover, the function δ → λ_δ is differentiable.
Proof. Lemma A.3 of [5] (following a similar result proved in [6]) implies that if (i) B_s is separable and L : B_s → B_s is a continuous linear map preserving a dense continuously embedded subspace B_ss, (ii) L : B_ss → B_ss is continuous, and (iii) the essential spectral radius of L, considered both as acting on B_s and on B_ss, is bounded by 0 < ρ < 1, then the simple eigenvalues of L : B_s → B_s and L : B_ss → B_ss in {z ∈ C : |z| > ρ} coincide, and the associated eigenspaces also coincide and are contained in B_ss. The assumptions (GL1),...,(GL3) and (GL7) imply that the essential spectral radius is bounded by α (see e.g. Lemma 2.2 of [7]). The assumptions (GL1),...,(GL3) also imply the required continuity properties for the operators acting on B_s and B_ss; thus we can apply Lemma A.3 of [5] and establish that λ_0 is a simple eigenvalue of L_0 acting on B_ss, and the associated eigenspace is generated by an eigenvector v_0 ∈ B_ss. As a classical consequence of the assumptions (GL1),...,(GL4) and (GL7), the spectral stability theorem of [21] establishes that for δ small enough, λ_δ is simple and the associated eigenspace is generated by v_δ ∈ B_s. Furthermore, one has lim_{δ→0} |λ_δ − λ_0| = 0, and φ_0(Π_{λ_δ}(v_0)) varies continuously in B_s at δ = 0. This takes care of part (I) and part (II) up to equation (32).
For the remainder of part (II), putting together (30) and (31) we have

where O_δ represents an operator such that there is a constant C with ∥O_δ∥ ≤ C|δ|^η for sufficiently small δ. Thus, since for δ small enough {|z − λ_δ| = θ} also encircles λ_0,

where

Since, as proved above, v_0 ∈ B_ss, we then have

where ∥q_δ∥_w/δ → 0, and using the fact

we then get:

For the differentiability of δ → λ_δ, let us consider the normalised eigenvector v_δ as used before. Using L_δ(v_δ) = λ_δ v_δ we get

To conclude, we argue that both terms on the LHS of (34) and the first term on the RHS of (34) converge in the B_w topology; this implies that the second term on the RHS of (34) also converges, yielding differentiability of δ → λ_δ. Regarding the first term on the LHS of (34), by (GL4) and (32) we have lim_{δ→0} ∥·∥_w = 0; thus we may replace L_δ with L_0 in this term. Furthermore, (L_δ − L_0)/δ · v_0 converges in B_w by (GL6), and the remaining term converges in B_w as proved above. Comparing the LHS and RHS of (34), we see that lim_{δ→0} (λ_δ − λ_0)/δ exists. □

A.3. An abstract formula for the linear response of the spectrum. Proposition 16 constructs the abstract formula for the derivative of the eigenvalue.
Now we are ready to prove that |φ ... | → 0. By (38) and (39) we can bound as follows: choosing n as above, for each δ ∈ [0, δ') we get

which, since ε is arbitrary, can be made as small as desired as δ → 0, proving the statement. □

A.4. Verifying abstract transfer operator conditions using properties of expanding maps. In this section we develop more explicit estimates that allow us to apply the above abstract theory to expanding maps and suitable perturbations (verifying (A0), (A1), (A2)). This will lead to the proof of Proposition 3. In the following, the stronger and weaker spaces B_ss, B_s, B_w considered above will be the spaces of Borel densities in the Sobolev spaces W^{3,1}, W^{1,1}, and L^1 respectively. The transfer and derivative operators associated to expanding maps are defined as acting on densities as in Definition 1. We now recall some basic facts on the properties of such operators.
A.4.1. Uniform estimates for individual maps. Assumptions (GL1)-(GL3) require the transfer operators to satisfy uniform norm and Lasota-Yorke estimates. These hypotheses hold when the operators are associated to a uniform family of maps.
Definition 17.
A set U_{M,N} of expanding maps of S^1 is called a uniform C^k family with parameters M ≥ 0 and N > 1 if it satisfies uniformly the following expansiveness and regularity conditions:

It is well known that the transfer operator associated to a smooth expanding map has regularising properties when acting on suitable Sobolev spaces (see e.g. [28] and [17]). This is expressed in the following lemma.
Lemma 18 ([28], Section 1.5; [17], Lemma 29). Let U_{M,N} be a uniform C^k family of expanding maps of S^1. The transfer operators L_T associated to each T ∈ U_{M,N} satisfy a uniform Lasota-Yorke inequality on W^{i,1}(S^1):

Lemma 18 allows us to establish that the transfer operators associated to expanding maps satisfy the assumptions (GL1),...,(GL3). These properties, along with the compact immersion of W^{k,1} in W^{k−1,1}, allow one to deduce classically that the transfer operator L_0 of a C^k expanding map T is quasi-compact on each W^{i,1}(S^1) with 1 ≤ i ≤ k − 1. Furthermore, by topological transitivity of expanding maps, 1 is the only eigenvalue on the unit circle. All of the above leads to the following classical result; see e.g. [28], Section 3, or [17], Proposition 30.

Proposition 19. Define the spaces

In particular, the resolvent (Id − L_0)^{−1} := Σ_{j=0}^{∞} L_0^j is a well-defined and bounded operator on V_i.
A.4.2. Small perturbation estimates. In this subsection we recall some more or less known estimates (see e.g. [14]) showing that a small perturbation of an expanding map induces a small perturbation of the associated transfer operator when considered as acting from a stronger to a weaker Sobolev space. This will allow us to verify that the assumption (GL4) applies to suitable deterministic perturbations of expanding maps.
Proposition 20. Let {T δ } δ∈[0,δ) be a family of C 2 expanding maps such that T 0 ∈ C 3 . Let L δ be the transfer operators associated to T δ and suppose that for some K ∈ R one has Then there is a C > 0 such that ∀f ∈ W 1,1 :

Proof. In [14], Section 7 it is proved that if L 0 and L δ are transfer operators of C 3 expanding maps T 0 and T δ , such that for some K ∈ R, then (45) is established. From this we can also recover (46), using the explicit formula for the transfer operator. Noting that T ′ 0 (y) = T ′ 0 (T −1 0 (x)), we can compute the derivative of (48), and similarly for L δ . Hence, applying (47), (49), and the fact that L is a weak contraction on L 1 , we obtain a bound with constants C 1 , C 2 ≥ 0 depending on T 0 but not on f. This proves (46). □

Small perturbations in a mixed-norm sense, namely from W 1,1 into L 1 (see (GL4) and Proposition 20), imply a classical fact: the stability of the resolvent. The following will be used in the proof of Proposition 3 to verify assumption (26) when invoking Theorem 12.
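The "explicit formula for the transfer operator" invoked in the proof of Proposition 20 is the standard Perron-Frobenius expression over inverse branches; differentiating it in x (assuming, as for orientation-preserving expanding circle maps, T' > 0) gives the shape of the terms handled around (48)-(49). Both formulas below are the standard ones, stated as an assumption rather than a reconstruction of the authors' exact displays:

```latex
(L_T f)(x) \;=\; \sum_{y \in T^{-1}(x)} \frac{f(y)}{T'(y)},
\qquad
(L_T f)'(x) \;=\; \sum_{y \in T^{-1}(x)}
\left[ \frac{f'(y)}{T'(y)^2} \;-\; \frac{f(y)\,T''(y)}{T'(y)^3} \right],
```

using that each inverse branch y(x) satisfies dy/dx = 1/T'(y(x)).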
Proposition 21. For δ ∈ [0, δ), let T δ : S 1 → S 1 be a family of C 3 expanding maps. Suppose that the dependence of the family on δ is differentiable at 0 in the sense of assumptions (A0), (A1), (A2); then

Proof. The result follows from Theorem 1 of [21]. This theorem says that if a family of bounded linear operators L δ acts on weak and strong Banach spaces B s , B w such that (i) there is a compact immersion of B s into B w , (ii) the family of operators satisfies (GL1), the first equation of (GL2), (GL3) and (GL4), and (iii) a suitable bound on the spectral radius holds, then the conclusion follows. Equation (50) will follow directly by applying this theorem to the transfer operators L δ associated to a family of expanding maps T δ satisfying (A0), (A1), (A2), considering B s = V 1 and B w = V 0 . For (i), the compact immersion is well known from the Rellich-Kondrachov theorem. For (ii), note that for the family of maps T δ , (GL1), (GL2) and (GL3) are verified by the Lasota-Yorke inequalities established in Lemma 18, and the small perturbation assumptions needed at (GL4) are established in Proposition 20. For (iii), we note that by Proposition 19 the required spectral radius bound holds, so the theorem can be applied, implying (50). □

A.4.3. The derivative operator. In this section we study the operator L representing a "first derivative" of the transfer operator with respect to the perturbation parameter δ. The next result is similar to [16, Proposition 3.1] but has been strengthened to be quantitative and uniform, in order to verify the assumptions (GL5), (GL6) of Theorem 13 and the assumption (24) of Proposition 16.
Proposition 22. Let T δ : S 1 → S 1 , where δ ∈ [0, δ), be a family of C 3 expanding maps. Suppose that the dependence of the family on δ is differentiable at 0 in the sense of (A2). Let L δ be the transfer operator associated to T δ . Let us consider again the operator L : w .
with convergence in the C 1 topology (and therefore also in the W 1,1 and H 1 topologies).
Moreover, one has a quantitative and uniform convergence on the unit ball B W 2,1 (S 1 ) : (53) sup

We denote by {y δ i } d i=1 := T −1 δ (x) and {y 0 i } d i=1 := T −1 0 (x) the d preimages under T δ and T 0 , respectively, of a point x ∈ X. Furthermore, we assume that the indexing is chosen so that y δ i is a small perturbation of y 0 i , for 1 ≤ i ≤ d. Before presenting the proof of Proposition 22 we state a lemma.

Lemma 23. For y δ i ∈ T −1 δ (x) we can write where we say

Proof of Lemma 23. For a family of functions F δ we will say that +∞ for some δ > 0. Let T δ,i be the branches of T δ . We have (55).
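For readability, the expansion asserted by Lemma 23 (cf. (56)) can be restated; the first-order coefficient below is a hedged assumption obtained by implicit differentiation of T_δ(y_i^δ(x)) = x, assuming (A2) provides T_δ = T_0 + δ·Ṫ + o(δ):

```latex
y_i^{\delta}(x) \;=\; y_i^{0}(x) \;+\; \delta\,\epsilon_i(x) \;+\; F_i(\delta,x),
\qquad \sup_{x \in S^1} |F_i(\delta,x)| \;=\; o(\delta) \ \ (\delta \to 0),
```

with, under the stated assumption,

```latex
\epsilon_i(x) \;=\; -\,\frac{\dot{T}\big(y_i^{0}(x)\big)}{T_0'\big(y_i^{0}(x)\big)}.
```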
Now let us compute dy δ i /dδ. Let us fix x ∈ S 1 and write (56), and F i (δ, x) = O(δ 2 ) for each x; then we will show that F i (δ, x) satisfies (54).
For the first two claims, let us fix x ∈ S 1 . Substituting (56) into the identity T δ (y δ i (x)) = x we can expand where by (2), E(δ, x) satisfies (54). Further, since T 0 ∈ C 2 we can write the first term in the right-hand side of (57) as for some ξ ′ ∈ S 1 , and use that T 0 (y 0 i (x)) = x to cancel terms on either side of (57) and get 0 = T ′ 0 (y Recalling that T ′′ 0 is uniformly bounded on S 1 , for each fixed x, as δ → 0 we can then identify the first-order terms in (59) as δT ′ 0 (y and thus (60) follows. Since F i (δ, x) = o(δ) we get that for each x (61) By (56) we have This shows that dF i (δ, x)/dx is uniformly bounded, and then by (61) and the compactness of S 1 for each i. This means that for δ small enough |F i (δ, x)| ≤ 1/(2M) for each x; then inserting this in (60) we get and from which we get that (63)
Now we analyze the first part of the right-hand side where for the first summand, since y δ i (x) = y 0 i (x) + δϵ i (x) + F i (δ, x), we get for some ξ ∈ S 1 (depending on x), by the Lagrange theorem. Then we get The second summand is expanded similarly as follows since, applying the Lagrange theorem, |ξ − T ′ 0 (y 0 i (x))| can be made as small as wanted when δ is small enough and then ξ is bounded away from 0.
Putting the two expanded summands together and recalling the above proves the statement. □

The following simple lemma on the convergence of compositions of functions will also be useful.
Proof. To prove (69) we can write and, while it is obvious that the first summand goes to 0, for the second we have and (69) is proved. To prove (70) we write and (70) is proved. □

We now prove Proposition 22.
Proof of Proposition 22. We again denote by {y δ i } d i=1 := T −1 δ (x) and {y 0 i } d i=1 := T −1 0 (x) the d preimages under T δ and T 0 , respectively, of a point x ∈ X. We can write
In the following we will analyze the terms (I), (II), (III) showing the convergence of these terms to the three summands in the right hand side of (51) both in the C 1 topology (see (52)) and in the uniform sense required by (53).
Term (I):
For the first term we first differentiate the expansion. We can then write: since the resulting expression is C 1 on the circle with uniformly bounded norm when δ is small enough, we have that lim with convergence in the C 1 topology, using Lemma 23, (69), and (72) in the final equality. This proves that (I) converges in the C 1 topology to the first summand of (51).
We now turn to the uniform convergence in (53) for term (I). By uniform expansivity of T 0 and (72), there is a uniform constant K 1 when δ is sufficiently small. By the Sobolev Embedding Theorem, K 1 is uniform for w ∈ B W 2,1 (0,1). Applying (75) and (70), we see that as δ → 0 one has 1 δ By (76), we obtain and, again by the Sobolev Embedding Theorem and the uniformity of constants noted immediately after (76), this is uniform for w ∈ B W 2,1 (0, 1).
Term (II):
We prove the convergence of the second term of (71) to the second summand of (51) both in the sense of (52) and (53). Suppose w ∈ C 2 . Using the Taylor expansion with Lagrange remainder we have that where ξ lies between y δ i and y 0 i . Using Lemma 23 we get Since w ′′ is uniformly bounded, we get Thus, as in (74), by Lemma 23, one has in the C 0 topology (77) lim Now we prove the convergence in C 1 . To begin this we first prove that in the C 1 topology (78) lim , using Lemma 23 for the second equality. Since w(y δ i ) → w(y 0 i ) in C 1 , then by the expression immediately above, ] tends to 0 in C 1 and (78) is proved. Using this we now upgrade the convergence of (77) to C 1 . By (78) it is sufficient to prove in the C 0 topology. By uniform expansivity of T 0 this is equivalent to proving in the C 0 topology. Thus let us compute To handle the first summand in (80) we remark that by Taylor expansion with first-order Lagrange remainder we can obtain with ξ between y 0 i and y δ i . By the uniform continuity of w ′′ we get (w ′′ (ξ) − w ′′ (y 0 i )) → 0 and then (w ′′ (ξ) − w ′′ (y 0 i ))(y δ i − y 0 i ) = o(δ) uniformly on the circle; then by Lemma 23 again By this we also have that w(y in the C 0 topology. The second summand in (80) is estimated as in the proof of Lemma 23 (see the calculations from (65) to (66)), obtaining the corresponding bound.
Continuing, we write uniformly for w ∈ B W 2,1 (0, 1). □

A.5. Proof of Proposition 3. In the following lemma, we establish more precise regularity for the eigenvectors of the simple eigenvalues of the transfer operator.
Proof. Lemma A.3 of [5] (whose statement is recalled in the proof of Proposition 15) can be applied to the transfer operator L 0 : W 1,1 → W 1,1 , obtaining that v 0 ∈ W 2,1 as follows.
By Lemma 18 we get that L 0 satisfies Lasota-Yorke inequalities both with W 1,1 and L 1 and with W 2,1 and W 1,1 as weak and strong spaces. This implies that L 0 : W 1,1 → W 1,1 is continuous and preserves W 2,1 . Furthermore, the restriction of L 0 to W 2,1 is also continuous as an operator on W 2,1 . It is also well known that, by the Lasota-Yorke inequalities (Lemma 18) and the compact immersion between the strong and the weak spaces provided by the Rellich-Kondrachov theorem, the essential spectral radii of L 0 : W 1,1 → W 1,1 and L 0 : W 2,1 → W 2,1 are smaller than 1/inf(T ′ 0 ) (see e.g. Lemma 2.2 of [6]). Thus Lemma A.3 of [5] can be applied, giving that the generator v 0 of the one-dimensional eigenspace associated to λ 0 is contained in W 2,1 . Now, since for a C 4 map Lemma 18 also gives a W 3,1 , W 2,1 Lasota-Yorke inequality,
Figure 2. Upper left: observation function c(x) = cos(2πx). Other panels: optimal map perturbations for different weights γ. Each perturbation has unit norm in its corresponding γ-weighted norm. As γ increases, the optimal map perturbation may become more irregular as a smaller penalty is paid for the first- to fourth-order derivatives.
Figure 3. Upper left: observation function c(x) = sin(2πx). Other panels: optimal map perturbations for different weights γ. Each perturbation has unit norm in its corresponding γ-weighted norm. As γ increases, the optimal map perturbation may become more irregular as a smaller penalty is paid for the first- to fourth-order derivatives.
Figure 4. Left: the graph of the lift of the two-full-branch piecewise-linear circle map constructed by joining the 13 point pairs (x, T 0 (x)) listed toward the beginning of section 7.2 and shown as orange dots in the figure. Right: 12 eigenvalues, shown as blue dots, of the 12 × 12 matrix representing the restriction of the transfer operator of the Markov map to the invariant subspace spanned by 12 indicator functions supported on the domains of linearity of the map. The essential spectrum bound produced by the inverse of the minimum slope of the 12 linear branches is shown in red.
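The matrix computation described in the Figure 4 caption follows a general recipe: for a piecewise-linear Markov map, the transfer operator preserves the span of indicator functions of the Markov partition, acting there as a matrix whose (i, j) entry is 1/|slope of branch j| whenever branch j maps its interval across interval i. A minimal stdlib Python sketch, using the doubling map as a toy example rather than the paper's 13-point map (whose data are not listed here):

```python
def markov_transfer_matrix(slopes, covers):
    """Matrix of the transfer operator on indicator functions of a Markov
    partition: entry M[i][j] = 1/|T'| on interval j if branch j maps its
    interval across interval i, else 0."""
    n = len(slopes)
    return [[1.0 / abs(slopes[j]) if i in covers[j] else 0.0
             for j in range(n)] for i in range(n)]

def leading_eigenvalue(M, iters=200):
    """Power iteration in the sup norm; for the transfer matrix of a
    mixing Markov map the dominant eigenvalue is 1."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Toy example: doubling map T(x) = 2x mod 1, partition [0, 1/2), [1/2, 1);
# both branches have slope 2 and cover the whole circle.
M = markov_transfer_matrix([2.0, 2.0], [{0, 1}, {0, 1}])
print(leading_eigenvalue(M))  # -> 1.0
```

The subleading eigenvalues of such a matrix (the blue dots in Figure 4) govern correlation decay, while the inverse minimum slope bounds the essential spectrum.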
Figure 6. Optimal perturbations Ṫ for different weights γ. Each perturbation Ṫ has unit norm in its corresponding γ-weighted norm. As γ increases, the optimal map perturbation may become more irregular because a smaller penalty is paid for the first- to fourth-order derivatives. The values of λ( Ṫ ) are (in order upper left, upper right, lower left, lower right): 0.5758, 3.1427, 11.1590, and 125.51, demonstrating that as we allow greater irregularity in the γ-weighted norm we are able to increase the spectral response.
Figure 7. An example of the integration regions considered in (84). The grey region delimited by the inverse branches y 0 0 , y δ 0 is the region on which the integral in the first term of (84) is defined; the union of the grey and the yellow regions is the region (delimited by all inverse branches of the map before and after perturbation) on which the integral in the last term of (84) is defined.

Structure and Function of the PriC DNA Replication Restart Protein*
Collisions between DNA replication complexes (replisomes) and barriers such as damaged DNA or tightly bound protein complexes can dissociate replisomes from chromosomes prematurely. Replisomes must be reloaded under these circumstances to avoid incomplete replication and cell death. Bacteria have evolved multiple pathways that initiate DNA replication restart by recognizing and remodeling abandoned replication forks and reloading the replicative helicase. In vitro, the simplest of these pathways is mediated by the single-domain PriC protein, which, along with the DnaC helicase loader, can load the DnaB replicative helicase onto DNA bound by the single-stranded DNA (ssDNA)-binding protein (SSB). Previous biochemical studies have identified PriC residues that mediate interactions with ssDNA and SSB. However, the mechanisms by which PriC drives DNA replication restart have remained poorly defined due to the limited structural information available for PriC. Here, we report the NMR structure of full-length PriC from Cronobacter sakazakii. PriC forms a compact bundle of α-helices that brings together residues involved in ssDNA and SSB binding at adjacent sites on the protein surface. Disruption of these interaction sites and of other conserved residues leads to decreased DnaB helicase loading onto SSB-bound DNA. We also demonstrate that PriC can directly interact with DnaB and the DnaB·DnaC complex. These data lead to a model in which PriC acts as a scaffold for recruiting DnaB·DnaC to SSB/ssDNA sites present at stalled replication forks.
Replication of circular chromosomes found in many bacteria is initiated by sequence-specific binding of the DnaA initiator protein to the origin of replication, oriC, which promotes duplex DNA melting (1-4). Single-stranded DNA (ssDNA) exposed by DnaA unwinding is rapidly bound by the ssDNA-binding protein (SSB). DnaA, along with the helicase loader DnaC, then directs loading of the replicative helicase, DnaB, onto the SSB-coated ssDNA (4-7). The remaining replication proteins are recruited through protein interactions to form the full replication complex, termed the replisome (8-12). With each round of replication, two replisomes are loaded at oriC to replicate bidirectionally around the chromosome until converging at the terminator region (13).
Replisomes assembled at oriC frequently encounter physical barriers, such as damaged DNA or genome-bound protein complexes (e.g. transcription machinery), that can stall and/or prematurely dissociate the replisome from the DNA template (14). Estimates from studies in Escherichia coli suggest that very few replisomes translocate to the replication terminus without dissociating at least once during each replication cycle (15). Because unrepaired premature termination events lead to incomplete replication, genome instability, and cell death, DNA replication restart mechanisms that reload replisomes onto abandoned replication forks are essential in bacteria (16). Due to the sporadic nature of replication failure, DNA replication restart pathways must recognize abandoned replication forks in a structure-specific and sequence-independent manner to enable reloading of the replicative helicase.
E. coli encodes three genetically defined DNA replication restart pathways that rely on distinct subsets of proteins: PriA/PriB/DnaT, PriC/Rep, and PriA/PriC (17). In vitro reconstitution of the PriA/PriB/DnaT pathway shows that it relies on a complex multiprotein hand-off mechanism to reload the DnaB helicase (18,19). PriC, in contrast, is able to mediate DnaB loading without a requirement for Rep or other restart proteins in vitro (20). Although PriC is not well conserved among bacterial species, the simplicity of PriC-mediated DnaB loading makes it an excellent system for probing the minimal requirements for replication restart.
Given that DnaB loading at oriC is heavily regulated, reloading of the helicase at stalled forks is also likely to be a regulated process to ensure that the replisome does not assemble at improper sites. Accordingly, replication restart proteins are activated by recognition of appropriate replication fork substrates (20). Recent advances in defining the mechanisms underlying PriC-mediated replication restart have provided insights into how fork recognition and remodeling occur. Interaction between PriC and SSB is required for both fork recognition and remodeling, whereas an interaction between PriC and ssDNA is predicted to play a role in mediating fork recognition (21)(22)(23). Despite these recent studies, a deeper understanding of PriC replication restart mechanisms has been hampered by the limited amount of structural information on PriC and the lack of insights into how DnaB is recruited to PriC-bound replication forks.
To better define the mechanisms of DNA replication restart, we have determined the NMR structure of PriC from Cronobacter sakazakii. PriC consists of a compact bundle of five α-helices with residues that mediate interactions with SSB and ssDNA clustering together on the protein surface in adjacent binding sites. A biochemical study of PriC variants with altered conserved surface residues confirms the critical contribution of the SSB and ssDNA binding sites for in vitro DnaB loading.
Moreover, conserved regions outside of these binding sites were found to be essential for PriC function in vitro and in vivo. Finally, we demonstrate that PriC directly binds to DnaB and the DnaB·DnaC complex. Taken together, these data support a model in which PriC acts as a scaffold that recruits DnaB to SSB/ssDNA present at stalled replication forks.
Results
In Vitro and in Vivo Functions of CsPriC-To determine the structure of full-length PriC, we initially attempted to crystallize PriC proteins from several bacterial species. Although crystals were not obtained for any of these targets, C. sakazakii PriC (CsPriC; 41% identical, 55% similar to EcPriC (Fig. 1A)) demonstrated better solubility characteristics than other PriC homologs, making it a potential target for structure determination by NMR. The activity of CsPriC was therefore examined to determine whether it retained the in vitro and in vivo activities expected for a bona fide PriC.
Figure 1 (legend, continued): The asterisk marks the location of the 32P label. Shown is a quantification (top) of the percentage of product unwound for each condition. The grid (middle) indicates which reaction components were included (+) or omitted (−) in each lane. Data are the mean of three replicates with one S.D. shown as error. E, co-transduction analysis demonstrating that CsPriC can complement a priC deletion in E. coli. Data are the number of co-transductants (ΔpriB Tet R) versus the total number of Tet R colonies tested. Linkages are obtained from 2-4 individual transduction experiments. **, p < 0.001 in a χ2 test using the transduction frequency for the wild type as the expected value.

We first measured CsPriC binding to a peptide comprising the SSB C terminus (SSB-Ct) and to ssDNA, both of which are
activities that have been reported for EcPriC (21,23). Isothermal titration calorimetry (ITC) analysis showed that CsPriC bound to the SSB-Ct peptide with a K d of 2.1 ± 0.4 μM (Fig. 1B), closely matching the K d for the EcPriC·SSB-Ct complex measured under the same conditions (3.7 ± 0.6 μM (23)). A fluorescence polarization-based ssDNA binding assay showed that CsPriC also bound a 5′-fluorescein-labeled dT 15 oligonucleotide with K d values of <5 nM in the absence of NaCl and 25.9 ± 0.9 nM in the presence of 75 mM NaCl (Fig. 1C). EcPriC bound the same DNA with K d values of 9.2 ± 2.6 and 52.8 ± 3.7 nM, respectively, under the same conditions (Fig. 1C).
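The fluorescence polarization K d values above are fitted under a 1:1 binding model; because the labeled probe (5 nM) is comparable in concentration to the fitted K d, the exact quadratic solution for the bound fraction is the relevant isotherm. A minimal sketch of that standard model (not code from the study; the concentrations below simply reuse the reported EcPriC numbers):

```python
import math

def fraction_bound(protein_total, probe_total, kd):
    """Fraction of labeled probe bound at equilibrium under a 1:1 binding
    model, using the exact quadratic solution (needed when the probe
    concentration is comparable to Kd). All inputs in the same units."""
    p, l, k = protein_total, probe_total, kd
    b = p + l + k
    complex_conc = (b - math.sqrt(b * b - 4.0 * p * l)) / 2.0
    return complex_conc / l

# Illustrative numbers reusing the reported EcPriC/dT15 affinity at
# 75 mM NaCl (Kd ~52.8 nM) with the 5 nM probe used in the assay.
for p_nM in (10.0, 52.8, 500.0):
    print(p_nM, round(fraction_bound(p_nM, 5.0, 52.8), 3))
```

Fitting this curve to anisotropy data as a function of protein concentration yields the K d; the simpler hyperbolic approximation would overestimate affinity here because probe depletion is non-negligible.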
We next utilized an in vitro assay to test whether CsPriC could load EcDnaB onto a synthetic replication fork structure as has been previously observed for EcPriC (20,23). In this assay, a radiolabeled forked DNA substrate is prebound by SSB, which blocks spontaneous loading of DnaB from the DnaB·DnaC helicase·loader complex. The addition of EcPriC relieves this blockage and stimulates DnaB loading, as scored by DnaB-mediated DNA unwinding. As expected for a bona fide PriC, the addition of CsPriC to this reaction facilitated DnaB loading onto the DNA (Fig. 1D).
Finally, CsPriC in vivo activity was assessed by determining its ability to allow deletion of the priB gene from an E. coli ΔpriC strain. Ordinarily, the simultaneous deletion of both priC and priB is lethal in E. coli, because this combination eliminates all replication restart pathways (17). However, the presence of a functional priC gene on a plasmid confers viability in chromosomal priC priB double mutant strains (23). Expression plasmids encoding CsPriC or EcPriC (positive control) or lacking a priC gene (negative control) were transformed into a priC303::kan E. coli strain, where a kanamycin resistance marker had been inserted to disrupt the priC locus. These strains then underwent P1 transduction with a ΔpriB donor carrying a linked tetracycline resistance marker (Tet R). Tet R colonies were screened for the successful co-transduction of the ΔpriB allele, which would indicate that the plasmid-borne PriC is functional in vivo and can support the priB deletion. As expected, 0 of the Tet R colonies co-transduced the ΔpriB mutation in the negative control (empty vector), whereas ~56% of the Tet R colonies co-transduced the ΔpriB mutation in the positive control (EcPriC) (Fig. 1E). Consistent with its function in vivo, ~24% of the Tet R transductants with the CsPriC-encoding plasmid carried the ΔpriB mutation (Fig. 1E). Although this co-transduction frequency is lower than that of the positive control, the fact that multiple colonies carried the priB deletion indicates that CsPriC can complement EcPriC function. Taken together, these data demonstrate canonical PriC function for CsPriC, making it an appropriate target for structure determination.
NMR Structure of Full-length CsPriC-We next used heteronuclear 1H/13C/15N NMR data and residual dipolar coupling measurements to determine the NMR structure of CsPriC. The low-energy bundle of NMR structures had a backbone root mean square deviation of 0.62 Å in well-folded regions. A relatively high number of unambiguously assigned NOEs per residue (14.0) and the use of residual dipolar coupling constraints contributed to the high precision of the final coordinates (Table 1).
The CsPriC structure consists of five α-helices arranged in a compact bundle, with an extended 20-residue loop connecting α1 and α2 (Figs. 1A and 2). A search for proteins that share structural similarity with CsPriC using the DALI server (24) revealed over 4,000 related folds within other proteins or protein domains. This large number most likely arises from the high frequency with which helical bundle folds are found in proteins. Surprisingly, a previously reported NMR structure of an N-terminal fragment of EcPriC (22,25) was not identified in the DALI search due to significant differences in the arrangement of helices between the two structures (Fig. 3). In the N-terminal EcPriC fragment structure, the N-terminal-most helix, α1, substitutes for the position of α4 in the full-length structure. The position of this helix may shift to compensate for the absence of α4 in the N-terminal fragment structure.
Structural Insights into Protein and DNA Interaction Sites in PriC-Previous studies have identified two residues in EcPriC, Arg-121 and Arg-155 (Arg-121 and Arg-151 in CsPriC), that are essential for stabilizing its interaction with SSB (Table 2 (23)). These residues were found in adjacent α-helices in the CsPriC structure, with the helical arrangement placing the two basic side chains in close proximity (Figs. 2 and 4A). Additional EcPriC residues that have been implicated in mediating the SSB interaction (Phe-118, Arg-129, and Tyr-152 in EcPriC (21); Tyr-118, Arg-129, and Leu-148 in CsPriC) are also localized to this region (Table 2 and Figs. 2 and 4A). The electrostatics of the PriC SSB-Ct binding site resemble those observed in other SSB-associated proteins in which basic residues bind to the α-carboxyl group of the C-terminal Phe and to the Asp side chains within the SSB-Ct element (26-30) (Fig. 2B). The PriC SSB binding site is also evolutionarily well conserved (Fig. 2B), consistent with the essential nature of the PriC/SSB interaction for PriC-mediated DNA replication restart in vivo (23). Interestingly, the SSB-Ct binding site in PriC differs from SSB binding sites in other proteins in that it is much "flatter" than those previously observed. Analysis by the program EPOS BP (31) failed to identify pockets on the surface of CsPriC, whereas SSB-Ct binding pockets in other binding proteins range from 330 (exonuclease I (29)) to 700 ų (PriA helicase (30)) in the absence of SSB-Ct binding. It is possible that a structural rearrangement takes place in PriC to form the pocket needed for accommodating the SSB-Ct element. A similar SSB-Ct-dependent rearrangement was observed for E. coli ribonuclease HI, which lacks an apparent binding pocket in isolation but forms a 480-ų pocket in the ribonuclease HI·SSB-Ct complex (26).
The addition of an SSB-Ct peptide to high concentrations of CsPriC causes the protein to precipitate, supporting the possibility of an SSB-induced structural rearrangement but also precluding NMR studies of the CsPriC·SSB-Ct complex.
Previous mutagenesis studies have also identified PriC residues with roles in ssDNA binding, including Arg-107, Lys-111, and Lys-165 (Table 2 (21)). The equivalent residues in CsPriC (Arg-107, Arg-111, and Arg-161) and other basic residues on helices 2, 4, and 5 form a highly electropositive groove on the surface of PriC that is adjacent to the SSB-Ct binding site (Figs. 2B and 4B). Conservation of residues along helix 2 is low relative to those on helices 4 and 5. DNA binding roles for EcPriC residues Phe-118 and Arg-121 have been suggested as well (21), and these residues are adjacent to the previously identified basic cluster. It is possible that ssDNA could extend across the surface of PriC.
It has also been reported that PriC can oligomerize and that three Leu residues and a Val within the C-terminal helix mediate self-association (Table 2 (22)). Mutation of these residues leads to a predominantly insoluble protein (22). Our structure shows that the side chains of two of these residues, Val-149 and Leu-156 (Ile-145 and Leu-152 in CsPriC), form part of the hydrophobic core of the protein, packing against α2 and α4, suggesting that these residues are not likely to mediate oligomerization directly. The side chains of the remaining residues, Leu-163 and Leu-170 (Ile-159 and Leu-166 in CsPriC), are more surface-exposed and could potentially be involved in oligomerization (Fig. 4C).
In addition to supporting data from previous studies, the structure also highlights evolutionarily conserved PriC surfaces that have not yet been investigated for their contribution to activity (Fig. 2B, bottom row). One of these regions is the extended loop, which includes the surface-exposed side chain of Arg-33. Additionally, there are conserved residues located on the C-terminal end of helix 3 (including Glu-89 and Arg-96) that could also be important for PriC function. The effects of altering these sites are explored further below.
The mechanism connecting PriC to DnaB has not been established. A proteomic study has shown that EcPriC co-purifies with affinity-tagged DnaB, along with DnaC, suggesting that PriC may interact with DnaB and/or the DnaB·DnaC complex (32). To test the possible interaction between PriC and DnaB or DnaC, we first used a yeast two-hybrid approach. Plasmids expressing the Gal4 DNA binding domain fused to the N terminus of PriC (pGBD-PriC (23)) and the Gal4 activation domain fused to the N terminus of DnaB or DnaC (pGAD-DnaB, pGAD-DnaC) were co-transformed into an S. cerevisiae strain in which expression of HIS3 and ADE2 is under control of the GAL1 and GAL2 promoters, respectively. Interaction between PriC and DnaB or DnaC would support expression of the HIS3 and ADE2 reporter genes, allowing growth of the strain on His- and Ade-deficient media. Consistent with a direct interaction between PriC and DnaB, the strain transformed with pGBD-PriC and pGAD-DnaB was able to grow on selective media (Fig. 5A). Control transformations with plasmid pairs lacking either EcPriC or EcDnaB failed to support growth. In addition, co-transformation of pGBD-PriC with pGAD-DnaC did not support growth, suggesting that EcPriC interacts with DnaB but not EcDnaC (Fig. 5A). Next, ITC was used to determine whether a direct interaction between PriC and DnaB could be detected in vitro and to measure the stability and stoichiometry of the complex. We first tested PriC binding to the DnaB·DnaC complex because this best represents the relevant cellular complex that PriC would be expected to recruit to replication forks. PriC bound to the DnaB·DnaC complex with a K d of 64 ± 21 nM and a stoichiometry of 1.0 ± 0.02 molecule of PriC per molecule of DnaB (Fig. 5B). We next tested for binding between PriC and DnaB or DnaC individually. Consistent with the yeast two-hybrid results, no interaction was detected between PriC and DnaC (data not shown).
We were able to observe an interaction between PriC and DnaB; however, the heat of dilution of DnaB alone was so large that it prevented the reliable measurement of binding parameters (data not shown). These data are consistent with PriC binding directly to DnaB within the DnaB·DnaC complex. Given that the stoichiometry of the DnaB·DnaC complex is 6:6 (33) and PriC directly interacts with DnaB (Fig. 5A), PriC appears to be able to bind to each DnaB subunit within the DnaB·DnaC complex.
Mutagenesis Screening of the Surface of PriC-Examination of the PriC structure showed that residues tested for activity thus far are clustered near the SSB- and ssDNA-binding sites, whereas the rest of the structure remains underexplored. Moreover, contributions of the ssDNA-binding region to PriC-mediated replication restart have not yet been characterized. To expand our understanding of the roles of different surfaces of PriC and to more precisely define the importance of residues in proximity to known binding sites, we created a panel of single-site EcPriC variants that alter conserved surface residues for examination in vitro and in vivo (Fig. 6A). This panel included PriC variants that alter residues near the ssDNA binding site (K111A, H161A, and R175A) and others that map to regions away from previously established binding surfaces on PriC (R33E, W74A, E89A, and R96A). Each of the PriC variants retained the ability to bind the SSB-Ct peptide (Table 3). In addition, the variants all bound to ssDNA with an affinity similar to that of wild type PriC (Table 3 and Fig. 6 (B and C)). Under conditions lacking NaCl, all variants had K d values of <10 nM. The probe concentration in these experiments was 5 nM, so our analysis of variant DNA binding could only provide an upper limit for many of the K d values. Under conditions with 75 mM NaCl, ssDNA binding affinities were weakened relative to the NaCl-free conditions and were similar to that of wild type EcPriC. Only one variant, K111A, exhibited a modest 2-fold weaker ssDNA binding affinity compared with wild type EcPriC (Table 3 and Fig. 6 (B and C)). This suggested that although this region plays a role in ssDNA binding, single residue changes do not drastically alter the ability of PriC to bind ssDNA. We next tested the PriC variants in the reconstituted DnaB loading assay. Interestingly, all but two of the EcPriC variants displayed at least a 2-fold reduction in DnaB loading activities (Fig. 6D).
The decreased DnaB loading ability of the K111A variant and others within the putative DNA binding tract could suggest that DNA binding is important in PriC-mediated replication restart, although the impact of the mutation on DnaB loading is clearly greater than that on DNA binding affinity. The defects observed with the R33E and E89A variants indicate that previously uncharacterized regions outside of the SSB and ssDNA binding sites are also important in DnaB loading.
Because all of the variants tested retained their ability to bind to SSB and ssDNA but demonstrated diminished abilities to load DnaB in vitro, we sought to determine whether the variants interfered with PriC binding to DnaB. Mutations were made in the Gal4-binding domain-PriC fusion plasmid (pGBD-PriC (23)) to express each of the single-site PriC variants along with the Gal4 activation domain-DnaB fusion protein in our two-hybrid assay. All of the variants were able to support growth when co-transformed with pGAD-DnaB, indicating that these individual residue changes do not abolish DnaB/PriC complex formation (Fig. 6E). Thus, the reduced in vitro DnaB loading abilities of PriC variants are not due to their inability to bind DnaB as measured by the two-hybrid assay.
To determine the in vivo functionality of the PriC variants, each was screened for its ability to complement a priC deletion and allow the deletion of priB as described earlier (24). Surprisingly, R96A was found to be non-functional, despite demonstrating wild-type levels of DnaB loading in vitro (Table 3 and Fig. 6D). Of the variants that displayed a decreased ability to load DnaB in vitro, only R33E and E89A were also unable to complement in vivo (Table 3). This indicates that surfaces outside of the previously defined SSB and ssDNA binding sites are essential for PriC function in vivo and that some variants with reduced in vitro activity still retain in vivo functionality. In addition, the inability of PriC variants with normal in vitro functions to complement priC cellular phenotypes suggests that current models accounting for the activities required for PriC functions in DNA replication restart are incomplete.
Discussion
DNA replication restart is an essential process in bacteria, and PriC is unique among the restart proteins in its ability to load DnaB from DnaB·DnaC complexes onto SSB-coated DNA substrates without assistance from additional replication factors (20). This property makes PriC a model for defining the minimal requirements needed for abandoned DNA replication fork recognition and remodeling, as well as for DnaB reloading. To better define the structural mechanisms underlying PriC-mediated replication restart, we have determined the NMR structure of the full-length C. sakazakii PriC. Our structure reveals a compact monomeric fold for PriC that is defined by five interacting α-helices. A biochemical analysis of PriC revealed a direct interaction with the replicative helicase, DnaB, in isolation and in DnaB·DnaC complexes. Our analysis also identified conserved residues in previously unexplored PriC surfaces that are essential for cellular function.
Previous proteolytic mapping studies suggested that EcPriC could be a two-domain protein, comprising N-terminal (residues 1-97) and C-terminal (residues 98-175) domains (22). However, the NMR structure of full-length CsPriC is consistent with the protein forming a compact single domain with a continuous hydrophobic core (Fig. 2). The arrangement of helices observed in an earlier solution structure of the EcPriC N-terminal region differs significantly from that observed in the full-length CsPriC structure (Fig. 3) (25). In the N-terminal EcPriC fragment, the N-terminal-most helix, α1, substitutes for the position of α4 in the full-length structure. This alters the tertiary structure and topology of the fragment to the extent that similarity between the two structures is not recognized by the DALI structural search algorithm. The position of this helix may shift to compensate for the absence of α4 in the N-terminal fragment. However, it remains possible that the differences between the two structures reflect structural flexibility in PriC.
The full-length CsPriC NMR structure resolves the protein's binding sites for both SSB and ssDNA (21)(22)(23). Residues involved in binding to these molecules map to two adjacent regions on PriC. The SSB-Ct binding site, which is formed predominantly by residues from α4 and α5, shares electrostatic similarities with other known SSB-Ct binding sites. These similarities include conserved hydrophobic residues surrounded by basic side chains that bind hydrophobic and electronegative SSB-Ct elements, respectively, in other complexes (26-30). One distinguishing feature of the PriC SSB binding site is that it is remarkably flat, which leads to the hypothesis that a conformational rearrangement may be necessary to form a pocket for SSB-Ct binding. A similar remodeling occurs in E. coli ribonuclease HI, which lacks an identifiable pocket in the absence of SSB-Ct but creates a cavity to accommodate the peptide in the complex (26).
Residues implicated in ssDNA binding map to an electropositive tract that is adjacent to the SSB binding site. These include Lys-111 (Table 3 and Fig. 6) (21) along with Arg-107 and Arg-161 that were identified in an earlier study (21). Sequence changes within this region lead to decreased DnaB loading activity levels in vitro (Fig. 6D), suggesting that ssDNA binding is important for PriC-mediated DNA replication restart. Slight perturbations of PriC ssDNA binding affinity do not appear to alter the activity of PriC in vivo (Table 3); however, it is possible that PriC variants with more significantly reduced ssDNA binding affinities could have reduced cellular activity. It has been reported that residues Arg-121 and Phe-118, which are important for SSB binding, also contribute to ssDNA binding (21). This could be important for the mechanism of PriC in potentially facilitating a hand-off of substrates, or it could arise from the similar electrostatic characteristics of SSB-Ct and ssDNA binding sites.
Additionally, the structure illuminated additional regions of conservation outside of the characterized ssDNA and SSB binding sites. Mutation of conserved residues within these areas of PriC demonstrated that these regions are important for PriC activity both in vitro and in vivo (Table 3 and Fig. 6). Possible roles for these regions include direct binding to additional cellular factors and/or productive coordination of PriC interactions to facilitate DnaB reloading.
Our investigation further revealed an interaction between PriC and the replicative helicase, DnaB. PriC appears to bind to DnaB both in isolation and in the DnaB·DnaC complex (Fig. 5), the latter being the form that would probably be most relevant for replication restart. A simple model explaining the role of PriC binding to DnaB·DnaC is that the interaction would help in recruitment and loading of helicase·loader complexes at abandoned replication forks. It is also possible that PriC binding to DnaB could stimulate release of DnaC from the DnaB·DnaC complex to allow DnaB to be activated. A similar mechanism has been noted for DnaG primase, in which binding of DnaG stimulates DnaC release from DnaB·DnaC (34).
Integrating the results described here with previous observations leads to a refined model for PriC function (Fig. 7). Bacterial DNA replication restart systems appear to share three core functions: abandoned replication fork recognition, lagging strand remodeling to generate a DnaB loading site, and DnaB reloading. PriC recognizes and remodels stalled forks through a combination of interactions with SSB and ssDNA (21,23). The structure reported here shows that the respective binding sites for these ligands are adjacent to one another on the PriC surface. This arrangement could facilitate PriC binding to SSB and ssDNA in a coordinated manner. Previous studies examining the interaction between PriC and SSB indicated that complex formation can unwrap ssDNA from SSB by altering the SSB DNA binding mode to expose a potential site for DnaB loading (23). It is also possible that once ssDNA is exposed, PriC no longer binds the SSB-Ct and utilizes both the ssDNA- and SSB-binding sites for coordinating ssDNA binding. Having both SSB and ssDNA binding sites adjacent to one another on PriC may aid in driving this process.
The final step of DNA replication restart is DnaB recruitment and loading. One key component required for this activity is a direct physical link connecting PriC with DnaB. We have detected interactions between PriC and DnaB, both in isolation and in DnaB·DnaC complexes. It could be that PriC first binds to abandoned DNA replication forks and then recruits free DnaB·DnaC to the loading site. Alternatively, it is possible that free PriC binds to the DnaB·DnaC complex in cells and that this ternary complex is recruited to abandoned forks through PriC/PriC interactions mediated by oligomerization/cooperative ssDNA binding. Either mechanism could explain how the PriC/DnaB interaction functions in DNA replication restart.
Taken together, our data provide further insight into how replication restart pathways function in the recruitment and loading of DnaB. Although PriC is not well conserved among bacterial species, similar steps could mediate the better conserved PriA-mediated restart pathways in bacteria. Interactions between DnaB and either PriB or DnaT have not been detected (35);5 however, these proteins form part of a larger PriA·PriB·DnaT complex that may have the ability to bind DnaB. The ability to interact with SSB, ssDNA, and DnaB can be used as a set of characteristics to identify functional homologs that mediate replication restart in diverse bacteria.
Experimental Procedures

The NMR data were processed and analyzed with NMRPipe software (38). PIPP/STAPP software (39) was used to manually assign the backbone and side-chain resonances. The TALOS+ program (40) was used to provide pairs of φ/ψ backbone torsion angle restraints and to identify the secondary structural elements (confirmed by local NOEs). Two distance restraints of 1.9 and 2.9 Å per involved pair of residues were used to represent hydrogen bonds for HN-O and N-O, respectively (41). NOE peak intensities in three-dimensional NOESY spectra were assigned using the PIPP/STAPP package and converted for Xplor-NIH into a continuous distribution of 2,397 approximate interproton distance restraints, with a uniform 40% distance error applied to take into account spin diffusion.
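The NOE-to-distance conversion described above can be sketched under the standard isolated-spin-pair approximation, in which NOE intensity scales as r⁻⁶; the reference intensity and distance below are hypothetical calibration values, while the uniform 40% error matches the treatment in the text:

```python
import math

def noe_distance(intensity, ref_intensity, ref_distance=2.5):
    # Isolated two-spin approximation: I is proportional to r**-6, so a
    # weaker NOE implies a longer interproton distance (Angstroms).
    return ref_distance * (ref_intensity / intensity) ** (1.0 / 6.0)

def restraint_bounds(r, frac_err=0.40):
    # Uniform fractional error applied to each restraint to absorb
    # spin-diffusion effects, as in the 2,397-restraint set above.
    return (r * (1.0 - frac_err), r * (1.0 + frac_err))

r = noe_distance(intensity=0.2, ref_intensity=1.0)  # weaker NOE -> longer distance
lo, hi = restraint_bounds(r)
print(round(r, 2), round(lo, 2), round(hi, 2))  # -> 3.27 1.96 4.58
```

Because of the sixth-root dependence, even large intensity errors perturb the derived distance only modestly, which is why loose bounds like these still restrain the fold effectively.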
Structure calculations and refinements made use of the torsion angle molecular dynamics and the internal variable dynamics modules of Xplor-NIH (40) to ensure preservation of the correct peptide geometry when applying residual dipolar coupling and distance constraints simultaneously. PyMOL (DeLano Scientific, LLC) and VMD-XPLOR (42) were used to analyze the structures. There were no consistent (i.e. in >40% of the calculated structures) NOE violations larger than 0.5 Å in the 100 calculated structures. A subset of 20 lowest energy structures (of 100) were selected for further refinement using an implicit solvation potential (43). The structure statistical quality indicators and agreement with experimental residual dipolar couplings are found in Table 1.
Yeast Two-hybrid Analysis-Assays were performed as described previously (23). Plasmids expressing Gal4 DnaB and DnaC fusion proteins were generated by cloning dnaB and dnaC open reading frames into the pGAD vector backbone to fuse the Gal4 activation domain to the N terminus of each protein to be tested (44).
(Fig. 7 legend: PriC bound at the fork (20) facilitates binding of additional monomers through oligomerization/cooperative binding (shown here binding to both strands), one monomer of which has the potential to bring with it DnaB·DnaC, thus localizing DnaB to the stalled fork. Additionally, binding of PriC to SSB results in a structural change to the SSB·DNA complex that results in the exposure of a small tract of ssDNA. Once this ssDNA is exposed, DnaB is loaded, and DnaC dissociates, resulting in a fork poised to recruit the remaining replisome components.)
Isothermal Titration Calorimetry-ITC of the PriC/SSB-Ct interaction was performed as described previously (23). Briefly, PriC variants were concentrated to 8-23 μM in a buffer containing 10 mM HEPES-HCl, pH 7.0, 0.1 M NaCl, and 3% glycerol. SSB-Ct peptide (WMDFDDDIPF) was dissolved in an identical buffer at a concentration of 525 μM. Titrations were performed on a VP-ITC instrument (Microcal) with 20 1.5-μl injections. Data were fit using a single-site model using Origin software (Microcal).
For the PriC·DnaB·DnaC titrations, all proteins were dialyzed against a buffer containing 20 mM Tris-HCl, pH 8.5, 0.2 M NaCl, 5% glycerol, 5 mM MgCl2, 1 mM 2-mercaptoethanol, and 1 mM ATP. PriC was concentrated to 5 μM, and DnaB and DnaC were concentrated to >200 μM. For analysis of the DnaB·DnaC complex with PriC, DnaB and DnaC were combined at final concentrations of 100 μM DnaB and 120 μM DnaC. For analysis of DnaB and DnaC individually with PriC, the final concentration for DnaB was 80 μM, and that of DnaC was 96 μM. Titrations were performed on a VP-ITC instrument (Microcal) with 25-37 1-μl injections at 20°C. Data were fit using a single-site model using Origin software (Microcal).
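The single-site model used to fit these titrations can be sketched as follows. This is a simplified isotherm (the heat of each injection is taken as the increment in complex formed, scaled by the binding enthalpy), ignoring the dilution and displaced-volume corrections that a real Origin fit applies; all parameter values are hypothetical:

```python
import math

def frac_bound(x_tot, m_tot, kd):
    # Exact 1:1 binding quadratic; fraction of the cell protein M bound
    # by titrant X (all concentrations in the same units).
    b = m_tot + x_tot + kd
    return (b - math.sqrt(b * b - 4.0 * m_tot * x_tot)) / (2.0 * m_tot)

def injection_heats(m_cell, x_per_inj, kd, dh, n_inj):
    # Heat released per injection (arbitrary units): the difference in
    # cumulative complex formed between successive injections, times dh.
    heats, prev = [], 0.0
    for i in range(1, n_inj + 1):
        q = dh * m_cell * frac_bound(x_per_inj * i, m_cell, kd)
        heats.append(q - prev)
        prev = q
    return heats

heats = injection_heats(m_cell=5.0, x_per_inj=1.0, kd=0.5, dh=-10.0, n_inj=10)
# Early injections release large heats; as the cell protein saturates, the
# per-injection heat decays toward the heat of dilution baseline.
```

The shape of this decay (sharpness of the transition around the equivalence point) is what encodes Kd in a real ITC fit, which is also why a large heat of dilution, as seen with DnaB alone, can swamp the binding signal.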
DNA Binding Assays-DNA binding reactions were performed in 10 mM HEPES-HCl, pH 7.0, with either 0 or 75 mM NaCl. 5′-Fluorescein-labeled dT15 ssDNA (5 nM) was incubated with the indicated concentrations of PriC for 30 min at 25°C. Fluorescence polarization was measured at 25°C using a BioTek Synergy2 plate reader with 490-nm excitation and 535-nm emission wavelengths for three replicates. The average polarization value was plotted with one S.D. value of the mean shown as error. The polarization of dT15 alone was subtracted from each of the data points, and the data were fit with a single-binding-site model with Hill coefficient using GraphPad Prism software.
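The single-binding-site model with Hill coefficient used for these fits can be sketched in a few lines; the concentrations and parameter values here are synthetic stand-ins for real FP data, and a crude grid search substitutes for Prism's nonlinear regression:

```python
def polarization(conc, p_max, kd, hill=1.0):
    # Probe-subtracted FP signal for a single binding site with Hill slope.
    return p_max * conc**hill / (kd**hill + conc**hill)

# Synthetic titration in nM (hypothetical values, not the paper's data).
concs = [1, 3, 10, 30, 100, 300]
data = [polarization(c, p_max=120.0, kd=25.0, hill=1.2) for c in concs]

# Crude grid-search fit over (Kd, Hill) in place of nonlinear regression.
best = min(
    ((kd, h) for kd in range(5, 101) for h in (0.8, 1.0, 1.2, 1.5)),
    key=lambda p: sum((polarization(c, 120.0, p[0], p[1]) - d) ** 2
                      for c, d in zip(concs, data)),
)
print(best)  # -> (25, 1.2)
```

Note that with the probe at 5 nM, any fitted Kd at or below the probe concentration should be read as an upper limit, consistent with the <10 nM values reported above.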
Co-transduction Tests of priC Mutants-Assays were performed as described previously (23).
Multidisciplinary diagnostic and therapeutic approaches to pancreatic cystic lesions
Pancreatic cystic lesions are commonly encountered today with the routine use of cross-sectional imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). The majority of patients discovered to have a pancreatic cyst are completely asymptomatic; yet the presence of such a finding instills fear in the minds of both patient and physician, as the concern for malignant transformation to pancreatic cancer is great despite the relatively low overall likelihood of cyst progression. Not all cysts in the pancreas represent pancreatic cystic neoplasms (PCNs), and not all PCNs have significant malignant potential. Mucinous PCNs are the most concerning, as these lesions have the greatest potential for cancerous transformation to adenocarcinoma. Within the group of mucinous PCNs, intraductal papillary mucinous neoplasms (IPMNs) involving the main pancreatic duct are the most worrisome, and surgical resection should be pursued if the patient has appropriate operative risks. IPMN lesions involving the branch ducts, and mucinous cystadenomas, have a lower likelihood for malignancy, and they may be closely followed for the development of any worrisome or high-risk features. Surveillance of known PCNs is performed with a combination of CT, MRI and endoscopic ultrasound (EUS). EUS-guided fine-needle aspiration (EUS-FNA) may be used to assess cyst fluid cytology, and also to detect cyst fluid amylase level, carcinoembryonic antigen level, and DNA molecular analysis in certain cases. The presence or absence of specific cyst morphological features, as well as the cyst fluid analysis, is what enables the physician to guide the patient towards continued surveillance, versus the pursuit of surgical resection.
Introduction
The diagnosis and management of pancreatic cystic lesions has become an area of developing interest over the past decade. Increasing use of advanced abdominal imaging modalities has resulted in the discovery of previously unrecognized pancreatic cystic neoplasms (PCNs). As these lesions have become a more common finding, health care providers should be familiar with the different types of cystic pancreatic lesions in order to assess the potential for malignancy within a cyst. This allows providers to effectively risk-stratify patients for surveillance, surgery, or expectant management. The purpose of this review is to provide both general practitioners and specialists with evidence-based data to aid in the management of patients found to have PCNs.
Pancreatic pseudocysts are collections of fluid without a true epithelial lining, usually the result of acute pancreatitis. PCNs are true pancreatic cysts, and they represent at least 50% of all pancreatic cystic lesions.
PCNs are generally divided into four subtypes: mucinous cystic neoplasms (MCNs), intraductal papillary mucinous neoplasms (IPMNs), serous cystadenomas (SCAs), and solid pseudopapillary neoplasms (SPNs) (Table 1).1-3 The MCN and IPMN types are further classified as mucin-producing lesions, while the remainder are non-mucinous. 4 Differentiating between the different cyst types can be challenging; however, certain radiographic, histological, and pathological features may help distinguish these cysts from one another, and thus help guide management for the patient.
SCAs represent roughly 30% of PCNs. They occur more commonly in women, and their peak incidence is in the seventh decade of life. The cyst lining is composed of a simple, glycogen-rich cuboidal epithelium. On imaging, these cysts appear as honeycomb-like microcysts, often with the presence of a "central scar" on computed tomography (CT) or magnetic resonance imaging (MRI). The tiny microcystic spaces may often coalesce, and the lesion can begin to appear as a solid mass-like structure (Figure 1). The most common location is in the body and tail of the pancreas. These lesions are usually benign, with a very low potential for malignant transformation, and thus they are typically managed conservatively unless the patient is symptomatic from the cyst (eg, abdominal pain). 3 However, cases of malignant transformation of a serous cystic lesion into a serous cystadenocarcinoma have been reported. In a retrospective review of 158 resection specimens of serous cystic pancreatic lesions from a single institution, one case of histologically confirmed malignancy was identified. 5 Also, three of these cases were classified as locally aggressive benign lesions, one of which later developed metachronous metastatic lesions. Additionally, a literature review of serous cystic lesions in 2009 indicated that an average lesion size of 10 cm was associated with carcinoma. 6 Therefore, consideration should be made to treat larger and locally advanced lesions aggressively.
A relationship between SCAs and von Hippel-Lindau (VHL) disease has been noted. In one study, a histopathological analysis of pancreatic cysts from nine VHL patients was performed. A total of 21 benign serous lesions, 63 microscopic microcystic (serous) adenomas, and 35 macroscopic microcystic (serous) adenomas were found. 7 All lesions displayed similar histology, and deoxyribonucleic acid (DNA) extracted from the cysts showed allelic deletions in the VHL gene. Another study showed evidence of VHL gene alterations not only in VHL-disease-associated cysts, but also in sporadic microcystic (serous) adenomas; thus implying that changes in the VHL tumor suppressor gene play an important role in the pathogenesis of these types of cysts, regardless of whether or not the individual has VHL disease. 8 MCNs comprise 10%-45% of PCNs, occur mostly in the female population, and are typically discovered in the fifth and sixth decades of life. The location is usually in the pancreatic body or tail. MCNs commonly exhibit macrocystic spaces with thin septations. The MCNs are histologically very similar to IPMNs, as both lesions have a mucin-producing epithelial lining. However, a distinguishing feature between MCNs and IPMNs is the characteristic histopathological dense
mesenchymal "ovarian-like stroma" seen in MCNs. Also, MCNs do not communicate with the pancreatic ductal system, as they develop out in the periphery of the gland (Figure 2). All MCNs have a risk for malignant transformation, and therefore resection is generally considered in individuals who are good surgical candidates depending upon their clinical risk factors. 3 The prevalence of invasive carcinoma in MCNs at the time of surgical resection varies from 6% to 36%. 4 However, some studies have not used ovarian-type stroma as a necessary criterion to distinguish MCNs from IPMNs, making these data difficult to interpret. In some studies, the prevalence of invasive carcinoma strictly in MCNs with ovarian-type stroma ranges from 6% to 27%. To avoid mistakenly classifying IPMNs as MCNs, which may have different clinical implications for the patient, the diagnosis of MCN should be limited to cysts containing ovarian-type stroma. 4 IPMNs represent approximately 21%-33% of PCNs. IPMNs occur with equal frequency in both men and women, commonly in the sixth and seventh decades of life, and more often in the head of the pancreas. The cyst lining consists of a mucin-secreting columnar epithelium. A key feature of IPMN is communication with the pancreatic ductal system. Diffuse or segmental dilatation of the main or branch pancreatic ducts may be seen. All IPMNs have malignant potential, and similar to MCN lesions, an algorithm for risk-stratification and management is of paramount importance (discussed below). IPMNs with adenomatous or borderline changes have been shown to have an excellent prognosis if resected; however, the prognosis is less favorable when findings of carcinoma in situ or invasive carcinoma are present.
3 IPMNs can be divided into three different subtypes: 1) main duct IPMN (MD-IPMN), involving dilation of the main pancreatic duct (MPD) only; 2) branch duct IPMN (BD-IPMN), involving cystic dilation of one of the ductal side-branches; and 3) mixed type, in which both the main duct and side-branch are involved in cystic dilation. These lesions are usually discovered on abdominal imaging studies. Segmental or diffuse dilatation of the MPD >5 mm, in the absence of other secondary causes such as chronic pancreatitis, is suggestive of MD-IPMN (Figure 3). Mucinous cysts communicating with the pancreatic ductal system without dilatation of the main duct are suggestive of BD-IPMN. Radiographically, they often appear as a "bunch of grapes" growing from the end of a pancreatic ductal side-branch (Figure 4). IPMNs which meet criteria for both MD- and BD-IPMN are categorized as mixed type, which can have varying degrees of involvement with both the main duct and branch ducts. 4,9 Definitive diagnosis of an IPMN is made based on the histology of resected cysts. Many BD-IPMNs exhibit some involvement with the main duct microscopically, and therefore grading these lesions in terms of the extent of main duct involvement may be a preferable approach, in contrast to categorizing all IPMNs as strictly MD-IPMN or BD-IPMN. 4,9 Distinguishing between these subtypes is important, as MD-IPMNs are at increased risk for malignant transformation compared with BD-IPMNs. The prevalence of malignancy in resected MD-IPMN lesions ranges from 57% to 92%, in contrast with 6%-46% for BD-IPMN lesions. 4 Interestingly, on cyst fluid DNA analysis studies of IPMNs, the histological grade of dysplasia increases with the frequency of mutations in the k-ras gene. 10 These findings suggest that k-ras gene mutations play a significant role in the process of carcinogenesis for these mucinous PCNs.
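The subtype definitions above (main duct dilation, a communicating branch-duct cyst, or both) can be summarized as a simple triage function. This is an illustrative reading of the review's imaging criteria only, not a clinical decision tool; the 5 mm threshold follows the text:

```python
def classify_ipmn(mpd_mm, branch_cyst_communicates):
    # Simplified subtype triage: MPD dilatation > 5 mm (absent other causes,
    # e.g. chronic pancreatitis) suggests main duct involvement; a mucinous
    # cyst communicating with the ductal system suggests branch duct disease;
    # both findings together define the mixed type.
    main_duct = mpd_mm > 5
    if main_duct and branch_cyst_communicates:
        return "mixed type"
    if main_duct:
        return "MD-IPMN"
    if branch_cyst_communicates:
        return "BD-IPMN"
    return "not IPMN by these imaging criteria"

print(classify_ipmn(8, False))  # -> MD-IPMN
print(classify_ipmn(3, True))   # -> BD-IPMN
print(classify_ipmn(9, True))   # -> mixed type
```

Keeping the mixed type as a distinct category matters clinically, since any main duct involvement carries the higher (57%-92%) malignancy risk cited above.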
SPNs represent less than 10% of PCNs, and they occur predominantly in younger women, with a peak incidence ranging from the second to fourth decades of life. 3 They are most commonly located in the body or tail of the pancreas, but the location is variable. Solid and cystic components may be present, as well as occasional calcifications within the cyst (Figure 5). Histologically, SPNs contain uniform cells with ovoid nuclei and eosinophilic granules, which are arranged in sheets. SPNs have a low potential for malignant transformation, but in general are considered to have a much higher risk for cancer compared with benign SCAs. These lesions have an excellent prognosis when completely resected, as the overall incidence of malignancy is estimated to be less than 15%. Metastasis to sites including the liver, peritoneum, and lymph
nodes has been reported. Even in malignant cases, prolonged survival has been shown in patients with residual disease after surgery, or in patients with unresectable tumors. 3,11 One additional less common class of PCNs is the cystic neuroendocrine neoplasm. These cysts represent less than 10% of PCNs, and they occur with equal distribution among men and women. They are most often seen in the fifth and sixth decades of life. Their malignancy potential is similar to that of solid neuroendocrine neoplasms. Appearance on cross-sectional imaging is variable. Cytology of these non-mucinous cysts shows small cells with scant cytoplasm and monomorphic nuclei with "salt-and-pepper" chromatin. 3 Cystic neuroendocrine neoplasms are typically nonfunctioning tumors that have undergone cystic degeneration and can be difficult to distinguish from other PCNs on imaging alone. One study found this cystic degenerative form represented 10% of all pancreatic neuroendocrine tumors. 12 Their location is most commonly in the body and tail of the pancreas. These lesions are generally indolent in behavior and carry a good prognosis; however, surgical resection should be considered in appropriate surgical candidates, given the premalignant nature of these lesions, particularly when they are larger than 2 cm in size. 12,13
Clinical presentation
Most pancreatic cysts are asymptomatic at the time of diagnosis, and they are often discovered incidentally when abdominal imaging is performed for evaluation of an unrelated problem. When PCNs are symptomatic, symptoms are typically the result of pancreaticobiliary duct obstruction. As such, clinical findings may include recurrent pancreatitis, chronic abdominal pain, or jaundice. Other nonspecific symptoms which may be present include nausea, vomiting, back pain, weight loss, or anorexia. The symptoms of advanced PCNs with malignant transformation may mimic those caused by pancreatic ductal adenocarcinoma (eg, jaundice, weight loss, and pain). Obstruction of the MPD (typically from mucin due to an IPMN, or compression of the duct by mass effect from the lesion) may present as acute or chronic pancreatitis (Figure 6). Development of exocrine and endocrine pancreatic insufficiency is not uncommon, due to atrophy of the distal gland downstream of the obstruction.
PCNs and small pancreatic pseudocysts are often mistaken for one another given their similar presentations and imaging characteristics. The clinical context is often needed to help differentiate between PCNs and pseudocysts, as the latter are more likely to develop in the setting of prior pancreatitis (either recent or remote past), and are typically associated with pain. 3,14 One study investigating 212 patients with pancreatic cystic lesions in a surgical practice showed that 36.7% of the patients were asymptomatic. 1 These asymptomatic cysts were more common in the elderly, smaller in size than symptomatic cysts, and less likely to be pseudocysts on final surgical pathology. Furthermore, greater than half of the asymptomatic cysts were found to be PCNs with dysplastic changes or malignant transformation. Another recent study followed 105 patients who underwent preoperative endoscopic ultrasound (EUS) for cyst evaluation. In this study, only 10% of patients were asymptomatic. Of the 70 patients with EUS cyst size less than 3 cm, 12 patients (17%) had a malignancy. 15 The above studies support the role for periodic surveillance in all patients with PCNs, regardless of the size of the lesion or presence of symptoms.
Diagnosis
Pancreatic cystic lesions are often initially detected on cross-sectional abdominal imaging such as CT or MRI. These modalities can help characterize the morphological features of pancreatic cysts such as cyst wall calcification, the presence of septae or mural nodules, and concurrent changes consistent with pancreatitis. In contrast to CT scan, MRI with magnetic resonance cholangiopancreatography (MRCP) serves to evaluate the pancreatic ductal system and often establishes the existence of ductal communication with a pancreatic cyst. According to a consensus of surveyed radiologists, the optimal procedure for evaluating pancreatic cystic lesions is a dedicated MRI-MRCP due to superior contrast resolution with better visualization of septae, nodules, and ductal communications. 3,9,16 In another study evaluating the accuracy of CT versus MRI-MRCP in the characterization of IPMN disease, ductal connection was found on 73% of MRCP scans and only 18% of CT scans. 17 CT scans overestimated MPD involvement when compared with MRCP and surgical pathology. MRCP identified multifocal disease in 72% of cases versus 50% on CT. Additionally, MRCP was superior in visualization of branch duct lesions. 17 Although PCNs often appear morphologically similar on cross-sectional imaging, particular characteristic features of specific PCN types can aid in diagnosis without further invasive testing. For example, the presence of a central scar seen on CT or MRI is highly suggestive of a serous lesion (SCA), and noted in roughly 20% of these cysts. For MCN lesions, the CT finding of peripheral "eggshell calcifications" (small calcifications on the periphery of macrocystic spaces within the lesion) is rare but strongly predictive of malignancy.
Newer types of CT imaging, such as high-resolution multi-slice helical imaging, can more accurately identify potentially malignant features of IPMNs, such as the presence of mural nodules and segmental or diffuse dilatation of the MPD >15 mm in diameter. Certain characteristics on CT, in combination with clinical history, can distinguish true cysts from pseudocysts. Pseudocysts are more likely to be associated with findings of chronic pancreatitis such as gland atrophy, ductal dilatation, parenchymal calcification, and calculi in the pancreatic duct. 3 When further evaluation of a PCN beyond routine cross-sectional imaging is required, EUS with fine-needle aspiration (FNA) may be utilized. EUS-FNA allows for better imaging characterization of cyst morphology (eg, the presence of a mural nodule or a solid component), and it enables aspiration of cyst fluid for further analysis. Despite its unique ability to obtain high quality pancreatic imaging from within the lumen of the upper gastrointestinal tract, previous studies have suggested a limited ability of EUS alone to distinguish between benign and early dysplastic or malignant lesions, especially if features of frank malignancy are not present. 3,16 Furthermore, the accurate interpretation of EUS findings is operator dependent and often varies from one endoscopist to another. One study investigated the ability of endosonographers to establish a diagnosis of a PCN, and also to determine the presence or absence of malignancy solely based on the EUS findings. 18 Poor to fair agreement was observed between endosonographers in both tasks. Another study evaluated the accuracy of preoperative imaging with CT, endoscopic retrograde cholangiopancreatography (ERCP), or EUS in the detection of invasive versus noninvasive IPMN and MCN lesions. 19 The overall accuracy for detecting invasion was less than 80% for all three diagnostic modalities.
ERCP enables inspection of the duodenal papilla for mucin extrusion, a finding which occurs in 20%-50% of main duct IPMNs, and is essentially pathognomonic for the disease. In addition, ERCP allows for the ability to perform pancreatography to assess communication of the cyst with the MPD. That said, with today's high quality (noninvasive) MRCP studies, there is little role for diagnostic ERCP in the work-up of a PCN. Due to the relatively high risk of post-procedural pancreatitis when performing ERCP of the pancreatic duct (20%-30%), this invasive procedure is most often reserved for patients in whom the diagnosis of main duct IPMN is highly suspected, and one is attempting to diagnose frank malignancy within the duct. The combination of intraductal pancreatoscopy with intraductal ultrasound at the time of ERCP can demonstrate malignancy with a high level of precision, and allow for accurate sampling of ductal nodules and other areas of concern. 3,4,16 These procedures are generally performed only at specialized centers by expert interventional endoscopists.
Compared with ERCP, EUS evaluation with FNA is a less invasive and safer endoscopic procedure for the diagnosis of a PCN. Cyst fluid is obtained to assess for the presence of mucin, cytological atypia, carcinoembryonic antigen (CEA), and amylase levels, and DNA for molecular analysis. Studies have shown varying success in the accurate diagnosis of PCNs using cytology alone. However, identifying certain cell types using FNA can help narrow the diagnosis in certain instances. For example, the finding of glycogen-rich cuboidal cells suggests a diagnosis of SCA in the appropriate clinical setting; or the aspiration of inflammatory cells, such as macrophages and neutrophils, usually is suggestive of a pseudocyst. 3
For the diagnosis of malignancy within PCNs, EUS-FNA with cytology alone is highly specific (approximately 90%); yet, the sensitivity of EUS-FNA for a malignant PCN may be as low as 40%-50%, with high false-negative rates.
In addition to cytology, the cyst fluid from PCNs can be used to measure levels of tumor markers and pancreatic enzymes (Table 2). CEA is a marker that has been shown to differentiate mucinous from non-mucinous cysts with 80% accuracy using levels >192 ng/mL as a threshold suggestive of a mucinous lesion. 3 However, a low CEA level (<192 ng/mL) does not fully exclude a mucinous cyst. Furthermore, CEA levels have not been shown to distinguish benign from malignant lesions. 9,16 The presence of amylase in cyst fluid may suggest a communication between the PCN and the pancreatic ductal system, typically characteristic of an IPMN. However, elevated amylase levels are also found in pseudocysts. Low amylase concentrations in cyst fluid are associated with noncommunicating lesions such as SCAs and MCNs. 3 Although the tumor marker carbohydrate antigen (CA) 19-9 has been shown to have an association with pancreatic adenocarcinoma, its presence in cyst fluid has not been helpful in distinguishing between mucinous and nonmucinous PCNs. 3 The presence of CA 72-4 in cyst fluid has, however, been shown to be indicative of a mucinous lesion. One study showed that elevated serum CA 19-9 in combination with elevated CA 72-4 in cyst fluid is associated with mucinous neoplasms and ductal adenocarcinomas, and these patients should be considered for resection. 3,20

Recent developments in DNA molecular analysis of cyst fluid have identified genes potentially associated with certain cyst types or PCNs. This can further help with the diagnosis of PCNs when cytological analysis is unrevealing due to scant cellularity in the cyst fluid, especially when a solid component is not present for sampling. A recent study showed that the presence of a k-ras gene mutation is diagnostic of a mucinous cyst. 21 Furthermore, cyst fluid demonstrating large amounts of DNA, high-amplitude mutations, or a mutational sequence of k-ras mutation followed by allelic loss ("loss of heterozygosity") is highly suspicious for malignancy. 9,21 Another recent study demonstrated that GNAS gene mutations were found in 66% of IPMNs, and either GNAS or k-ras mutations were present in 96% of IPMNs. 22 Other biomarkers that have been analyzed include micro-ribonucleic acid (miR). In one study, endoscopically acquired pancreatic cystic fluid was obtained from 38 patients who subsequently had surgical resection of the cystic lesion. 23 Levels of two specific miRs (miR-21 and miR-221) were found in higher concentrations in the malignant versus benign cystic lesions.
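As a toy illustration only, the fluid-marker heuristics above can be collected into a small rule set. The CEA cutoff (192 ng/mL) is the threshold quoted in this review; the amylase cutoff of 250 U/L is an arbitrary placeholder of ours, since the text gives no numeric value. The function name and wording are assumptions for illustration, and this sketch is obviously not a diagnostic tool.

```python
def interpret_cyst_fluid(cea_ng_ml: float, amylase_u_l: float,
                         amylase_cutoff: float = 250.0) -> list[str]:
    """Summarize the cyst-fluid heuristics discussed in the text.

    Illustration only: the amylase cutoff is a placeholder, and none of
    these rules distinguishes benign from malignant disease.
    """
    hints = []
    if cea_ng_ml > 192:  # threshold quoted in the review
        hints.append("CEA suggests a mucinous lesion (MCN or IPMN)")
    else:
        hints.append("low CEA does not fully exclude a mucinous lesion")
    if amylase_u_l >= amylase_cutoff:
        hints.append("elevated amylase: possible ductal communication "
                     "(IPMN or pseudocyst)")
    else:
        hints.append("low amylase: favors a noncommunicating lesion "
                     "(SCA or MCN)")
    return hints
```

Note that the two marker axes are deliberately independent here, mirroring the text: CEA speaks to mucinous versus non-mucinous, amylase to ductal communication, and neither speaks to malignancy.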
Overall, cyst fluid analysis is often complex. Interpretation of the results can be difficult and variable depending upon the type of lesion, and the amount of fluid able to be aspirated at the time of the EUS procedure. The information obtained from the cyst fluid is used in conjunction with the patient's clinical presentation, as well as specific cyst-related morphological features on imaging, in order to make an overall assessment in terms of the type of PCN present and its inherent risk of malignancy. With the development of new and emerging molecular markers, cyst fluid analysis is likely to become even more complex in its attempt to risk-stratify specific PCNs. However, in the opinion of these authors, current data do not support the widespread use of molecular analysis in cyst fluid interpretation due to low overall specificity and sensitivity of the tests. For example, a recent study characterized the performance of molecular analysis (DNA) in diagnosing mucinous lesions. 24 DNA analysis was performed on cyst fluid and compared with resection specimens. Molecular analysis had a sensitivity of 50% and specificity of 80% in identifying mucinous lesions. Diagnostic performance did increase when combined with CEA and cytology; however, the study clearly shows that currently available molecular analysis studies are insufficient when used alone. In routine clinical practice, we reserve molecular analysis of cyst fluid only for those select patients with "borderline" lesions in which we may be searching for more information to guide a patient toward, or away from, surgical resection (as opposed to continued surveillance).
Management of PCNs
The various diagnostic modalities discussed above can be useful for narrowing down the diagnosis of the exact type of PCN; however, definitive diagnosis is often difficult without supporting histological evidence (ie, by means of surgical resection). Given this diagnostic challenge, a more practical approach to risk-stratification has been suggested and outlined in a recent paper by Tanaka et al. 9 Within this publication, evidence-based guidelines were devised by expert physicians and surgeons of the International Association of Pancreatology (IAP). The guidelines risk-stratify
PCNs based on "worrisome features" and "high-risk stigmata" to determine management of these lesions based on their malignant potential. Furthermore, they outline the stepwise use of multiple imaging modalities when further work-up is required in order to determine whether a PCN is appropriate for surgical resection. The evidence-based "worrisome features" based on multiple imaging modalities include cyst size ≥3 cm, thickening or hyperenhancement of the cyst walls, MPD size of 5-9 mm, mural nodules, abrupt change in the MPD caliber (with distal pancreatic atrophy), and regional lymphadenopathy. Features on CT, MRI, or EUS such as an obstructed common bile duct in a patient with a lesion of the pancreatic head, an enhanced solid component to the cyst, and MPD size ≥10 mm are highly suspicious for malignancy and are thus termed "high-risk stigmata" (Table 3). 9 Patients who demonstrate "high-risk stigmata" should be strongly considered for surgical resection unless clinically contraindicated due to high operative risk. Furthermore, PCNs that cause symptoms (eg, abdominal pain, pancreatitis, and weight loss) often necessitate strong consideration for surgical resection, as the presence of symptoms has been shown to confer a higher risk of malignant transformation. 9 The following algorithm for risk-stratification and management of PCNs has been proposed: any PCN with features showing "high-risk stigmata" should be considered for surgical resection if no clinical contraindications exist. If "high-risk stigmata" are not present on noninvasive imaging studies, the next step is to assess for "worrisome features."
If "worrisome features" are suggested on CT or MRI, an EUS examination should be performed by an experienced endoscopist to assess for these features, looking specifically for the presence of a mural nodule, MPD abnormalities with gland atrophy, thickened or enhanced portions of the cyst wall, and the presence of undetected regional lymphadenopathy. In addition, EUS-FNA may be performed to obtain cyst fluid for cytological analysis which may be suspicious or positive for malignancy. Based on these findings, surgery may be considered.
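To make the stepwise logic concrete, here is a simplified Python sketch of the triage flow, using only the IAP figures quoted in this review (cyst size ≥3 cm is itself counted as a worrisome feature, and surveillance intervals scale with size). The function name and return strings are our own; this illustrates the decision flow only and omits symptoms, cyst type, and operative risk, so it is not clinical software.

```python
def triage_pcn(high_risk_stigmata: bool, worrisome_features: bool,
               cyst_size_cm: float) -> str:
    """Sketch of the stepwise IAP triage summarized in this review."""
    if high_risk_stigmata:
        # eg, obstructed bile duct, enhanced solid component, MPD >= 10 mm
        return "consider surgical resection (absent clinical contraindications)"
    if worrisome_features or cyst_size_cm >= 3:  # size >= 3 cm is itself worrisome
        return "EUS (+/- FNA) by an experienced endoscopist; surgery if confirmed"
    # No worrisome features: surveillance interval scales with cyst size.
    if cyst_size_cm >= 2:
        return "MRI or EUS every 3-6 months; consider resection if young and fit"
    if cyst_size_cm >= 1:
        return "CT or MRI annually; lengthen interval if stable"
    return "CT or MRI every 2-3 years"
```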
If none of these "worrisome features" are present, different intervals of surveillance should occur based on the cyst size. For cyst size 2-3 cm, patients should be evaluated with MRI or EUS as frequently as every 3-6 months, with consideration for resection in young, surgically fit individuals. For cyst size 1-2 cm, monitoring with CT or MRI annually may be considered, with lengthening of the surveillance interval if no changes in cyst features are present. For small lesions <1 cm in size, monitoring with CT or MRI can be performed every 2-3 years. 9

Management of suspected IPMN lesions differs depending on the origination of the cystic lesion within the main duct or the branch ducts. According to a published series, main duct IPMNs have a mean frequency of malignancy of 61.6%, and a mean frequency of invasive disease of 43.1%. 4,9 In one series, the three factors most predictive of malignancy in MD-IPMN and mixed-type IPMN lesions were the presence of symptoms, mural nodules, and MPD diameter 15 mm or greater. 4,25 Another large study of resected MD-IPMNs showed malignancy was associated with older age (at least 6 years older than their benign counterparts), as well as the presence of jaundice or worsening diabetes at presentation. 4,26 Several patients, however, in both series had no obvious clinical or radiographic predictors of advanced disease, yet were found to have malignancy at the time of resection. Given the high prevalence of malignancy in MD-IPMN lesions, it is inferred that most MD-IPMNs go on to progress to malignancy. Coupled with the low overall 5-year survival rates following surgical resection (31%-54%), guidelines suggest removal of all MD-IPMNs in surgically fit patients with the aim of complete resection of the cyst with negative margins. Long-term follow-up of patients with resected noninvasive MD-IPMNs has shown good long-term survival rates. Resection of invasive IPMNs results in a 5-year survival rate ranging from 36% to 60%. 4,9

The management of branch duct IPMNs is less clear and depends greatly on the clinical context. The malignant potential of a BD-IPMN is less than that of MD-IPMN, with a mean frequency of malignancy of 25.5%, and a mean frequency of invasive cancer of 17.7%. 9
One study showed that BD-IPMNs <30 mm without mural nodules are highly unlikely to be malignant. 27 These patients were followed for 33 months, and the majority remained asymptomatic without progression to advanced disease. In addition to size and the presence of mural nodules, a rapid rate of cyst growth is another high-risk factor. 4,9 One study investigated BD-IPMNs <30 mm and without mural nodules. 28 During follow-up, 17.4% of the patients underwent resection, and the malignant cysts had grown by a greater percentage (69.8% versus 19.4%), and at a greater rate (4.1 mm versus 1.0 mm per year), when compared with the nonmalignant cysts. Overall, a cyst growth rate of more than 2 mm/year was associated with a higher risk of malignancy. 28 In addition, high-grade cellular atypia on EUS-FNA results, as opposed to "positive cytology," was also found to be a high-risk factor for malignancy in BD-IPMNs. 9 The results of these studies and many others are the basis for the algorithmic guidelines suggested by Tanaka et al with regard to the management of BD-IPMN lesions. 4,9 Given the lower risk for malignancy when compared with MD-IPMN lesions, conservative management with periodic surveillance is reasonable, particularly in older patients and those without worrisome features.
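The growth-rate criterion is simple arithmetic; a short sketch follows (the function name and example measurements are ours, not taken from the cited study):

```python
def growth_rate_mm_per_year(baseline_mm: float, followup_mm: float,
                            months_elapsed: float) -> float:
    """Annualized cyst growth rate between two measurements."""
    return (followup_mm - baseline_mm) / (months_elapsed / 12.0)

# A hypothetical cyst growing from 14 mm to 17 mm over 12 months grows at
# 3.0 mm/year, above the >2 mm/year rate associated with higher malignancy
# risk in the BD-IPMN study discussed above.
rate = growth_rate_mm_per_year(14.0, 17.0, 12.0)  # 3.0 mm/year
high_risk_growth = rate > 2.0                     # True
```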
MCNs have an overall prevalence of invasive carcinoma of less than 15%. 4,9 Malignancy is usually absent in MCNs <3-4 cm in size. Since these lesions commonly occur in the body and tail of the pancreas, surgical resection is often less invasive, since a distal pancreatectomy may be performed as opposed to pancreaticoduodenectomy. In younger patients with cysts located in the distal pancreas, surgery should be strongly considered in the presence of any worrisome features. The prognosis for patients undergoing resection of MCNs prior to the development of invasive disease is excellent. Once an MCN develops into a mucinous cystadenocarcinoma, resectability is difficult, which leads to poor prognosis. 3,4,9

As described above, SCAs have an extremely low potential for malignant transformation. If the work-up of a PCN is strongly suggestive of an SCA, and the patient has no symptoms due to the cyst, these lesions can be managed conservatively with observation. Typically, repeat imaging is only needed if symptoms develop. If the diagnosis is unclear, or if the SCA causes symptoms (usually seen with SCA size >4 cm 29 ), surgical resection should be considered. 3,16

SPNs have a low, but significant, potential for malignancy. One study investigated a group of patients who underwent resection for pathologically confirmed SPNs. A total of 15% of the resected SPNs were malignant, without corresponding preoperative features predictive of malignancy. 30 Given the low-grade potential for malignancy in SPN lesions, and the high cure rate if completely resected, the threshold for surgical resection in appropriate patients should be low; especially since most of these patients are young women below the age of 45. 3,16

The type of resection depends on the location of the PCN and the extent of involvement. Limited resections can be considered for MCNs and BD-IPMNs without findings suspicious for malignancy or invasion.
However, limited resections are technically difficult and associated with complications such as leaks and positive margins. Therefore, limited pancreatectomy should only be performed if negative margins can be definitively obtained. 4 The standard treatment of any PCN with an invasive component is pancreaticoduodenectomy, distal pancreatectomy, or total pancreatectomy. In recent years, the mortality rate for pancreatic resection has fallen to less than 2% at high-volume centers. MCNs are typically located in the tail of the pancreas and can be resected using a distal pancreatectomy (with or without splenectomy). IPMNs are frequently located in the head and may require pancreaticoduodenectomy, or total pancreatectomy if more extensive involvement of the ductal system is discovered intraoperatively. 3

Recent studies have examined nonsurgical methods for treating PCNs. EUS-guided mucosal ablation by ethanol injection into the cyst cavity is a novel technique that has been recently investigated. Ethanol induces cell membrane lysis and protein denaturation, which results in coagulative necrosis. Typical candidates for this investigative approach have been patients who are poor surgical candidates with worrisome-appearing cysts that lack communication with the MPD (so as not to inject alcohol directly into the pancreatic ductal system). The initial pilot study performed by Gan et al 31 showed that ethanol ablation is safe and feasible, and a subset of patients (8 of 23 on follow-up) experienced complete resolution of the cyst. A follow-up study by DeWitt et al 32 showed that EUS-guided ethanol lavage resulted in a greater decrease in cyst size compared with saline lavage. Follow-up using CT surveillance revealed no cyst recurrence for a median of 26 months, and the percentage of complete pancreatic cyst ablation was 33%. Studies have also investigated EUS-guided ethanol ablation followed by local injection of paclitaxel. 33 Overall, these findings are promising and may present an alternative therapy for patients unwilling or unfit for surgery. However, it should be noted that complications such as pain and pancreatitis are relatively common. More research at high-volume centers is needed before EUS-guided ethanol ablation can be recommended to patients over surgical resection.
Several studies have investigated current trends in the evaluation and management of PCNs among physicians. In one recent study, a comparison of practice habits and awareness of consensus guidelines was examined between general gastroenterologists and surgeons, and a specialist group of EUS experts. 34 Awareness of the existence of published guidelines for the diagnosis and management of PCNs was less common in the general group than the specialist group (64% versus 33% unaware, respectively). The American Society for Gastrointestinal Endoscopy guidelines were more commonly recognized by both groups than the IAP guidelines. Both groups demonstrated only moderate consistency in applying the published recommendations to their clinical practice.
Follow-up
Data for surveillance intervals of IPMNs and other PCNs are limited and depend largely on clinical judgment and the perceived risk for malignancy, comorbidities, and patient preference. Surveillance of non-resected IPMNs with EUS or MRI at appropriate intervals based on cyst size and other features as dictated by the IAP guidelines has been discussed above. 9 Interval surveillance imaging after IPMN resection is strongly recommended, as multifocal disease is common and additional lesions may develop in the remnant pancreas. Recurrence rates of new IPMN lesions following resection range from 0%-20%. 4,9,35 In patients with noninvasive disease that was completely resected, at least an annual examination of the remnant pancreas with MRI or EUS is encouraged. However, the risk of developing invasive disease in another IPMN lesion within the gland appears to be very low. 35 Surveillance for invasive IPMNs after resection should mimic follow-up guidelines for pancreatic ductal adenocarcinoma. For MCNs, given the nearly 100% cure rate following resection of noninvasive lesions, continued surveillance is unnecessary in most cases. Malignant MCNs should be followed frequently at 6-12 month intervals with either CT or MRI. 4,9 Data for surveillance guidelines for the other PCN types are limited, and surveillance should be considered on an individual basis.
Summary
The increasing discovery of PCNs is largely due to the widespread use of new, cross-sectional imaging techniques. Physicians and surgeons need to be aware of the different types of pancreatic cysts so that a determination may be made regarding the potential for malignant transformation. Appropriate evaluation of a possible PCN includes a multidisciplinary approach among abdominal radiologists, gastroenterologists with a special expertise in EUS, and pancreatic surgeons. Updated published guidelines exist to help providers recognize higher risk lesions, and provide recommendations in terms of surveillance strategies and the need for possible pancreatic resection. Much is still unknown about PCNs, yet our knowledge on risk-stratification, optimal surveillance intervals, and post-surgical management is rapidly increasing.
Disclosure
The authors have no conflicts of interest in this work.
Impact of Family and Social Network on Tobacco Cessation Amongst Cancer Patients
Continued smoking after a cancer diagnosis adversely affects outcomes, including recurrence of the primary cancer and/or the development of second primary cancers. Despite this, prevalence of smoking is high in cancer survivors and higher in survivors of tobacco-related cancers. The diagnosis of cancer provides a teachable moment, and social networks, such as family, friends, and social groups, seem to play a significant role in smoking habits of cancer patients. Interventions that involve members of patients’ social network, especially those who also smoke, might improve tobacco cessation rates. Very few studies have been conducted to evaluate and target patients’ social networks. Yet, many studies have demonstrated that cancer survivors who received higher levels of social support were less likely to be current smokers. Clinicians should be doing as much as they can to encourage smoking cessation in both patients and relevant family members. Research aimed at influencing smoking behavioral change in the entire family is needed to increase cessation intervention success rate, which can ultimately improve the health and longevity of patients as well as their family members.
Many cancers are directly related to smoking, such as lung, head and neck, bladder, cervix, esophageal, kidney, and pancreatic cancers, accounting for 40% of all cancer-related deaths. 1 Smoking is the most preventable cause of cancer deaths. Continued smoking after cancer diagnosis has been associated with poorer quality of life and psychosocial status. 2,3 It has also been shown to adversely affect outcomes by increasing the risk for treatment complications, recurrence, and second primary cancers. 4,5 On the contrary, smoking cessation results in improved outcomes with surgery, radiation, and systemic therapy. 5-7 A systematic review of smoking cessation on early-stage lung cancer prognosis found that five-year survival rates in 65-year-old patients were 33% in continued smokers compared with 70% in those who stopped, thereby highlighting the importance of smoking cessation amongst cancer patients. 6 In addition, in those with early-stage primary lung cancer, results showed that continued smoking was associated with a significantly increased risk of all-cause mortality (HR = 2.94, 95% CI 1.15-7.54) and recurrence (HR = 1.86, 95% CI 1.01-3.41). 6

Despite these known facts, continued smoking after diagnosis of cancer is as prevalent in cancer patients as in the general population. 8 Continued smoking is even more prevalent in those with tobacco-related malignancies than others. 9 In one study, smoking prevalence was 27% amongst tobacco-related malignancies compared to other cancer survivors (16%) and those without cancer (18%). 10 In one large retrospective study using data from the Korean National Health Insurance Service, 51.6% continued to smoke after cancer diagnosis. 11 Paul found in a cohort of 1444 people that only 37% of the self-reported smokers at diagnosis had quit six months post-diagnosis. 12 In another study, 43.96% of cancer survivors successfully quit smoking at cancer diagnosis. 13 Even amongst cancer survivors who quit smoking after diagnosis, relapse rates are quite high. Studies have reported relapse rates of 50-80% in survivors. 8,14,15 Higher abstinence rates have been seen in patients who received cessation intervention within 3 months of their cancer diagnosis; starting cessation treatment as soon as possible is thought to be critical. 7,16 Diagnosis of cancer provides a teachable moment and a window of opportunity to initiate successful smoking cessation intervention. 17,18 Understanding the challenges and barriers particularly during this early phase of cancer diagnosis is of paramount importance. In this commentary, we focus on two of the frequently cited influences, which are living amongst smokers and the impact of social networks on smoking cessation. 19-21

Decisions to quit smoking have been shown to be strongly influenced by social factors, such as friends, family, and social groups. In one study, former smokers living in a smoke-free home had 60% lower odds of relapse compared with those living in homes that allowed smoking (adjusted OR = .40; 95% CI .25-.64). 22 A large prospective study focusing on quit rates containing 53,650 current female smokers in 2001 reported their smoking status 4 years later. 23 The study reported that smokers who were partnered (i.e., cohabitating with someone and/or married) were more likely to quit (OR = 1.13, 99% CI 1.06-1.19) and those who had a non-smoking partner throughout the 4 years were even more likely to quit (OR = 2.01, 99% CI 1.86-2.17). Furthermore, those who had a partner who smoked at baseline and quit during the 4 years had an even higher likelihood to quit (OR = 6.00, 99% CI 5.41-6.67). Social influence is important in socially disadvantaged adults as well. In a study exploring socioeconomically disadvantaged adults, participants were more likely to smoke on days when offered a cigarette compared to days when no such event occurred (OR = 3.3, 95% CI 1.21-9.06). 24

Patients who are diagnosed with cancer face similar challenges, and targeting relatives of cancer patients is a particularly interesting focus since smoking often occurs in social groups, including within family clusters which influence its members through modeling effects and shared social environments. 25 In addition, family members of patients with smoking-related cancers may have higher risk for developing cancers and other smoking-related diseases compared to the general population. 26-28 Very few studies have been conducted to evaluate and target patients' social networks. Wells et al. 21 found that smoking cessation support amongst patients with cancer and their relatives is insufficiently integrated into the care pathway. Although patients diagnosed with cancer are often advised to stop smoking, little attention has been directed to reduce tobacco use amongst their social support system. 29 Compared to non-cancer diagnoses, it has been proposed that life-threatening health events create a "teachable moment" where relatives may be more receptive to smoking cessation interventions. 30 However, social support is often not considered in smoking cessation programs and few programs have been designed with relatives in mind, and those that have been piloted have had mixed results, mainly because of ambiguous study methods and inability to complete the study as planned. 31 Some of these studies and their pitfalls are detailed below.
A study by Poghosyan et al. 32 found that cancer survivors who received higher levels of social support were less likely to be current smokers than those who received lower levels of social support. However, this study did not specify the details of social support. Perceived social support, as measured by the Duke-UNC Functional Social Support Questionnaire, was also found to be positively correlated with smoking cessation in cancer patients in a nationwide, multicenter survey conducted with 493 participants who were smoking at the time of cancer diagnosis. 33 This study also did not specify the details of social support aside from what was measured in the Duke-UNC Questionnaire. The review article by Ehrenzeller et al. 34 identified significant variables among survivors who continued to smoke vs those who successfully quit after a cancer diagnosis. The authors found that survivors who are younger, female, without a partner, and with less self-reported socioeconomic and psychosocial support may be at greater risk for continued smoking. These variables highlight the importance of psychosocial support as a modifiable factor that contributes to continued smoking. However, again, this study did not evaluate smoking habits of those in the social network. Nevertheless, all these studies highlight the impact of social network and support in successful cessation programs. Additionally, some studies have shown that cognitive behavioral therapy and peer counseling can be beneficial, again highlighting the importance of psychosocial support. Simmons et al. 35 performed a study with 412 newly diagnosed cancer patients and randomized them to usual care (UC) or a smoking-relapse prevention (SRP) program. It revealed that at the 2- and 6-month time points, patients who were married or partnered were more likely to be abstinent after SRP than UC (P = .03).
Fewer studies have evaluated the impact of cancer diagnosis on relatives and friends of cancer patients. Schnoll et al. 36 explored how a cancer diagnosis can be a teachable moment for smokers and an opportunity for treating nicotine dependence among patients' relatives. The authors recruited 234 relatives and found that oncology patients' relatives were significantly more likely to enroll in a smoking cessation program compared to a control group of non-cancer orthopedic relatives (75% vs 60%; OR = 1.96, 95% CI 1.07-3.61, P = .03). However, the oncology relatives were not more likely to remain in a cessation program (61% vs 52%; P > .05) or quit smoking (19% vs 26%; P > .05). This study demonstrated that cancer diagnosis of relatives is a teachable moment. However, it also identified challenges in maintaining smoking abstinence, such as levels of psychological distress, nicotine patch adherence, and perceptions of benefits related to smoking, that are involved in successfully engaging relatives of smokers in a smoking cessation program. Providing continued support for smokers and support systems of smokers who initially quit smoking following a cancer diagnosis could have a meaningful impact to decrease smoking recurrence.

Additional barriers to smoking cessation amongst family members of cancer patients include increased stress experienced following a diagnosis; a desire to maintain personal control and a sense of "normal" self; lack of belief in or acceptance of the connection between smoking, cancer, and health; and lack of meaningful discussions with health professionals about smoking. 21 One study indicated that family members are clearly affected by a cancer diagnosis; however, it did not serve as a completely effective impetus for close family members to quit or reduce smoking. 22 In a small study of 14 families, lack of smoking cessation was attributed to distancing oneself from the diagnosis and belief that quitting is an individual choice. 29 That study highlighted the importance of taking family dynamics, gender roles, and self-identities into account when designing interventions.
A relatively recent study which focused on family dynamics, though not cancer-related, is worthy of mention. This study was presented at the European Association of Preventive Cardiology meeting in April 2019. It reported a six-fold increased chance of successful smoking cessation when married and cohabiting couples participated in a smoking cessation program together compared with those who did it alone. 37 In another study, conducted at the UNC Tobacco Treatment Program, investigators examined the feasibility of implementing a family systems approach to quitting. It reported a higher quit rate at six-month follow-up for patients with family integration (28%, N = 56/200) compared to those without (23%, N = 67/291), although the difference did not reach statistical significance (P = .105). 38 Other studies found that among married couples, when one spouse stopped smoking the other spouse was 67% less likely to continue smoking. 39 Further lending support to interventions targeting the patient-family unit, Bottorff et al. 40 found that family members of patients with lung cancer diagnosis often continued to smoke, creating friction and distress between cancer patients and their families, and highlighting the importance of studying interventions that have worked. In this report, the cancer patients failed to confront family to quit, desiring instead to maintain harmony and connections rather than risk relationships. Examining family dynamics and supporting family programs could help create productive dialogue about the importance of families having a united goal.
Additional studies have explored other social support constructs. Westmaas et al. recommended focusing research on community-level or population-level factors, such as smoking restrictions, advertising, support groups, and individual counseling. In those authors' view, these provide emotional, informational, and instrumental support, although studies to date have failed to show definitive benefits. They conclude that other social support constructs, including internet and electronic technologies (e.g., text messaging, email, social networking), can tailor individual cessation treatment to each patient's unique profile. 41 In summary, the prevalence of smoking is high in cancer survivors, and higher still in survivors of tobacco-related cancers.
Social networks seem to play a significant role in the smoking habits of cancer patients. Interventions that involve members of patients' social networks, especially those who also smoke, might improve tobacco cessation rates, and studies of this nature may also benefit members of those networks who smoke. Since family members who quit together have been shown to have more success, clinicians should be encouraged to target both those with a cancer diagnosis and their family members who smoke, so that they may help each other. More research is needed to find better ways to influence smoking behavioral change in the entire family, including bespoke smoking cessation interventions that can ultimately improve the health and longevity of patients as well as their family members.
Author Contributions
MN was the main author of this article and wrote the initial draft and revisions. NS and NM provided important feedback and edits. | 2021-11-21T06:16:32.207Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "9c55c6151361116ce778375d340561f4d2e3f65f",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/10732748211056691",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "02d2c8465e66694142842443be9270ae6b721cfd",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235293817 | pes2o/s2orc | v3-fos-license | Low-Light Shadow Imaging using Quantum-Noise Detection with a Camera
We experimentally demonstrate an imaging technique based on quantum noise modification after interaction with an opaque object. By using a homodyne-like detection scheme, we eliminate the detrimental effect of the camera's dark noise, making this approach particularly attractive for imaging scenarios that require weak illumination. Here, we reconstruct the image of an object illuminated with a squeezed vacuum using a total of 800 photons, utilizing less than one photon per frame on average.
Quantum imaging [1-5] is capable of outperforming classical alternatives because it can exploit nonclassical correlations in the probing optical fields. Several quantum-enhanced imaging methods have been developed and proved useful for biological imaging [6,7] and for imaging in the presence of contaminating classical background illumination [8,9]. When imaging in the low-photon regime, it can be difficult to implement direct intensity detection, because the accuracy of such detection is limited by the photon statistics and by technical noise, such as laser intensity fluctuations or the detector dark noise, and it normally requires a long exposure time to allow for statistical averaging.
We experimentally demonstrate an imaging technique based on detecting the quantum noise distribution of the quadrature-squeezed vacuum before and after it interacts with an opaque object. Our homodyne-like detection scheme allows elimination of the detrimental effects of the camera's dark noise and, potentially, is immune to the classical background illumination while keeping the probing intensity low. This approach is particularly attractive for applications requiring weak illumination since the squeezed vacuum inherently has very few photons illuminating the object.
Many recent realizations of quantum imaging use two-mode optical fields with correlated intensity fluctuations, generated either through parametric down-conversion [10-13] or four-wave mixing in an atomic vapor [14-18]. When an object is placed in one of the optical beams, its shape can be imaged with sub-shot-noise accuracy by subtracting the intensity images of the two quantum-correlated beams [19]. However, the average intensity of each beam limits the acceptable level of the dark noise. Compared with typical photon-counting detectors, CCD cameras often present a challenge for imaging weak optical fields due to their relatively slow frame rate (making it harder to mitigate low-frequency technical noise) and their intrinsic dark noise [18,20].
FIG. 1. A conceptual representation of the proposed quantum shadow imaging using a "+" as the target. The quantum shadow method uses the average quantum fluctuations of the probe and reference fields, amplified by a local oscillator, and is therefore not susceptible to the camera's dark noise. The quantum shadow probe map is on a linear scale. For comparison, we show a classical intensity image of the "+" target illuminated with a bright beam on a log scale. The "+" is about 475 µm in width.

Our approach is different. Instead of using quantum
enhanced or quantum-correlated intensity measurements, our measurements are based on an analysis of the quantum quadrature variance. We use a quadrature-squeezed vacuum field [21-24], containing very few photons on average; when such a field interacts with an opaque object, its quantum fluctuations in the obstructed zone are replaced with regular vacuum fluctuations. To record the spatial distribution of the resulting quadrature noise without being affected by the camera dark noise, we mix the quantum probe with a classical local oscillator field. This amplifies the probe's quantum noise, realizing a camera-based balanced homodyne detection scheme. Our approach allows us to image fields with as little as one photon per frame and yet obtain spatial details of the object with significantly less acquisition time, making it attractive, e.g., for non-destructive imaging of biological samples [25]. Moreover, in such a method we can use the anti-squeezed quadrature, increasing the tolerance to optical losses. The concept of the proposed method is illustrated in Fig. 1. A CCD camera detects the number of photons incident on each pixel, N, on top of its internal dark noise N_d. For a standard intensity measurement, the boundary between a fully illuminated region (average photocounts N̄ + N̄_d) and a fully blocked region (average photocounts N̄_d) can be distinguished by the difference between these two photocount values. Moreover, we can estimate the signal-to-noise ratio of such a traditional measurement as SNR_int = N̄ / sqrt(N̄ + (ΔN_d)²), where N̄ is the average photon number detected per pixel (or bin), and ΔN_d is the standard deviation of the dark-noise counts. We propose instead to measure the normalized variance, V, of the quadrature X_θ = cos(θ)X_1 + sin(θ)X_2, where X_1 = a + a†, X_2 = i(a† − a), and a† (a) is the creation (annihilation) operator for the photon state. 
In this case, a similar boundary between the light and the darkness can be detected via the deviation of the noise variance in the region illuminated by the quantum probe from unity, the noise variance of the coherent vacuum (shot noise). This method does not work for a coherent illuminating state, because its quadrature variance is unchanged by loss.
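The claim that a coherent state cannot cast a quantum shadow (its quadrature variance stays at the shot-noise level under loss) can be checked with a simple Gaussian sampling model; this is an illustration under a standard beam-splitter loss model (V → ηV + 1 − η), not part of the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, eta = 200_000, 0.8, 0.5   # samples, squeezing parameter, transmission

# Quadrature samples: a coherent vacuum has unit variance; the
# anti-squeezed quadrature of a squeezed vacuum has variance e^{2r}.
coherent = rng.normal(0.0, 1.0, n)
anti_sq = rng.normal(0.0, np.exp(r), n)

def apply_loss(x, eta, rng):
    # Beam-splitter loss: transmit amplitude sqrt(eta) and admix vacuum
    # noise with weight sqrt(1 - eta), so V -> eta*V + (1 - eta).
    return np.sqrt(eta) * x + np.sqrt(1 - eta) * rng.normal(0.0, 1.0, len(x))

V_coh = np.var(apply_loss(coherent, eta, rng))  # stays ~1 (shot noise)
V_sq = np.var(apply_loss(anti_sq, eta, rng))    # ~ eta*e^{2r} + (1 - eta)
```

The lossy squeezed-vacuum variance moves toward, but remains distinguishable from, the shot-noise level, while the coherent state is indistinguishable from vacuum at any transmission.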
For example, if an experiment uses a squeezed vacuum with squeezing parameter r, the expected variance values for the squeezed and anti-squeezed quadratures are V = e^(∓2r), respectively. We can also estimate the noise of such measurements by calculating the variance of the corresponding variance estimates for such a squeezed vacuum field, yielding a theoretical signal-to-noise ratio for the variance measurement. Note that for this calculation we can neglect the camera dark noise thanks to the homodyne detection. As a result, we can compare the performance of the two approaches as the ratio of the two signal-to-noise values for an anti-squeezed vacuum field and a coherent beam with a similar average number of photons, N̄ = sinh²(r) ≪ 1. It is easy to see that in the limit of small photon number N̄ ≪ 1, the two methods perform equally well in the case of vanishing dark noise; however, if the dark noise becomes comparable with the average photon number, the advantage of the quantum noise-based measurement becomes obvious.
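Why the dark noise drops out of the variance measurement can be seen in a toy model of balanced homodyne detection, where the bright local oscillator amplifies the probe quadrature so that the difference signal is approximately sqrt(N_LO)·X; the numbers below (LO intensity, dark-noise level) are illustrative assumptions, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(1)
frames, r = 50_000, 0.8
N_lo, dark_sd = 1e6, 10.0   # bright LO (counts), camera dark noise per detector

# Ideal balanced homodyne: N1 - N2 ~ sqrt(N_lo) * X, with X the
# anti-squeezed quadrature of the probe (variance e^{2r}).
X = rng.normal(0.0, np.exp(r), frames)
diff = np.sqrt(N_lo) * X
# Each camera region adds its own (unamplified) dark noise.
diff += rng.normal(0, dark_sd, frames) + rng.normal(0, dark_sd, frames)

# Normalize to the LO shot noise; the dark-noise contribution scales as
# 2*dark_sd^2 / N_lo and is negligible for a bright LO.
V = np.var(diff) / N_lo
```

The recovered normalized variance sits at e^{2r} despite a dark noise far larger than the (sub-photon) probe intensity, which is the regime of the experiment.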
With our method, we can produce a quality transmission map from the quantum noise measurements and avoid the detrimental effect of the dark noise. We connect the measured field variance V(x) to the object transmission T(x) (see Ref. [26] and the supplementary materials for detailed derivations) through V(x) = 1 + T(x)|O(x)|²(e^(±2r) − 1), where O(x) = ∫_A u_LO u*_SqV dA is the overlap between the spatial modes of the local oscillator, u_LO, and the squeezed vacuum mode, u_SqV, and A is the pixel at location x. For the reference beam, where the object is removed, we assume T = 1 everywhere. For a mode-matched local oscillator and quantum probe, we arrive at the following expression for the transmission map in terms of the measured quadrature noise variances V_p and V_r in the probe and reference beams, respectively: T(x) = (V_p(x) − 1)/(V_r(x) − 1). Note that our method of transmission calculation is agnostic to the choice of the squeezed or anti-squeezed quadrature. In this experiment, we work with the anti-squeezed quadrature, since it is more robust to optical losses.
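The recovery of the transmission map from the probe and reference variances V_p and V_r can be sketched as follows; the ratio T = (V_p − 1)/(V_r − 1) is our reading of the relation implied by Eq. (4) with T = 1 in the reference beam, and the synthetic parameters (overlap, squeezing) are illustrative.

```python
import numpy as np

def transmission_map(V_probe, V_ref):
    """Recover T(x) = (V_p - 1) / (V_r - 1) from normalized variance maps,
    assuming V = 1 + T |O|^2 (e^{2r} - 1) with T = 1 in the reference."""
    return (V_probe - 1.0) / (V_ref - 1.0)

# Synthetic check: half-blocked object, anti-squeezing e^{2r} = 5,
# uniform mode overlap |O|^2 = 0.8.
T_true = np.ones((4, 4))
T_true[:, :2] = 0.0
V_ref = 1.0 + 0.8 * (5.0 - 1.0) * np.ones_like(T_true)
V_probe = 1.0 + T_true * 0.8 * (5.0 - 1.0)
T_est = transmission_map(V_probe, V_ref)
```

Because the overlap factor |O|² is common to probe and reference, it cancels in the ratio, which is why imperfect mode overlap reduces contrast but not the recovered transmission values.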
A schematic of the experimental realization of the proposed method is shown in Fig. 2a. While the specific method of squeezed vacuum generation is not important, in the presented experiments we use a squeezer based on polarization self-rotation in a 87 Rb vapor cell [21,22], details of which are reported in Refs. [23,24]. The principal difference from the previous experimental arrangement is the pulsed squeezer operation. To avoid camera over-exposure, the pump field is turned on for only 1 µs during the 544 µs duty cycle using an acousto-optical modulator (AOM). Right after the squeezer, we detect 1.5 dB of squeezing and 10 dB of anti-squeezing, and these parameters are not affected by the pulsed operation. Due to optical losses, after the imaging system we detect (with homodyning photodiodes) only 0.5 dB of squeezing and 7.5 dB of anti-squeezing.
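The quoted noise levels convert between dB and normalized variance via V = 10^(dB/10), and the drop from 10 dB to 7.5 dB of anti-squeezing implies an effective transmission under a pure beam-splitter loss model (V → ηV + 1 − η). This is only a rough consistency sketch: the squeezed quadrature would imply a somewhat different η if excess noise is present.

```python
import math

def db_to_var(db):
    """Noise level in dB relative to shot noise -> normalized variance
    (use negative dB for squeezing, e.g. -1.5 dB -> V ~ 0.71)."""
    return 10 ** (db / 10)

def infer_loss(V_in, V_out):
    """Effective transmission eta from V_out = eta*V_in + (1 - eta)."""
    return (V_out - 1.0) / (V_in - 1.0)

V_before = db_to_var(10.0)   # 10 dB anti-squeezing at the squeezer
V_after = db_to_var(7.5)     # 7.5 dB after the imaging system
eta = infer_loss(V_before, V_after)   # ~0.51 effective transmission
```

Applying the same η to the 1.5 dB squeezed quadrature predicts roughly 0.7 dB of residual squeezing, in reasonable agreement with the observed 0.5 dB.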
FIG. 2. a) Experimental setup with two different detection schemes: traditional homodyne and camera. SqV denotes the squeezed vacuum, LO the local oscillator, PR a phase retarder, AOM an acousto-optical modulator, and PBD a polarizing beam displacer. Objects may be placed in the path of the squeezed vacuum, where lenses L1 (300 mm) and L2 (250 mm) map the object image onto the camera. PDs are photodiodes, SA is a spectrum analyzer, and the camera is connected to a computer. b) Visual illustration of our data analysis.

After the squeezer, the pump and squeezed vacuum (SqV) fields are physically separated using a polarizing beam displacer (PBD). The SqV alone passes through the object and then recombines with an attenuated pump field, which now serves as the local oscillator (LO) in the balanced homodyne scheme for imaging. We image the object onto the camera using a 4-f system of lenses (see L1 and L2 in Fig. 2a). We obtain quantum-limited statistics from images of the two beams using a Princeton Pixis 1024 camera with 13 µm × 13 µm pixels, an average dark-noise standard deviation of 10 counts per pixel, and high quantum efficiency (above 95%), cooled to −70 °C. We illuminate our object with an average of 6 × 10⁻⁵ photons per pixel per frame, so we are in the regime where the dark noise is significantly larger than the photon number; hence, our quantum method has an advantage according to Eq. 3. This camera can only rapidly capture four frames before having to pause for half a second for data transfer. Thus, we collect four frames, separated by 544 µs (synchronized with the pulsed laser), that form "kinetic clusters". To extract the information about the quantum noise variance, we subtract the intensities of the two beams after the final beam splitter (labeled "beam 1" and "beam 2" in Fig. 2b) to create an amplified noise map, a 2D analog of
the differential photo-currents in a traditional homodyne detection scheme. Next, we calculate the image of the experimental quantum variance V^(R)_exp(x, y), normalized to the shot noise and temporally averaged over a given kinetic cluster: V^(R)_exp(x, y) = ⟨[N^(R)_1(x, y) − N^(R)_2(x, y)]²⟩ / ⟨N^(R)_1(x, y) + N^(R)_2(x, y)⟩, where the average is taken within the four frames of each kinetic cluster. Finally, we average the variance maps over all the kinetic clusters for a given set of experimental parameters to produce an average normalized quantum noise map of our squeezed vacuum.
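The per-cluster variance estimate described here can be sketched with NumPy. The frame layout (clusters × 4 frames × pixels) and the use of the mean total counts as the shot-noise normalization are our assumptions about the data format; synthetic Poisson frames stand in for real camera data, for which the normalized variance should come out at the shot-noise level of 1.

```python
import numpy as np

def variance_map(beam1, beam2):
    """Normalized quantum-noise map from camera frames.
    beam1, beam2: arrays of shape (clusters, 4, H, W): four frames per
    kinetic cluster for each output of the final beam splitter."""
    diff = beam1 - beam2                    # amplified noise map per frame
    # Unbiased variance over the four frames of each cluster, normalized
    # by the mean total counts (shot-noise level), averaged over clusters.
    v = diff.var(axis=1, ddof=1) / (beam1 + beam2).mean(axis=1)
    return v.mean(axis=0)

# Shot-noise-limited synthetic data: independent Poisson counts in the
# two beams give Var(N1 - N2) = <N1 + N2>, so the map should be ~1.
rng = np.random.default_rng(2)
shape = (500, 4, 8, 8)
b1 = rng.poisson(1000.0, shape).astype(float)
b2 = rng.poisson(1000.0, shape).astype(float)
V = variance_map(b1, b2)
```

A squeezed (or anti-squeezed) input would push this map below (or above) unity wherever the probe illuminates the camera, which is exactly the contrast the shadow image is built from.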
To experimentally demonstrate the capabilities of quantum shadow imaging with the squeezed vacuum, we chose a completely opaque rectangle, blocking approximately one quadrant of the probe beam, as our test object, inserted only in the squeezed vacuum channel (see Fig. 2a). For most measurements, we also need to increase the effective detection area to improve the overlap parameter (see Eq. 4) with the characteristic quantum-mode size of the squeezed vacuum beam. To do that, for each point x = (x, y), we sum all the counts within a radius R (in units of pixels) around it to calculate the total photon counts N^(R)_{1,2}(x), a process commonly referred to as "binning". The situation in which the detection area is much smaller than the mode size of the squeezed vacuum is equivalent to a large optical loss, and thus reduces any non-classical noise down to the shot noise. Note that this summation is very different from having a larger pixel, since the quantum uncertainty of detection within one pixel causes integration over the field's amplitude across the pixel. Summing over pixels, by contrast, integrates over intensity (photon number); amplitude integration applies only when the avalanche process is unlocalized, whereas when the avalanche is localized even within one pixel, the integration is over photon number [27]. Fig. 3 shows examples of measured variance maps for both reference and probe beams for different binnings. Fig. 3 (column four) shows a cross-section of the experimental quantum shadow transmission map at the location of the red line and compares it with the calculated transmission map of an ideal noiseless beam sampled with the same binning radius R. When the radius R of the bin is small (top), it is impossible to see the quantum shadow, since the detected quantum statistics are indistinguishable from those of a shot-noise-limited beam [28]. 
However, as we increase the radius of the bin (top to bottom rows), the difference in quantum statistics between the blocked and open regions of the mask becomes more and more pronounced, creating a resolvable "quantum shadow". Such improvement, however, comes at the price of somewhat reduced "sharpness" of the image features: the spatial resolution of the quantum noise maps is inversely proportional to the size of the bin, while the contrast of the edge is proportional to it.
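The binning step (summing all counts within radius R of each pixel) can be sketched as a moving disk sum; the direct shift-and-add implementation below is slow but transparent, and the wraparound at the image edges is a simplification of whatever boundary handling the actual analysis uses.

```python
import numpy as np

def bin_counts(img, R):
    """Sum counts over a disk of radius R (in pixels) around each pixel."""
    yy, xx = np.mgrid[-R:R + 1, -R:R + 1]
    disk = (yy ** 2 + xx ** 2) <= R ** 2
    out = np.zeros_like(img, dtype=float)
    # Accumulate the image shifted by every offset inside the disk.
    for iy, ix in zip(*np.nonzero(disk)):
        dy, dx = iy - R, ix - R
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

# On a uniform image every binned pixel equals the disk pixel count
# (13 pixels for R = 2).
img = np.ones((16, 16))
binned = bin_counts(img, 2)
```

Larger R gathers more of the squeezed mode into each effective detector, improving the overlap parameter and the variance contrast, at the cost of resolution, exactly the trade-off shown in Fig. 3.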
The spatial resolution is also tied to the size of the squeezed mode [17], as seen in Eq. (4), since the size of the bin needs to correspond to the size of the mode for the best contrast. Thus, in general, a multimode squeezed field with a small mode size is more attractive for imaging applications than a single-mode optical field. Some information about the mode decomposition of our squeezed vacuum field may be gleaned from the first column of images in Fig. 3. If our reference beam were in a single mode matching the LO, we would expect it to have a normalized variance proportional to the overlap parameter of a fundamental Gaussian spatial mode with itself, according to Eq. 4. However, a clear ring-like structure emerges as we increase the binning radius, suggesting the presence of weaker higher-order modes. Nevertheless, our close-to-single-mode squeezer demonstrates quite good visibility of the image.
To quantify the quality of our quantum noise images, we calculate a similarity metric defined in terms of the experimentally measured transmission T_exp and the true object transmission T_o, with the sum taken over pixels along a path across the image (we use the horizontal straight line shown in red in Fig. 3d). This metric quantifies how well our noise analysis reconstructs the image of the object. We see that the quantum noise images quickly approach the ideal similarity (see Fig. 4) and reflect the overall mask shape well for significantly lower photon numbers (we estimate about 1 photon per frame in the squeezed vacuum field). This is because we can boost our quantum noise above the dark noise using a homodyne-like detection scheme, and our squeezed photons have correlations that allow us to reconstruct the image from the noise using fewer object-illuminating resources (photons). It is difficult to compare the noise shadow imaging method to other quantum imaging methods, because they focus on enhancing preexisting techniques and comparing SNRs, whereas our method has no direct classical counterpart capable of operating at such low illumination and high dark-count noise levels.
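One plausible form of such a similarity metric (the paper's exact formula is not reproduced in this extraction, so the definition below is an assumption) is 1 minus the mean absolute transmission error along the chosen cross-section, so that a perfect reconstruction scores 1.

```python
import numpy as np

def similarity(T_exp, T_o):
    """Hypothetical similarity metric: 1 - mean |T_exp - T_o| along a
    cross-section; equals 1 for a perfect reconstruction."""
    T_exp = np.asarray(T_exp, dtype=float)
    T_o = np.asarray(T_o, dtype=float)
    return 1.0 - np.mean(np.abs(T_exp - T_o))

ideal = similarity([0, 0, 1, 1], [0, 0, 1, 1])         # perfect: 1.0
noisy = similarity([0.1, 0.0, 0.9, 1.0], [0, 0, 1, 1])  # slightly degraded
```

Any monotone error-based metric of this kind would show the same qualitative behavior as Fig. 4: rapid approach to the ideal value as binning and averaging improve the variance estimates.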
In conclusion, we can image an opaque object by illuminating it with a squeezed vacuum. Our scheme can use the anti-squeezed quadrature, which makes the whole method more robust against optical and detection losses. We can reconstruct the object by analyzing the quantum noise statistics, which change spatially depending on the mode structure of the squeezed vacuum and the object. This has applications in any imaging scenario where a high photon number could damage the object, such as biological imaging. The overall scheme is also quite simple and uses 6 × 10⁻⁵ photons per pixel per frame. We used only 1600 photons in total to reconstruct the object, far fewer than other low-photon methods [29]. We also note that this method has the potential to be generalized to other quantum states, e.g., a thermal state, since it depends only on the state's deviation from the shot noise. Since our method is based on analysis of the quantum state variance, it is potentially immune to parasitic illumination by classical light sources, for which the quadrature variance is independent of transmission.
ACKNOWLEDGMENTS
We would like to thank the late Jonathan Dowling for his work throughout this project. We also thank Morgan Mitchell for helpful discussions and comments. This research was supported by Grant No. AFOSR FA9550-19-1-0066.

Appendix A: Quantum homodyning signal at a pixel

The following analysis is based on the assumption that both the squeezed state and the local oscillator are initially in a single spatial mode. The initial state, |Ψ_int⟩, is generated from the vacuum by the squeezing operator (Ŝ) in mode 1 and the displacement operator (D̂) in mode 2. The object to be imaged is placed in mode 1 and is illuminated solely by the squeezed vacuum. The corresponding state |Ψ_obj⟩ is generated by the action of the object operator (T̂) on the initial state. |Ψ_bs⟩ is the state obtained after mixing the squeezed light and the local oscillator on a 50:50 beam splitter (B̂). To calculate the variance of the photon-number difference at each pixel, we shift from the Hermite-Gaussian mode basis to the pixel basis. This basis transformation is implemented by the operators Û_{1,2}(x) to give the final state, |Ψ⟩.
Calculation of the theoretical variance V_th(x) gives an expression which, normalized by the intensity of the local oscillator (as outlined in Eq. (6) of the main text), yields the normalized variance V(x), where N̂_{1,2} are the photon-number operators and r is the squeezing parameter.
Appendix B: Quantum variance of a composite detector
The binning procedure entails summing the values of all neighbouring pixels inside the detection area (A) of binning radius R. The normalized variance is calculated after applying the binning procedure to the beam-difference and beam-sum matrices, under the condition that the pixel size is smaller than R. Substituting the result of Eq. (A6) for the second term, followed by expansion of the first term, yields the binned variance. For the special case of mode matching between the squeezed light and the local oscillator, the camera is placed at the focal point of the lens and the object is imaged directly onto it. Hence the object acts only as an intensity mask and does not add any phase to the transmitted light. Therefore, T_1 simplifies to a diagonal matrix composed of 0s and 1s. Furthermore, the mode-matching condition allows us to write U_1(x) = T_1(x) · U_2(x). This simplifies the binned normalized variance to the form V^(R)_th(x) = 1 + (e^(2r) − 1) multiplied by the corresponding binned mode-overlap factor. The validity of this approach is tested by calculating the noise map of a mask similar to the one in the second column of Fig. 3 in the main text. 128×128 complex-valued maps of field amplitudes, generated by a classical Fourier-optics simulation of a Gaussian beam and of a mask in the path of a Gaussian beam, are used for U_1(x) and U_2(x), respectively. Figure 5 shows the normalized noise maps calculated with and without the phase-matching condition for the no-binning (R=1) and binning (R=5) cases. Normalized variance values greater than unity in the binned image, as opposed to the shot-noise-limited variance obtained without any binning, indicate the detection of the anti-squeezed light demonstrated in the experimental data. Our assumption of the phase-matching condition is supported by the shot-noise-limited noise maps even after binning in Figure 5(d). The difference between the R=1 images obtained in experiment and those from theory can be attributed to various experimental sources of noise that were not considered in these calculations. 
| 2021-06-03T01:16:06.203Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "4caf1355464036cec0f3434ca68cc020c12a70d5",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "4caf1355464036cec0f3434ca68cc020c12a70d5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
266719434 | pes2o/s2orc | v3-fos-license | The Influence of Product Quality, Prices, and Promotions on Customer Loyalty
In today's competitive business era, maintaining customer loyalty is one of the critical factors in the long-term success of a company. This research aims to determine the impact of product quality, price, and promotion on customer loyalty. The research was conducted at UD. Setya Abadi D.M.
Findings:
Based on the research conducted, this study concludes that product quality has a significant influence on customer loyalty: the higher the quality of the product, the greater the consumer's loyalty to it.
Implication:
Price has an essential impact on customer loyalty: the higher the product price, the higher the consumer's dependence on that product. Promotion also has an essential impact on customer loyalty: the better the promotion, the higher the customer loyalty. Therefore, this research investigates the impact of product quality, price, and promotion on customer loyalty. With a better understanding of the correlation between these factors, companies can develop more effective marketing-management techniques to maintain and increase the loyalty of their customers. This research uses various research methods, including customer surveys and questionnaires, to study the relationship between product quality, price, promotion, and customer loyalty. The research is likely to provide valuable insight for companies in developing more effective and sustainable marketing techniques and in improving the overall customer experience. Thus, this research can make a positive contribution to the development and growth of businesses in the future.
Theoretical Review: Product Quality. According to Kotler and Armstrong (2012), product quality means "the ability of a product to provide its benefits, including durability, reliability, accuracy, overall ease of use and repair, and its attributes having different values," together with other product features. Product quality according to Mullins, Walker, Boyd, and Larréché (2005) includes: Performance, which refers to the essential operating characteristics of the product. Durability, meaning how long the product lasts before needing to be replaced. Conformance to specifications, namely the extent to which the product meets its specifications and is free of defects. Features, which are product characteristics designed to increase product functionality or consumer interest. Reliability, meaning a product's ability to perform satisfactorily over a certain period. Aesthetics, which relate to the external appearance of a product. Perceived quality, which is often assessed through indirect measures because consumers may lack complete information about the product.
Price. Price represents the value of something in the form of money that a person must spend as a sacrifice to obtain, own, or maintain a good or service (Sumadji, 2006). Thus, for each product or service provided, the marketing department has the right to set a base price that covers all costs related to production, distribution, and promotion.
Price is the only element in the marketing mix that directly shapes a company's profits and losses. Price affects financial performance and significantly influences the brand's positioning in customers' minds. In addition, price serves as a measure of product quality when customers have difficulty evaluating complex products. Factors to consider when setting prices include costs, profit targets, competitive activity, and changes in market expectations. Determining prices involves at least six steps: selecting the pricing objective, determining demand, estimating costs, analyzing competitors' costs, prices, and offers, selecting a pricing method, and choosing the final price (Kotler & Keller, 2007).
Product prices or costs must be determined carefully, because an inappropriate price will discourage customers from using the product. In this regard, there are several bases for determining prices, most importantly costs and competition (Gate, 2001). The price of a good or service determines its market demand, and price can also influence the marketing program of a company or organization. Hence price is the only element of the marketing mix that directly generates profit for the company.
Promotion. Promotion is a set of incentive tools, most of them short-term, designed to stimulate consumers or sellers to buy particular products or services more quickly and in larger quantities. Promotion encompasses the various ways of informing, persuading, and reminding consumers, directly or indirectly, about a product or brand being sold.
Sales promotions are short-term incentives designed to encourage the trial or use of a product or service. Marketers can target sales promotions at businesses and at end consumers. Like advertising, promotions come in various forms; however, whereas advertising conveys a reason for consumers to buy, sales promotion offers an incentive to buy. Sales promotions are therefore designed to change business behavior so that customers actively use and support the brand, and to change consumer behavior so that they buy the brand for the first time, buy more of it, or buy it more quickly or more often. Promotion is thus a form of communication between the seller and the consumer, through which the buyer becomes familiar with the producer's product and continues to remember it. Promotion can be carried out in many ways, for example by producing advertisements in all available forms of media.
Consumer Loyalty. Consumer loyalty is a primary marketing asset and marketers' real goal. Loyalty conveys insight into whether consumers will switch to other products. In tight competition with an increasing availability of substitute products, consumer loyalty to a particular brand weakens when too many attractive offers tempt consumers to switch to another brand. For marketers, consumer loyalty can be a measure of business continuity: with loyal consumers, the company can be confident that its products will continue to be purchased and that the business will run smoothly. Loyal consumers will not switch to another brand even when offered more attractive alternatives. According to Tjiptono (2000), loyalty exists when consumers hold a positive attitude toward a product or producer (service provider) accompanied by a repeated, consistent purchasing pattern.
From the definition above, loyalty means a positive consumer attitude toward a product or service accompanied by repeated and consistent purchasing behavior, as a result of which the consumer recommends the company's product or service to others. Loyal consumers are a precious asset for a business, so assessing whether consumers are loyal is essential. According to Dick and Basu (2000), there are several types of consumer loyalty, including: 1) No loyalty. If both consumer attitude and purchasing behavior are weak, loyalty will not form. There are two causes: a weak (almost neutral) attitude can occur when a new product/service is introduced or when the company cannot communicate the unique benefits of its product; the second cause relates to market dynamics, where competing well-known brands are positioned in essentially the same way. 2) Spurious loyalty. If a relatively weak attitude is accompanied by a strong repeat-purchase pattern, this constitutes spurious loyalty, characterized by the influence of non-attitudinal factors on behavior. This situation can also be seen as inertia, namely the difficulty consumers have distinguishing between brands in low-involvement product categories, so repeat purchases are made on situational grounds, such as familiarity (strategic placement of products on display, or a store located at a busy intersection) or discounts. 3) Latent loyalty. Latent loyalty is reflected when a strong attitude is accompanied by a weak repeat-purchase pattern.
This situation is of great concern to marketers, because the impact of non-attitudinal factors can be as strong as, or stronger than, attitudinal factors in determining repeat purchases. 4) Loyalty. This is the ideal situation sought by most marketers, in which consumers hold a positive attitude toward the product or producer (service provider) accompanied by repeated and consistent purchases.
Type of Research.
A quantitative study using primary data was carried out in this research. Primary data were obtained from research participants through online Google Forms questionnaires. A Likert scale was used to assess respondents' answers to the list of questions in the questionnaire. The recommended sample size for this study, which used 12 indicators, ranged from 75 to 150; based on these considerations, the sample size was set at 100 respondents who had purchased Mas Brow products at UD. Setya Abadi D.M., Tambaksari District, Surabaya. Probability sampling with simple random sampling was used in this study. Table 1 displays the attributes of the respondents. Based on Table 1, almost 60% of respondents were between 21 and 30 years old. By occupation, more than 56% of respondents identified as female entrepreneurs. Respondents with vocational/high school education were the most frequent participants in this research, at 57%. According to respondents, 60% of their knowledge of the Mas Brow brand came from social media (Instagram, TikTok, website), while the smallest share, 12%, came from direct sales.
This research includes three independent variables (product quality, price, and promotion) and one dependent variable (customer loyalty), each with an operational definition. The collected data were then subjected to validity, reliability, and classical assumption tests: 1) normality, 2) multicollinearity, and 3) heteroscedasticity. Next, regression analysis was carried out using a multiple linear regression model in SPSS for Windows. The multiple linear regression model in this research takes the form Y = a + b1X1 + b2X2 + b3X3 + e, where Y is customer loyalty, X1 is product quality, X2 is price, X3 is promotion, a is a constant, b1-b3 are regression coefficients, and e is the error term. The indicators above are scaled using a Likert scale, which measures individual or group awareness, perceptions, and attitudes toward social phenomena. Indicators derived from each variable are then used to construct the questionnaire items. Strongly Agree (SS), Agree (S), Neutral (N), Disagree (TS), and Strongly Disagree (STS) are the answer options for each question on the Likert scale.
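The multiple linear regression used here can be sketched with ordinary least squares; the Likert-scale data below are hypothetical (the study's actual responses are not reproduced), generated from assumed coefficients so the fit can be checked against them.

```python
import numpy as np

# Hypothetical Likert data (1-5): columns are product quality (X1),
# price (X2), promotion (X3); y is customer loyalty with assumed
# true coefficients a=0.5, b1=0.6, b2=0.3, b3=0.4 plus noise.
rng = np.random.default_rng(3)
n = 100
X = rng.integers(1, 6, size=(n, 3)).astype(float)
y = 0.5 + 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.3, n)

# OLS for y = a + b1*X1 + b2*X2 + b3*X3 + e via the design matrix
# [1, X1, X2, X3].
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b1, b2, b3 = coef
```

With 100 respondents and modest noise, the estimated coefficients land close to the assumed values, which is the same computation SPSS performs for the regression table.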
RESULTS AND DISCUSSION
Validity Test. Validity is used to determine how valid a questionnaire is: a questionnaire is considered valid if its statements accurately represent what is to be measured. Before the classical assumption tests were carried out, the validity and reliability of the questionnaire used in this research were assessed using IBM SPSS Statistics 26. As seen in Table 4, each indicator used to measure the variables in this research has an r-count greater than the r-table value. For a sample of 100 respondents (df = N - 2 = 100 - 2 = 98) at a significance level of 0.05 (5%), the r-table value is approximately 0.165. Based on the results of the validity assessment, all indicators in the questionnaire are considered valid (see Table 4).
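As a sketch of the item-validity check described above, each indicator's scores are correlated with the construct's total scores and the resulting Pearson r-count is compared with the critical r-table value (approximately 0.165 for df = 98 at the 5% level). The scores below are invented for illustration; only the decision rule mirrors the paper.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between an item's scores and the construct's total scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def is_valid(item_scores, total_scores, r_table=0.165):
    """An indicator is 'valid' when its r-count exceeds the critical r-table value."""
    return pearson_r(item_scores, total_scores) > r_table

# illustrative Likert responses for one indicator and the construct totals
item = [4, 5, 3, 4, 2, 5, 4, 3]
total = [38, 45, 30, 40, 22, 47, 41, 29]
print(round(pearson_r(item, total), 3), is_valid(item, total))  # → 0.987 True
```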
Reliability Test. Reliability is a measurement that shows the degree to which an instrument can be trusted; it therefore reflects the instrument's consistency. Cronbach's alpha coefficient is used to assess reliability. Based on Table 5, the Cronbach's alpha coefficient for each variable is greater than 0.60. Thus, all research instruments are reliable and can be used for further analysis. Before hypothesis testing, all data were tested against the classical assumptions of normality, multicollinearity, and heteroscedasticity. Hypothesis testing uses the t-test and F-test, where a hypothesis is supported if the sig value < 0.05 and the calculated t value > t-table, regardless of the direction of the beta coefficient.
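Cronbach's alpha, as used above, compares the sum of the per-item variances to the variance of the total score. A minimal pure-Python sketch (with invented responses; the conventional reliability threshold is an alpha above 0.60):

```python
def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents, same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col) for col in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# three Likert items answered by five respondents (illustrative data)
items = [
    [5, 4, 3, 5, 4],
    [4, 4, 2, 5, 3],
    [5, 3, 3, 4, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # → 0.857
```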
Classical Assumption Tests. This research tests the multiple linear regression model against the classical assumptions. The data were subjected to a normality test, a multicollinearity test, and a heteroscedasticity test, each of which has requirements that must be met before the model can be used. The results obtained are as follows.
Normality Test. The first classical assumption test is the normality test, which is used to determine whether the residual values are normally distributed. Using Kolmogorov-Smirnov analysis, the results of the normality test are as follows (Source: SPSS 26):
Figure 1. Normality Test
The figure above suggests that the data tested in this study are normally distributed. To confirm this, the One-Sample Kolmogorov-Smirnov test was used. Based on the test output, the residual's significance level (Asymp. Sig., 2-tailed) is 0.087 > 0.05, which is higher than alpha. This shows that all the variable data studied follow a normal distribution. As a result, the first classical assumption has been fulfilled, and the model is suitable for use as a data analysis tool.
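The Kolmogorov-Smirnov idea can be sketched in plain Python: the test statistic D is the largest gap between the empirical CDF of the residuals and a normal CDF fitted to them (SPSS additionally converts D into the reported significance level). The residuals below are invented for illustration.

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def ks_statistic(residuals):
    """One-sample Kolmogorov-Smirnov D against a normal fitted to the residuals:
    the largest gap between the empirical CDF and the normal CDF."""
    n = len(residuals)
    mu = sum(residuals) / n
    sigma = math.sqrt(sum((r - mu) ** 2 for r in residuals) / n)
    xs = sorted(residuals)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = normal_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

residuals = [-1.2, -0.7, -0.3, -0.1, 0.0, 0.2, 0.4, 0.8, 1.1, 1.4]
print(round(ks_statistic(residuals), 3))
```

For these well-behaved residuals, D stays well below the approximate 5% critical value of 1.36/sqrt(n) ≈ 0.43, consistent with normality.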
Multicollinearity Test. Multicollinearity refers to the existence of a near-perfect linear relationship between some or all of the independent variables in the regression model. Multicollinearity is absent when the VIF is less than 10 and the tolerance value is greater than 0.1. From the table, the tolerance value for each variable is > 0.1 and the VIF is < 10, indicating that there is no multicollinearity in the model.
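The tolerance/VIF decision rule reduces to simple arithmetic on each predictor's auxiliary R² (the R² from regressing that predictor on the other predictors). The auxiliary R² values below are invented for illustration.

```python
def tolerance_and_vif(r_squared):
    """Tolerance = 1 - R²_j (auxiliary R² of predictor j on the others);
    VIF = 1 / Tolerance.  No multicollinearity when Tolerance > 0.1 and VIF < 10."""
    tol = 1 - r_squared
    return tol, 1 / tol

for name, r2 in [("X1", 0.35), ("X2", 0.52), ("X3", 0.48)]:  # illustrative values
    tol, vif = tolerance_and_vif(r2)
    print(name, round(tol, 2), round(vif, 2), "OK" if tol > 0.1 and vif < 10 else "collinear")
```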
Heteroscedasticity Test. The heteroscedasticity test aims to determine whether the residual variance is unequal across observations. The model is acceptable if the results show no heteroscedasticity, i.e., no systematic inequality in the variance of the residuals. The results of the heteroscedasticity test are as follows:
Figure 2. Heteroscedasticity test
The analysis results in Figure 2 show that the points are scattered randomly and do not form any pattern, meaning that heteroscedasticity does not occur in this model.
Multiple Linear Regression Analysis. Based on data analysis using SPSS, the regression output is shown in the table above, from which the regression equation for this research can be written as: Y = -0.559 - 0.268 X1 + 0.675 X2 + 0.589 X3. The interpretation is as follows. The constant (a) is negative, -0.559, meaning that if product quality, price, and promotion are all zero, customer loyalty decreases. The regression coefficient for the product quality variable (X1) is -0.268, indicating a negative influence of product quality on customer loyalty. The regression coefficient for the price variable (X2) is 0.675, indicating a positive influence of price on customer loyalty. The regression coefficient for the promotion variable (X3) is 0.589, indicating that promotion positively influences customer loyalty.
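Plugging scores into the fitted equation makes the coefficient interpretation concrete: the predictor scores below are invented, but the coefficients are the ones reported above, so raising the price-perception score (X2) by one point moves the predicted loyalty by +0.675.

```python
def predicted_loyalty(x1, x2, x3):
    """Plug predictor scores into the fitted equation
    Y = -0.559 - 0.268*X1 + 0.675*X2 + 0.589*X3."""
    return -0.559 - 0.268 * x1 + 0.675 * x2 + 0.589 * x3

base = predicted_loyalty(3, 3, 3)   # all predictors at the scale midpoint
plus = predicted_loyalty(3, 4, 3)   # price perception one point higher
print(round(base, 3), round(plus - base, 3))  # → 2.429 0.675
```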
Comparing the regression coefficients for product quality (-0.268), price (0.675), and promotion (0.589), price has the largest coefficient (0.675 > 0.589 > -0.268), so price is the variable with the most significant influence on customer loyalty.
Results of Partial Regression Test (t-Test).
From the SPSS output, variable X influences variable Y if the sig value is < 0.05 or the calculated t value is > t-table (and vice versa), with t-table = t(a/2; n-k-1). The t-test for the product quality variable gives t = -2.236 with a p-value of 0.028 < 0.05, so Ho is rejected: product quality has a significant negative effect on customer loyalty. The t-test for the price variable gives t = 4.422 with a p-value of 0.000 < 0.05, so Ho is rejected: price has a statistically significant influence on customer loyalty. The t-test for the promotion variable gives t = 5.159 with a p-value of 0.000 < 0.05, so Ho is rejected: promotion has a significant positive effect on customer loyalty. The coefficient of determination (Adjusted R Square) is 0.721, showing that product quality, price, and promotion together explain 72.1% of the variation in customer loyalty, while the remaining 27.9% is explained by other variables.
The Effect of Product Quality (X1) on Customer Loyalty (Y). Based on the t-test result for the product quality variable (t = -2.236, p-value = 0.028 < 0.05), Ho is rejected. This supports Hypothesis 1: product quality significantly influences customer loyalty. This finding is consistent with the research results of Rahmawati and Nilowardhono (2018), which state that the product quality variable significantly influences customer loyalty.
Effect of Price (X2) on Customer Loyalty (Y). The t-test result for the price variable is 4.422 with a p-value of 0.000 < 0.05, which means Ho is rejected. This supports Hypothesis 2: price significantly influences customer loyalty. This finding is in line with previous research by Darwin et al. (2019), which shows that the price variable significantly influences customer loyalty.
Effect of Promotion (X3) on Customer Loyalty (Y). Based on the t-test, the promotion variable has a t-value of 5.159 and a p-value of 0.000 < 0.05, so Ho is rejected. This supports Hypothesis 3: promotion has a significant effect on customer loyalty. This conclusion is consistent with the research of Eferiato (2016), who found that promotional variables significantly influence customer loyalty.
Effect of Product Quality (X1), Price (X2), and Promotion (X3) on Customer Loyalty (Y). Based on the F-test, there is a simultaneous influence of the independent variables, namely product quality (X1), price (X2), and promotion (X3), on the dependent variable, customer loyalty (Y), as evidenced by a calculated F value (86.252) greater than the F-table value (2.699). The coefficient of determination (R squared) of 0.721 indicates that 72.1% of the variation in customer loyalty is explained by product quality (X1), price (X2), and promotion (X3), while the remaining 27.9% (100% - 72.1%) is influenced by variables other than these.
CONCLUSION
Based on the results of the study and analysis, the following conclusions can be drawn: 1. Product quality has a significant effect on customer loyalty: as product quality increases, customer loyalty to the product also increases. 2. Price has a statistically significant influence on customer loyalty: when the price of a product increases, customer loyalty to the product also increases. 3. Promotion has a significant positive effect on customer loyalty: customer loyalty increases when promotions are carried out more successfully.
Suggestion. The research findings show that UD. Setya Abadi D.M should prioritize improving the quality of its products by using premium raw materials and improving service standards to foster customer loyalty and satisfaction. Business owners must also concentrate on improving the taste of their products by following the established product manufacturing SOPs. Pricing should be adjusted to the quality standards offered to ensure customer satisfaction with the product at the listed price.
Currently, promotions on digital platforms such as Instagram, websites, and TikTok can help increase the visibility of UD. Setya Abadi D.M and position it as an inspiration for Indonesian culinary delights. By taking these actions, the company can make its products more accessible and popular among customers. In addition, it is recommended that further research measure the research variables consistently and analyze the objectives thoroughly to obtain comprehensive information about factors that may influence customers' purchasing decisions.
Table 1 .
Characteristics of Respondents
Table 2 .
Indicators and Measurement of Research Variables
Table 3 .
Scoring of Research Instruments
Table 4 .
Validity Test Results. Example row: X3.1 "I can easily get Mas Brow kebab products at the nearest frozen ..." — r-count 0.663, r-table 0.165, Valid. Source: SPSS 26
Table 5 .
Reliability Test
Table 8 .
Multiple Linear Regression Test Source: SPSS 26
Table 10 .
F test
Direct transbronchial administration of liposomal amphotericin B into a pulmonary aspergilloma
Pulmonary aspergillomas usually occur in pre-existing lung cavities exhibiting local immunodeficiency. As pulmonary aspergillomas only partially touch the walls of the cavities containing them, they rarely come into contact with the bloodstream, which makes it difficult for antifungal agents to reach them. Although surgical treatment is the optimal strategy for curing the condition, most patients also have pulmonary complications such as tuberculosis and pulmonary fibrosis, which makes this strategy difficult. A 72-year-old male patient complained of recurrent hemoptysis and dyspnea, and a chest X-ray and CT scan demonstrated the existence of a fungus ball in a pulmonary cavity exhibiting fibrosis. Although an examination of the patient's sputum was inconclusive, his increased 1-3-beta-D-glucan level and Aspergillus galactomannan antigen index were suggestive of pulmonary aspergilloma. Since the systemic administration of voriconazole for two months followed by itraconazole for one month was ineffective and surgical treatment was not possible due to the patient's poor respiratory function, liposomal amphotericin B was transbronchially administered directly into the aspergilloma. The patient underwent fiberoptic bronchoscopy, and a yellow fungus ball was observed in the cavity connecting to the right B2bi-beta, a biopsy sample of which was found to contain Aspergillus fumigatus. Nine transbronchial administrations of liposomal amphotericin B were conducted using a transbronchial aspiration cytology needle, which resulted in the aspergilloma disappearing by seven and a half months after the first treatment. This strategy could be suitable for aspergilloma patients with complications because it is safe and rarely causes further complications.
Introduction
Aspergillus is commonly found in all environments and causes a variety of diseases depending on the immunological status of the host and the local condition of the lung [1,2]. Pulmonary aspergillomas usually occur in pre-existing lung cavities exhibiting localized immune deficiency [3]. As pulmonary aspergillomas only partially touch the walls of the cavities containing them, they rarely come into contact with the bloodstream, which is the major reason why the systemic administration of antifungal agents is ineffective at eradicating the condition [4]. Most patients with pulmonary aspergillomas exhibit complications such as tuberculosis and pulmonary fibrosis, which makes curative surgical treatment difficult. We report a case of aspergilloma that was successfully treated via the transbronchial administration of liposomal amphotericin B (L-AMB) directly into the aspergilloma using a transbronchial aspiration cytology (TBAC) needle.
Case report
A 72-year-old male patient complained of recurrent hemoptysis and dyspnea, and a chest X-ray and CT scan (Fig. 1) demonstrated the existence of a fungus ball (longest diameter: 28 mm) in a pulmonary cavity exhibiting idiopathic pulmonary fibrosis (IPF)induced traction bronchiectasis. Although an examination of the patient's sputum was inconclusive, he exhibited a high 1-3-beta-Dglucan level (53.8 pg/mL) and an Aspergillus galactomannan antigen index of 2.2, which were suggestive of pulmonary aspergilloma. Voriconazole (VRCZ) was systemically administered for two months, before itraconazole (ITCZ) was systemically administered for a further month; however, this did not have any effect on the patient's symptoms or the size of his aspergilloma. Since surgical treatment was not possible due to the patient's poor respiratory function, topical treatment was adopted.
Fiberoptic bronchoscopy (FOB) was performed, and a yellow fungus ball was observed in the cavity connecting to the right B2bi-beta (Fig. 2(A)), a biopsy examination of which detected Aspergillus fumigatus.
Since the fungus ball was visible during the FOB, L-AMB was transbronchially administered directly into the aspergilloma using a TBAC needle. One hundred mg/body (2.5 mg/kg) were administered during each treatment, which was equivalent to the dose that would have been administered during systemic therapy. The L-AMB was dissolved in distilled water at a concentration of 10 mg/mL and was administered through a TBAC needle ( Fig. 2(B)) at a dose of 0.5 mL per instillation, with each instillation site being different from the previous sites in order to ensure the diffuse and appropriate permeation of L-AMB into the fungus ball. After the procedure, the patient was asked to adopt a right-sided posture for 1 h. The procedure was conducted once a week in the outpatient department for four weeks, and after its safety had been confirmed the L-AMB dose was increased to 200 mg/body, and the procedure was conducted a further three times. By the sixth round of treatment, the fungus ball had diminished in size and turned brown ( Fig. 2(C)), and the breakage of the aspergilloma into several parts was observed due to an increase in the internal pressure of the aspergilloma caused by the direct administration of L-AMB ( Fig. 2(D)). Surprisingly, during the subsequent treatment period the aspergilloma fragments re-assembled into a single structured fungus ball. At three months after the seventh treatment round, the diameter of the aspergilloma had decreased to 14 mm ( Fig. 3(A, B)). Then, the L-AMB dose was reduced to its initial level due to the shrinkage of the fungus ball, and two further rounds of treatment were performed. In the end, the aspergilloma disappeared at two months after the ninth round of treatment; i.e., seven and a half months after the start of treatment ( Fig. 3(C, D)).
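The dosing arithmetic in this paragraph can be restated as a small sketch (illustrative arithmetic only, not clinical guidance): 100 mg/body at 2.5 mg/kg implies a body weight of about 40 kg, and dissolving the dose at 10 mg/mL while instilling 0.5 mL per site implies about 20 instillation aliquots per session.

```python
def instillation_plan(weight_kg, dose_mg_per_kg=2.5, conc_mg_per_ml=10.0, ml_per_site=0.5):
    """Total L-AMB dose, solution volume at 10 mg/mL, and the number of
    0.5-mL instillation aliquots that volume yields (illustrative arithmetic)."""
    total_mg = dose_mg_per_kg * weight_kg
    total_ml = total_mg / conc_mg_per_ml
    aliquots = total_ml / ml_per_site
    return total_mg, total_ml, aliquots

print(instillation_plan(40))  # → (100.0, 10.0, 20.0)
```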
The patient's 1-3-beta-D-glucan level gradually decreased to 28.0 pg/mL, and his Aspergillus galactomannan antigen index was 0.4 at three months after the start of treatment.
During the study period, the fibrotic pulmonary cavity enlarged (Figs. 1 and 3), and the patient's pulmonary function deteriorated in accordance with the progression of his IPF. Chemically-induced bronchitis and drug-induced interstitial lung disease were considered to be potential side effects of the abovementioned treatment regimen, but neither of these conditions developed. In addition, no L-AMB-related renal dysfunction or hypokalemia were observed.
The abovementioned treatment was so effective that the patient's hemoptysis disappeared within two weeks and his aspergilloma shrank within three months and had completely disappeared within seven months.
Discussion
Aspergillus is a ubiquitous fungus, and all human beings breathe in its conidia during everyday life. However, any conidia that attach to the lower respiratory tract are removed by mucociliary clearance, and those that reach the alveoli are phagocytosed by alveolar macrophages [5]. Furthermore, even when the conidia sprout hyphae they are sterilized by neutrophils [6], so healthy hosts escape fungal infection. Aspergillus can cause a variety of diseases depending on both the immunological status of the host and the local condition of the lung [1,2]. Pulmonary aspergillomas usually occur in pre-existing lung cavities exhibiting local immunodeficiency, such as those caused by tuberculosis, bronchiectasis, emphysema, pneumoconiosis, sarcoidosis, and interstitial pneumonia [3].
Pulmonary aspergillomas are classified into simple and complex aspergillomas [7], and the latter type is more prevalent because it is associated with underlying diseases. Surgery such as cavernostomy with muscle transposition, partial resection, segmentectomy, or lobectomy [9-11] is recommended as a curative treatment [8]. Although less invasive surgical strategies such as cavernostomy have been developed, underlying diseases can make the optimal surgical procedure very difficult.
For those patients who are unsuitable for surgery, amphotericin B (AMPH-B), L-AMB, VRCZ, ITCZ, and micafungin sodium are utilized as systemic antifungal agents because they are effective against invasive aspergillosis and chronic necrotizing pulmonary aspergillosis [12-14]; however, there is no evidence from randomized controlled studies to support the use of these drugs against aspergillomas, with some reports suggesting that systemic AMPH-B administration is ineffective [15] and oral ITCZ only achieves limited outcomes [16]. The optimal treatment duration has not been established and varies from several months to years, even in cases in which treatment is effective. The limited response rates of systemic antifungals are due to poor drug delivery to saprophytic fungus balls [4], and severe side effects can sometimes lead to treatment cessation. However, all treatments should aim to cure the condition for the reasons outlined below. Balls of fungal mycelia are not static and can invade the surrounding lung tissue, leading to chronic necrotizing pulmonary aspergillosis [3], although spontaneous aspergilloma lysis occurs in 7-10% of cases [17]. Furthermore, hemoptysis of bronchial arterial origin can arise and is sometimes lethal in partially treated cases, with the mortality rate ranging from 2 to 26% [18].
Fig. 2. Endoscopic findings of the aspergilloma. Just prior to the first administration of L-AMB, the fungus ball was covered with a yellowish mucinous liquid layer (A), into which L-AMB was administered through a transbronchial aspiration cytology needle (B). An image taken during the sixth round of treatment shows a bare brown aspergilloma without any yellowish coating (C), which broke into fragments after the cavity that contained it had been soaked in L-AMB solution (D).
Fig. 3. Chest CT scan obtained three months after the seventh administration of L-AMB into the aspergilloma demonstrating the shrinkage of the aspergilloma (longest diameter: 14 mm) (A, B). The aspergilloma disappeared two months after the ninth round of treatment (C, D); i.e., seven and a half months after the initial treatment.
When systemic antifungal agents fail to eradicate an aspergilloma, resulting in continuing hemoptysis and fever, topical treatment with antifungals should be considered [19] and could be a viable option in patients with life-threatening aspergilloma-induced hemoptysis who exhibit risk factors for a poor prognosis [20]. There are two approaches that can be employed to reach aspergillomas during topical treatment, the transbronchial and percutaneous approaches. Both methods involve the instillation of antifungals into the target cavity to soak the fungus ball. Percutaneous approaches have been vigorously investigated [21-23]; however, they can sometimes cause fungal spread into the thoracic space, resulting in fungal empyema, which should be carefully avoided. The most commonly used antifungal agent is AMPH-B, but its reported efficacy varies from study to study, ranging from 65 to 80% [19,21-23]. Although topical treatments have been described by several investigators, no evidence-based conclusions regarding the optimal approaches and antifungals have been established.
We adopted a transbronchial approach in the current case since the fungus ball was visible during FOB. There is one previous report about the instillation of AMPH-B into an aspergilloma-containing cavity using the balloon occlusion technique [24]. Since AMPH-B can irritate bronchi and can cause chemically-induced bronchitis or drug-induced interstitial lung disease, this method is not applicable to patients with underlying IPF, as it can lead to the acute exacerbation of their IPF. Therefore, we decided to transbronchially administer L-AMB directly into the aspergilloma in order to ensure effective drug delivery. L-AMB is a unilamellar liposomal formulation of AMPH-B, in which AMPH-B is securely incorporated within a liposomal bilayer, which disintegrates when it comes into contact with fungal cell walls and releases AMPH-B at sites expressing ergosterol [25,26]. L-AMB does not diffuse through blood vessel endothelia, which prevents it damaging normal tissues. On the other hand, at infection sites exhibiting increased permeability it spreads through the endothelium toward the fungal surface, which is advantageous for systemic drug delivery [27]. L-AMB is considered to be less irritable to bronchi and lung tissue as it has no detrimental effects on the surface activity of surfactants when administered topically [28].
In the present case, we decided to directly inject the L-AMB into the fungus ball rather than employ intracavitary instillation since the direct injection method ensures that the L-AMB binds to the cell walls of the fungus, leading to the disintegration of the fungus ball.
Surprisingly, after the fungus ball had been broken into fragments by the L-AMB treatment (Fig. 2(D)) the remaining fragments recombined into a structured fungus ball each time. This suggests that there is a tendency towards fungus ball formation in pulmonary Aspergillus infections and provides clues regarding the mechanism responsible for this phenomenon.
The creation of pulmonary aspergillomas is said to start with the attachment and proliferation of fungi on the pulmonary or bronchus wall due to localized immunodeficiency [1-3]. During the initial phase, the thickening of the pulmonary wall and the detachment of parts of the wall into the cavity are observed, and the detached necrotic fragments then act as the nucleus for the creation of a fungus ball [1-3,6]. Taking this information into account, directly administering a drug into a fungus ball might both mechanically destroy it and invade the fungal structure, resulting in smaller segments being left intact each time, although these intact segments act as the nucleus for the formation of a smaller fungus ball. When the fungus ball becomes small enough to allow L-AMB to fully diffuse through the broken fragments, the remaining fragments are too small to act as a fungus ball nucleus, resulting in the disappearance of the fungus ball.
The treatment strategy employed in the present case did not result in the proliferation of Aspergillus from the original cavity to other bronchi or alveoli, and chemically-induced bronchitis and pneumonia, which have been reported to occur during AMPH-B instillation, were not observed either.
The treatment strategy described in this report seems to be suitable for patients with complications, especially those with pulmonary fibrosis, in terms of both the effectiveness of drug delivery and the scarcity of side effects.
Associations between polymorphisms in IL-10 gene and the risk of viral hepatitis: a meta-analysis
Background: The relationships between polymorphisms in the interleukin-10 (IL-10) gene and the risk of viral hepatitis remain inconclusive. Therefore, the authors conducted the first meta-analysis so far to robustly assess the relationships between polymorphisms in the IL-10 gene and the risk of viral hepatitis by integrating the results of previous works. Methods: Medline, Embase, Wanfang, VIP and CNKI were searched thoroughly for eligible studies, and 76 genetic association studies were finally included in this meta-analysis. Results: We noticed that the rs1800871 (−819 C/T), rs1800872 (−592 C/A) and rs1800896 (−1082 G/A) polymorphisms were all significantly associated with the risk of viral hepatitis in Asians, whereas only the rs1800896 (−1082 G/A) polymorphism was significantly associated with the risk of viral hepatitis in Caucasians. In further analyses by disease subtypes, we noticed that the three investigated polymorphisms were all significantly associated with the risk of both HBV and HCV. Conclusions: This meta-analysis demonstrates that the rs1800871 (−819 C/T), rs1800872 (−592 C/A) and rs1800896 (−1082 G/A) polymorphisms may influence the risk of viral hepatitis in Asians, while only the rs1800896 (−1082 G/A) polymorphism may influence the risk of viral hepatitis in Caucasians. In further analyses by disease subtypes, we noticed that the three investigated polymorphisms may influence the risk of both HBV and HCV.
Background
Viral hepatitis refers to a type of infectious disorder that is caused by hepatitis viruses which include HAV, HBV, HCV, HDV and HEV [1,2]. In addition to acute liver injury, these hepatitis viruses may also lead to life-threatening conditions such as liver cirrhosis or hepatocellular carcinoma (HCC) [3,4]. The clinical course of viral hepatitis is resulted from a complex interaction between pathogen, host and environmental factors, some patients may be asymptomatic the whole life, but some patients may eventually develop liver cirrhosis or even HCC [5,6]. Therefore, there is no doubt that individual anti-viral immunity is vital for the onset and development of viral hepatitis.
Interleukin-10 (IL-10) serves as one of the most important anti-inflammatory and immunosuppressive factor, and it plays a crucial role in regulating anti-viral immune responses [7][8][9]. Considering the immune-regulatory effects of IL-10, over the last decade, investigators all over the world have repeatedly attempted to explore the relationships between polymorphisms in IL-10 gene and the risk of viral hepatitis, yet the relationships between these polymorphisms and the risk of viral hepatitis are
still inconclusive. So a meta-analysis was conducted to robustly analyze the relationships between polymorphisms in IL-10 gene and the risk of viral hepatitis by integrating the results of previous works.
(Open Access article in Gut Pathogens. *Correspondence: chenhuixin00111@163.com, Department of Digestive Diseases, Huizhou Municipal Center Hospital, No. 41 of North Yuling Road, Huizhou 516001, China)
Methods
The PRISMA guideline was strictly followed by the authors when designing and implementing this study [10].
Literature search and inclusion criteria
Medline, Embase, Wanfang, VIP and CNKI were thoroughly searched by the authors with the terms below: (Interleukin-10 OR IL-10 OR Interleukin 10 OR IL 10) AND (Polymorphism OR Polymorphic OR Variation OR Variant OR Mutant OR Mutation OR SNP OR Genotypic OR Genotype OR Allelic OR Allele) AND (Viral hepatitis OR Chronic hepatitis OR Acute hepatitis OR Hepatitis A OR Hepatitis B OR Hepatitis C OR Hepatitis D OR Hepatitis E OR HAV OR HBV OR HCV OR HDV OR HEV). Moreover, we also manually screened the reference lists of retrieved publications to make up for the potential incompleteness of electronic literature searching.
Selection criteria of this meta-analysis were as follows: (1) Studies of case-control or cohort design; (2) Studies providing genotypic or allelic frequencies of IL-10 polymorphisms in cases with viral hepatitis and population-based controls; (3) Studies whose full manuscript, with the required genotypic or allelic frequencies of IL-10 polymorphisms, is accessible. Articles were excluded if any of the following three criteria was satisfied: (1) Studies without complete data about genotypic or allelic frequencies of IL-10 polymorphisms in cases with viral hepatitis and population-based controls; (2) Narrative or systematic reviews, meta-analyses or comments; (3) Case series of subjects with viral hepatitis only. If duplicate publications were retrieved from the literature search, only the most complete one was included for integrated analyses.
Data extraction and quality assessment
The authors extracted the following data items from eligible studies: (1) Last name of the leading author; (2) Publication year; (3) Country and ethnicity of the study population; (4) The number of cases with viral hepatitis and population-based controls; (5) Genotypic frequencies of IL-10 polymorphisms in cases with viral hepatitis and population-based controls. Hardy-Weinberg equilibrium (HWE) was then tested using the genotypic frequencies of IL-10 polymorphisms, and the threshold for deviation from HWE was set at 0.05. The quality of eligible publications was assessed by the Newcastle-Ottawa scale (NOS) [11], and those with scores of 7-9 were considered to be publications of good quality. Two authors extracted data and assessed the quality of eligible publications in parallel. A thorough discussion until a consensus was reached would be conducted in case of any discrepancy between the two authors.
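The HWE check described above can be sketched as a chi-square goodness-of-fit test on control-group genotype counts: allele frequencies are estimated from the observed genotypes and compared against Hardy-Weinberg expectations. The counts below are illustrative, and 3.84 is the chi-square critical value at one degree of freedom and alpha = 0.05.

```python
def hwe_chi_square(obs_aa, obs_ab, obs_bb):
    """Chi-square goodness-of-fit of observed genotype counts against
    Hardy-Weinberg expectations (1 degree of freedom for a biallelic SNP)."""
    n = obs_aa + obs_ab + obs_bb
    p = (2 * obs_aa + obs_ab) / (2 * n)      # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    return sum((o - e) ** 2 / e for o, e in zip((obs_aa, obs_ab, obs_bb), expected))

# illustrative control-group genotype counts; chi² < 3.84 ⇒ no deviation at p = 0.05
chi2 = hwe_chi_square(45, 42, 13)
print(round(chi2, 3))  # → 0.412
```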
Statistical analyses
All statistical analyses in this meta-analysis were performed using the Cochrane Review Manager software. Relationships between IL-10 gene polymorphisms and the risk of viral hepatitis were explored using the odds ratio and its 95% confidence interval. The threshold for statistical significance was set at p < 0.05. The authors used the I2 statistic to evaluate heterogeneity among the included studies. If I2 was larger than 50%, the DerSimonian-Laird method (the random-effect model) was used to integrate the results of eligible studies; otherwise, the Mantel-Haenszel method (the fixed-effect model) was used. Meanwhile, the authors also conducted subgroup analyses by ethnic group and disease subtype. The stability of the integrated results was tested by deleting one eligible study at a time and re-integrating the results of the remaining studies. Publication bias was evaluated by assessing the symmetry of funnel plots.
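The pooling steps described above can be sketched in a few lines. The two 2x2 tables below are hypothetical (a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls); this is a minimal illustration of the Mantel-Haenszel fixed-effect pooled odds ratio and the I2 heterogeneity statistic, not the Review Manager implementation.

```python
import math

# Two hypothetical case-control studies as (a, b, c, d) tables.
studies = [(30, 70, 20, 80), (45, 55, 35, 65)]

def mantel_haenszel_or(tables):
    """Fixed-effect (Mantel-Haenszel) pooled odds ratio."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def i_squared(tables):
    """I^2 heterogeneity (%) from Cochran's Q with inverse-variance weights."""
    log_ors = [math.log(a * d / (b * c)) for a, b, c, d in tables]
    weights = [1 / (1 / a + 1 / b + 1 / c + 1 / d) for a, b, c, d in tables]
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
    df = len(tables) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

or_mh = mantel_haenszel_or(studies)  # pooled OR under the fixed-effect model
i2 = i_squared(studies)              # if i2 > 50, switch to random effects
```

In this toy example the heterogeneity is negligible, so the fixed-effect model would be selected, matching the decision rule stated above.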
Characteristics of included studies
Three hundred and seventy-four records were retrieved using our search strategy. One hundred and thirty-nine records were then screened for eligibility after unrelated and duplicate items were omitted. Six reviews and 48 case series were further excluded, and another nine publications lacking the necessary genotypic or allelic data were also excluded. In total, 76 studies met the inclusion criteria and were finally enrolled for the integrated analyses (Fig. 1). Data extracted from the eligible studies are summarized in Table 1 (Additional file 1).
Integrated analyses for rs1800871 polymorphism and the risk of viral hepatitis
Thirty-seven eligible publications assessed the relationship between rs1800871 polymorphism and the risk of viral hepatitis. The integrated analyses demonstrated that rs1800871 polymorphism was significantly associated with the risk of viral hepatitis (see Table 2). Subgroup analyses by disease subtypes revealed similar positive results for rs1800871 polymorphism in both HBV and HCV subgroups (see Table 2).
Sensitivity analyses
The authors examined the stability of the integrated results by deleting studies that deviated from HWE and re-integrating the results of the remaining studies. The trends of associations were not significantly altered in the sensitivity analyses, indicating that, from a statistical perspective, our integrated results are reliable and stable.
Publication biases
The authors examined potential publication bias in this meta-analysis by assessing the symmetry of funnel plots. The funnel plots were found to be overall symmetrical, indicating that our integrated results were not likely to be severely affected by publication bias.
Discussion
This meta-analysis, for the first time, robustly assessed associations between polymorphisms in IL-10 gene and the risk of viral hepatitis. The integrated analyses results demonstrated that rs1800871 (− 819 C/T), rs1800872 (− 592 C/A) and rs1800896 (− 1082 G/A) polymorphisms were all significantly associated with the risk of viral hepatitis in Asians, whereas only rs1800896 (− 1082 G/A) polymorphism was significantly associated with the risk of viral hepatitis in Caucasians. In further analyses by disease subtypes, we noticed that the three investigated polymorphisms were all significantly associated with the risk of both HBV and HCV.
The following three points should be considered when interpreting our integrated findings. First, based on the findings of previous observational studies, it is believed that the three investigated IL-10 polymorphisms may alter the mRNA expression level of the IL-10 gene, impact antiviral immune responses, and thereby influence the risk of viral hepatitis [12,13]. Nevertheless, future experimental studies are still required to reveal the exact molecular mechanisms underlying the positive findings of this meta-analysis. Second, we wished to study all polymorphic loci of the IL-10 gene. However, our comprehensive literature search did not reveal enough eligible publications to warrant integrated analyses for other polymorphic loci, so we only assessed associations with the risk of viral hepatitis for the three most commonly investigated polymorphisms of the IL-10 gene. Third, although we aimed to investigate all subtypes of viral hepatitis, the majority of eligible studies concerned HBV or HCV, so future studies should continue to explore associations between IL-10 polymorphisms and the risk of other subtypes of viral hepatitis. The three major limitations of our integrated analyses are listed below. Firstly, our results were derived only from unadjusted pooling of previous works. Without access to the raw data of eligible studies, we could only estimate associations based on re-calculations of raw genotypic frequencies, and the lack of further adjustment for baseline characteristics may impact the reliability of our findings [14]. Secondly, environmental factors may also affect relationships between polymorphisms in the IL-10 gene and the risk of viral hepatitis.
However, most authors only examined genetic associations in their publications, so it was impossible for us to explore gene-environment interactions in a meta-analysis based on these previous publications [15]. Thirdly, we did not enroll grey literature for the integrated analyses because such literature is often incomplete, making it impossible to extract all required data items or to assess quality with the NOS scale. Nevertheless, because grey literature was not included, it should be acknowledged that publication bias may still affect the robustness of our integrated results, even though the funnel plots were found to be overall symmetrical [16].
Conclusion
In conclusion, this meta-analysis demonstrates that rs1800871 (− 819 C/T), rs1800872 (− 592 C/A) and rs1800896 (− 1082 G/A) polymorphisms may influence the risk of viral hepatitis in Asians, while only rs1800896 (− 1082 G/A) polymorphism may influence the risk of viral hepatitis in Caucasians. In further analyses by disease subtypes, we noticed that the three investigated polymorphisms may influence the risk of both HBV and HCV. However, future studies should continue to investigate associations between polymorphisms in IL-10 gene and the risk of other subtypes of viral hepatitis.
PMD Core Ontology: Achieving semantic interoperability in materials science
)–via MSE community-based curation procedures is presented. The illustrated findings show how the PMDco bridges semantic gaps between high-level, MSE-specific, and other science domain semantics. Additionally, it demonstrates how the PMDco lowers development and integration thresholds. Moreover, the research highlights how to fuel it with real-world data sources ranging from manually conducted experiments and simulations with continuously automated industrial applications.
Introduction
The wide field of Materials Science and Engineering (MSE) is currently undergoing a dynamic digital transformation [1,2]. Several national initiatives aim to achieve an integral understanding of the entire materials life cycle, from raw materials to the operating components and beyond. Automation, high-throughput methods, and data-based algorithms revolutionize production and characterization facilities.
It is of fundamental significance that material and process data, generated coherently in and by each step along entire value chains, are comprehensively acquired, understandably processed, and shared in a controlled manner. If such data are continuously available at any point in the process chain, maximum efficiency of the entire cycle can be achieved. While these diverse data formats are a direct consequence of the inherent interdisciplinary nature of MSE, they further complicate communication within the MSE landscape. The variety in formats and structures, often incompatible with one another, hinders the seamless fusion of data, thereby impacting the exchange of information and knowledge [4]. This complexity hampers the automated acquisition, processing, and analysis of data and also impedes the advancement of data-driven approaches in material development [5,6].
In addition to the lack of uniform formats and structures, MSE data is frequently sparse or incomplete. Contextual information, including metadata and provenance, is often inadequately captured for several reasons, such as the absence of proper experimental design information. As a result, details of processes, experiments, and simulations are missing that would be needed to represent them in a reproducible manner, limiting data reuse [7].
Addressing these challenges is vital for the successful long-term design of digital transformation in MSE, as this is expected to lead to improvements in existing value chains [8].
In this context, the MaterialDigital Initiative plays a pivotal role in addressing questions related to enhancing efficiency in the development of materials and products. It focuses on establishing the fundamental principles of digital methods and tools for MSE, addressing sustainability concerns, and applying them in an application-oriented manner [9]. Within this initiative, the Platform MaterialDigital (PMD) provides support by developing prototype infrastructure and tool solutions for digital transformation, with the primary goal of assisting applied MSE with a focus on the industry.
Promoting semantic interoperability, which enables consistent data interpretation and exchange across platforms, in the broad field of MSE and among various drivers of digitalization is essential. This focus is evident at both national and international levels, particularly in initiatives such as Industry 4.0 and the National Research Data Infrastructure (NFDI). The NFDI-Matwerk, as part of this infrastructure, concentrates on the research field of MSE [10], underscoring the importance of a semantically interoperable approach. This approach fosters synergies and normalizes efforts across different domains, enabling the MSE field to benefit from data exchange in an agreed format that is distinct, shared, and well-defined. As digitalization encompasses more diverse systems, the need for a unified and scalable approach becomes increasingly vital, facilitating effortless access and utilization of information across various platforms.
To support information sharing and knowledge discovery, it is further recommended to adopt the Findable, Accessible, Interoperable, and Reusable (FAIR) principles, which outline the characteristics that modern data sources and infrastructures, tools, and vocabularies should possess [11,12]. In this context, the Semantic Web [13] offers existing technological capabilities and solutions for advanced data management, making its implementation highly beneficial for the MSE landscape [14].
To cope with the rapid progress of automation and the increasing volume of data, the creation and utilization of ontologies (for a definition, see [15]) are considered essential. This view is supported by recent discussions emphasizing the need for updated data management practices, including in particular the implementation of ontologies [16]. Ontologies are formal collections of concepts and their relationships, systematically and explicitly organizing knowledge in various domains, such as the MSE domain [17], for both humans and machines, often employing a commonly agreed-upon, though not technically required, shared and consistent vocabulary. Ontologies reduce language barriers and ambiguities through standardized terminology, facilitating efficient data exchange and providing a clear mapping to the domain's context [18]. Future ontology-supported (meta)data acquisition, facilitated by software solutions such as laboratory information management systems (LIMS) and electronic lab notebooks (ELNs), promotes the establishment of complete and uniform data structures.
Ontologies facilitate the transformation of data into machine-understandable Resource Description Framework (RDF) triples, enabling seamless integration of materials data and promoting interoperable exchange [19-21]. This integration is further enhanced through the SPARQL Protocol and RDF Query Language (SPARQL), which allows for automated and flexible retrieval of information from (meta)data triples stored in repositories, commonly referred to as triple stores. Moreover, reasoners can derive valuable insights by analyzing the logical connections between ontological entities. While these technological tools contribute to efficient data handling and retrieval, it is important to note that the quality of data is determined by its original collection and curation processes. Access to high-quality data, critical for the progress of materials development, is thus dependent not only on these advanced technologies but also on the robustness of the underlying data generation and management practices [22].
Despite numerous efforts to develop ontologies for the MSE domain, many of them suffer from issues such as being unknown, inaccessible, poorly curated and maintained, and inadequately documented. Furthermore, these ontologies are often tailored for specific niches, lacking precise and domain-appropriate term definitions necessary for effective application and reuse [23,24].
Top-level ontologies (TLOs), such as the Basic Formal Ontology (BFO) [25] and the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) [26], are considered facilitators of cross-domain interoperability. Their high level of abstraction yields domain-independent, general concepts. However, the transition from domain-specific MSE application ontologies (AOs), which semantically represent specific processes, experiments, and simulations, to the abstract TLOs can be excessively complex.
To address this gap in MSE, we propose the PMD Core Ontology (PMDco) as a mid-level ontology aimed at promoting domain interoperability (see Fig. 1). Developed through continuous collaboration with the MSE community, the PMDco provides a selection of essential domain key terms within an intermediate semantic layer that is easily understandable and usable. It serves as an enhancer for future domain-specific AOs, facilitating connectivity to the PROV Ontology (PROV-O). PROV-O was developed by the World Wide Web Consortium (W3C) as a powerful tool for representing and exchanging provenance information across different systems and contexts [27]. Being on a higher, domain-independent concept abstraction level, it is particularly useful in aligning observational data and models, providing a flexible model for process chain representations [28], and in mapping various ontologies [29]. Furthermore, the potential of PROV-O in identifying and relating entities and activities was shown in the generation of simulation models [30]. Therefore, PROV-O is a sound basis for MSE process and related materials data descriptions.
The PMDco supports the systematic creation of FAIR, high-quality materials data and plays an indispensable role in advancing semantic interoperability in MSE. Moreover, the PMDco holds significant potential in supporting international collaboration efforts, ensuring the consistent and efficient sharing of information and knowledge on materials. Further, by facilitating seamless data exchange and promoting a shared understanding, innovative and sustainable MSE research and development can be enabled.
In the following, a more detailed description of the requirements for the PMDco and its community-driven development process is provided. In Section 3, the key specifications of the PMDco are presented, and its usage is explained through several examples. The sustainable implementation of maintenance and curation of the PMDco is described. In Section 4, the necessary actions and factors contributing to establishing the PMDco as an integral part of MSE knowledge representation are discussed. Finally, in Section 5, this article is concluded with general remarks and an outlook for future work.

Fig. 1. Interoperable materials and process data. Relevant materials information across entire value chains is made consistently available through continuous process representations based on the Platform MaterialDigital Core Ontology (PMDco), enabling informed decision making at any given time.
Ontology design and development
Key aspects and requirements that are of particular pertinence to the wide field of MSE and that strongly influence the design and development of the PMDco are presented below. It is further shown which connections to relevant ontologies are established.
Requirements for the PMDco
A primary goal of the digital transformation in the field of MSE is the comprehensive acquisition and transfer of materials information across the entire life cycle, ensuring its constant availability for retrieval and MSE knowledge extraction. As carriers of this information, (meta)data are generated at every step of the process chain and are represented via ontologies, enabling their subsequent (re)use.
Building on this, the MSE community aims to address additional aspects, according to the insights of an unpublished survey conducted among the 13 projects (involving MSE and ontology experts) funded in the first call of MaterialDigital [31]. Of particular interest to digitizers is the understanding of process-structure-property relationships, such as the profound influence of heat treatment parameters on the yield strength of steel materials. Another goal is the efficient transfer of pre-structured data, such as steel and copper keys, into knowledge graphs that provide novel query functionalities.
To meet these objectives, it is necessary to articulate specific requirements that will guide the development of the PMDco as a facilitator of domain interoperability. As such, the PMDco, positioned as a mid-level ontology for MSE, must encompass a broad spectrum of fundamental concepts within the field. This inclusivity empowers users to systematically formulate domain-specific ontologies describing their processes and to link their process chains, establishing semantic interoperability.
Ensuring clear and unambiguous term definitions is essential for maintaining consistent and coherent representations of MSE knowledge that can be comprehended by both human and machine intelligence.
The PMDco should be publicly accessible and should also aim for optimal usability, which in this context refers to its ease of use and practicality. Eliminating barriers to entry, such as through detailed usage descriptions, provision of best practice examples, and interactive workshops, can facilitate adoption. A curation process involving the MSE scientific community will ensure the incorporation of necessary modifications and additions and turn it into a collaborative and community-supported endeavor. This collective effort is pivotal for fostering the healthy growth of the semantic foundation for MSE.
Diligent efforts should also be made to repurpose existing high-quality ontologies from related fields, such as chemistry. The use of the NeOn methodology is recommended in this context [32]. Aligning with established standards is crucial to enable seamless integration of data across diverse domains, thereby promoting knowledge exchange.
The utilization of the PMDco should enhance the reproducibility of processes and process chains, thereby catalyzing the systematic creation of rich FAIR datasets. Identifying recurring modeling patterns can progressively simplify query complexity, providing long-term benefits and optimizing the overall system.
Development process of PMDco
The PMDco development is based on collaborative efforts, involving continuous engagement with the MSE community, particularly the 20 PMD partner project consortia (https://www.materialdigital.de/projects/) from MaterialDigital funding phases 1 and 2. In developing the PMDco, our collaborative efforts concentrate on facilitating discussions, resolving modeling challenges, and gathering feedback from application ontology (AO) development and workshops. This approach is central to our methodology, which enables issues to be identified and solved together through constant exchange between ontology and domain experts. The goal is to iteratively evolve the PMDco and associated technologies to create a widely applicable ontology framework for all MSE sub-domains. This process includes an interactive exchange using modeling examples to establish general-purpose representations. These representations are then incorporated into the PMDco documentation to facilitate usability. As a result of this effort, the PMDco 2.0.7 was recently published (see Section 3).
The PMDco development process utilizes various programming languages and tools. When working with MSE domain experts and ontology engineers in collaborative environments, tools that provide visualization capabilities for concepts and relationships play a significant role. These tools include Concept Board and Miro. OntoPanel is another graphical tool, based on a plug-in for diagrams.net, designed to simplify ontology building for MSE domain experts [33].
The Protégé ontology editor [34] was utilized to facilitate the design of semantically more expressive parts of the PMDco. Protégé is a widely used tool that supports the OWL 2 Web Ontology Language, and it has the capability to run reasoners such as Pellet, HermiT, and FaCT++. These reasoners help to reveal implicit information to users, enabling them to draw conclusions, make inferences, and identify inconsistencies, among other functionalities.
Python-based libraries such as rdflib [35] and Owlready2 [36] are employed to facilitate semantic data processing within the PMDco and its associated AOs. The software development platform GitHub is leveraged for publishing, continuous maintenance, and evolution of the PMDco through an implemented curation process. GitHub's integrated version control, bug tracking, and code review features are particularly beneficial in this regard. Additionally, GitHub houses the documentation for using the PMDco and its AOs.
PMDco basic layout aligned with PROV-O framework
The aforementioned key aspects, requirements, and ongoing engagement with the MSE community were considered in the selection of the PROV Ontology (PROV-O) framework for alignment of the PMDco (see Section 1).
As a mid-level extension of the PROV-O, the PMDco enables the representation and description of processes and process chains in an MSE-specific manner, ensuring full traceability of generated data points. Ontology-supported systematic information collection enables process reproducibility and increases quality in the long term. The PMDco builds upon the three more abstract classes of the PROV-O, namely prov:Activity, prov:Entity, and prov:Agent, and enriches them using basic MSE terms. For example, it includes a direct subclass of prov:Activity, called pmd:Process, which serves as a superclass for more specific processes such as pmd:AnalyzingProcess, pmd:AssemblingProcess, and others (see Fig. 2). The PMDco comprises a vocabulary for describing PMDco (meta)data-generating processes, facilitating the development and integration of AOs. Specific AOs can extend the PMDco with additional terms and relationships. In future versions, these semantic boundaries can be redefined.
Reuse of other popular domain-and task ontologies
The PMDco purposely follows an underspecified design, being a versatile and extendable MSE mid-level ontology. Valuable complementary ontological collections extend the expressive capabilities of the PMDco. For example, the Quantities, Units, Dimensions and Types (QUDT) [37] ontology collection can be used for expressing and converting units of measurement. Molecular entities such as atoms, molecules, ions, ion pairs, radicals, radical ions, complexes, conformers, etc., and chemical substances can be represented using the Chemical Entities of Biological Interest (ChEBI) ontology [38].
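As a toy illustration of why explicit unit semantics matter, the sketch below normalizes annotated values to SI base units. The keys mimic abbreviated QUDT unit identifiers, but the factor/offset table is a local assumption hard-coded for this example rather than anything read from the QUDT vocabulary itself.

```python
# Minimal unit-normalization sketch. Keys mimic QUDT-style unit IRIs
# (abbreviated with a "unit:" prefix); the (factor, offset) pairs are
# hard-coded assumptions for illustration, not QUDT lookups.
TO_SI = {
    "unit:DEG_C": (1.0, 273.15),   # degree Celsius -> kelvin
    "unit:K": (1.0, 0.0),          # kelvin -> kelvin
    "unit:MilliM": (0.001, 0.0),   # millimetre -> metre
    "unit:M": (1.0, 0.0),          # metre -> metre
}

def to_si(value: float, unit_iri: str) -> float:
    """Convert a quantity value to its SI base unit via factor and offset."""
    factor, offset = TO_SI[unit_iri]
    return value * factor + offset

furnace_temp_k = to_si(850.0, "unit:DEG_C")   # heat-treatment temperature
sheet_width_m = to_si(2438.0, "unit:MilliM")  # sheet width
```

Annotating every value object with a unit IRI lets consumers apply such conversions automatically, instead of guessing units from column names.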
The PMD core ontology
As a main contribution of this paper, details about the PMDco are presented as results in this section. The PMDco satisfies the requirements presented in Section 2.1. The corresponding key features are summarized in the following.
• Comprehensive MSE vocabulary built on community consensus
The PMDco offers a comprehensive MSE vocabulary developed through community consensus and in collaboration with MSE experts. It is highly comprehensible for domain experts and serves as a standardized foundation for representing MSE concepts and knowledge in a structured manner.
• Various mid-level classes to connect domain-specific AOs with top-level ontologies (TLOs)
The PMDco incorporates mid-level classes that serve as connectors between domain-specific AOs and common TLOs. This linkage facilitates the integration of domain-specific knowledge with technical specifications, resulting in a more comprehensive representation of MSE processes and phenomena.
• Persistent unique identifiers for long-lasting referenceability
These identifiers, accessible at the PMDco namespace, enhance the sustainability and interoperability of the knowledge representation by enabling reliable and persistent referencing and linking of concepts within the ontology.
PMDco design
The PMDco is designed and applied based on its core classes, pmd:Process, pmd:ProcessingNode, pmd:Object, and pmd:ValueObject, as well as their relations to each other (Fig. 3).
Processing nodes in the PMDco enable the execution of a process (step). They are semantically decoupled to be used for different types of processes, while the same process can be executed involving different nodes. Processing nodes are typically identifiable assets such as stationary experiment equipment, a steel mill, or a high-performance simulation cluster. They are associated with processes via the pmd:executes property. Processing nodes may consist of additional components, which is semantically implemented by using the pmd:component object property that relates processing nodes to other processing nodes or components (class pmd:Component). Analogously, objects can be composed of other objects using the pmd:composes property. Objects, such as engineered materials, blanks, samples, etc., are linked to processes as pmd:input or pmd:output. Multiple processes can be linked together via pmd:nextProcess and pmd:previousProcess. Processes can also be represented as hierarchies using the pmd:subordinateProcess property (see Section 3.2.4).
Processes, processing nodes, and associated input and output objects are linked to specific characteristic (meta)data using the generic pmd:ValueObject class in the PMDco. Processes also require value objects to be an input or output. This design approach enables seamless and flexible traceability of meta- and materials data between processes and objects throughout value chains. It provides a solid semantic framework in support of the FAIR principles in MSE.
Process chain modeling
As indicated in the previous section, using the PMDco enables the linking of several processes to form process chains (see Fig. 4). In this way, all contextual information required for data reproducibility can be included.
Fig. 4. Schematic representation of a typical MSE process chain:
In this example, the T-Box is simplified to include basic classes of the PMDco for clarity. In the A-Box, the corresponding instances are linked to model a process chain: a steel sheet undergoes heat treatment. Subsequently, a tensile test is performed to determine the tensile strength of the heat-treated steel. If the mechanical properties improve, the sheet can be trimmed for further processing.
In the example shown, a heat treatment process (process 1) is applied to a steel sheet (object 1) using a furnace (processingNode 1). The output of this process is the heat-treated steel sheet (object 2). The temperature curve (valueObject 1) is a measured output of this process (measurement 1). The next process in the sequence is the extraction of a test piece (object 3) for tensile testing. Object 3 and the slightly shortened heat-treated steel sheet (object 4) are derived from object 2. The required test part dimensions are specified as a set point input (setPoint 2) for the process. The tensile test (process 3) determines the tensile strength (valueObject 3) of the test piece (object 3) using a tensile testing machine (processingNode 3). In the final process (process 4), the shortened heat-treated steel sheet (object 4) is cut into pieces of equal width (objects 5-99). The width (valueObject 4) is an input for this process as a set point (setPoint 4).
PMDco users have the flexibility to choose the level of detail in their modeling. To provide further guidance for implementation, detailed excerpts based on Fig. 4 are discussed below.
Process and processing node modeling
Fig. 5 demonstrates the modeling of a heat treatment process and its associated processing nodes. Processing nodes, which can be multi-component, execute processes. The terminological box (T-Box) illustrates the subclass relationships, such as pmd:HeatTreatmentProcess being a subclass of the process, and pmd:Furnace and pmd:Thermocouple being subclasses of the processing node. The heat treatment temperature is the measured output of the process. In the assertional box (A-Box), the temperature value is provided in degrees Celsius, following the QUDT. Individual temperatures are categorized as a type of temperature, which is a subclass of both the value object and the value scope's measurement subclass. Processing nodes can also have metadata directly assigned to them, as seen in the example with the depiction of the thermocouple's node series. Further details on value and data scope modeling can be found in Section 3.2.5.
Process and object modeling
Processes have objects as input and output. In Fig. 6, a manufacturing process is illustrated where a heat-treated sheet is used as the input. During the process, a part of the sheet is cut off to produce a tensile test piece. Consequently, the output includes both the test piece and the shortened heat-treated sheet. These two output objects have their origin in the heat-treated sheet, which is expressed with prov:hadDerivation. To provide additional information, objects can be assigned characteristic metadata. For example, the test piece is given a string name value "TT42aaa", categorized as an identifier and a value object. The original thickness of the test piece serves as an input set point for the manufacturing process and is represented as a new class, also typified as primary data for enhanced differentiability. The value of the original thickness is specified in millimeters using the QUDT unit, utilizing a float data type.
Process sequence modeling
The PMDco design allows for the effective modeling of processes as sequences using properties like pmd:nextProcess and pmd:previousProcess. Additionally, to further partition individual processes, properties such as pmd:subordinateProcess and pmd:superordinateProcess can be leveraged. Time information, such as start and end times, can be captured using the xsd:dateTime datatype. In Fig. 7, an example of a process chain is depicted, involving a measuring process, a two-step assembly process, and a mechanical testing process. This modeling approach can accommodate arbitrarily complex process chains while also allowing for a less detailed representation.
Value scope and data scope modeling
In the PMDco, value objects play a crucial role in representing specific values associated with processes, processing nodes, and objects. The pmd:characteristic and pmd:input/output properties are used to establish these associations (as shown in Fig. 3). Value objects can represent various types of values, including numeric, textual, and complex data structures. Literal values are represented using the pmd:value data type property. Units from the QUDT ontology can be linked to value objects using the pmd:unit property. The pmd:resource property allows for linking value objects to URIs.

Fig. 5. Process and processing node modeling. A furnace executes a heat treatment process. The thermocouple serves as a component of the furnace. Metadata can be specified to provide information about processing nodes, such as the series of the thermocouple. The temperature measurement is an output of the heat treatment process.

Fig. 6. Processes have objects as input and output. A manufacturing process is depicted as having a heat-treated sheet as input. During this process, a portion of the sheet is cut off to create a test piece for a tensile test. The shortened sheet is also produced as an output of the process. Metadata, such as the name of the test piece, can be specified to provide additional information about the objects involved. The required dimensions of the test piece are linked to the process as a set point.

Fig. 7. Process sequences and process chain modeling. Processes in the PMDco can be linked to subsequent processes using the concept of next process. In the given example, the measurement process is followed by a mounting process, which in turn precedes the mechanical testing process. Furthermore, the mounting process consists of two subordinate processes.

Fig. 9. Data scope modeling. The assignment to the pmd:DataScope subclasses allows for differentiation between primary, secondary and metadata. In a measurement process, the sheet width and sheet length are measured as primary data. The sheet area is then calculated from these input measurements, resulting in secondary data. Additionally, the identifier of the sheet is considered metadata.
To ensure proper differentiation of value objects, the ontology introduces value scope subclasses, including pmd:Measurement and pmd:SetPoint (see Fig. 8). This classification gains particular significance when processes are constrained to specific input set points. The measurement subclass indicates that the value objects have been measured or determined, enabling correlations and relationship establishment.
The PMDco also provides data scope subclasses, including pmd:Metadata, pmd:PrimaryData, and pmd:SecondaryData, for further classification of value objects (shown in Fig. 9). Metadata include contextual information, as well as provenance details, which are essential for a comprehensive understanding of processes and steps. Primary data, or raw data, are acquired directly by a process, experiment, or simulation. Secondary data can subsequently be deduced from these.
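Combining the value scope and data scope classifications above, a measured sheet width could be sketched as follows. The instance IRI and the spelling of the QUDT unit name are illustrative assumptions; only the pmd:* class and property names come from the text.

```python
# Hypothetical sketch of a PMDco value object: a measured sheet width,
# typed both by value scope (measurement) and data scope (primary data).
width = [
    ("ex:sheetWidth", "rdf:type",  "pmd:ValueObject"),
    ("ex:sheetWidth", "rdf:type",  "pmd:Measurement"),  # value scope
    ("ex:sheetWidth", "rdf:type",  "pmd:PrimaryData"),  # data scope
    ("ex:sheetWidth", "pmd:value", "2391"),
    ("ex:sheetWidth", "pmd:unit",  "qudt:MilliM"),      # unit name illustrative
]

set_point_mm = 2438.0  # required width, per the Fig. 8 example
measured_mm = float(next(o for s, p, o in width if p == "pmd:value"))

# The last piece misses the set point, so it is unsuitable for use.
print(measured_mm < set_point_mm)  # -> True
```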
Reuse of existing ontologies
Reusing existing ontologies is an important practice in the development of the PMDco. By bridging semantic gaps and identifying equivalent or related concepts across different ontologies, meaningful communication and collaboration can be facilitated among disparate data sources. This allows for enhanced data interoperability, knowledge sharing, and a more comprehensive domain understanding. Reusing well-defined concepts from established ontologies saves time and effort and promotes consistency, standardization, and knowledge accumulation within a broader community [39,40].
The PMDco incorporates concepts from well-known ontologies to enhance its functionality and interoperability. The QUDT, as a well-engineered and comprehensive ontology and vocabulary, is used to express physical and mathematical units in the field of MSE, including metric prefixes. It provides a unified model for quantities, dimensions, units, and instance data. The conversion functionality between single and complex types of units is particularly useful.
The ChEBI ontology is utilized for the ontological representation of chemical entities, providing a comprehensive dictionary and ontology for small molecular entities. It describes various types of atoms, molecules, ions, radicals, and more. Furthermore, ChEBI incorporates an ontology in which relationships between compounds, groups or classes of compounds and their parents, children, and siblings are specified. It is therefore predestined for reuse in terms of referring to chemical entities, especially using the chemical entity class obo:CHEBI_24431. In this way, chemical compositions, which represent an important material property, can be described. The linkage with entities of the ChEBI establishes important cross-domain connections.
The Character-Separated Values on the Web (CSVW) standard is used to describe primary data originally available in Character-Separated Values (CSV) format, which is commonly used in MSE measurements. It clarifies the content of CSV tables, including the file source (csvw:url), schema (csvw:schema), and metadata information (e.g., csvw:name, csvw:datatype). The DataCite ontology enables the description of metadata properties for resource identification and citation purposes (e.g., datacite:Identifier), aligning with the DataCite Metadata Scheme Specification.
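A minimal sketch of what such a CSVW-style table description could carry, written here as a plain Python dict rather than the JSON-LD a real CSVW description would use. The file and column names are hypothetical; the csvw:url, csvw:schema, csvw:name, and csvw:datatype keys are the terms mentioned above.

```python
# Hedged sketch: minimal CSVW-style metadata for a measurement CSV file.
# A real description would be JSON-LD; this dict only mirrors the shape.
table_description = {
    "csvw:url": "tensile_test.csv",  # hypothetical file name
    "csvw:schema": {
        "csvw:columns": [
            {"csvw:name": "strain", "csvw:datatype": "double"},
            {"csvw:name": "stress", "csvw:datatype": "double"},
        ]
    },
}

column_names = [c["csvw:name"]
                for c in table_description["csvw:schema"]["csvw:columns"]]
print(column_names)  # -> ['strain', 'stress']
```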
Maintenance and curation
The PMDco is continuously updated and maintained through ongoing interaction with the MSE community. A vital building block in support of community interaction is the Ontology Playground. The Ontology Playground functions as a collaborative space that is an open forum for discussion and feedback from experts in the field. It is usually attended by around 20 participants from the MSE community, including all participant projects. The insights gained from this forum form an important basis for the PMDco curation process. The curation process is carried out using GitHub functionalities and is an essential part of active participation in the development of the PMDco. This involves updating terms and definitions, adding new concepts, and removing obsolete ones. To ensure quality and usefulness, the ontology is continuously curated through structural improvements and the identification of gaps and inconsistencies. To manage ontology maintenance effectively, the following aspects and considerations should be implemented:

• Version control: GitHub is used as a version control system to track and manage changes, allowing for easy comparison and reversion to previous versions if needed.
• Documentation: Thorough documentation is provided, including the PMDco's purpose, scope, design decisions, and any known limitations. Guidelines for usage and contribution are also provided to help users and maintainers understand the ontology and its updates.
• User feedback: Active solicitation and encouragement of user feedback play a crucial role in identifying errors, ambiguities, or missing information. The GitHub environment serves as a platform for users to report issues and suggest improvements.

During the development of the PMDco, all but one of the aspects illustrated above were implemented. The only exception that is still work in progress is the aspect of quality assurance. The definition of appropriate and automatable quality checks requires a large volume of instantiated named individuals in knowledge graphs. While being complete and consistent on a case-by-case basis, the quantity of available real-world use cases the PMDco is based on did not reach the critical mass required to evaluate the feasibility and applicability of corresponding quality checks.
In general, implementing these strategies ensures the ongoing integrity, usefulness, and quality of the PMDco for the MSE community.
Continuous, community-driven advancement
The PMDco is continuously being engineered to enable detailed modeling of process chains and constituent processes, so that materials data can be comprehensively acquired and shared across entire value chains. To achieve this, the PMDco provides an appropriate mid-level framework for the MSE domain. Its easy-to-use and generic MSE vocabulary and comprehensive documentation support the usage and creation of domain-specific AOs. These AOs connect and enrich the provided mid-level concepts with specific MSE vocabulary relevant to their use cases. As a demonstration of this process, a standards-compliant AO was designed to represent the tensile test of metals at room temperature as defined in ISO 6892-1:2019-11 [41], serving the purpose of providing consistent and FAIR structures for tensile test data.
Further aspects have to be considered for positioning the PMDco as a robust and widely accepted framework for the generation of FAIR data structures and further, as one of the enablers for digital transformation in MSE in the long run.
The curation and maintenance process outlined in Section 3.4 requires close monitoring and scalable implementation in progressive exploitation. Similar to other ontologies, the PMDco is subject to continuous improvement and refinement to reflect the latest advances and modifications in the domain. Thus, active engagement with the community is essential for establishing the PMDco as a valuable and actionable standard for the MSE domain. Vital interaction with users and interested stakeholders is facilitated through the aforementioned Ontology Playground. The establishment and sustainability of the GitHub-based curation process make it feasible for individuals to actively contribute to shaping the PMDco.
Enrichment and interoperability
With the goal of expanding the PMDco's range of applications, it is essential to evaluate and incorporate existing works into future versions. For instance, integrating detailed material structure information can significantly enhance the PMDco's versatility by enabling more precise material characterization and analysis. A comprehensive collection of microstructure descriptors is available in reference [42]. Beyond that, the Elementary Multiperspective Material Ontology (EMMO) offers valuable insights for modelling distinct physical materials. Establishing mappings and alignments with existing ontologies, as well as reuse of concepts from other ontologies, are important practices that can further facilitate seamless interactions and data exchange between ontologies [43][44][45].
Beyond domain boundaries, mappings to BFO top-level concepts become desirable for achieving cross-domain interoperability. Although there are different proposals for mapping the PROV-O to the BFO, a definite solution is yet to be established [28,46,47]. The mapping of prov:Activity to bfo:Occurrent and prov:Entity to bfo:Continuant has emerged as the most promising option. Further alignment with the BFO will be addressed in future PMDco versions.
International collaborations
Active participation in significant work and interest groups will be particularly supportive of interdisciplinary exchange and future collaborations. The PMDco's involvement and positioning in the newly founded MSE working group of the Industrial Ontologies Foundry (IOF) is particularly valuable in terms of realizing data interoperability in the entire field of digital manufacturing in the industrial domain. Similarly, contributions to the Materials Data, Infrastructure and Interoperability IG and MaRDA working groups of the Research Data Alliance (RDA) form cornerstones of international collaborations and exchange of knowledge between standardization bodies, from which new insights and definitions of common MSE data standards are emerging.
Incentives and amplification effects
As amplifiers for the discoverability and reusability of the PMDco and related AOs, ontology repositories such as MatPortal or the terminology service of NFDI4Ing play a crucial role. Automated mechanisms of ontology sharing across different projects and domains could be established to reduce development efforts. Repositories foster harmonized growth of ontological knowledge by providing various capabilities, such as identification of ontological entities, and consequently facilitate AO developments. Further incentives can be created by integrating the PMDco and its AOs with already established tools in use. ELNs enable the linking of input fields to ontological entities. Through a script, the inputs are then directly transformed into RDF triples. The compiled ELN templates are easily distributed and utilized, and as a consequence facilitate low-threshold technological implementation for the creation of uniform, FAIR data structures with improved process and experiment reproducibility. In the future, more video tutorials and best practice examples have to be produced and published. The same applies to ongoing interactive workshops for using the PMDco.

30 https://github.com/emmo-repo/EMMO. 31 https://www.industrialontologies.org/. 32 https://www.rd-alliance.org/node/939. 33 https://www.marda-alliance.org/. 34 https://www.rd-alliance.org/. 35 https://matportal.org/. 36 https://terminology.nfdi4ing.de/ts/.
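The ELN-to-triples step mentioned above could look roughly like this. Everything here is a hypothetical sketch: the field names, predicate choices, and entry IRI are assumptions, and a real script would emit proper RDF via an RDF library rather than string tuples.

```python
# Illustrative sketch: turning ELN form inputs that were linked to
# ontological entities into RDF-like (subject, predicate, object) triples.
def eln_to_triples(entry_iri, fields):
    """fields maps an ELN input field name to a (predicate IRI, value) pair."""
    triples = []
    for predicate, value in fields.values():
        triples.append((entry_iri, predicate, str(value)))
    return triples

# Hypothetical heat-treatment entry: a temperature value and its unit.
fields = {
    "temperature": ("pmd:value", 550),
    "unit":        ("pmd:unit", "qudt:DEG_C"),  # unit IRI name illustrative
}
result = eln_to_triples("ex:run42", fields)
print(len(result))  # -> 2
```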
Conclusion
In conclusion, the PMD Core Ontology (PMDco) represents a significant advancement in the digital transformation of Materials Science and Engineering (MSE). MSE, being a multidisciplinary field, faces challenges in effectively exchanging information and knowledge due to diverse perspectives, specialized terminology, and incompatible data formats. These hurdles impede the seamless fusion of (meta)data and hinder progress in data-driven approaches in materials development.
To overcome these challenges, the PMDco serves as a robust mid-level ontology that promotes domain interoperability in MSE. It provides a shared and consistent vocabulary, enabling the transformation of process and materials data into machine-processable RDF triples, facilitating their integration and exchange following FAIR principles. The PMDco supports the creation of high-quality data structures, enhancing reproducibility and reusability of MSE processes, experiments, and simulations, as well as materials data.
One baseline contribution of the PMDco is its pivotal role in enhancing semantic interoperability. It bridges the semantic gap between domain-specific MSE ontologies and upper-level ontologies as well as domain-independent modules, such as the PROV-O, facilitating cross-domain connections. The PMDco establishes a stable intermediate semantic layer that is easily understandable and usable, promoting efficient exchange of (meta)data and a shared understanding among MSE researchers and practitioners.
To ensure the usability and evolution of the PMDco, a transparent and community-driven curation process on GitHub enables active participation from the MSE community in advancing the PMDco. This process, akin to other community-driven processes such as paper reviews or the collaborative refinement in Wikipedia curation, needs to establish itself within the community to function effectively. By connecting AOs and incorporating domain-specific terms and concepts, the PMDco expands and enriches itself, accommodating the diverse aspects of MSE.
The development of a standard-compliant AO for the tensile test of metals at room temperature, following ISO 6892-1:2019-11 [41], exemplifies how the PMDco can be utilized in AO development. This demonstrates the practical usage of the PMDco and its extension to domain-specific terms and concepts across other AOs within the PMD project and beyond.
To ensure ongoing maintenance and sustainability of the PMDco, a committee of MSE and ontology experts will need to review proposed changes. This collaborative approach encourages community involvement and supports the continuous evolution of the PMDco. Furthermore, collaborative work on ontologies within the MSE community is inclined to lead to the emergence of advanced tools that facilitate ontology development and data mapping processes, benefiting the scientific community as a whole, as can be seen in recent tool developments in connection with digitalization initiatives such as OntoPanel [33] and Fast OntoDocker.

Looking ahead, the success of the PMDco relies on active involvement from the MSE community. Integrating AOs enables the PMDco to capture a broader range of MSE knowledge and expand its capabilities. The curation process on GitHub allows experts to contribute, ensuring transparency, version control, and community engagement. Collaborations with ongoing PMD partner projects and other communities offer opportunities for further research and improvements, such as integrating the PMDco with EMMO ontology mappings.
In summary, the PMDco represents a significant milestone in advancing semantic interoperability and knowledge sharing in MSE. By providing a common vocabulary, supporting FAIR data principles, and promoting collaboration, the PMDco serves as a valuable resource for researchers and practitioners, enabling scientific discovery and innovation.
Fig. 2. Tensile test performed on a tensile testing machine and presentation of exemplary ontology components. Each class is a subclass of the top concept owl:Thing. Class connections are expressed via the property rdfs:subClassOf. PROV-O upper-level classes form the top layer. PMDco classes extend the class tree with MSE terms, providing necessary domain-specific semantics for connecting through tensile test vocabulary. Individuals are linked to the corresponding classes via the property rdf:type.
Fig. 3. The basic PMDco classes and their relations to each other. In the schematic arrangement, each pmd:Process is associated with pmd:ProcessingNode and pmd:Object. pmd:ValueObject is then allocated to these process-involved classes, serving as a carrier for meta- and materials data. (For details on definitions of PMDco ontological entities, see https://w3id.org/pmd/co.) This reuse of ontologies establishes a bridge between different knowledge domains, expanding the applicability of the PMDco and facilitating the representation of interdisciplinary MSE concepts.

• Enabling reproduction of MSE processes and steps: The PMDco enables the reproduction of MSE processes and materials properties by capturing relevant information and relationships. It supports the documentation and reconstruction of experiments, simulations, and other processes, enhancing transparency, reproducibility, and reliability in MSE research. This feature promotes scientific advancement and collaboration.
Fig. 8. Value scope modeling. In this figure, the concept of differentiating value objects into measurement and set point subclasses of pmd:ValueScope is depicted. The manufacturing process involves cutting a sheet into equal pieces with a specified width of 2438 mm, identified as a set point. However, it is observed that the last piece of the sheet, when measured, has a width of only 2391 mm, making it unsuitable for use.
Effect of spices formulations on the physicochemical and sensory properties of Nnam gon, a Cameroonian dish prepared with cucurbitaceae seeds
Abstract Nnam gon, a cake made by steam cooking a mixture of Cucurbitaceae seed paste and other ingredients, especially spices, is a highly prized dish in the Central African region. A preliminary investigation conducted as part of this study highlighted that the formulations used in the processing of Nnam gon vary according to the spices used. This study was carried out to determine the best formulation for the preparation of this dish. For this purpose, Nnam gon samples were produced from four formulations which differ according to the number of spices used: F0 (no spices); F1 (0.91 g of Allium cepa paste); F2 (0.88 g of A. cepa paste, 0.35 g of Allium sativum paste, 0.41 g of fresh Officinale zingiber paste, 0.41 g of fresh Petroselinum crispum paste, 0.33 g of Monodora myristica, 0.48 g of fresh Celery graveolens paste, 1.19 g of fresh Allium porrum paste, 0.13 g of Allium lepidophyllus powder, and 0.13 g of Piper nigrum); and F3 (0.90 g of A. cepa paste, 0.35 g of A. sativum paste, and 0.42 g of fresh O. zingiber paste). The samples were evaluated for their physicochemical characteristics and sensory profile (Quantitative Descriptive Analysis). The results revealed that proteins (16.56–17.38%), carbohydrates (4.71–5.10%), lipids (23.14–24.25%), ash (4.03–5.92%), and fibers (2.17–2.68%) increased significantly (p < .05) with spice addition. The increase in polyphenol (310.55–592.80 mg/100 g FM) and phytate (2.23–12.49 mg/100 g FM) contents was positively correlated with the antioxidant properties of Nnam gon, which also increased with spice addition. Significant differences were observed between the samples for all attributes generated (appearance, odor, taste, flavor, texture, and oral texture). Spice addition induced a decrease in the hardness, cohesiveness, elasticity, and graininess of the cake but enhanced oiliness.
Nnam gon produced with the spicy formulations (F2 and F3) had higher mean scores for general acceptance, which was highly correlated (p < .05) with spice odor (r = .99), spice taste (r = .92), and color (r = .84). From this study, it is suggested that spicy Cucurbitaceae paste could improve the nutritional value, antioxidant properties, and general acceptance of Nnam gon.
Despite this high nutritional potential, the cultivation and use of cucurbit seeds remain limited in the food industry. In fact, the world production of these seeds was estimated at 184 million tons per year (FAO, 2014), but in Cameroon the annual production is around 146,000 tons (FAO, 2010). Cameroon's production is still very low compared to other countries such as South Africa (378,776 tons), Egypt (690,000 tons), and China (5,767,700 tons) (FAO, 2014). The average cost of a kilogram of dry Cucurbitaceae seeds is 2.29 euros in Cameroon, one and a half times the average price of a kilogram of cocoa and half that of coffee, which are the main industrial crops in that country (Minader, 2012). The sector of cucurbit seeds could therefore be an important activity for agriculture in Cameroon. One way to boost the production of these underutilized seeds could be to increase demand by using them as raw materials in industry or by industrializing the production of foods made from these seeds (Ebert, 2014). In this respect, improving the production of Cucurbitaceae cake could be a solution to stimulate the production of cucurbit seeds.
In fact, Cucurbitaceae cake, called Nnam gon in Cameroon, is a highly prized traditional dish formulated with Cucurbitaceae paste mixed with water and other ingredients such as salt, fresh eggs, oil, fish, or meat. Spices (Allium cepa, Allium sativum, Officinale zingiber, Petroselinum crispum, Monodora myristica, Celery graveolens, etc.) are also used to season the paste. The seasoned paste is packed in the leaves of katemfe (Thaumatococcus daniellii) and steam cooked (Ponka et al., 2005). This cake is prepared in all regions of Cameroon and has an important value in the traditional societies of this country. It is usually offered at ceremonies such as weddings and funerals (Ponka et al., 2005).
Moreover, Nnam gon is a street food whose production is mainly ensured by women, but its consumption in the street is often linked to some digestive disorders that could probably be due to poor conditions of cooking and storage. Few studies have already been done on Cucurbitaceae cake; nevertheless, the study of the nutritional value of this food carried out by Ponka et al. (2005) noted that it contains around 8.96% proteins, 13.5% lipids, 1.86% fibers, 1.77% ash, and minerals such as magnesium (108.9 mg/100 g DW), iron (2.99 mg/100 g DW), and zinc (3.29 mg/100 g DW).
The valorization of Cucurbitaceae seeds by the industrial production of Nnam gon requires improving the production process, packaging, and storage conditions. In this respect, the preliminary investigation conducted as part of this study identified that the ingredients used for the formulation of Nnam gon vary according to household in Cameroon. Thus, it would be interesting to determine the formulation most accepted by consumers. For that, the chemical composition, sensory characteristics, and general acceptability of the different formulated cakes must be studied. For this purpose, the Quantitative Descriptive Analysis (QDA) methodology, which is one of the most used descriptive approaches, is recommended for the sensory analysis of Nnam gon.
As part of main study aimed on improving the quality of this dish, this study was carried out to determine the chemical composition of Nnam gon produced with different formulations, their sensory characteristics, and overall acceptability.
| Sampling of Cucurbitaceae seeds
Dried samples of Cucurbitaceae seeds (Cucumeropsis mannii) harvested on February 2015 were purchased on March 2015 from local market in Ngaoundere, Cameroon. They were sorted manually to remove foreign matter and immature and damaged seeds. Then, the seeds were soaked for 5 min in order to facilitate dehulling which was manual. The dehulled seeds were milled using an electric grinder. Then, the flour obtained was directly used for production of Nnam gon samples.
| Production of Nnam gon samples
The Cucurbitaceae seeds flour was divided into four subsamples corresponding to the different formulations (F0, F1, F2, and F3). The formulations (Table 1) and Nnam gon production process (Figure 1) are the results of a preliminary investigation prior to this study. As reported in Figure 1, flour and water were introduced in a large plastic bowl and mixed with a wooden spatula during 10 min. Then, salt, spices, and other ingredients were added to the mixture which was remixed for 5 min. About 300 g of the final mixture was packaged in cleaned leaves of katemfe (Thaumatococcus daniellii) which are traditionally used for the packaging of Nnam gon. Packaged samples were filled into aluminum pot (10 L of capacity, with tripod) containing 1.5 L of tap water. Five samples were deposited on the tripod and were not in direct contact with boiling water. Cooking was carried out for 90 min on hearth of three stones fed with wood. The samples of the different formula were cooked separately. At the end of cooking, the samples were removed from the pot and cooled on the bench. The cakes were separately processed into three subsamples. One subsample was used for chemical analysis, another subsample for textural analysis, and the last was used for sensorial analysis.
| Proximate analysis of Nnam gon samples
Moisture, crude protein, crude fat, and crude ash contents were determined according to the AOAC procedures (1990). Protein was calculated as N × 6.25. Total carbohydrate was determined after digestion in concentrated sulfuric acid (Dubois, Gilles, Hamilton, Rebers, & Smith, 1956).
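The nitrogen-to-protein conversion mentioned above (protein = N × 6.25) is simple enough to sketch; the nitrogen value used in the example is illustrative, chosen only to land near the protein contents reported later in the paper.

```python
# Minimal sketch of the Kjeldahl nitrogen-to-protein conversion (N x 6.25).
def crude_protein(nitrogen_percent, factor=6.25):
    """Crude protein (%) from Kjeldahl nitrogen (%)."""
    return nitrogen_percent * factor

# Illustrative nitrogen value, not a measurement from the study.
print(crude_protein(2.78))  # -> 17.375
```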
| Analysis of bioactive and antinutritional factors
Polyphenol content was determined using a colorimetric method as described earlier by Mang et al. (2015). Tannin level in the cakes was determined by the colorimetric method of Makkar, Siddhuraju, and Becker (2003). The phytic acid content was evaluated using a colorimetric method as reported by Gao et al. (2007). The oxalate content was determined by a titrimetric method according to AOAC procedures (AOAC 1990).
| Antioxidant properties analysis
The reducing power of cakes was measured according to the method described by Oyaizu (1986). Evaluation of DPPH Free radical scavenging activity was determined following De Ancos, Sgroppo, Plaza, and Cano (2002) with some modifications. Chelating power of cakes was measured according to the method described by Decker and Welch (1990).
| Textural analyses
For the texture evaluation, the freshly cooked samples of cakes were equilibrated at ambient temperature for 5 min and submitted to a compression test as described by Nourian, Ramaswamy, and Kushalappa (2003) using a computer interfaced universal testing machine (Lloyd Model LRX-2500N) equipped with a 500 N load cell.
One measurement was made per cake, three cakes were tested per formulation, and their average values taken to represent the mean texture value of test samples. From the generated texture profile, the hardness was obtained from the peak force of the first compression. Adhesiveness was obtained from the final force of the first compression. Viscoelasticity index was the ratio between the peak force of the first compression and the peak force of the second compression.
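The three texture parameters defined above can be sketched as follows; the force curves are synthetic illustrative data, not measurements from the study, and a real analysis would work on the instrument's sampled force-time signal.

```python
# Hedged sketch: extracting the texture parameters described above from
# two compression cycles, each given as a list of force readings (N).
def texture_profile(first_cycle, second_cycle):
    hardness = max(first_cycle)          # peak force of the first compression
    adhesiveness = first_cycle[-1]       # final force of the first compression
    # ratio of the first-compression peak to the second-compression peak
    viscoelasticity = max(first_cycle) / max(second_cycle)
    return hardness, adhesiveness, viscoelasticity

# Synthetic force curves for illustration only.
h, a, v = texture_profile([0.0, 4.0, 9.0, 3.0], [0.0, 3.0, 6.0, 2.0])
print(h, a, v)  # -> 9.0 3.0 1.5
```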
| Preselection of panelists
Twenty panelists (10 women and 10 men) drawn from a subuniversity population were preselected for the descriptive sensory evaluation, based on their capacity to detect sensory differences in this kind of product.
| Development of sensory descriptors
The judges previously selected, after four meetings, have developed sensory descriptors used in the definitive tests. For this, Repertory Grid Kelly's Method (Moskovitz, 1983) was used. The four formulated cakes (F0, F1, F2, and F3) were evaluated using a sensory evaluation form constituted of nonstructured 20 cm scales for each sensory descriptor. From a common agreement, the judges chose reference materials to determine the extreme points of the intensity scales and to help in the identification of sensory characteristics of the samples.
| Selection and training of panel
Previously judges were trained following QDA (Stone & Sidel, 1993) during three sessions with a view to select the definitive panel. The final panelists were selected based on their ability to discriminate different samples and the repeatability of their results as reported by Bannwart, Bolini, Toledo, Kohn, and Cantanhede (2008).
| Sensory evaluation
The selected and trained judges participated in the sensory evaluation of Nnam gon. The cake samples were evaluated in individual booths under white light and provided with room temperature, water, and unsalted crackers. One cake from each formulation was used for sensory analysis. The products (5 g each) were presented in glass plate coded with three digit random numbers. All the products were assessed in four random repetitions.
| Statistical analysis
All the results were carried out in triplicate determinations. Analysis of variance was used to determine the effect of formulation on the dependent variables. Duncan's Multiple Range Test was performed to classify samples at the significance level of 5%. The Statgraphics 6.0 program was applied for the statistical analysis. Principal component analysis (PCA) and Pearson analysis were applied using Minitab Statistical Software.

| Proximate composition of Nnam gon

Table 2 presents the proximate composition of Nnam gon samples.
The moisture content in these cakes ranged around an average value of 46%. The lack of significant difference between the moisture contents of the different formulated cakes suggests no significant effect of spices on the moisture content of Nnam gon. However, the water content observed was lower than those of Ekomba (61.1%) and Ekwang (77.6%), which are other Cameroonian cakes produced with maize flour and cocoyam flour, respectively (Ponka, Fokou, Beaucher, Piot, & Gaucheron, 2016).
The ash content of the cakes was in the range 4.03-5.92% of DM. This is significantly higher than the ash content of Akara (3.1%), a traditional bean cake prepared in Nigeria (Okeke & Eze, 2006).
Lipids are the main components of Nnam gon. The fat content in the samples ranged from 23.14 to 24.25%. These values were higher than the fat content of 2.7% in Jeqe (a cake made with white flour) consumed in rural KwaZulu-Natal, South Africa (Spearing et al., 2013).
Adding spices had no effect on the lipid content of these cakes, probably because of the low fat content of the spices themselves. The addition of soy oil in the formulation, however, also contributed to the oil content of the cakes.
After lipids, proteins are the third major constituents of Nnam gon. The highest crude protein content was recorded for the cake from formulation F2 (17.38%). Apart from the control cake (F0), which had a lower crude protein content (16.56%), no significant difference was observed between the crude protein contents of the other formulated cakes; the spices nevertheless induced an increase in the crude protein of the cake, probably linked to their own protein content. These protein contents are close to that of moinmoin (17.71%) (Olapade & Adetuyi, 2007), indicating that Nnam gon is a good source of proteins. The addition of spices doubled the phenolic content of the cake. In addition to the foregoing ingredients, adding garlic and ginger pastes (F3) also slightly increased the phenolic content of Nnam gon. We noted that the phenolic content of the cake increased with the number of spices introduced in the formulation. Studies have shown that spices such as disc, leek, and celery are rich in phenolic compounds (Belewu, Olatunde, & Giwa, 2009; Uhegbu et al., 2011). In this vein, the phenolic content of the cakes produced with the F2 formulation was significantly higher (p < .05) than that of the other formulations, probably owing to the several spices in this formulation.
| Antinutrient content of Nnam gon
Adding spices to the formulation also induced an increase in the oxalate content of Nnam gon, which ranged between 1.41 and 3.30 mg/100 g DM. The oxalate content also appears to be related to the number of spices introduced in the formulation; thus, F2 presented a high oxalate content. Some health problems (corrosive gastroenteritis, shock, convulsive symptoms, low plasma calcium, high plasma oxalates, and renal damage) are caused by consumption of high levels of oxalates (Kelsay, 1985). Specifically, the increase in oxalate content would be linked to the addition of garlic (2.6 mg/g), ginger (0.7 mg/g), and onion (0.3 mg/g), which have high oxalate contents compared with other spices according to the literature (Nwinuka et al., 2005; Belewu et al., 2009; Oluwatoyin, 2014). A similar increase in tannin levels was observed with the addition of spices to the Nnam gon formulation. The cake produced from formula 2 had a high amount of tannins (527.44 mg/100 g DM), whereas a low level was observed for the control cake (131.59 mg/100 g DM).
| Antioxidant properties of Nnam gon
It has already been established that the consumption of foods rich in antioxidants helps to prevent diseases related to oxidative stress (Boskou, 2006; Saikat, Raja, Sridhar, Reddy, & Biplab, 2010).
In general, as shown in the figure, the antioxidant activity was comprised between 0.67 and 5.84 g Vit C/100 g DM. The strong correlations observed between phenolic compounds and DPPH free radical scavenging (r = .99; p < .05), reducing power (r = .99; p < .05), and chelating power (r = .97; p < .05) indicate that these compounds, whose antioxidant activity is well established, contribute to raising the antioxidant properties of the cakes, which are enhanced by spices.
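The Pearson coefficients reported here can be computed as below; the function name and the data values are illustrative, not the study's measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    # covariance divided by the product of standard deviations
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Illustrative values: total phenolics vs. DPPH scavenging for cakes F0-F3
phenolics = [1.0, 2.1, 4.0, 2.6]
dpph = [0.7, 1.5, 5.8, 2.0]
r = pearson_r(phenolics, dpph)
```

A value of r close to 1, as observed in the study, indicates that samples richer in phenolics scavenge DPPH radicals almost proportionally better.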
| Textural properties of Nnam gon
Texture is among the most important quality attributes affecting consumer acceptability of cake (Aboua, Konan, Kossa, Agro, & Kamenan, 1989; Aboubacar, Yacizi, & Hamaker, 2006). According to the results presented in Table 4, the textural parameters of the cakes varied significantly with the formulations, irrespective of the probe used.
| Sensory characteristics of Nnam gon
The candidates used for sensory analysis were selected based on their ability to discriminate samples. One of the most important characteristics of a good food is its odor.
In this respect, the general acceptance of Nnam gon was highly correlated with spice odor (r = .99; p < .05). In this study, the cakes produced with formulas 2 and 3 presented the highest mean scores for spice odor but lower mean scores for Cucurbitaceae seed odor. This could be because the spice odor masks the Cucurbitaceae seed odor, hence the negative correlation observed between these attributes (r = −.79; p < .05). However, the formula 1 cake had a higher mean score for fish and shrimp odor, whereas a musty odor was observed in the control cake F0. Taste and Cucurbitaceae seed flavor were highly represented in the control cake, but the addition of spices to the cake formulation seems to reduce these attributes. As observed in Table 6, the cakes produced with formulas 2 and 3 presented the higher mean scores for spice taste. The attributes bitter taste and bitter aftertaste did not differ among the samples.
Regarding texture, the sample produced with formula 1 presented the highest mean score for hardness. Adding spices appears to decrease the hardness, cohesiveness, elasticity, and granularity of the cake, but it seems to enhance the oiliness. This could mean that spices increase the oil absorption capacity of the Cucurbitaceae paste. Another parameter that could affect the texture of the cake is the moisture content; however, it was not correlated with any texture attribute. Moreover, the spiciness of the paste not only seems to increase the juiciness of the cake but also induced a decrease in its masticability.
In order to study the relations between the sensory attributes and the samples, a principal component analysis was performed. The control cake (F0) was characterized by the attributes musty odor, Cucurbitaceae seed taste and flavor, and persistence of Cucurbitaceae seed taste, whereas the F1 formulated cake was highly correlated with dried shrimp flavor and odor. The attributes that characterized the cake produced with formula 2 were color, spice odor, and spice taste. The formula 3 cake, however, was characterized by fresh fish odor and wrapping-leaf taste and odor. It is interesting to note that the samples were also somewhat separated in terms of spice addition. Then, as seen in Figure 4, the cakes produced with spicy Cucurbitaceae paste (formulas 2 and 3) had higher mean scores for general acceptance, which was highly correlated with spice odor (r = .99; p < .05), spice taste (r = .92; p < .05), and color (r = .84; p < .05).
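The PCA separating the formulations by sensory attributes can be sketched with a plain SVD of the centered attribute matrix. The matrix below is hypothetical (four cakes by four invented attribute scores), purely to illustrate the mechanics:

```python
import numpy as np

# Rows: cakes F0-F3; columns: hypothetical mean scores for
# (spice odor, Cucurbitaceae flavor, hardness, oiliness)
X = np.array([[2.0, 15.0, 8.0, 5.0],
              [5.0, 12.0, 11.0, 6.0],
              [14.0, 6.0, 7.0, 9.0],
              [12.0, 7.0, 6.0, 10.0]])

Xc = X - X.mean(axis=0)                 # center each attribute
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                          # sample coordinates on the components
explained = s**2 / (s**2).sum()         # proportion of variance per component
```

The first two rows of `Vt` give the attribute loadings used to interpret the biplot, and `scores[:, :2]` give the positions of F0-F3 on the first two principal components.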
| CONCLUSION
This study aimed at evaluating the effect of formulation on proximate composition, and textural and sensory properties of Nnam gon.
The overall differences observed among the samples evaluated are directly related to the addition of spices in the formulations. The study revealed that the protein, carbohydrate, lipid, ash, and fiber contents of Nnam gon increased significantly with the addition of spices. In addition, the polyphenol and phytate contents of the cake increased and were positively correlated with the antioxidant properties of the cake, which also increased with the addition of spices. The general acceptance of the cake was highly correlated with spice odor, spice taste, and color, and the cakes produced with the spicy formulations had higher mean scores for general acceptance. However, several species of Cucurbitaceae are edible. Thus, it would be interesting to determine which species of Cucurbitaceae seeds are most accepted by consumers for the production of this dish. In this respect, further investigation could be done to determine the influence of the species of Cucurbitaceae seed on the proximate composition and the textural and sensory properties of Nnam gon. | 2018-04-03T03:46:55.285Z | 2016-11-23T00:00:00.000 | {
"year": 2016,
"sha1": "5ea3f20214da0fcf7ce391fe70452ecbfb9b969c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/fsn3.447",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ea3f20214da0fcf7ce391fe70452ecbfb9b969c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
17263652 | pes2o/s2orc | v3-fos-license | Quantum Coherence in an Exchange-Coupled Dimer of Single-Molecule Magnets
A multi-high-frequency electron paramagnetic resonance method is used to probe the magnetic excitations of a dimer of single-molecule magnets. The measured spectra display well resolved quantum transitions involving coherent superposition states of both molecules. The behavior may be understood in terms of an isotropic superexchange coupling between pairs of single-molecule magnets, in analogy with several recently proposed quantum devices based on artificially fabricated quantum dots or clusters. These findings highlight the potential utility of supramolecular chemistry in the design of future quantum devices based on molecular nanomagnets.
Considerable effort has focused on finding building blocks with which to construct the quantum logic gates (qubits) necessary for a quantum computer (1,2). Most proposals utilizing electronic spin states take advantage of nano-fabrication methods to create artificial molecules, or magnetic quantum dots (3,4). A Heisenberg-type exchange coupling between dots is achieved by allowing the electronic wavefunctions to leak from one dot to the next. It is this coupling which is the essential ingredient in a quantum device because, unlike classical binary logic, it enables encoding of data via arbitrary superpositions of pure quantum states, e.g. |0⟩ and |1⟩ (2). These superposition states can store information far more efficiently than a classical binary memory. Furthermore, they permit massively parallel computations, i.e. many simultaneous quantum logic operations may be implemented on a single superposition state. For a quantum device to become a viable technology, it should be possible to perform a reasonably large number of quantum operations (∼10^4) on a single qubit without the superposition states losing phase coherence. Herein lies one of the main technical challenges, as most quantum systems are highly susceptible to decoherence through coupling to their environment (5).
We demonstrate that single-molecule magnets (SMMs) may be assembled to form coupled quantum systems of dimers (or chains, etc.), with many of the attributes of quantum-dot-based schemes. Most importantly, our electron paramagnetic resonance (EPR) investigations of crystals (large, highly ordered 3D arrays) containing exchange-coupled dimers of SMMs show that decoherence rates are considerably less than the characteristic quantum splittings (∆/h ∼ GHz, where ∆ is the energy splitting and h is the Planck constant) induced by the exchange couplings within the dimers, representing a step forward in the drive towards potential applications involving molecular magnets. Several proposals have suggested possible quantum computing schemes utilizing molecular magnets (7,6,8). The supramolecular (or "bottom-up") approach to materials design is particularly attractive, as it affords control over many key parameters required for a viable qubit: simple basis states may be realized through the choice of molecule; exchange couplings may then be selectively designed into crystalline arrays of these molecules; finally, one can isolate the qubits to some degree by attaching bulky organic groups to their periphery.
The subject of this investigation is the compound [Mn4O3Cl4(O2CEt)3(py)3]2·C6H14 (where O2CEt is propionate, py is pyridine, and C6H14 is hexane) (9), a member of a growing family of Mn4 complexes which act as SMMs (10,11), having a well defined ground state spin of S = 9/2. This compound crystallizes in a hexagonal space group (R3) with the Mn4 molecules lying head-to-head on a crystallographic S6 axis. The resulting [Mn4]2 supramolecular dimer is held together by six weak C−H···Cl hydrogen bonds (Fig. 1), leading to an appreciable antiferromagnetic superexchange coupling (J ∼ 10 µeV) between the Mn4 units within the dimer, which influences the low-temperature quantum properties of related [Mn4]2 dimers (12). Like all SMMs, [Mn4]2 displays superparamagnetic-like behavior at high temperatures, and magnetic hysteresis below a characteristic blocking temperature (∼1 K). The hysteresis loops exhibit steps, which are due to magnetic quantum tunneling (MQT). However, unlike isolated SMMs, there is an absence of MQT at zero field, due to a static exchange-bias field which each molecule experiences due to its neighbor within the dimer (12). The effect of the bias is to shift the field positions of the main MQT steps by an amount of order −JS^2/µ (where µ is the magnetic moment of a Mn4 monomer), so that the first step is observed on the hysteresis loop before reaching zero field. However, the exchange bias by itself does not quantum mechanically couple the SMMs within the dimer.
Before presenting experimental evidence for the coupled nature of the dimers, we develop a quantum mechanical model which takes this coupling into account. Neglecting off-diagonal crystal field terms and inter-molecular interactions, the effective spin Hamiltonian (to fourth order) for a magnetic field (B_z) applied parallel to the easy (z-) axis of a single isolated SMM has the form (11)

Ĥ_i = D Ŝ_zi^2 + B_4^0 Ô_4^0 + g µ_B B_z Ŝ_zi,    (1)

where Ŝ_zi is the z-axis spin projection operator, Ô_4^0 is the axial fourth-order crystal-field operator, and the index i (= 1, 2) is used to label the two SMMs of the dimer. For the case of two quantum mechanically coupled SMMs, the effective dimer Hamiltonian (Ĥ_D) may be separated into the following diagonal and off-diagonal terms:

Ĥ_D = [Ĥ_1 + Ĥ_2 + J_z Ŝ_z1 Ŝ_z2] + (J_xy/2)(Ŝ_1^+ Ŝ_2^− + Ŝ_1^− Ŝ_2^+),    (2)

where Ĥ_1 and Ĥ_2 are given by Eq. 1, the cross terms describe the exchange coupling between the two SMMs within the dimer, and the J values characterize the strength of this coupling.
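As an illustration, the dimer Hamiltonian Ĥ_D can be built and diagonalized numerically. The sketch below uses placeholder parameter values (not the fitted constants of this work) and omits the fourth-order crystal-field term for brevity:

```python
import numpy as np

def spin_ops(S):
    """Return (Sz, S+, S-) matrices for spin S in the basis m = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    Sz = np.diag(m)
    # <m+1| S+ |m> = sqrt(S(S+1) - m(m+1)), for each lower state m
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    return Sz, Sp, Sp.T

S = 4.5                                   # S = 9/2 per Mn4 unit
Sz, Sp, Sm = spin_ops(S)
I = np.eye(Sz.shape[0])                   # 10 x 10 identity

# Placeholder parameters (arbitrary units): easy-axis D < 0,
# antiferromagnetic Jz > 0, transverse exchange Jxy, zero field
D, Jz, Jxy, gmuB_Bz = -0.6, 0.1, 0.1, 0.0

H_single = D * Sz @ Sz + gmuB_Bz * Sz     # Eq. 1 without the 4th-order term

H_dimer = (np.kron(H_single, I) + np.kron(I, H_single)
           + Jz * np.kron(Sz, Sz)
           + 0.5 * Jxy * (np.kron(Sp, Sm) + np.kron(Sm, Sp)))

E = np.linalg.eigvalsh(H_dimer)           # 100 eigenvalues, ascending
```

With antiferromagnetic J_z and easy-axis D < 0 at zero field, the two lowest eigenvalues form the near-degenerate |+9/2, −9/2⟩ and |−9/2, +9/2⟩ doublet discussed in the text, whose splitting by J_xy only arises at very high order.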
The diagonal zeroth order Hamiltonian (Ĥ_0D, in square brackets) includes the exchange bias J_z Ŝ_z1 Ŝ_z2 which has been considered previously (12). The zeroth order eigenvectors for the dimer may be written as products of the single-molecule eigenvectors |m_1⟩ and |m_2⟩ (abbreviated |m_1, m_2⟩, with total spin projection M = m_1 + m_2). In Fig. 2, we display a schematic of the energy level shifts and splittings (not to scale) caused by the exchange bias, and by the full exchange, for the lowest lying levels at high magnetic fields (M = −9 to −6). The states are numbered for convenient discussion of the data. For clarity, higher lying states with M > −6, including the zero-field |±9/2, ∓9/2⟩ ground states, are not shown in Fig. 2. Application of a magnetic field parallel to the easy axis merely shifts all of the zeroth order levels by an amount g µ_B B_z M. Thus, δM = ±1 EPR transition matrix elements may be accurately calculated using the eigenvectors in Fig. 2. The magnetic dipole perturbation only allows transitions between states having the same symmetry. The strongest of these transitions are shown in Fig. 2, labeled (a) through (g).
In the left-hand panel of Fig. 3, we display temperature dependent high-frequency EPR spectra obtained at 145 GHz, with the magnetic field applied parallel to the easy (z-) axis of a small (< 1 mm^3) single-crystal sample; details concerning our high-frequency EPR setup are given elsewhere (13). The inset shows a single 6 K spectrum (f = 140 GHz) for a related monomeric Mn4 complex without head-to-head interactions (14). The monomer data are typical of most SMMs, showing a series of more-or-less evenly spaced resonances, and a smooth variation in intensity from one peak to the next. By contrast, the dimer spectra exhibit considerable complexity. In spite of this, the simulated dimer spectra (colored traces in the right-hand panel of Fig. 3) reproduce the data well. The simulated spectra (Fig. 3) are mainly limited to transitions among the levels displayed in Fig. 2, (a) through (g); we have also included the (7)_{S,A} → |−9/2, −1/2⟩ and |−9/2, −1/2⟩ → |−9/2, +1/2⟩ transitions, labeled (h) and (i) respectively. Resonance (x), meanwhile, corresponds to the degenerate |+9/2, −9/2⟩ → |+9/2, −7/2⟩ and |−9/2, +9/2⟩ → |−7/2, +9/2⟩ transitions. The only significant differences between the experimental data and the simulated spectra are seen in the 2−3 T region, due to the fact that we did not consider several moderately strong transitions involving higher lying (M > −6) states. We deliberately avoid reference to superposition states in discussing resonance (x), as the interaction between the |±9/2, ∓9/2⟩ states is extremely weak (9th order in Ĥ′_D). Consequently, even the weakest coupling to the environment would likely destroy any coherence associated with the 2^(−1/2)(|+9/2, −9/2⟩ ± |−9/2, +9/2⟩)_{S,A} superposition states. Resonance (x) is observed only over a narrow low-field region (< 0.7 T) over which the |±9/2, ∓9/2⟩ levels represent the ground states of the dimer.
By following the relative intensities of resonances (x) and (a), one obtains an independent thermodynamic estimate of the exchange bias which is in excellent agreement with the value obtained above, and with independent hysteresis measurements for the same complex (17). We also note that the exchange bias obtained here agrees with the previously published value (12).
The inset to the right panel of Fig. 3 shows that it is the transverse part of the exchange (Ĥ′_D) which brings the simulations into excellent agreement with the data. Indeed, there is no way to obtain anything closely resembling the experimental data without including Ĥ′_D in the calculation, thus providing compelling evidence that the molecules are coupled quantum mechanically.
The issue of quantum coherence is best illustrated by examining the splitting of resonances (f) and (g): this splitting is directly proportional to J_xy, and corresponds to the ∼9 GHz shift of the (4)_S level relative to (5)_A (Fig. 2). If the phase decoherence rate (τ_φ^{−1}, the characteristic rate associated with the collapse of a quantum mechanical superposition state) were to exceed 9 GHz, one would expect broad EPR peaks due to transitions between bands of incoherent states; these bands would occupy the gaps between the energies given by the exchange-bias picture and the full exchange calculation in Fig. 2, thereby smearing out most of the sharp features in the observed spectrum. In principle, τ_φ is the same as the transverse spin relaxation time T_2, which can be estimated from EPR linewidths (∆M = ±1 transitions) (18). However, we know that these widths are dominated by weak dimer-to-dimer variations (strains) in the Hamiltonian parameters, i.e. the actual τ_φ^{−1} is buried within the inhomogeneous EPR linewidths (14,15,16), and is probably much less than 9 GHz. As a worst case, the narrowest EPR lines would imply a decoherence time on the order of 1 ns. In order to determine the real T_2 (≡ τ_φ), one should carry out time-resolved (pulsed) EPR experiments, e.g. the free-induction decay of an initially saturated EPR transition, or Rabi spectroscopy (18). Time-resolved experiments in this frequency range are technically challenging but, nevertheless, represent a future objective.
The magnitudes of the quantum splittings (in frequency units) provide a rough estimate of the rates at which one could perform computations. In comparison to many competing technologies [e.g. NMR (19)], these rates are high for electronic spin states, i.e. GHz rather than kHz or MHz. The largest quantum splittings (∆/h) for the dimer are on the order of a few tens of GHz.
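A back-of-the-envelope check of these rates, taking the worst-case decoherence time of ∼1 ns inferred from the EPR linewidths:

```python
# Worst-case decoherence time inferred from the narrowest EPR lines
tau_phi = 1e-9                          # seconds (~1 ns)

# Largest quantum splittings Delta/h of a few tens of GHz
for delta_over_h in (30e9, 100e9):      # Hz
    n_ops = delta_over_h * tau_phi      # rough number of coherent operations
    print(f"Delta/h = {delta_over_h / 1e9:.0f} GHz -> ~{n_ops:.0f} operations")
```

Multiplying the splitting frequency by the coherence time gives the dimensionless operation count discussed next.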
In fact, ∆τ_φ/h represents a rough figure of merit for a quantum device, as it gives an estimate of the number of qubit operations one could perform without loss of phase coherence. For the worst case given above, ∆τ_φ/h ∼ 30−100; in reality, it may well be 10^4 or greater. The most useful coupled states of the dimer would be the antiferromagnetic zero-field 2^(−1/2)(|+9/2, −9/2⟩ ± |−9/2, +9/2⟩)_{S,A} ground states, or Bell states (2). As already discussed, the tunnel splitting of these states is negligible in zero field (∼ Hz). However, it is possible to increase this splitting to a practical range (∼ GHz) with a transverse magnetic field. While there remain technical challenges along the road map towards molecule-based quantum devices (e.g. low operating temperatures, methods for addressing nanometer-sized molecules, etc.), the present study demonstrates that the "bottom-up" (molecular) approach provides excellent opportunities to study coherent quantum superposition states. Future materials design strategies will, therefore, explore the following possibilities: optical control of the exchange coupling between the two halves of a dimer; increased isolation of the dimers in order to further reduce decoherence; and the inclusion of some form of asymmetry within the dimer (e.g. uncompensated electronic spins, or selective nuclear spin labeling), thereby facilitating readout of the state of the system. | 2014-10-01T00:00:00.000Z | 2003-11-07T00:00:00.000 | {
"year": 2003,
"sha1": "d41ea9579c988819e6475fba982fa081fff4d71f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0311209",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d41ea9579c988819e6475fba982fa081fff4d71f",
"s2fieldsofstudy": [
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Physics"
]
} |
79579755 | pes2o/s2orc | v3-fos-license | Total ischemic time and short, intermediate and long term mortality of patients with STEMI treated by primary percutaneous coronary intervention: Analysis of data from 2004–2013 ACSIS registry
Objective: Our objective was to evaluate the correlation between mortality and total ischemic time in a large cohort of ST-Elevation Myocardial Infarction (STEMI) patients. Background: Several previous studies demonstrated a positive correlation between door-to-balloon time and mortality. However, several recent studies failed to find improvement in mortality with shortened door-to-balloon time. It is possible that further reduction in mortality of STEMI patients in the modern era of PPCI and adjuvant pharmacotherapy can be achieved only by means of reduction of the total ischemic time. Methods: We analyzed data from 2254 consecutive patients with STEMI treated by PPCI and enrolled in the Acute Coronary Syndrome Israeli Survey (ACSIS) registry. We divided our cohort into tertiles based on total ischemic time: less than 150 minutes (group 1, n=730), between 150 and 265 minutes (group 2, n=758), and above 265 minutes (group 3, n=766). Our primary end points were 30-day, 1-year, and 5-year mortality. Our secondary end point was LVEF less than 40% at discharge. Results: There was no difference in 30-day or 1-year mortality between the three study groups (30-day mortality: 3% vs. 4% vs. 5%; 1-year mortality: 6% vs. 7% vs. 7%). There was significantly lower 5-year mortality in the shortest (less than 150 minutes) total ischemic time group (11% vs. 16% vs. 19%). Conclusions: Shortening of total ischemic time below 150 minutes is associated with improved long-term survival. Measures should be undertaken to reduce total ischemic time. Correspondence to: Lubovich A, Department of Cardiology, Bnai Zion Medical Center, Haifa 33394, Israel; E-mail: alla.lubovich@b-zion.org.il
Introduction
During the last decade, emergent primary Percutaneous Coronary Intervention (PCI) became the gold-standard treatment for patients with ST-elevation myocardial infarction.
The timely performance of the primary PCI as measured by door to balloon time has become one of the main quality measures in the treatment of patients with ST elevation myocardial infarction and is endorsed by both American College of Cardiology and European Society of Cardiology guidelines [1,2].
However, several recent studies failed to demonstrate improvement in mortality with shortened door-to-balloon time [3,4].
It is possible that further reduction in mortality of STEMI patients in the modern era of PPCI and adjuvant pharmacotherapy could be achieved only by means of reduction of total ischemic time, which is defined as the time that elapses from chest pain onset until the restoration of coronary blood flow by balloon inflation (or pain to balloon time).
In this study, we sought to examine if there was any correlation between total ischemic time and short, intermediate and long term mortality of patients with STEMI.
The ACSIS registry
Our study was based on the Acute Coronary Syndrome Israeli Survey (ACSIS) registry, in which data were prospectively collected from all acute coronary syndrome patients in the State of Israel, operated in collaboration with the Israel Heart Society. The ACSIS registry was conducted biennially for a 2-month period, during which data on all acute coronary syndrome patients admitted to all coronary care units (n=25) in Israel were provided by each participating center by means of case report forms (CRFs).
The 1-year and 5-year mortality data were adjudicated by a Central Data Coordinating Center through the Ministry of Internal Affairs data set. The Central Data Coordinating Center was responsible for the data collection and analysis.
Study population
We analyzed data of consecutive patients (n=2254) who presented with STEMI and were treated by PPCI from the 2004, 2006, 2008, 2010, and 2013 ACSIS registries.
Definitions
Total ischemic time was defined as the time that elapsed from chest pain onset until the restoration of coronary blood flow by balloon inflation (pain-to-balloon time).
End points
Our primary end points were 30-day, 1-year, and 5-year mortality. Our secondary end point was LVEF less than 40% at discharge. The crude Kaplan-Meier survival curve was graphically presented along with the p-value from the pairwise log-rank test. No correction for multiple testing was performed, and p<0.05 was considered statistically significant.
The basic clinical and demographic characteristics
The basic clinical and demographic characteristics of the three study groups are presented in Table 1. There was a significant difference in mean age, gender, frequency of prior PCI, smoking, diabetes mellitus, and family history of CAD between the three study groups. There were a higher mean age and frequency of diabetes mellitus and a lower frequency of male gender in group 3, and a higher frequency of prior PCI, smoking, and family history of CAD in group 1.
The short, intermediate and long term mortality
The 30-day mortality data were available for all patients.
The 1-year mortality data were available for 2233 of 2254 patients.
The 5-year mortality data were available for 1627 of 2254 patients, because they were not yet available for the 2013 ACSIS cohort.
There was no difference in 30-day and 1-year mortality between the three groups (Table 2).
Table 2. 30-day and 1-year mortality in the three study groups.
There was a significant reduction in 5-year mortality in group 1 (Table 3). The shortest (less than 150 minutes) total ischemic time was associated with significantly better 5-year survival (Figure 1). The crude Cox model for 5-year mortality demonstrated that a total ischemic time longer than 150 minutes was associated with a more than 50 percent increase in 5-year mortality (HR=1.51), and a total ischemic time longer than 265 minutes with a more than 80 percent increase in 5-year mortality (HR=1.82), as compared with a total ischemic time of less than 150 minutes (Table 4).
The frequency of LVEF less than 40% at discharge
There was significantly less LV dysfunction at discharge in group 1 than in groups 2 and 3 (32 percent vs. 34 percent, p<0.005).
Discussion
Since the establishment of the benefit of primary PCI over thrombolytic therapy for the treatment of acute STEMI at the beginning of the 21st century, both the ACC/AHA and ESC guidelines have endorsed a door-to-balloon time of less than 90 minutes as the main performance measure in the treatment of STEMI patients [1,2]. However, several recent studies failed to demonstrate a reduction in mortality with shortened door-to-balloon time; one national analysis, for example, found no overall change in unadjusted and risk-adjusted in-hospital mortality or in unadjusted 30-day mortality [3].
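The crude Kaplan-Meier estimate underlying the survival comparison can be sketched as follows; the function and the follow-up times are invented for illustration, not taken from the registry:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up time for each patient
    events : 1 if death observed, 0 if censored
    Returns a list of (time, survival) pairs; ties are handled
    sequentially for simplicity.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t, d in zip(times, events):
        if d:                              # step down only at observed deaths
            surv *= (n_at_risk - 1) / n_at_risk
        curve.append((float(t), surv))
        n_at_risk -= 1
    return curve

# Hypothetical follow-up times (years) for one total-ischemic-time tertile
curve = kaplan_meier([0.5, 1.2, 2.0, 3.5, 5.0], [1, 0, 1, 1, 0])
```

Curves computed per tertile in this way would then be compared with the pairwise log-rank test, as described in the methods.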
The total ischemic time represents the time interval which elapses from chest pain onset until successful opening of the artery and restoration of coronary flow by balloon angioplasty.
In this large cohort of STEMI patients, we demonstrated that the shortest total ischemic time, less than 150 minutes, was associated with reduced long-term mortality. A previous study likewise demonstrated that the 30-day mortality rate significantly increased across total ischemic time groups but did not correlate with door-to-balloon time groups [6].
Based on our data, we suggest that further national efforts should be concentrated on reduction of the total ischemic time rather than the door-to-balloon time, which is only one of its components. The goal should be to eliminate all unnecessary steps in the care of STEMI patients and to develop systems of care focused on decreasing the total ischemic time.
We suggest several measures which could assist in achieving the goal of reducing the total ischemic time. First, we should implement educational measures using all available social media to decrease the pain-onset-to-EMS-call time to less than 30 minutes and to prevent self-referral of patients with chest pain to the ER. Second, we should implement protocols for direct cath lab activation by the field EMS team. Third, we should endorse the first-medical-contact-to-balloon time rather than the door-to-balloon time as a national medical services quality measure.
Limitations
Our study has several important limitations. First, it was not a randomized controlled study. Second, the long-term mortality data were available for only 72 percent (1627 of 2254) of the patients. Third, pain perception is highly subjective, so the onset of pain as perceived by the patient may not represent the exact moment of coronary artery occlusion.
Conclusion
Shortening of total ischemic time below 150 minutes is associated with improved long term survival. Measures should be undertaken to reduce total ischemic time. | 2019-03-17T13:12:30.541Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "d7de85bb2cd599daee157c9819969334c5eab263",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/JIC-4-234.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "aed0c6766dc8f7b9ba6e33c468cad4c6c48069ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |